CN115019379A - Man-machine cooperative infrared meibomian gland image quantitative analysis method - Google Patents


Publication number
CN115019379A
CN115019379A (application CN202210609257.1A)
Authority
CN
China
Prior art keywords
gland
image
point
infrared
meibomian
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210609257.1A
Other languages
Chinese (zh)
Inventor
林嘉雯
张云希
林少龙
宋昊阳
陈国霖
林杰龙
林智明
卢峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou University
Original Assignee
Fuzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou University
Priority to CN202210609257.1A
Publication of CN115019379A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/0016 Operational features thereof
    • A61B3/0025 Operational features thereof characterised by electronic signal processing, e.g. eye models
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/14 Arrangements specially adapted for eye photography
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0033 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A61B5/004 Features or image-related aspects of imaging apparatus classified in A61B5/00, adapted for image acquisition of a particular organ or body part
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 Measuring for diagnostic purposes using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0075 Measuring for diagnostic purposes using light, by spectroscopy, i.e. measuring spectra, e.g. Raman spectroscopy, infrared absorption spectroscopy
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Ophthalmology & Optometry (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Pathology (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a man-machine cooperative quantitative analysis method for infrared meibomian gland images, comprising the following steps: step S1, performing semantic segmentation of the meibomian glands in the image and converting it into a binary gland image; step S2, identifying individual glands in the binary gland image; step S3, automatically extracting the centerline of each gland region; step S4, automatically calculating the length of each gland from its centerline; step S5, automatically calculating the diameter of each gland from its centerline; step S6, automatically calculating the area of each gland region from the results of the preceding steps; step S7, automatically calculating the deformation coefficient of each gland region from the results of the preceding steps; step S8, automatically calculating the development value of the gland regions from the results of the preceding steps; and step S9, calculating the percentage of the image's central region occupied by glands. The invention identifies glands efficiently and accurately and automates the calculation of the principal biological parameters of the meibomian glands.

Description

Man-machine cooperative infrared meibomian gland image quantitative analysis method
Technical Field
The invention relates to the technical field of image recognition and analysis, and in particular to a man-machine cooperative quantitative analysis method for infrared meibomian gland images.
Background
Meibomian gland dysfunction (MGD) is the leading cause of dry eye. Glandular changes can cause insufficient lipid secretion or duct blockage, degrading the quality of secreted lipids and leading to excessive tear evaporation and dry-eye symptoms. The condition causes serious discomfort, interferes with daily activities, and can even lead to blindness. Research shows that when an ophthalmic clinician accurately assesses a patient's condition through the relevant dry-eye examinations and guides prompt, standardized treatment, the disease course can be effectively delayed or the disease cured, reducing the patient's risk of visual impairment or blindness. Infrared meibomian gland imaging offers high diagnostic sensitivity and specificity for MGD, is widely accepted by clinicians as an effective tool for observing morphological changes of the meibomian glands, and provides an effective, objective basis for the clinical diagnosis of MGD.
Most current infrared meibomian gland image analysis methods offer only qualitative analysis of MGD and recognition of the gland region as a whole, and cannot support quantitative analysis of gland morphology in infrared meibomian gland images. The few proposed quantitative methods based on gland segmentation often suffer either from low accuracy of automatic segmentation or from the low efficiency of relying entirely on manual labeling. Efficiently and accurately assisting clinicians in diagnosing MGD and dry eye is therefore an important goal of infrared meibomian gland image analysis research.
Disclosure of Invention
The invention provides a man-machine cooperative quantitative analysis method for infrared meibomian gland images. By fully exploiting artificial intelligence and requiring only a small amount of manual correction, the method identifies glands efficiently and accurately, and extracts accurate centerlines for all glands, including severely curved ones. It thereby automates the calculation of the principal biological parameters of the meibomian glands, provides an objective basis for clinicians diagnosing MGD and dry eye (xerophthalmia), and saves labor cost.
The invention adopts the following technical scheme.
A man-machine cooperative quantitative analysis method for infrared meibomian gland images, used to identify the meibomian gland region in an infrared image and to collect its morphological feature data, comprises the following steps:
step S1, reading in an infrared meibomian gland image, performing semantic segmentation of the meibomian glands by combining automatic segmentation with manual fine-tuning correction, and converting the infrared meibomian gland image into a binary gland image;
step S2, identifying individual glands in the binary gland image via its connected regions; specifically, the image is scanned line by line along the y-axis direction, and each point where the color changes during scanning is marked as a gland entry point; each entry point is processed by the following rule: if the entry point does not fall inside any previously found connected region, the point is judged to belong to a new gland and its maximal connected region is computed; otherwise the entry point is ignored; after all entry points have been traversed, every disconnected individual gland has been distinguished and identified;
step S3, automatically acquiring the centerline of each gland region;
step S4, automatically calculating the length of each gland from its centerline;
step S5, automatically calculating the diameter of each gland from its centerline;
step S6, automatically calculating the area of each gland region from the results of the preceding steps;
step S7, automatically calculating the deformation coefficient of each gland region from the results of the preceding steps;
step S8, automatically calculating the development value of the gland regions from the results of the preceding steps;
and step S9, calculating the percentage of the image's central region occupied by glands.
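The nine steps above form a fixed pipeline. A minimal structural sketch in Python (all function and field names here are illustrative placeholders, not identifiers from the patent) might look like:

```python
from dataclasses import dataclass

@dataclass
class GlandMetrics:
    """Per-gland biological parameters produced by steps S4-S8
    (field names are illustrative, not from the patent)."""
    length_px: float = 0.0
    diameter_px: float = 0.0
    area_px: int = 0
    deformation: float = 0.0
    development: float = 0.0

def analyse(image, segment, label_glands, centerline, measure):
    """Orchestrate steps S1-S8, with each concrete operation injected
    as a callable; a structural sketch only."""
    binary = segment(image)             # S1: semantic segmentation -> binary gland image
    glands = label_glands(binary)       # S2: split into individual connected glands
    results = []
    for g in glands:
        cl = centerline(g)              # S3: centerline of the gland region
        results.append(measure(g, cl))  # S4-S8: per-gland parameters
    return results
```

Step S9 operates on the whole image rather than on a single gland, so it would be computed separately from the returned per-gland list.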
In step S3, the specific method is: for each gland region, first obtain preliminary left and right boundary point sets of the gland using a line-scan-based centerline extraction method and compute the center point coordinates; if a computed center point does not fall within the gland region, the gland is judged to be curved, and an iterative (cyclic) erosion method is used instead to obtain the gland's centerline Lc_i.
The scan-based extraction of the gland-region centerline in step S3 is realized by scanning each gland region line by line along the y-axis direction. Specifically, the pixel at which the gland is first detected when scanning from left to right is added to the gland's left boundary point set, and the pixel at which the gland is first detected when scanning from right to left is added to the gland's right boundary point set. The midpoint of the left and right boundary points of each row is taken as that row's center point.
The erosion-based extraction of the gland centerline in step S3 is realized by cyclic erosion. Specifically: the edge of the single-gland image is repeatedly found and deleted (i.e., eroded) until every remaining pixel has only one adjacent pixel. If scattered pixels that would disturb the centerline judgment remain in the gland edge region, then starting from some point P, the nearest point P1 is connected to it, then the point P2 nearest to P1 is connected to P1, and so on, until no point within the distance threshold can be connected; this yields a line. The length of the line starting from each rough center point is computed, and the longest such line is taken as the gland's centerline Lc_i.
In step S4, the specific method is: traverse the points on the selected gland's centerline in order, compute the Euclidean distance between each pair of adjacent points, and accumulate the sum to obtain the gland's length Length_i (unit: pixels);
in step S5, the specific method is: sequentially traversing points on the center line of a gland: firstly, the slope and the midpoint of a connecting line of two adjacent points are solved; then starting from the midpoint, respectively extending outwards along two directions of the perpendicular bisector for detection; when the detected pixel is a non-gland part, calculating the distance between the two points, adding the distance into a list width of the gland, and respectively storing intersection points of the perpendicular bisector and two sides of the gland boundary into a LeftPoints list and a RightPoints list; after traversing, averaging the width list to obtain the average width of the gland, namely the diameter D of the gland i The unit is as follows: the number of pixels.
In step S7, the specific method is: for any gland, compute the left and right single-side perimeters P_a and P_b from its LeftPoints and RightPoints lists; take the center points corresponding to the first and last entries of the two lists as a_1 and a_n; compute the standard deviation σ_D of the gland's diameter samples; the gland's deformation coefficient DI_i is then calculated by Formula I:
(Formula I: in the original publication this equation appears only as an image and its content is not reproduced in the text.)
In step S8, the specific method is: obtain the gray values of the gland's pixels and average them to obtain AG_g; obtain the gray values of the pixels in the eyelid region surrounding the gland and average them to obtain AG_m; then calculate the development value OD by Formula II:
(Formula II: in the original publication this equation appears only as an image and its content is not reproduced in the text.)
In step S6, the specific method is: count the pixels in the point set of any gland's maximal connected region to obtain the gland's area S_i (unit: pixels).
In step S9, the specific method is: automatically identify the 5 glands at the center of the image and compute their minimum circumscribed rectangle. Letting the total gland area be S_gland and the area of the minimum circumscribed rectangle be S_rectangle, the central gland percentage Ratio_c is preliminarily calculated by Formula III:
Ratio_c = S_gland / S_rectangle (Formula III).
In step S1, the method for semantic segmentation of the meibomian glands in the image may be the Otsu (OTSU) thresholding method, a watershed segmentation method, purely manual labeling, or a deep learning model for the semantic segmentation task.
The man-machine cooperative quantitative analysis method of the invention first performs gland semantic segmentation on the original infrared meibomian gland image to obtain a binary gland image, then identifies individual glands via connected regions, generates a centerline and calibrated width lines for each gland, and thereby automates the calculation of biological parameters such as gland length, width, deformation coefficient, development value, and the gland-area ratio of the central region. The method quantitatively analyzes infrared meibomian gland images, effectively and accurately assists clinicians in diagnosing meibomian gland dysfunction and dry eye, reduces the misdiagnosis rate, and saves labor cost. Compared with the prior art, the algorithm accurately identifies morphologically abnormal glands and marks their centerlines and width lines, and the gland labels can also be revised manually as needed, enabling more accurate parameter calculation and providing a reliable objective basis for clinical diagnosis.
Compared with the prior art, the invention has the following beneficial effects:
(1) the method organically combines artificial intelligence with a small amount of user correction to achieve accurate segmentation of the meibomian glands; it assists the user in manual labeling, markedly reducing labeling cost while improving labeling precision;
(2) the invention extracts accurate centerlines and width lines even for abnormal and curved glands, enabling more accurate index calculation;
(3) the method quantitatively analyzes infrared meibomian gland images, calculates several common biological parameters, and assists clinicians in diagnosing MGD and dry eye.
Drawings
The invention is described in further detail below with reference to the following figures and detailed description:
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a schematic view of the cyclic erosion processing method of step S3;
FIG. 3 is a schematic diagram of a method for automatically calculating the diameter of an individual gland in step S5;
FIG. 4 is a schematic comparison of gland segmentation by the method of the invention (b) and a prior-art method (a);
FIG. 5 is a schematic diagram of the invention applied to the aided analysis of meibomian gland images.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
It should be noted that the following detailed description is exemplary and is intended only to further explain the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is also noted that the terminology used herein serves only to describe particular embodiments and is not intended to limit the example embodiments of this application. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, and the terms "comprises" and/or "comprising" specify the presence of the stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
As shown in FIG. 1, a man-machine cooperative quantitative analysis method for infrared meibomian gland images, used to identify the meibomian gland region in an infrared image and to collect its morphological feature data, comprises the following steps:
step S1, reading in an infrared meibomian gland image, performing semantic segmentation of the meibomian glands by combining automatic segmentation with manual fine-tuning correction, and converting the infrared meibomian gland image into a binary gland image;
in this step, deep learning is introduced: segmentation of the meibomian glands is performed with a UNet++-based model to obtain the binary gland image (hereinafter, the image); because the automatic segmentation result may contain adhesions between adjacent glands and incomplete segmentation of glands near the edge, the user corrects the image by clicking and moving the mouse.
Step S2, identifying individual glands in the binary gland image via its connected regions; specifically, the image is scanned line by line along the y-axis direction, and each point where the color changes during scanning is marked as a gland entry point; each entry point is processed by the following rule: if the entry point does not fall inside any previously found connected region, the point is judged to belong to a new gland and its maximal connected region is computed; otherwise the entry point is ignored; after all entry points have been traversed, every disconnected individual gland has been distinguished and identified;
in this step, the image is scanned line by line along the y-axis direction with a step length of 5 pixels.
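A sketch of this scanning scheme in Python, treating each not-yet-labeled gland pixel met on a scan line as an entry point and using a BFS flood fill for the maximal connected region (names and the 4-connectivity choice are mine, not the patent's):

```python
import numpy as np
from collections import deque

def identify_glands(binary, step=5):
    """Step S2 sketch: scan vertical lines at a fixed pixel step and
    flood-fill a new connected region at each unlabeled gland pixel."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    next_label = 0
    for x in range(0, w, step):            # scan lines along y, 5-px step as in the embodiment
        for y in range(h):
            if binary[y, x] and labels[y, x] == 0:
                next_label += 1            # entry point of a new gland
                q = deque([(y, x)])        # BFS = maximal connected region of this point
                labels[y, x] = next_label
                while q:
                    cy, cx = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = next_label
                            q.append((ny, nx))
    return labels, next_label
```

Note that a 5-pixel scan step can miss a gland narrower than 5 pixels in the x direction; the patent presumably relies on glands being wider than the step.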
Step S3, automatically acquiring the centerline of each gland region;
step S4, automatically calculating the length of each gland from its centerline;
step S5, automatically calculating the diameter of each gland from its centerline;
step S6, automatically calculating the area of each gland region from the results of the preceding steps;
step S7, automatically calculating the deformation coefficient of each gland region from the results of the preceding steps;
step S8, automatically calculating the development value of the gland regions from the results of the preceding steps;
and step S9, calculating the percentage of the image's central region occupied by glands.
In step S3, the specific method is: for each gland region, first obtain preliminary left and right boundary point sets of the gland using a line-scan-based centerline extraction method and compute the center point coordinates; if a computed center point does not fall within the gland region, the gland is judged to be curved, and an iterative (cyclic) erosion method is used instead to obtain the gland's centerline Lc_i.
The scan-based extraction of the gland-region centerline in step S3 is realized by scanning each gland region line by line along the y-axis direction. Specifically, the pixel at which the gland is first detected when scanning from left to right is added to the gland's left boundary point set, and the pixel at which the gland is first detected when scanning from right to left is added to the gland's right boundary point set. The midpoint of the left and right boundary points of each row is taken as that row's center point.
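Interpreting the scan row-wise, the boundary point sets and row centers can be sketched as follows (a simplification that assumes at most one gland segment per row):

```python
import numpy as np

def scan_centerline(gland):
    """Step S3 sketch (scan-based variant): per row, the first gland
    pixel from the left and from the right give the boundary point
    sets; their midpoint is the row's center point."""
    left, right, center = [], [], []
    for y, row in enumerate(gland):
        xs = np.flatnonzero(row)
        if xs.size == 0:
            continue
        left.append((y, int(xs[0])))       # first gland pixel, left to right
        right.append((y, int(xs[-1])))     # first gland pixel, right to left
        center.append((y, (int(xs[0]) + int(xs[-1])) / 2.0))
    return left, right, center
```

When a computed center falls outside the gland (a curved gland), the patent switches to the erosion-based method instead.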
As shown in FIG. 2, the erosion-based gland centerline extraction in step S3 is realized by cyclic erosion. Specifically: the edge of the single-gland image is repeatedly found and deleted (i.e., eroded) until every remaining pixel has only one adjacent pixel. If scattered pixels that would disturb the centerline judgment remain in the gland edge region, then starting from some point P, the nearest point P1 is connected to it, then the point P2 nearest to P1 is connected to P1, and so on, until no point within the distance threshold can be connected; this yields a line. The length of the line starting from each rough center point is computed, and the longest such line is taken as the gland's centerline Lc_i.
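As a stand-in for the cyclic erosion, the classic Zhang-Suen thinning algorithm, which likewise peels boundary pixels until a one-pixel-wide skeleton remains, can be sketched; it is a well-known substitute, not the patent's exact procedure:

```python
import numpy as np

def zhang_suen_thin(img):
    """Thin a 0/1 image to a 1-pixel skeleton by iteratively deleting
    boundary pixels (Zhang-Suen two-subiteration scheme)."""
    img = img.astype(np.uint8).copy()
    def neighbours(y, x, im):
        # P2..P9, clockwise from the north neighbour
        return [im[y-1, x], im[y-1, x+1], im[y, x+1], im[y+1, x+1],
                im[y+1, x], im[y+1, x-1], im[y, x-1], im[y-1, x-1]]
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_del = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    P = neighbours(y, x, img)
                    C = sum(P)                                   # neighbour count
                    if not (2 <= C <= 6):
                        continue
                    A = sum(P[i] == 0 and P[(i + 1) % 8] == 1 for i in range(8))
                    if A != 1:                                   # exactly one 0->1 transition
                        continue
                    if step == 0:
                        if P[0]*P[2]*P[4] != 0 or P[2]*P[4]*P[6] != 0:
                            continue
                    else:
                        if P[0]*P[2]*P[6] != 0 or P[0]*P[4]*P[6] != 0:
                            continue
                    to_del.append((y, x))
            if to_del:
                changed = True
                for y, x in to_del:
                    img[y, x] = 0
    return img
```

The patent's post-processing (linking nearest skeleton points into candidate lines and keeping the longest as Lc_i) would run on the skeleton this produces.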
In step S4, the specific method is: traverse the points on the selected gland's centerline in order, compute the Euclidean distance between each pair of adjacent points, and accumulate the sum to obtain the gland's length Length_i (unit: pixels);
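This length computation is essentially a one-liner in Python:

```python
import math

def gland_length(centerline):
    """Step S4: sum the Euclidean distances between consecutive
    centerline points; the result is in pixels."""
    return sum(math.dist(p, q) for p, q in zip(centerline, centerline[1:]))
```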
As shown in FIG. 3, in step S5 the specific method is: traverse the points on the gland's centerline in order. First compute the slope and midpoint of the segment connecting each pair of adjacent points; then, starting from the midpoint, probe outward in both directions along the perpendicular bisector. When a probed pixel is not part of the gland, compute the distance between the two boundary hits and append it to the gland's width list, storing the intersection points of the perpendicular bisector with the two sides of the gland boundary in the LeftPoints and RightPoints lists, respectively. After the traversal, average the width list to obtain the gland's mean width, i.e., its diameter D_i (unit: pixels).
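A sketch of the perpendicular probing in Python; `is_gland(y, x)` is an assumed membership predicate, and the half-pixel probe step, the safety cap, and the side labels are my choices, not the patent's:

```python
import math

def gland_diameter(centerline, is_gland):
    """Step S5 sketch: at each adjacent centerline pair, probe outward
    along the perpendicular bisector of their segment until a non-gland
    position is met on each side; each hit pair gives one width sample."""
    widths, left_pts, right_pts = [], [], []
    for (y0, x0), (y1, x1) in zip(centerline, centerline[1:]):
        my, mx = (y0 + y1) / 2.0, (x0 + x1) / 2.0   # midpoint of the segment
        dy, dx = y1 - y0, x1 - x0
        n = math.hypot(dy, dx)
        if n == 0:
            continue
        ny, nx = -dx / n, dy / n                    # unit normal to the segment
        def probe(sy, sx):
            t = 0.0
            while t < 1e4 and is_gland(my + sy * t, mx + sx * t):
                t += 0.5                            # half-pixel probing steps
            return (my + sy * t, mx + sx * t)
        a = probe(ny, nx)                           # boundary hit on one side
        b = probe(-ny, -nx)                         # boundary hit on the other side
        left_pts.append(a)
        right_pts.append(b)
        widths.append(math.dist(a, b))
    mean = sum(widths) / len(widths) if widths else 0.0
    return mean, left_pts, right_pts
```

The returned point lists play the role of the patent's LeftPoints and RightPoints, which step S7 reuses for the single-side perimeters.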
As shown in FIG. 4, through steps S1-S5 an embodiment of the invention segments the glands well, identifies every independent gland, and accurately extracts the centerline and calibrated width line of glands of any shape (including, for example, the severely curved gland in the boxed area), outperforming the existing method.
In step S7, the specific method is: for any gland, compute the left and right single-side perimeters P_a and P_b from its LeftPoints and RightPoints lists; take the center points corresponding to the first and last entries of the two lists as a_1 and a_n; compute the standard deviation σ_D of the gland's diameter samples; the gland's deformation coefficient DI_i is then calculated by Formula I:
(Formula I: in the original publication this equation appears only as an image and its content is not reproduced in the text.)
In step S8, the specific method is: obtain the gray values of the gland's pixels and average them to obtain AG_g; obtain the gray values of the pixels in the eyelid region surrounding the gland and average them to obtain AG_m; then calculate the development value OD by Formula II:
(Formula II: in the original publication this equation appears only as an image and its content is not reproduced in the text.)
In step S6, the specific method is: count the pixels in the point set of any gland's maximal connected region to obtain the gland's area S_i (unit: pixels).
In step S9, the specific method is: automatically identify the 5 glands at the center of the image and compute their minimum circumscribed rectangle. Letting the total gland area be S_gland and the area of the minimum circumscribed rectangle be S_rectangle, the central gland percentage Ratio_c is preliminarily calculated by Formula III:
Ratio_c = S_gland / S_rectangle (Formula III).
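A simplified sketch of Formula III; the patent's minimum circumscribed rectangle (which may be rotated, as computed by e.g. OpenCV's `cv2.minAreaRect`) is replaced here by an axis-aligned bounding box so the example stays dependency-free:

```python
import numpy as np

def central_ratio(labels, central_ids):
    """Step S9 sketch: total area of the central glands divided by the
    area of their bounding rectangle (axis-aligned simplification)."""
    mask = np.isin(labels, central_ids)    # pixels of the selected central glands
    ys, xs = np.nonzero(mask)
    s_gland = int(mask.sum())              # S_gland: total gland area in pixels
    s_rect = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)  # S_rectangle
    return s_gland / s_rect
```

An axis-aligned box can only overestimate S_rectangle relative to the true minimum rectangle, so this sketch gives a lower bound on Ratio_c for tilted glands.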
As shown in FIG. 5, embodiments of the invention can be applied to the aided analysis of meibomian gland images.
In step S1, the method for semantic segmentation of the meibomian glands in the image may be the Otsu (OTSU) thresholding method, a watershed segmentation method, purely manual labeling, or a deep learning model for the semantic segmentation task.
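Of the listed options, Otsu thresholding is fully specified by its definition (choose the threshold maximizing between-class variance); a dependency-free sketch for 8-bit grayscale images:

```python
import numpy as np

def otsu_binarize(gray):
    """One of the step-S1 segmentation options: Otsu's global threshold
    on an 8-bit image, returning a binary gland mask."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0.0
    for t in range(256):
        w0 += hist[t]                      # weight of the "dark" class
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0                     # mean of the dark class
        m1 = (sum_all - sum0) / (total - w0)  # mean of the bright class
        var = w0 * (total - w0) * (m0 - m1) ** 2  # between-class variance (scaled)
        if var > best_var:
            best_var, best_t = var, t
    return gray > best_t
```

In practice simple global thresholding of infrared meibomian gland images is noisy, which is why the patent pairs automatic segmentation (e.g. the UNet++ model of the embodiment) with manual fine-tuning.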
In summary, the embodiment of the invention provides a man-machine cooperative quantitative analysis method for infrared meibomian gland images, aimed at the shortcomings of existing meibomian gland image analysis methods: low accuracy of automatic gland segmentation, low efficiency when relying entirely on manual labeling, inability to perform accurate quantitative analysis, and failure to meet the needs of follow-up clinical analysis of gland morphological change. The embodiment makes full use of artificial intelligence combined with a small amount of convenient manual correction to identify glands efficiently and accurately, extracts accurate centerlines for all glands including severely curved ones, automates the calculation of the principal biological parameters of the meibomian glands, and provides clinicians with an objective basis for diagnosing MGD and dry eye.
Compared with existing methods, this method accurately identifies glands of various forms and extracts their centerlines and width lines, accurately calculates several common biological parameters, and assists clinicians in diagnosing MGD and dry eye. In addition, the embodiment assists the user in manual labeling, markedly reducing the labeling cost of glands while improving labeling precision.
The above description is only a preferred embodiment of the present invention; all equivalent changes and modifications made in accordance with the claims of the present invention shall fall within the scope of the present invention.

Claims (10)

1. A man-machine cooperative quantitative analysis method for infrared meibomian gland images, used to identify the meibomian gland region in an infrared image and to collect its morphological feature data, characterized in that the analysis method comprises the following steps:
step S1, reading in an infrared meibomian gland image, performing semantic segmentation of the meibomian glands by combining automatic segmentation with manual fine-tuning correction, and converting the infrared meibomian gland image into a binary gland image;
step S2, identifying individual glands in the binary gland image via its connected regions; specifically, the image is scanned line by line along the y-axis direction, and each point where the color changes during scanning is marked as a gland entry point; each entry point is processed by the following rule: if the entry point does not fall inside any previously found connected region, the point is judged to belong to a new gland and its maximal connected region is computed; otherwise the entry point is ignored; after all entry points have been traversed, every disconnected individual gland has been distinguished and identified;
step S3, automatically acquiring the centerline of each gland region;
step S4, automatically calculating the length of each gland from its centerline;
step S5, automatically calculating the diameter of each gland from its centerline;
step S6, automatically calculating the area of each gland region from the results of the preceding steps;
step S7, automatically calculating the deformation coefficient of each gland region from the results of the preceding steps;
step S8, automatically calculating the development value of the gland regions from the results of the preceding steps;
and step S9, calculating the percentage of the image's central region occupied by glands.
2. The man-machine cooperative infrared meibomian gland image quantitative analysis method of claim 1, wherein in step S3 the specific method is: for each gland region, first obtain preliminary left and right boundary point sets of the gland using a line-scan-based centerline extraction method and compute the center point coordinates; if a computed center point does not fall within the gland region, the gland is judged to be curved, and an iterative (cyclic) erosion method is used instead to obtain the gland's centerline Lc_i.
3. The man-machine cooperative infrared meibomian gland image quantitative analysis method of claim 2, wherein the scan-based extraction of the gland-region centerline in step S3 is realized by scanning each gland region line by line along the y-axis direction; specifically, the pixel at which the gland is first detected when scanning from left to right is added to the gland's left boundary point set, and the pixel at which the gland is first detected when scanning from right to left is added to the gland's right boundary point set; the midpoint of the left and right boundary points of each row is taken as that row's center point.
4. The human-computer cooperative infrared meibomian gland image quantitative analysis method of claim 2, wherein the erosion-based gland centerline extraction in step S3 is realized by cyclic erosion; the specific method is: repeatedly find and delete the edge of the single-gland image (i.e. erode it), until each remaining pixel has only one adjacent pixel; if scattered pixels that would disturb the centerline judgment still remain in the gland edge region, then starting from some point P, connect the nearest point P1, then connect P1 to its nearest point P2, and so on, stopping when no remaining point lies within the distance threshold, which yields a line; starting from each coarse center point, the length of the resulting line is computed, and the longest line is taken as the centerline Lc_i of the gland.
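Claim 4's cyclic erosion (peeling boundary pixels until a one-pixel-wide curve remains) closely resembles classical morphological thinning. Below is a NumPy sketch using the well-known Zhang-Suen thinning algorithm as a stand-in; this is an illustrative analogue, not the patent's exact procedure, and the function name is mine.

```python
import numpy as np

def thin_gland(img):
    """Iteratively peel boundary pixels of a binary gland mask until a
    one-pixel-wide skeleton remains (Zhang-Suen thinning, used here as
    a stand-in for the patent's 'cyclic erosion')."""
    img = (img > 0).astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            P = np.pad(img, 1)
            # 8-neighbourhood, clockwise from the pixel above (p2..p9)
            p2, p3, p4 = P[:-2, 1:-1], P[:-2, 2:], P[1:-1, 2:]
            p5, p6, p7 = P[2:, 2:], P[2:, 1:-1], P[2:, :-2]
            p8, p9 = P[1:-1, :-2], P[:-2, :-2]
            nbrs = [p2, p3, p4, p5, p6, p7, p8, p9]
            B = sum(n.astype(np.int32) for n in nbrs)          # neighbour count
            seq = nbrs + [p2]
            A = sum(((seq[i] == 0) & (seq[i + 1] == 1)).astype(np.int32)
                    for i in range(8))                          # 0->1 transitions
            if step == 0:
                cond = (p2 * p4 * p6 == 0) & (p4 * p6 * p8 == 0)
            else:
                cond = (p2 * p4 * p8 == 0) & (p2 * p6 * p8 == 0)
            delete = (img == 1) & (B >= 2) & (B <= 6) & (A == 1) & cond
            if delete.any():
                img[delete] = 0
                changed = True
    return img
```

The nearest-point chaining step of claim 4 would then run on the surviving skeleton pixels to prune scattered points and keep the longest connected line.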
5. The human-computer cooperative infrared meibomian gland image quantitative analysis method of claim 2, wherein in step S4 the specific method is: traverse the points on the centerline of the selected gland in order, compute the Euclidean distance between each pair of adjacent points, and accumulate the sum to obtain the length Length_i of the gland (unit: pixels);
in step S5, the specific method is: traverse the points on the centerline of a gland in order: first obtain the slope and the midpoint of the line connecting each pair of adjacent points; then, starting from the midpoint, probe outward along both directions of the perpendicular bisector; when the probed pixel is a non-gland pixel, compute the distance between the two exit points, add it to the gland's width list, and store the intersections of the perpendicular bisector with the two sides of the gland boundary in the LeftPoints and RightPoints lists respectively; after the traversal, average the width list to obtain the mean width of the gland, i.e. the gland diameter D_i (unit: pixels).
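The length and diameter computations of steps S4 and S5 can be sketched as below. The centerline is assumed to be a list of (row, column) points; the marching step size and function names are my own illustrative choices, not fixed by the patent.

```python
import numpy as np

def gland_length(centerline):
    """Step S4: sum of Euclidean distances between adjacent
    centerline points (unit: pixels)."""
    pts = np.asarray(centerline, dtype=float)
    return float(np.sum(np.hypot(*np.diff(pts, axis=0).T)))

def gland_diameter(mask, centerline):
    """Step S5: for each pair of adjacent centerline points, march
    outward from their midpoint along the perpendicular bisector until
    a non-gland pixel is met on each side; the mean of the resulting
    widths approximates the gland diameter (unit: pixels)."""
    h, w = mask.shape
    widths = []
    for (y0, x0), (y1, x1) in zip(centerline[:-1], centerline[1:]):
        my, mx = (y0 + y1) / 2.0, (x0 + x1) / 2.0
        dy, dx = y1 - y0, x1 - x0
        n = np.hypot(dy, dx)
        if n == 0:
            continue
        ny, nx = -dx / n, dy / n          # unit normal to the local direction
        ends = []
        for sgn in (1.0, -1.0):           # probe both sides of the bisector
            t = 0.0
            while True:
                t += 0.5
                py, px = my + sgn * ny * t, mx + sgn * nx * t
                iy, ix = int(round(py)), int(round(px))
                if not (0 <= iy < h and 0 <= ix < w) or mask[iy, ix] == 0:
                    ends.append((py, px))  # left the gland: record exit point
                    break
        (ay, ax), (by, bx) = ends
        widths.append(np.hypot(ay - by, ax - bx))
    return float(np.mean(widths)) if widths else 0.0
```

A finer marching step would tighten the diameter estimate at the cost of more probing iterations.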
6. The human-computer cooperative infrared meibomian gland image quantitative analysis method of claim 5, wherein in step S7 the specific method is: for any gland, compute the left and right single-side perimeters P_a and P_b from its LeftPoints and RightPoints lists, take the centers of the head and end point pairs of the two lists as a_1 and a_n, and obtain the standard deviation σ_D of the corresponding gland diameters; the deformation coefficient DI_i of the gland is then calculated by the following formula:
[Formula I is given as an image (FDA0003671442210000031) in the original patent and is not reproduced here.]
7. The human-computer cooperative infrared meibomian gland image quantitative analysis method of claim 1, wherein in step S8 the specific method is: obtain the gray values of the pixels of the gland part, sum and average them to obtain AG_g; obtain the gray values of the pixels of the eyelid region surrounding the gland, sum and average them to obtain AG_m; then calculate the development value OD by the following formula:
[Formula II is given as an image (FDA0003671442210000032) in the original patent and is not reproduced here.]
8. The human-computer cooperative infrared meibomian gland image quantitative analysis method of claim 1, wherein in step S6 the specific method is: count the pixels in the point set of the maximum connected region of any gland to obtain the area S_i of the gland (unit: pixels).
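Step S6's "maximum connected region" area can be sketched with a plain breadth-first flood fill over a binary mask; 4-connectivity and the function name are my own assumptions, as the patent does not specify the connectivity.

```python
from collections import deque
import numpy as np

def largest_component_area(mask):
    """Area (pixel count) of the largest 4-connected region in a
    binary gland mask."""
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    best = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                size, q = 0, deque([(sy, sx)])
                seen[sy, sx] = True
                while q:                      # BFS flood fill of one component
                    y, x = q.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
                best = max(best, size)
    return best
```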
9. The human-computer cooperative infrared meibomian gland image quantitative analysis method of claim 1, wherein in step S9 the specific method is: automatically identify the 5 glands at the center of the image and compute their joint minimum bounding rectangle; letting the total area of the glands be S_gland and the area of the minimum bounding rectangle be S_rectangle, the percentage Ratio_c of the central glands is preliminarily calculated by the following formula:
Ratio_c = S_gland / S_rectangle (Formula III).
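Given a binary mask containing only the five central glands, Formula III reduces to a few NumPy lines. This sketch uses the axis-aligned bounding box as the "minimum bounding rectangle"; a rotated minimum-area rectangle would be a refinement the patent leaves open.

```python
import numpy as np

def central_gland_ratio(mask):
    """Ratio_c = S_gland / S_rectangle for a binary mask of the central
    glands, with S_rectangle the area of their joint axis-aligned
    bounding box."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return 0.0
    rect_area = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    return float(mask.sum()) / float(rect_area)
```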
10. The human-computer cooperative infrared meibomian gland image quantitative analysis method of claim 1, wherein in step S1 the method for semantic segmentation of meibomian glands in the image includes the Otsu segmentation method, the watershed segmentation method, purely manual labeling, or a deep learning model for the semantic segmentation task.
CN202210609257.1A 2022-05-31 2022-05-31 Man-machine cooperative infrared meibomian gland image quantitative analysis method Pending CN115019379A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210609257.1A CN115019379A (en) 2022-05-31 2022-05-31 Man-machine cooperative infrared meibomian gland image quantitative analysis method


Publications (1)

Publication Number Publication Date
CN115019379A true CN115019379A (en) 2022-09-06

Family

ID=83071695

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210609257.1A Pending CN115019379A (en) 2022-05-31 2022-05-31 Man-machine cooperative infrared meibomian gland image quantitative analysis method

Country Status (1)

Country Link
CN (1) CN115019379A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013109193A1 (en) * 2012-01-18 2013-07-25 Agency For Science, Technology And Research Computational methods and apparatus for meibography
CN106530294A (en) * 2016-11-04 2017-03-22 中山大学中山眼科中心 Method for carrying out processing on meibomian gland image to obtain gland parameter information
CN109064468A (en) * 2018-08-23 2018-12-21 上海市儿童医院 A method of using MATLAB quantitative analysis eyelid Meibomian gland form and area
CN109785321A (en) * 2019-01-30 2019-05-21 杭州又拍云科技有限公司 Meibomian gland method for extracting region based on deep learning and Gabor filter
CN111145155A (en) * 2019-12-26 2020-05-12 上海美沃精密仪器股份有限公司 Meibomian gland recognition method
CN113080843A (en) * 2021-03-25 2021-07-09 中山大学中山眼科中心 Meibomian gland image-based gland extraction method and quantitative analysis method


Non-Patent Citations (2)

Title
ROBERT KOPROWSKI et al.: "A quantitative method for assessing the quality of meibomian glands", COMPUTERS IN BIOLOGY AND MEDICINE, vol. 75, 1 August 2016 (2016-08-01), pages 130-138, XP029629874, DOI: 10.1016/j.compbiomed.2016.06.001 *
LI Mengmeng; ZHANG Xiaofeng; QUE Lijuan; YIN Yuecong; WANG Yingming: "Comparison of two image analysis methods for an infrared meibomian gland imaging device", 中国现代医药杂志 (China Modern Medicine Journal), no. 02, 25 February 2016 (2016-02-25), pages 12-15 *

Similar Documents

Publication Publication Date Title
CN109859203B (en) Defect tooth image identification method based on deep learning
CN110120042B (en) Crop image pest and disease damage area extraction method based on SLIC super-pixel and automatic threshold segmentation
US10234442B2 (en) Device and method for finding cell nucleus of target cell from cell image
CN112967285B (en) Chloasma image recognition method, system and device based on deep learning
CN109087310B (en) Meibomian gland texture region segmentation method and system, storage medium and intelligent terminal
CN111291701B (en) Sight tracking method based on image gradient and ellipse fitting algorithm
CN110889846A (en) Diabetes retina image optic disk segmentation method based on FCM
CN109035227A (en) The system that lung tumors detection and diagnosis is carried out to CT image
CN115862819B (en) Medical image management method based on image processing
CN110767293A (en) Brain auxiliary diagnosis system
CN108670297B (en) Multi-mode transcranial ultrasound-based Parkinson's disease auxiliary analysis system and method
CN114972272A (en) Grad-CAM-based segmentation method for new coronary pneumonia lesions
CN108682011B (en) Sub-pixel-level real-time dynamic tumor image positioning and matching method
CN111986157B (en) Digital pathological image quality evaluation system
CN114202795A (en) Method for quickly positioning pupils of old people
CN115019379A (en) Man-machine cooperative infrared meibomian gland image quantitative analysis method
CN112927282A (en) Automatic livestock and poultry foot parameter measuring method based on machine vision
CN115937085B (en) Nuclear cataract image processing method based on neural network learning
CN112069953B (en) Automatic identification method and device for rice seedling growth period
CN111077153A (en) Multifunctional sperm quality analysis system
CN114511512A (en) Blood vessel image segmentation method based on interactive guidance
CN111815613B (en) Liver cirrhosis disease stage identification method based on envelope line morphological feature analysis
CN110766680B (en) Leukocyte image segmentation method based on geometric constraint
CN114332858A (en) Focus detection method and device and focus detection model acquisition method
CN110751064B (en) Blink frequency analysis method and system based on image processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination