CN117197064A - Automatic non-contact eye redness analysis method - Google Patents

Automatic non-contact eye redness analysis method

Info

Publication number
CN117197064A
Authority
CN
China
Prior art keywords: image, eye, sclera, region, color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311115420.XA
Other languages
Chinese (zh)
Inventor
刘艳
余彬
徐嘉璐
赵越
李庆武
霍冠英
梅力文
唐志鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hohai University (HHU)
Priority to CN202311115420.XA
Publication of CN117197064A
Legal status: Pending


Abstract

The invention discloses an automatic non-contact eye redness analysis method comprising the following steps: photograph the eyes from a frontal view of the face, perform eye region detection and preprocessing on the captured image, scale and preprocess it, then detect the sclera region with a U-Net++ model to obtain a binary image, and AND the binary image with the scaled image to obtain a color image of the sclera region; apply image smoothing and adaptive histogram equalization to enhance contrast, filter the enhanced image with a B-COSFIRE filter while also converting it to the LAB color model to obtain an image mask, and AND the filtered image with the mask to obtain a binary vessel image; count the non-zero pixels in the sclera-region color image and in the binary vessel image respectively, and take their ratio to obtain the eye redness ratio. The automatic non-contact eye redness analysis method provided by the invention is highly adaptive and accurate in automatic judgment, and can provide intelligent auxiliary diagnosis of a patient's degree of eye redness.

Description

Automatic non-contact eye redness analysis method
Technical Field
The invention relates to an automatic non-contact eye redness analysis method, and belongs to the technical fields of image processing and medical auxiliary diagnosis.
Background
Eye redness analysis can judge the severity of ocular surface inflammation; it is one of the important indicators in dry eye detection and plays an important role in ophthalmic medical diagnosis. Traditionally, eye redness is assessed empirically by doctors, which suffers from large errors. Digital image processing methods have appeared, but current methods are not sufficiently adaptive or accurate in automatic judgment and cannot provide intelligent auxiliary diagnosis of a patient's degree of eye redness.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing an automatic non-contact eye redness analysis method that is highly adaptive and accurate in automatic judgment and can provide intelligent auxiliary diagnosis of a patient's degree of eye redness.
In order to solve the technical problems, the invention adopts the following technical scheme:
a non-contact eye red degree automatic analysis method comprises the following steps:
using visible light imaging equipment to shoot eyes aiming at the front face of the face, and carrying out human eye region detection and image preprocessing on the shot image to obtain a preprocessed image;
scaling the preprocessed image to obtain a scaling image, preprocessing the scaling image, detecting a sclera region by using a U-Net++ model to obtain a binary image of a sclera region of the image to be detected, and performing AND operation on the binary image and the scaling image to obtain a color image of the sclera region;
performing image smoothing and self-adaptive histogram equalization on the color image of the sclera region to enhance the image contrast, obtaining an image enhancement image, performing filtering treatment on the image enhancement image by using a B-COSFIE filter to obtain a filtering treatment image, converting the image enhancement image into an LAB color model to obtain an image mask, and performing AND operation on the filtering treatment image and the image mask to obtain a blood silk binary image;
and respectively calculating the number of non-zero pixel points in the color image of the sclera area and the blood silk binary image, and then calculating the ratio of the number of the non-zero pixel points of the blood silk binary image to the number of the non-zero pixel points of the color image of the sclera area to obtain the eye red duty ratio.
The eye region detection comprises the following steps:
when the captured image is examined, the trained classifier searches the image region by region from the upper-left corner, using a similarity criterion to judge whether each region contains a human eye;
if a human eye is found, its region is selected with a bounding box; if the box is at least 256×256, the next stage proceeds, otherwise the user is told the eye is too small and asked to re-shoot;
if no human eye is found, the user is told the input image is invalid and asked to re-shoot;
the eye classifier is trained as follows: using deep learning, a large number of eye and non-eye samples collected from the web are pre-trained, and the resulting deep learning model parameters constitute the eye classifier.
Preprocessing the captured image comprises the following steps:
based on the bounding-box result, the length and width of the eye region are each expanded 1.2-fold about the original box, so that the final region contains complete eye information including the eye corners; then, with the expanded length as reference and the horizontal centerline as baseline, the region is cropped to a 5:4 aspect ratio, so that every eye image meets the size requirement with a maximal effective area.
Preprocessing the scaled image comprises: if a color-threshold test judges the image to be reddish overall, the R channel of the RGB image is extracted and sclera segmentation proceeds on it; otherwise the image is enhanced by histogram equalization and sclera segmentation proceeds on the three-channel RGB color image.
The U-Net++ model is an improved U-Net++ model whose input is the scaled image and whose output is a single-channel binary image, both of shape 512×512; a normalization operation is added between each convolution of U-Net++ and its ReLU activation; between adjacent levels, a dropout operation is added, followed by an attention mechanism and then a pooling operation.
ANDing the binary image with the scaled image specifically means: for every pixel whose value is 0 in the binary image, the corresponding pixel of the scaled image is set to 0, so that only the sclera region of the image is retained, yielding the sclera-region image.
The eye redness ratio is calculated as:

$$\mathrm{Degree} = \frac{B_{\mathrm{pixels}}}{S_{\mathrm{pixels}}}$$

where Degree is the eye redness ratio, $B_{\mathrm{pixels}}$ is the number of non-zero pixels in the binary vessel image, and $S_{\mathrm{pixels}}$ is the number of non-zero pixels in the sclera-region image.
The eye redness ratio is multiplied by 100 and rounded down so that the final result lies in the range [0,100]; a grading scale then yields the corresponding eye redness level.
An automatic non-contact eye redness analysis device comprises:
an eye image acquisition and processing module that photographs the eyes from a frontal view of the face using a visible-light imaging device and performs eye region detection and image preprocessing on the captured image to obtain a preprocessed image;
a sclera region detection module that scales the preprocessed image to obtain a scaled image, preprocesses the scaled image, detects the sclera region with a U-Net++ model to obtain a binary image of the sclera region of the image under test, and ANDs the binary image with the scaled image to obtain a color image of the sclera region;
an eye redness region extraction module that applies image smoothing and adaptive histogram equalization to the sclera-region color image to enhance contrast and obtain an enhanced image, filters the enhanced image with a B-COSFIRE filter to obtain a filtered image, converts the enhanced image to the LAB color model to obtain an image mask, and ANDs the filtered image with the mask to obtain a binary vessel image;
an eye redness ratio calculation module that counts the non-zero pixels in the sclera-region color image and in the binary vessel image respectively, and computes the ratio of the non-zero pixel count of the binary vessel image to that of the sclera-region color image to obtain the eye redness ratio.
A computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the automatic non-contact eye redness analysis method.
The invention has the following beneficial effects: the invention provides an automatic non-contact eye redness analysis method that photographs the eyes from a frontal view of the face with a visible-light imaging device, so that non-contact eye image acquisition can be achieved with an ordinary smartphone camera or a digital camera, requiring no auxiliary light source, having a simple setup, and causing no discomfort; the method avoids errors in the redness display region caused by ocular congestion, gives accurate vessel positions and areas, provides accurate data, and is highly adaptive and accurate in automatic judgment.
Drawings
FIG. 1 is a flowchart of the automatic non-contact eye redness analysis method of the present invention.
FIG. 2 is a flowchart of eye image acquisition and processing according to the present invention.
FIG. 3 is a flowchart of sclera region detection according to the present invention.
FIG. 4 is a flowchart of eye redness region extraction according to the present invention.
FIG. 5 is a flowchart of eye redness ratio calculation and eye redness level judgment according to the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings. The following examples are only intended to illustrate the technical solution of the invention more clearly and are not to be construed as limiting its scope. As shown in fig. 1, the invention discloses an automatic non-contact eye redness analysis method comprising the following steps:
step one, obtaining and processing eye images.
A visible-light imaging device photographs the eyes from a frontal view of the face; the image must contain one complete eye. Because imaging devices differ in resolution, which affects the GPU memory consumed in training the sclera-detection neural network, the sclera detection result, and the algorithm's execution efficiency, eye region detection and image preprocessing must be performed first. The eyes are photographed from a frontal view with a visible-light imaging device, which can be an ordinary smartphone camera or a digital camera, provided the eye region images at no less than 256×256; no additional device or special ophthalmic instrument is needed.
For eye region detection, a large number of collected eye and non-eye samples are pre-trained to obtain model parameters, which constitute the eye classifier. When the captured image is examined, the trained classifier searches it region by region from the upper-left corner, judging with a similarity criterion whether each region contains a human eye. The region-by-region search divides the input image into many small rectangular regions and performs eye detection on each one. The similarity criterion means that, during this search, the eye classifier computes a similarity feature for the object detected in each small region; the result represents how closely the detected object resembles a human eye.
If a human eye is found, its region is selected with a bounding box; if the box is at least 256×256, image preprocessing proceeds, otherwise the user is told the eye is too small and asked to re-shoot. If no human eye is found, the user is told the input image is invalid and asked to re-shoot. Image preprocessing expands the length and width of the eye region 1.2-fold about the original box, based on the bounding-box result, so that the final region contains complete eye information such as the eye corners. With the expanded length as reference and the horizontal centerline as baseline, the region is cropped to a 5:4 aspect ratio, so that every eye image meets the size requirement with a maximal effective area.
The eye classifier is a deep learning model pre-trained on a large number of eye and non-eye image samples to distinguish eyes from non-eyes in an input image. Images detected as eyes have their eye regions selected with bounding boxes, and boxes that meet the size requirement proceed to the subsequent image preprocessing, sclera region detection, redness region extraction, and redness ratio calculation. For boxes that do not meet the requirement, and for images detected as non-eyes, the algorithm terminates with the corresponding prompt. Pre-training means training the model on a large-scale dataset before the target task, so that it learns generic features and representations. Bounding-box selection marks the region judged to be an eye with a labeled box, based on the detection result, and returns the image coordinates and dimensions of the boxed region. The box should contain complete eye information, such as the inner and outer eye corners, while including as little eye-irrelevant content as possible. The eye-region size requirement is that the boxed region be no smaller than 256×256, which facilitates the subsequent sclera region detection and redness region extraction. If the boxed eye region is at least 256×256, the input eye image meets the requirement and the subsequent steps can continue; otherwise the user is told the eye image is too small and asked to re-shoot.
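As an illustration of the size gate and crop geometry just described, the following Python sketch assumes the bounding box (x, y, w, h) has already been produced by a hypothetical eye classifier; the function name and the exact clamping behavior at the image border are illustrative assumptions, not part of the patent.

```python
import numpy as np

MIN_EYE_SIZE = 256  # minimum acceptable eye-region side length

def check_and_crop_eye(image: np.ndarray, eye_box: tuple) -> np.ndarray:
    """Gate the detected eye box, expand it 1.2x, and crop to 5:4.

    eye_box is (x, y, w, h) as returned by a hypothetical eye classifier.
    """
    x, y, w, h = eye_box
    if w < MIN_EYE_SIZE or h < MIN_EYE_SIZE:
        raise ValueError("eye region too small, please re-shoot")
    # Expand length and width 1.2-fold about the box centre so the
    # crop keeps the inner and outer eye corners.
    cx, cy = x + w / 2.0, y + h / 2.0
    new_w = 1.2 * w
    # Crop to a 5:4 aspect ratio, with the expanded length as the
    # reference and the horizontal centreline as the baseline.
    new_h = new_w * 4.0 / 5.0
    img_h, img_w = image.shape[:2]
    x0 = max(int(cx - new_w / 2), 0)
    y0 = max(int(cy - new_h / 2), 0)
    x1 = min(int(cx + new_w / 2), img_w)
    y1 = min(int(cy + new_h / 2), img_h)
    return image[y0:y1, x0:x1]
```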
Step two, detecting scleral region.
The actual eye image obtained by cropping the input image is scaled to 512×512 and, after preprocessing, the improved U-Net++ model performs sclera region detection. The detection result is a binary image of the sclera region of the image under test; ANDing it with the scaled image yields the sclera-region color image.
Image scaling, after the eye region has been cropped and preprocessed, applies super-resolution when the image side length is smaller than 512×512 and downsampling when it is larger, to reduce the loss of image detail caused by scaling. Super-resolution raises the resolution of a target image whose actual resolution is below the required resolution by interpolation, improving its detail and quality. Downsampling averages groups of pixels in a target image whose actual resolution is above the required resolution, reducing its spatial resolution.
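A minimal sketch of this scaling rule, assuming OpenCV: bicubic interpolation stands in for the interpolation-based super-resolution, and area averaging for the downsampling.

```python
import cv2
import numpy as np

def scale_to_512(eye_image: np.ndarray) -> np.ndarray:
    """Scale the cropped eye image to 512x512.

    Upsampling uses bicubic interpolation (the interpolation-based
    super-resolution described above); downsampling uses area
    averaging, which averages groups of source pixels.
    """
    h, w = eye_image.shape[:2]
    interp = cv2.INTER_CUBIC if max(h, w) < 512 else cv2.INTER_AREA
    return cv2.resize(eye_image, (512, 512), interpolation=interp)
```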
Image preprocessing: if a color-threshold test judges the image to be reddish overall, the R channel of the RGB image is extracted and sclera segmentation proceeds on it; otherwise the image is enhanced by histogram equalization and sclera segmentation proceeds on the three-channel RGB color image.
The color-threshold test computes the mean of the red channel to decide whether the red channel should be extracted before sclera segmentation. When the mean exceeds a set threshold, the single red channel of the image is extracted; otherwise it is not.
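The colour-threshold test might look as follows; the threshold value is an assumption, since the patent does not state one, and OpenCV's BGR channel order is used.

```python
import cv2
import numpy as np

RED_MEAN_THRESHOLD = 150  # assumed value; the patent gives no number

def preprocess_scaled(scaled_bgr: np.ndarray) -> np.ndarray:
    """Red-channel extraction or histogram equalization, as described."""
    red_mean = float(scaled_bgr[:, :, 2].mean())  # channel 2 is R in BGR
    if red_mean > RED_MEAN_THRESHOLD:
        return scaled_bgr[:, :, 2]  # overall reddish: keep R channel only
    # Otherwise equalize each channel and keep the 3-channel colour image.
    channels = [cv2.equalizeHist(c) for c in cv2.split(scaled_bgr)]
    return cv2.merge(channels)
```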
Histogram equalization counts the frequency of each pixel value in the image to obtain its histogram, normalizes the histogram, and then computes its cumulative distribution function (CDF), which represents the cumulative probability of each pixel value; from the CDF, the mapping of each original pixel value into the new histogram is computed, and replacing each original value with its mapped value yields the equalized image. The CDF is expressed as:

$$\mathrm{CDF}(x) = \sum_{i=0}^{x} P(i)$$

where $x$ is a pixel value and $P(i)$ is the normalized frequency of pixel value $i$ (the $N(i)$ defined below).
Histogram normalization scales the frequency values of the histogram so that they lie between 0 and 1. Here each frequency value is divided by the total pixel count, ensuring that the normalized values sum to 1:

$$N(i) = \frac{H(i)}{N_{\mathrm{pixels}}}$$

where $N(i)$ is the normalized frequency value, $H(i)$ is the frequency of pixel value $i$ in the original histogram, and $N_{\mathrm{pixels}}$ is the total number of pixels. The normalized frequency value represents the relative probability that pixel value $i$ occurs.
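The two formulas above combine into a direct NumPy implementation; the 255 scale factor is the assumed output range for 8-bit images.

```python
import numpy as np

def equalize_histogram(gray: np.ndarray) -> np.ndarray:
    """Histogram equalization via the normalized histogram and its CDF."""
    hist = np.bincount(gray.ravel(), minlength=256)  # H(i)
    norm = hist / gray.size                          # N(i) = H(i) / N_pixels
    cdf = np.cumsum(norm)                            # CDF(x) = sum_{i<=x} P(i)
    mapping = np.round(255 * cdf).astype(np.uint8)   # old value -> new value
    return mapping[gray]                             # replace pixel values
```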
The three RGB channels are the red (R), green (G), and blue (B) color channels of an image. Each pixel consists of the values of these three channels, representing the intensity or brightness of red, green, and blue respectively; each channel value is typically an integer between 0 and 255, and the higher the value, the larger that color component and the brighter the corresponding single channel.
The U-Net++ model is a deep learning model for image segmentation tasks. It extends and improves the classic U-Net by introducing nested, densely connected skip pathways, enhancing the model's expressive capacity and segmentation performance, and is widely used in medical image segmentation. The improved U-Net++ model in the invention mainly adjusts the input and output parts of the original U-Net++ network, the per-layer convolution operations, and part of the structure between adjacent levels. For input and output, the input image is the image produced by the preprocessing step, the output is a single-channel binary image, and both have shape 512×512. A normalization operation is added between each convolution of U-Net++ and its ReLU activation. Between adjacent levels, a dropout operation is added, followed by an attention mechanism and then a pooling operation. With these changes, accuracy on the test data improves by 2%-3%.
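The two structural changes can be sketched in PyTorch under stated assumptions: BatchNorm is taken as the unspecified "normalization operation", a simple SE-style channel-attention block stands in for the unspecified attention mechanism, and the dropout rate is also an assumption.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Conv -> BatchNorm -> ReLU: a normalization operation inserted
    between each convolution and its ReLU activation."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class ChannelAttention(nn.Module):
    """SE-style channel attention: an assumed stand-in for the
    unspecified attention mechanism."""
    def __init__(self, ch: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)

class DownTransition(nn.Module):
    """Between adjacent levels: dropout, then attention, then pooling."""
    def __init__(self, ch: int, p: float = 0.2):
        super().__init__()
        self.drop = nn.Dropout2d(p)  # dropout rate is an assumed value
        self.attn = ChannelAttention(ch)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        return self.pool(self.attn(self.drop(x)))
```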
A single-channel binary image contains only one (grayscale) channel, and each pixel takes one of two possible values: black, usually represented by gray value 0, and white, represented by gray value 255 (or 1).
The ReLU activation function (Rectified Linear Unit) is a commonly used activation function in neural networks; mathematically it is a ramp function.
Dropout is a strategy widely used in deep learning to counter model overfitting; by mitigating co-adaptation it makes training of wider networks feasible.
Co-adaptation refers to the phenomenon in which some nodes of a network have stronger representational capacity than others; as training proceeds, the stronger nodes are continually reinforced while the weaker ones weaken until their contribution to the network is negligible. Only part of the network is then effectively trained, the network's width and depth are wasted, and the model's performance is limited.
The attention mechanism is a structure embedded in a machine learning model that automatically learns and weighs the contribution of the input data to the output. By introducing attention, a neural network can learn to focus selectively on the important parts of its input, improving performance and generalization.
The pooling operation mimics the human visual system to reduce the dimensionality of the data, extracting the key information of a region.
Model detection applies the trained improved U-Net++ model to the input image. The input image must have passed the eye classifier of step one and, after the eye image is cropped, be resized to 512×512; with this as model input, the model can accurately detect the sclera region of the image and produce a binary image as the detection result.
ANDing the binary image with the scaled image sets, for every pixel whose value is 0 (the black part) in the binary image, the corresponding pixel of the scaled image to 0, so that only the sclera region of the image is retained, yielding the sclera-region image.
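With OpenCV, this AND operation reduces to a masked bitwise AND; a minimal sketch:

```python
import cv2
import numpy as np

def apply_sclera_mask(scaled_bgr: np.ndarray, sclera_mask: np.ndarray) -> np.ndarray:
    """Keep only sclera pixels: wherever the binary mask is 0, the
    corresponding pixel of the scaled colour image becomes 0."""
    return cv2.bitwise_and(scaled_bgr, scaled_bgr, mask=sclera_mask)
```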
Step three, eye redness region extraction.
The sclera-region color image obtained above is the input image. It is smoothed and contrast-enhanced with adaptive histogram equalization, and vessels are extracted from the processed image with a B-COSFIRE filter. A mask image is also derived from the enhanced color image: its RGB color model is converted to the LAB color model, a threshold is set on the L (lightness) component, and a mask is generated in which low-brightness regions (the black/gray background) are set to 0 and everything else to 1; this effectively prevents the algorithm from recognizing the boundary between black and color as vessels. Applying the B-COSFIRE filter to the processed image then yields the binary vessel image as the extraction result.
Image smoothing convolves the image with a Gaussian kernel (Gaussian filtering) to achieve a smoothing effect, removing high-frequency noise while preserving the overall details and edges of the image.
Adaptive histogram equalization partitions the original image into uniform tiles, or partitions it according to a specific algorithm, performs histogram equalization within each tile, and simultaneously corrects brightness according to the histogram distributions of neighboring tiles, finally recombining all tiles into the enhanced image. The equalized image improves local contrast while avoiding over-amplification of noise in relatively uniform regions.
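A sketch of the smoothing and contrast-enhancement stage, assuming OpenCV's CLAHE as the adaptive histogram equalization; the Gaussian kernel size, clip limit, and tile grid are assumed values.

```python
import cv2
import numpy as np

def enhance_contrast(sclera_bgr: np.ndarray) -> np.ndarray:
    """Gaussian smoothing followed by CLAHE on the lightness channel,
    applied tile by tile with clipping so noise in uniform regions
    is not over-amplified."""
    smoothed = cv2.GaussianBlur(sclera_bgr, (5, 5), 0)
    lab = cv2.cvtColor(smoothed, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```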
The RGB color model describes colors as combinations of three basic color channels, red (R), green (G), and blue (B); each channel value typically lies between 0 and 255, where 0 represents minimum and 255 maximum brightness.
The LAB color model describes colors with a lightness channel (L) and two color-opponent channels (A and B): the A channel represents the green-red opponency and the B channel the blue-yellow opponency.
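The LAB-based image mask described above can be sketched as follows; the brightness threshold is an assumed value, as the patent does not specify one.

```python
import cv2
import numpy as np

L_THRESHOLD = 30  # assumed brightness threshold

def luminance_mask(enhanced_bgr: np.ndarray) -> np.ndarray:
    """Build the image mask from the LAB L (lightness) channel:
    0 for low-brightness (black/grey background) pixels, 1 elsewhere,
    so black-to-colour boundaries are not mistaken for vessels."""
    lab = cv2.cvtColor(enhanced_bgr, cv2.COLOR_BGR2LAB)
    return (lab[:, :, 0] > L_THRESHOLD).astype(np.uint8)
```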
B-COSFIRE is short for Bar-selective Combination Of Shifted FIlter REsponses. The B-COSFIRE filter achieves orientation selectivity by computing the weighted geometric mean of the outputs of a set of difference-of-Gaussians filters whose support regions are aligned along a line. Segmentation of the ocular vessels can be achieved by summing the responses of two rotation-invariant B-COSFIRE filters and thresholding.
Rotation invariance means that an algorithm produces the same recognition or analysis result for an object at any rotation angle; the algorithm recognizes or processes the object correctly regardless of its rotation.
The B-COSFIRE filter response function is:

$$r(x, y) = \left( \prod_{i=1}^{n} s_{\rho_i,\phi_i}(x, y)^{\,\omega_i} \right)^{1 / \sum_{i} \omega_i}, \qquad \omega_i = \exp\!\left(-\frac{\rho_i^2}{2\sigma^2}\right)$$

where $s_{\rho_i,\phi_i}(x, y)$ is the blurred difference-of-Gaussians response shifted from the polar offset $(\rho_i, \phi_i)$ to the filter center, and $\sigma$ is the standard deviation of the Gaussian function that determines the response range of the weights.
Step four, calculating the eye redness ratio and judging the eye redness level.
From the binary vessel image and the sclera-region image obtained in the preceding steps, the numbers of non-zero pixels in each are counted and their ratio computed; this ratio is the eye redness ratio. The ratio is multiplied by 100 and rounded down, so the final result lies in [0,100]:

$$\mathrm{Degree} = \left\lfloor 100 \cdot \frac{B_{\mathrm{pixels}}}{S_{\mathrm{pixels}}} \right\rfloor$$

where Degree is the eye redness degree, $B_{\mathrm{pixels}}$ is the number of non-zero pixels in the binary vessel image, and $S_{\mathrm{pixels}}$ is the number of non-zero pixels in the sclera-region image. Finally, the grading scale of Table 1 yields the corresponding eye redness level.
Table 1. Eye redness levels

Degree: 81-100 | 61-80 | 41-60 | 21-40 | 0-20
Grade:    A    |   B   |   C   |   D   |  E
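Putting the formula and Table 1 together, a minimal sketch of the final scoring step (function and variable names are illustrative):

```python
import numpy as np

GRADE_BANDS = [(81, "A"), (61, "B"), (41, "C"), (21, "D"), (0, "E")]

def eye_redness(vessel_binary: np.ndarray, sclera_color: np.ndarray):
    """Degree = floor(100 * B_pixels / S_pixels), then Table 1 lookup."""
    b_pixels = int(np.count_nonzero(vessel_binary))
    # A sclera pixel is non-zero if any of its colour channels is non-zero.
    s_pixels = int(np.count_nonzero(sclera_color.any(axis=2)))
    degree = min(100 * b_pixels // s_pixels, 100)  # floor division rounds down
    grade = next(g for lo, g in GRADE_BANDS if degree >= lo)
    return degree, grade
```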
The invention also discloses an automatic non-contact eye redness analysis device, comprising:
an eye image acquisition and processing module that photographs the eyes from a frontal view of the face using a visible-light imaging device and performs eye region detection and image preprocessing on the captured image to obtain a preprocessed image;
a sclera region detection module that scales the preprocessed image to obtain a scaled image, preprocesses the scaled image, detects the sclera region with a U-Net++ model to obtain a binary image of the sclera region of the image under test, and ANDs the binary image with the scaled image to obtain a color image of the sclera region;
an eye redness region extraction module that applies image smoothing and adaptive histogram equalization to the sclera-region color image to enhance contrast and obtain an enhanced image, filters the enhanced image with a B-COSFIRE filter to obtain a filtered image, converts the enhanced image to the LAB color model to obtain an image mask, and ANDs the filtered image with the mask to obtain a binary vessel image;
an eye redness ratio calculation module that counts the non-zero pixels in the sclera-region color image and in the binary vessel image respectively, and computes the ratio of the non-zero pixel count of the binary vessel image to that of the sclera-region color image to obtain the eye redness ratio.
A computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the automatic non-contact eye redness analysis method of the invention.
The foregoing is only a preferred embodiment of the invention. It should be noted that various modifications and adaptations can be made by those skilled in the art without departing from the principles of the present invention, and such modifications and adaptations are intended to fall within the scope of the invention.

Claims (10)

1. An automatic non-contact eye redness analysis method, characterized in that it comprises the following steps:
photographing the eyes from a frontal view of the face using a visible-light imaging device, and performing eye region detection and image preprocessing on the captured image to obtain a preprocessed image;
scaling the preprocessed image to obtain a scaled image, preprocessing the scaled image, then detecting the sclera region with a U-Net++ model to obtain a binary image of the sclera region of the image under test, and ANDing the binary image with the scaled image to obtain a color image of the sclera region;
applying image smoothing and adaptive histogram equalization to the sclera-region color image to enhance contrast and obtain an enhanced image, filtering the enhanced image with a B-COSFIRE filter to obtain a filtered image, converting the enhanced image to the LAB color model to obtain an image mask, and ANDing the filtered image with the mask to obtain a binary vessel image;
counting the non-zero pixels in the sclera-region color image and in the binary vessel image respectively, and computing the ratio of the non-zero pixel count of the binary vessel image to that of the sclera-region color image to obtain the eye redness ratio.
2. The automatic non-contact eye redness analysis method according to claim 1, characterized in that: the visible-light imaging device includes a smartphone or a camera, and the eye region must image at no less than 256×256.
3. The automatic non-contact eye redness analysis method according to claim 1, characterized in that the eye region detection comprises the following steps:
when the captured image is examined, the trained classifier searches the image region by region from the upper-left corner, using a similarity criterion to judge whether each region contains a human eye;
if a human eye is found, its region is selected with a bounding box; if the box is at least 256×256, the next stage proceeds, otherwise the user is told the eye is too small and asked to re-shoot;
if no human eye is found, the user is told the input image is invalid and asked to re-shoot;
the eye classifier is trained as follows: using deep learning, a large number of eye and non-eye samples collected from the web are pre-trained, and the resulting deep learning model parameters constitute the eye classifier.
4. The automatic non-contact eye redness analysis method according to claim 3, characterized in that preprocessing the captured image comprises the following steps:
based on the bounding-box result, the length and width of the eye region are each expanded 1.2-fold about the original box, so that the final region contains complete eye information including the eye corners; then, with the expanded length as reference and the horizontal centerline as baseline, the region is cropped to a 5:4 aspect ratio, so that every eye image meets the size requirement with a maximal effective area.
5. The automatic non-contact eye redness analysis method according to claim 1, characterized in that preprocessing the scaled image comprises: if a color-threshold test judges the image to be reddish overall, the R channel of the RGB image is extracted and sclera segmentation proceeds on it; otherwise the image is enhanced by histogram equalization and sclera segmentation proceeds on the three-channel RGB color image.
6. The automatic non-contact eye redness analysis method according to claim 1, characterized in that: the U-Net++ model is an improved U-Net++ model whose input is the scaled image and whose output is a single-channel binary image, both of shape 512×512; a normalization operation is added between each convolution of U-Net++ and its ReLU activation; between adjacent levels, a dropout operation is added, followed by an attention mechanism and then a pooling operation.
7. The automatic non-contact eye redness analysis method according to claim 1, characterized in that ANDing the binary image with the scaled image specifically means: for every pixel whose value is 0 in the binary image, the corresponding pixel of the scaled image is set to 0, so that only the sclera region of the image is retained, yielding the sclera-region image.
8. The automatic non-contact eye redness analysis method according to claim 1, characterized in that the eye redness ratio is calculated as:

$$\mathrm{Degree} = \frac{B_{\mathrm{pixels}}}{S_{\mathrm{pixels}}}$$

where Degree is the eye redness ratio, $B_{\mathrm{pixels}}$ is the number of non-zero pixels in the binary vessel image, and $S_{\mathrm{pixels}}$ is the number of non-zero pixels in the sclera-region image.
9. The automatic non-contact eye redness analysis method according to claim 8, characterized in that: the eye redness ratio is multiplied by 100 and rounded down so that the final result lies in the range [0,100], and a grading scale finally yields the corresponding eye redness level.
10. An automatic non-contact eye redness analysis device, characterized in that it comprises:
an eye image acquisition and processing module that photographs the eyes from a frontal view of the face using a visible-light imaging device and performs eye region detection and image preprocessing on the captured image to obtain a preprocessed image;
a sclera region detection module that scales the preprocessed image to obtain a scaled image, preprocesses the scaled image, detects the sclera region with a U-Net++ model to obtain a binary image of the sclera region of the image under test, and ANDs the binary image with the scaled image to obtain a color image of the sclera region;
an eye redness region extraction module that applies image smoothing and adaptive histogram equalization to the sclera-region color image to enhance contrast and obtain an enhanced image, filters the enhanced image with a B-COSFIRE filter to obtain a filtered image, converts the enhanced image to the LAB color model to obtain an image mask, and ANDs the filtered image with the mask to obtain a binary vessel image;
an eye redness ratio calculation module that counts the non-zero pixels in the sclera-region color image and in the binary vessel image respectively, and computes the ratio of the non-zero pixel count of the binary vessel image to that of the sclera-region color image to obtain the eye redness ratio.
CN202311115420.XA 2023-08-30 2023-08-30 Automatic non-contact eye redness analysis method Pending CN117197064A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311115420.XA CN117197064A (en) 2023-08-30 2023-08-30 Automatic non-contact eye redness analysis method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311115420.XA CN117197064A (en) 2023-08-30 2023-08-30 Automatic non-contact eye redness analysis method

Publications (1)

Publication Number Publication Date
CN117197064A true CN117197064A (en) 2023-12-08

Family

ID=89002774

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311115420.XA Pending CN117197064A (en) 2023-08-30 2023-08-30 Automatic non-contact eye redness analysis method

Country Status (1)

Country Link
CN (1) CN117197064A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117649347A (en) * 2024-01-30 2024-03-05 宁乡爱尔眼科医院有限公司 Remote eye examination method and system based on ultra-wide-angle fundus imaging
CN117649347B (en) * 2024-01-30 2024-04-19 宁乡爱尔眼科医院有限公司 Remote eye examination method and system based on ultra-wide-angle fundus imaging


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination