WO2021120753A1 - Method and apparatus for recognizing a lumen region in choroidal blood vessels, device and medium
- Publication number: WO2021120753A1 (application PCT/CN2020/116743)
- Authority: WIPO (PCT)
- Prior art keywords: fundus, image, lumen, choroid, region
- Prior art date
Classifications
- All classifications below fall under G (Physics), G06 (Computing; Calculating or Counting), G06T (Image data processing or generation, in general):
- G06T7/0012—Biomedical image inspection
- G06T7/12—Edge-based segmentation
- G06T7/136—Segmentation; Edge detection involving thresholding
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
- G06T2207/10101—Optical tomography; Optical coherence tomography [OCT]
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30041—Eye; Retina; Ophthalmic
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular
Definitions
- This application relates to the field of artificial intelligence image processing, and in particular to a method, device, equipment and medium for identifying the lumen region of choroidal blood vessels.
- The fundus choroid is located between the retina and the sclera. It is a soft, smooth, elastic, blood-vessel-rich brown membrane that begins at the ora serrata (serrated edge) in front and ends around the optic nerve behind. Its inner surface is joined to the retinal pigment epithelial layer by a very smooth glassy membrane, while its outer surface is connected to the sclera through a potential gap. The microfibrils of the perichoroidal layer stretch into and mix with the brown plate of the sclera, and blood vessels and nerves pass through it.
- the choroid is mainly composed of blood vessels, which provide oxygen and blood to the retina.
- The inventor has realized that, in the field of medicine, doctors often need to rely on experience to manually identify the lumen region of the fundus choroidal blood vessels in collected fundus photographs in order to determine the characteristics of the fundus choroid, and then perform other medical treatments based on the identified characteristics. Because manual recognition places high demands on the doctor's experience, and because of objective factors such as the low resolution of acquisition equipment and light ghosting, manual recognition of the characteristics of the fundus choroidal vessel lumen region is biased and its accuracy is low.
- This application provides a method, device, computer equipment, and storage medium for identifying the lumen region of choroidal blood vessels, which realizes automatic identification of the lumen region of the fundus choroidal blood vessels in fundus images.
- This application is suitable for fields such as smart transportation or smart medical care. It can further promote the construction of smart cities, reduce the cost of manual identification, and improve the accuracy and reliability of identification.
- A method for identifying the lumen region of choroidal blood vessels, including:
- receiving a fundus lumen identification request, and obtaining the fundus image to be identified in the fundus lumen identification request;
- inputting the fundus image to be identified into a U-Net-based fundus segmentation model, and performing choroid feature extraction and edge segmentation on the fundus image to be identified through the fundus segmentation model to obtain a fundus segmented image;
- performing image recognition on the fundus segmented image through a fundus fovea recognition model, identifying the fovea region in the fundus segmented image, and cutting out a first fundus choroid image from the fundus segmented image according to the fovea region;
- binarizing the first fundus choroid image by the Niblack local threshold algorithm to obtain a first choroid binary image, and extracting a first lumen region from the first choroid binary image;
- identifying, according to the first lumen region, a first lumen region image containing the lumen region of the fundus choroidal blood vessels from the fundus image to be identified.
- a device for identifying the lumen region of choroidal blood vessels comprising:
- the receiving module is configured to receive the fundus lumen identification request, and obtain the fundus image to be identified in the fundus lumen identification request;
- An input module for inputting the fundus image to be recognized into a U-Net-based fundus segmentation model, and performing choroid feature extraction and edge segmentation on the fundus image to be recognized through the fundus segmentation model to obtain a fundus segmentation image;
- the interception module is used to perform image recognition on the fundus segmented image through the fundus fovea recognition model, identify the fovea region in the fundus segmented image, and cut out a first fundus choroid image from the fundus segmented image according to the fovea region;
- the binary module is used to binarize the first fundus choroid image by the Niblack local threshold algorithm to obtain a first choroid binary image, and to extract a first lumen region from the first choroid binary image;
- the recognition module is used to recognize the first lumen area image containing the lumen area of the fundus choroidal blood vessels from the fundus image to be identified according to the first lumen area.
- A computer device includes a memory, a processor, and computer-readable instructions that are stored in the memory and executable on the processor; when the processor executes the computer-readable instructions, it implements the steps of the above method, from receiving the fundus lumen identification request through identifying the first lumen region image containing the lumen region of the fundus choroidal blood vessels from the fundus image to be identified.
- One or more readable storage media storing computer-readable instructions are provided; when the computer-readable instructions are executed by one or more processors, the one or more processors execute the steps of the above method, from receiving the fundus lumen identification request through identifying the first lumen region image containing the lumen region of the fundus choroidal blood vessels from the fundus image to be identified.
- The method, device, computer equipment, and storage medium for identifying the lumen region of choroidal blood vessels receive the fundus lumen identification request and obtain the fundus image to be identified from it; input the fundus image into the U-Net-based fundus segmentation model, which performs choroid feature extraction and edge segmentation to obtain a fundus segmented image; perform image recognition on the fundus segmented image through a fundus fovea recognition model, identify the fovea region, and extract a first fundus choroid image from the fundus segmented image according to the fovea region; binarize the first fundus choroid image with the Niblack local threshold algorithm to obtain a first choroid binary image and extract a first lumen region from it; and, according to the first lumen region, recognize the first lumen region image containing the lumen region of the fundus choroidal blood vessels from the fundus image to be identified. Automatic recognition of the lumen region of the fundus choroidal blood vessels in fundus images is thereby realized: the U-Net-based fundus segmentation model, the fundus fovea recognition model, and the Niblack local threshold algorithm can identify the lumen region quickly and accurately so as to determine the characteristics of the fundus choroid, which reduces the cost of manual identification and improves the accuracy and reliability of recognition.
- FIG. 1 is a schematic diagram of the application environment of the method for identifying the lumen region of choroidal blood vessels in an embodiment of the present application
- FIG. 2 is a flowchart of a method for identifying the lumen region of choroidal blood vessels in an embodiment of the present application
- FIG. 3 is a flowchart of a method for identifying the lumen region of choroidal blood vessels in another embodiment of the present application
- FIG. 4 is a flowchart of step S20 of the method for identifying the lumen region of choroidal blood vessels in an embodiment of the present application
- FIG. 5 is a flowchart of step S203 of the method for identifying the lumen region of choroidal blood vessels in an embodiment of the present application
- FIG. 6 is a flowchart of step S30 of the method for identifying the lumen region of choroidal blood vessels in an embodiment of the present application
- FIG. 7 is a flowchart of step S70 of the method for identifying the lumen region of the choroidal blood vessel in an embodiment of the present application
- Fig. 8 is a schematic block diagram of a device for recognizing the lumen region of choroidal blood vessels in an embodiment of the present application
- Fig. 9 is a schematic diagram of a computer device in an embodiment of the present application.
- the method for identifying the lumen region of choroidal blood vessels can be applied in the application environment as shown in FIG. 1, in which the client (computer equipment) communicates with the server through the network.
- the client includes, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, cameras, and portable wearable devices.
- the server can be implemented as an independent server or a server cluster composed of multiple servers.
- a method for identifying the lumen region of choroidal blood vessels is provided, and the technical solution mainly includes the following steps S10-S50:
- S10: A fundus lumen identification request is received, and the fundus image to be identified in the fundus lumen identification request is acquired.
- The fundus image to be identified is an OCT (optical coherence tomography) scan of the fundus collected by an OCT device; scans collected in the enhanced mode of the OCT device capture more morphological features of the fundus choroid.
- When the fundus lumen recognition request is triggered, it carries the fundus image to be recognized, that is, the captured OCT scan of the fundus in which the lumen region of the fundus choroidal blood vessels needs to be recognized.
- The trigger mode can be set according to requirements, for example automatic triggering once the fundus image to be recognized has been collected, or manual triggering by clicking an OK button after all fundus images have been collected.
- the fundus image to be recognized is a multi-channel color fundus photograph or a black-and-white fundus photograph.
- In an embodiment, acquiring the fundus image to be recognized in the fundus lumen recognition request includes preprocessing the collected OCT scan of the fundus (filter denoising and/or image enhancement, such as Gaussian filter denoising, gamma-transform correction, or Laplacian correction) and taking the preprocessed OCT scan as the fundus image to be recognized; in this way, the image better reflects the blood vessel information of the fundus choroid.
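- As an illustration, a minimal sketch of such a preprocessing step, assuming OpenCV and an 8-bit grayscale OCT scan; the kernel size, gamma value, and Laplacian sharpening weight below are illustrative choices rather than values specified by the present application:

```python
import cv2
import numpy as np

def preprocess_fundus_oct(image: np.ndarray, gamma: float = 0.8) -> np.ndarray:
    """Denoise and enhance an 8-bit fundus OCT scan before recognition."""
    # Gaussian filter denoising
    denoised = cv2.GaussianBlur(image, (5, 5), sigmaX=1.0)

    # Gamma-transform correction via a lookup table: I -> 255 * (I / 255) ** gamma
    lut = np.array([255.0 * (i / 255.0) ** gamma for i in range(256)], dtype=np.uint8)
    corrected = cv2.LUT(denoised, lut)

    # Laplacian-based sharpening: subtract a fraction of the Laplacian response
    lap = cv2.Laplacian(corrected, cv2.CV_16S, ksize=3)
    sharpened = np.clip(corrected.astype(np.int16) - (0.3 * lap).astype(np.int16), 0, 255)
    return sharpened.astype(np.uint8)
```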
- S20 Input the fundus image to be recognized into a U-Net-based fundus segmentation model, and perform choroid feature extraction and edge segmentation on the fundus image to be recognized through the fundus segmentation model to obtain a fundus segmentation image.
- The fundus segmentation model is a trained convolutional neural network model based on the U-Net model; that is, the network structure of the fundus segmentation model is an improvement built on the network structure of the U-Net model. The U-Net model is well suited to image segmentation and can be trained end to end with a relatively small training set. The fundus segmentation model performs choroid feature extraction on the fundus image to be recognized, where the choroid features are the texture and shape information of the choroid layer and its surroundings in the fundus choroid. Extracting the choroid features means using successive convolution and pooling down-sampling layers to extract the feature information in the fundus image to be recognized and gradually map it to higher dimensions, so as to obtain the feature vector array of the highest dimension and richest feature information corresponding to the fundus image to be recognized, that is, a high-dimensional feature map. The edge segmentation process is as follows: the high-dimensional feature map is repeatedly deconvolved and up-sampled by the up-sampling layers back to the size of the fundus image to be identified, yielding the fundus segmented image.
- In an embodiment, before step S20, that is, before inputting the fundus image to be recognized into the U-Net-based fundus segmentation model, the method includes:
- S201 Obtain a fundus image sample; the fundus image sample is associated with an edge line label and an area label.
- The fundus image sample is a collected historical OCT scan containing the fundus choroid layer, or such a scan after preprocessing. Each fundus image sample is associated with one edge line label and one area label: the edge line label is a set of manually annotated coordinate positions of the points on the upper edge line and the lower edge line of the fundus choroid layer contained in the sample, and the area label is a set of manually annotated coordinate positions covering the region of the fundus choroid layer contained in the sample.
- S202 Input the fundus image sample into a U-Net-based convolutional neural network model containing initial parameters.
- The fundus image sample is input into the convolutional neural network model, which is constructed on the basis of the U-Net model and contains initial parameters; the initial parameters include the network structure of the U-Net model and can be obtained through transfer learning (Transfer Learning, TL), that is, by reusing parameters learned on a related task as the starting point for training.
- The choroid features in the fundus image sample are extracted through the convolutional neural network model. The convolutional neural network model includes at least four down-sampling layers, each of which consists of a convolutional layer and a pooling layer; a down-sampling layer is a level of multi-dimensional extraction of the choroid features through different convolution kernels and pooling parameters. The convolution kernels of the convolutional layers differ between down-sampling layers, as do the pooling parameters of their pooling layers, and each down-sampling layer outputs a choroid feature map corresponding to that layer.
- The choroid feature map output by the last down-sampling layer is convolved to obtain the high-dimensional feature map, that is, the feature vector array of the highest dimension and richest feature information corresponding to the image; the choroid feature maps corresponding to each down-sampling layer and the high-dimensional feature map are together determined as the fundus choroid feature map corresponding to the fundus image sample.
- The convolutional neural network model includes up-sampling layers corresponding to the down-sampling layers, that is, the number of down-sampling layers equals the number of up-sampling layers. The fundus choroid feature map is subjected to continuous deconvolution (up-sampling) processing until the fundus output image is produced; each up-sampling layer outputs a fundus feature map, the fundus feature maps so produced are determined as the fundus feature vector map, and the fundus feature maps output by two consecutive up-sampling layers are fused to obtain a fused feature vector map.
- The region recognition result includes the recognition probability value corresponding to each pixel in the fundus output image; the pixels whose recognition probability values exceed a preset probability threshold are marked in the fundus output image to obtain the recognition region, and the recognition region together with all the recognition probability values is determined as the region recognition result.
- In an embodiment, extracting the choroid features in the fundus image sample through the convolutional neural network model to obtain the region recognition result, the fundus feature vector map, and the fused feature vector map output by the model according to the choroid features includes:
- S2031 Extract the choroidal feature from the fundus image sample by using the convolutional neural network model to obtain a fundus choroidal feature map.
- In detail, the convolutional neural network model includes a first down-sampling layer, a second down-sampling layer, a third down-sampling layer, and a fourth down-sampling layer, each of which includes two convolutional layers with 3×3 convolution kernels, two activation layers, and a pooling layer with a 2×2 maximum-pooling parameter.
- The fundus image sample is input into the first down-sampling layer and convolved to obtain a 64-dimensional first choroid feature map; the first choroid feature map is input into the second down-sampling layer and convolved to obtain a 128-dimensional second choroid feature map; the second choroid feature map is input into the third down-sampling layer and convolved to obtain a 256-dimensional third choroid feature map; and the third choroid feature map is input into the fourth down-sampling layer and convolved to obtain a 512-dimensional fourth choroid feature map.
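- For illustration only, a sketch of these four down-sampling layers; PyTorch is assumed, and the padding, ReLU activations, and channel bookkeeping are implementation choices not fixed by the present application:

```python
import torch
import torch.nn as nn

class DownBlock(nn.Module):
    """One down-sampling layer: two 3x3 convolutions, each followed by an
    activation, then 2x2 max pooling."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(kernel_size=2)

    def forward(self, x):
        feat = self.convs(x)          # the choroid feature map of this layer
        return feat, self.pool(feat)  # pooled output feeds the next layer

class ChoroidEncoder(nn.Module):
    """Four down-sampling layers producing 64/128/256/512-dimensional maps."""
    def __init__(self, in_ch: int = 1):
        super().__init__()
        self.blocks = nn.ModuleList([
            DownBlock(in_ch, 64), DownBlock(64, 128),
            DownBlock(128, 256), DownBlock(256, 512),
        ])

    def forward(self, x):
        skips = []
        for block in self.blocks:
            feat, x = block(x)
            skips.append(feat)  # first..fourth choroid feature maps, kept for splicing
        return skips, x         # x is later convolved into the high-dimensional feature map
```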
- S2032 Up-sampling and splicing the fundus choroid feature map through the convolutional neural network model to obtain the fundus feature vector map.
- Up-sampling and splicing means deconvolving the fundus choroid feature map to generate an intermediate fundus feature map with the same dimension as the adjacent choroid feature map, and splicing it with that choroid feature map. Because splicing doubles the dimension, the spliced result must be convolved again to obtain a fundus feature map whose dimension is the same as before the splicing operation. The next round of up-sampling and splicing is then performed, until the output matches the dimensions of the fundus image sample; in this way the fundus choroid feature map is up-sampled and spliced, and the fundus feature vector map is finally output.
- In detail, the convolutional neural network model includes a first up-sampling layer, a second up-sampling layer, a third up-sampling layer, and a fourth up-sampling layer. The fourth up-sampling layer deconvolves the high-dimensional feature map to obtain a 512-dimensional fourth intermediate fundus feature map, which is spliced with the 512-dimensional fourth choroid feature map and then passed through a convolutional layer with a 3×3 convolution kernel and an activation layer to obtain a 512-dimensional fourth fundus feature map. The third up-sampling layer deconvolves the fourth fundus feature map to obtain a 256-dimensional third intermediate fundus feature map, splices it with the 256-dimensional third choroid feature map, and passes the result through a convolutional layer with a 3×3 convolution kernel and an activation layer to obtain a 256-dimensional third fundus feature map. The second up-sampling layer deconvolves the third fundus feature map to obtain a 128-dimensional second intermediate fundus feature map, splices it with the 128-dimensional second choroid feature map, and passes the result through a convolutional layer with a 3×3 convolution kernel and an activation layer to obtain a 128-dimensional second fundus feature map; the first up-sampling layer proceeds in the same way with the 64-dimensional first choroid feature map to obtain a 64-dimensional first fundus feature map.
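- Continuing the sketch above, one up-sampling layer might look as follows; the kernel size and stride of the transposed convolution are assumptions, while the splice is the channel-wise concatenation followed by the dimension-restoring 3×3 convolution described in this step:

```python
class UpBlock(nn.Module):
    """One up-sampling layer: deconvolution, splice with the matching choroid
    feature map, then a 3x3 convolution plus activation restoring the
    pre-splice dimension."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch * 2, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)                   # intermediate fundus feature map
        x = torch.cat([x, skip], dim=1)  # splice: dimension doubles
        return self.conv(x)              # convolve back to out_ch dimensions

# e.g. a hypothetical fourth up-sampling layer:
# fourth_fundus_map = UpBlock(1024, 512)(high_dim_map, fourth_choroid_map)
```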
- S2033 Perform choroidal region recognition on the fundus feature vector map through the convolutional neural network model to obtain the region recognition result, and at the same time perform fusion processing on the fundus feature vector map to obtain the fused feature vector map.
- In detail, the recognition probability value corresponding to each pixel in the fundus output image is calculated, and all pixels whose recognition probability values are greater than the preset probability threshold are marked on the fundus output image to obtain the recognition region in the region recognition result; the recognition region and all the recognition probability values are determined as the region recognition result.
- The first fundus feature map and the second fundus feature map are superimposed to obtain a first feature map to be fused, and the second fundus feature map is superimposed with the third fundus feature map to obtain a second feature map to be fused. The fusion is feature fusion: the extracted feature information is analyzed, processed, and integrated to obtain a fused feature vector map whose common features are very pronounced, and the size of the fused feature vector map is consistent with the size of the fundus output image.
- The present application thus extracts the choroid features from the fundus image sample through the convolutional neural network model to obtain a fundus choroid feature map; up-samples and splices the fundus choroid feature map through the model to obtain the fundus feature vector map; and performs choroid region recognition on the fundus feature vector map to obtain the region recognition result while fusing the fundus feature vector map to obtain the fused feature vector map. In this way, the accuracy of recognition can be improved, the number of training iterations can be reduced, and training efficiency can be improved.
- S204 Perform edge detection on the fundus feature vector map through the convolutional neural network model to obtain an edge result, and simultaneously perform region segmentation on the fused feature vector map to obtain a region segmentation result.
- Edge detection identifies points in the fundus feature vector map whose feature vectors change markedly. By performing edge detection on the fundus feature vector map, the coordinate points of the upper edge line and the coordinate points of the lower edge line in the map can be identified, and for each coordinate point the probability of belonging to the upper edge line and the probability of belonging to the lower edge line are calculated. The coordinate points of the upper edge line and of the lower edge line in the fundus feature vector map, together with their probability values, are determined as the edge result. At the same time, the fused feature vector map undergoes region segmentation: based on the feature vector corresponding to each pixel in the fused feature vector map, the probability that the coordinate of each pixel belongs to the region of the fundus choroid layer is calculated, the coordinates confirmed to belong to the region of the fundus choroid layer are segmented out, and the region segmentation result is obtained.
- S205 Determine a classification loss value according to the region recognition result and the region label; determine an edge loss value according to the edge result and the edge line label; and determine a segmentation loss value according to the region segmentation result and the region label.
- The region recognition result and the region label are input into the classification loss function of the convolutional neural network model, and the classification loss value is calculated by the classification loss function; the classification loss function can be set according to requirements, for example as a cross-entropy loss function. The edge result and the edge line label are input into the edge loss function of the model to calculate the edge loss value, and the region segmentation result and the region label are input into the segmentation loss function of the model to calculate the segmentation loss value.
- S206 Determine a total loss value according to the classification loss value, the edge loss value, and the segmentation loss value.
- The classification loss value, the edge loss value, and the segmentation loss value are input into the total loss function of the convolutional neural network model, and the total loss value is calculated by the total loss function, where the total loss value is:
- L = ω1·L1 + ω2·L2 + ω3·L3
- where ω1 is the weight of the classification loss value and L1 is the classification loss value; ω2 is the weight of the edge loss value and L2 is the edge loss value; ω3 is the weight of the segmentation loss value and L3 is the segmentation loss value.
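- For illustration only, a minimal sketch of this weighted multi-task loss. The cross-entropy classification term follows the example given above; the edge and segmentation loss functions and the weight values are assumptions, since only the weighted-sum form is fixed here:

```python
import torch
import torch.nn.functional as F

def total_loss(cls_pred, cls_label, edge_pred, edge_label, seg_pred, seg_label,
               w1: float = 1.0, w2: float = 1.0, w3: float = 1.0) -> torch.Tensor:
    """L = w1*L1 + w2*L2 + w3*L3 over classification, edge, and segmentation terms."""
    l1 = F.cross_entropy(cls_pred, cls_label)                       # classification loss L1
    l2 = F.binary_cross_entropy_with_logits(edge_pred, edge_label)  # edge loss L2
    l3 = F.binary_cross_entropy_with_logits(seg_pred, seg_label)    # segmentation loss L3
    return w1 * l1 + w2 * l2 + w3 * l3
```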
- The convergence condition may be that the total loss value is small and no longer decreases after 1000 further calculations; that is, when the total loss value remains small and stops dropping over 1000 iterations, training is stopped. The convergence condition may also be that the total loss value is less than a set threshold; that is, when the total loss value falls below the set threshold, training is stopped, and the converged convolutional neural network model is recorded as the trained fundus segmentation model.
- When the convergence condition is not met, the initial parameters of the convolutional neural network model are iteratively updated, and the step of extracting the choroid features in the fundus image sample to obtain the region recognition result, the fundus feature vector map, and the fused feature vector map is triggered again, so that the model keeps moving closer to the accurate result and the accuracy of recognition becomes higher and higher.
- S30 Perform image recognition on the fundus segmented image through the fundus fovea recognition model, identify the fovea region in the fundus segmented image, and cut out the first fundus choroid image from the fundus segmented image according to the fovea region.
- The fundus fovea recognition model is a trained neural network model whose network structure can be set according to requirements, for example the network structure of the YOLO (You Only Look Once) model or of the SSD (Single Shot MultiBox Detector) model. Because the fovea region occupies only a small area of the fundus segmented image, the network structure of the fundus fovea recognition model is preferably that of the SSD model, since the SSD network structure is well suited to identifying small objects. According to the identified fovea region, the first fundus choroid image is cut out of the fundus segmented image according to preset size parameters, and the size parameters can be set according to requirements.
- In an embodiment, performing image recognition on the fundus segmented image through the fundus fovea recognition model to identify the fovea region in the fundus segmented image, and extracting the first fundus choroid image from the fundus segmented image according to the fovea region, includes:
- S301 Input the segmented image of the fundus into an SSD-based fundus fovea recognition model.
- the fundus fovea recognition model is a trained neural network model based on an SSD model, and the fundus segmented image is input into the fundus fovea recognition model.
- S302 Using the SSD algorithm, extract the fovea feature through the fundus fovea recognition model, and perform target detection according to the fovea feature to obtain the fovea area.
- The SSD algorithm extracts the fovea features of the fundus segmented image through feature maps of different scales; the fovea features are the features characteristic of the foveal region of the fundus choroid layer. Target detection is performed on the basis of the fovea features, that is, candidate regions with clearly distinguished aspect ratios are predicted from the extracted fovea features, and the fovea region containing the fovea of the fundus choroid is finally identified.
- With the fovea region as the center, the first fundus choroid image is cut out of the fundus segmented image according to a preset size parameter. The size parameter can be set according to requirements; for example, with a size parameter of 6000 × 1500, a region 6000 μm long and 1500 μm wide centered on the center of the fovea region is cut out of the fundus segmented image as the first fundus choroid image.
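- A sketch of this fovea-centered cropping; the micron-to-pixel conversion factor um_per_px is a hypothetical parameter, since the physical window size is specified here but the scan resolution depends on the device:

```python
import numpy as np

def crop_choroid(image: np.ndarray, fovea_cx: int, fovea_cy: int, um_per_px: float,
                 width_um: float = 6000.0, height_um: float = 1500.0) -> np.ndarray:
    """Cut a window of the given physical size, centered on the fovea,
    out of the fundus segmented image."""
    half_w = int(round(width_um / um_per_px / 2))
    half_h = int(round(height_um / um_per_px / 2))
    h, w = image.shape[:2]
    # Clamp the window to the image bounds
    x0, x1 = max(fovea_cx - half_w, 0), min(fovea_cx + half_w, w)
    y0, y1 = max(fovea_cy - half_h, 0), min(fovea_cy + half_h, h)
    return image[y0:y1, x0:x1]
```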
- The present application thus inputs the fundus segmented image into an SSD-based fundus fovea recognition model; uses the SSD algorithm to extract the fovea features through the model and performs target detection on those features to obtain the fovea region; and, with the fovea region as the center, cuts the first fundus choroid image out of the fundus segmented image according to preset size parameters. The fovea region can therefore be identified automatically, and fundus choroid images of a consistent size can be cut out, which facilitates subsequent recognition and improves its accuracy.
- S40 Binarize the first fundus choroid image by using the Niblack local threshold algorithm to obtain a first choroid binary image, and extract the first lumen region from the first choroid binary image.
- The first fundus choroid image is binarized to obtain the first choroid binary image, in which the value of each pixel is 0 or 1; equivalently, the image can be displayed in two colors, white (corresponding to 0) and black (corresponding to 1). The black region in the first choroid binary image is extracted to obtain the first lumen region.
- The binarization processing consists of performing a binarization calculation on each pixel in the first fundus choroid image by the Niblack local threshold algorithm, which compares each pixel in the image against a threshold calculated from its local region.
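- A minimal sketch of Niblack local thresholding under the standard rule T(x, y) = m(x, y) + k · s(x, y), where m and s are the local mean and standard deviation; the window size and coefficient k below are conventional defaults, not values taken from this application:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def niblack_binarize(gray: np.ndarray, window: int = 25, k: float = -0.2) -> np.ndarray:
    """Binarize a grayscale image with a per-pixel Niblack threshold.
    Returns 1 for pixels darker than the local threshold (lumen candidates)."""
    img = gray.astype(np.float64)
    mean = uniform_filter(img, size=window)                # local mean m(x, y)
    sq_mean = uniform_filter(img * img, size=window)
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))  # local std s(x, y)
    threshold = mean + k * std                             # T(x, y)
    return (img < threshold).astype(np.uint8)              # 1 marks dark (lumen) pixels
```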
- S50 According to the first lumen region, identify a first lumen region image containing the lumen region of the fundus choroidal blood vessels from the fundus image to be identified.
- Marking is performed on the fundus image to be identified according to the coordinate positions of the first lumen region, so that the first lumen region image is obtained; the first lumen region image is the fundus image with the lumen region of the fundus choroidal blood vessels marked.
- The present application thus receives the fundus lumen recognition request and obtains the fundus image to be recognized; inputs that image into the U-Net-based fundus segmentation model, which performs choroid feature extraction and edge segmentation to obtain a fundus segmented image; performs image recognition on the fundus segmented image through the fundus fovea recognition model, identifies the fovea region, and cuts out the first fundus choroid image according to that region; binarizes the first fundus choroid image with the Niblack local threshold algorithm to obtain the first choroid binary image and extracts the first lumen region from it; and, according to the first lumen region, identifies the first lumen region image containing the lumen region of the fundus choroidal blood vessels from the fundus image to be identified. Automatic recognition of the lumen region of the fundus choroidal blood vessels is therefore realized: the U-Net-based fundus segmentation model, the fundus fovea recognition model, and the Niblack local threshold algorithm can quickly and accurately identify the lumen region so that the characteristics of the fundus choroid can be determined, which reduces the cost of manual identification and improves the accuracy and reliability of recognition.
- the method further includes:
- S60 Perform grayscale processing on the fundus image to be identified to obtain a grayscale image of the fundus.
- The multi-channel fundus image to be identified is grayscaled to obtain the single-channel fundus grayscale image. The fundus image to be identified comprises three RGB (Red, Green, Blue) channel images; the values of the same pixel across the channel images are combined into a single gray value for that pixel, yielding a one-channel image, which is the fundus grayscale image.
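- A sketch of this channel-combining step; the ITU-R BT.601 luminance weights used here are a common convention and an assumption, since the exact per-channel weighting is not specified above:

```python
import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Collapse the R, G, and B values of each pixel into one gray value,
    yielding the single-channel fundus grayscale image."""
    weights = np.array([0.299, 0.587, 0.114])  # BT.601 luminance coefficients
    return (rgb[..., :3] @ weights).astype(np.uint8)
```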
- the adaptive threshold method is a method that uses a local threshold value in an image to replace a global threshold value for image calculation. It is specifically aimed at images with excessively large changes in light and shadow, or images with less obvious color differences within a range.
- The normalization process first computes, by the adaptive threshold method, the average of the gray values of all pixels in the first lumen region and records it as the lumen adaptive threshold; it then obtains the preset maximum gray value, which can be set according to requirements and is preferably 255. The gray normalization function of the adaptive threshold method is then applied to the gray value of each pixel in the fundus grayscale image to compute its normalized gray value, and the first fundus image is finally output, with the same size as the fundus image to be identified.
- In an embodiment, using the adaptive threshold method to obtain the lumen adaptive threshold in the first lumen region and normalizing the fundus grayscale image with the adaptive threshold to obtain the first fundus image includes:
- S701 Acquire the adaptive threshold of the lumen through an adaptive threshold method.
- The adaptive threshold method is a method that uses local thresholds in the image in place of a global threshold for image calculation; the average of all gray values corresponding to the pixels in the first lumen region is calculated and recorded as the lumen adaptive threshold.
- S702 Acquire a gray value corresponding to each pixel in the fundus gray image and a preset maximum gray value.
- the fundus gray-scale image includes a gray-scale value corresponding to each pixel, and a preset maximum gray-scale value is obtained.
- the maximum gray-scale value can be set according to requirements, and is preferably 255.
- the gray-level normalization model includes a gray-level normalization function, and the gray-level normalization value corresponding to each pixel point can be calculated by the gray-level normalization function.
- In detail, the gray normalization function computes the normalized gray value corresponding to each pixel, where:
- f(x, y) is the gray value corresponding to the pixel with coordinate (x, y) in the fundus grayscale image;
- F(x, y) is the normalized gray value corresponding to the pixel with coordinate (x, y) in the fundus grayscale image;
- A is the lumen adaptive threshold;
- B is the maximum gray value.
- S704 Join all the normalized gray values according to the positions of the pixels to obtain the first fundus image.
- Each gray normalized value is spliced according to the position of its corresponding pixel to form a new image, and that image is determined as the first fundus image; in this way the fundus grayscale image is corrected.
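- For illustration, a sketch of this normalization. The exact gray normalization function appears in the original as a formula not reproduced here, so the linear scaling below (each gray value rescaled by B/A and clipped at B) is an assumption consistent with the symbol definitions above:

```python
import numpy as np

def normalize_gray(gray: np.ndarray, lumen_mask: np.ndarray, max_gray: int = 255) -> np.ndarray:
    """Normalize the fundus grayscale image using the lumen adaptive threshold.
    A = mean gray value over the first lumen region; B = maximum gray value.
    Assumed form: F(x, y) = min(f(x, y) * B / A, B)."""
    a = float(gray[lumen_mask == 1].mean())  # lumen adaptive threshold A
    normalized = np.clip(gray.astype(np.float64) * max_gray / a, 0, max_gray)
    return normalized.astype(np.uint8)       # first fundus image F(x, y)
```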
- The present application thus uses the adaptive threshold method to correct the fundus grayscale image so that the lumen region of the fundus choroidal blood vessels becomes more prominent, which makes it easier to identify and improves the accuracy and reliability of recognition.
- A second fundus choroid image is obtained by cutting it out of the first fundus image according to the coordinate area of the fovea region.
- S90 Perform binarization processing on the second fundus choroid image according to the Niblack local threshold method to obtain a second choroid binary image, and extract a second lumen region from the second choroid binary image.
- The second fundus choroid image is binarized to obtain the second choroid binary image, in which the value of each pixel is 0 or 1; equivalently, the image can be displayed in two colors, white (corresponding to 0) and black (corresponding to 1). The black region in the second choroid binary image is extracted to obtain the second lumen region. The binarization processing again performs a binarization calculation on each pixel in the second fundus choroid image by the Niblack local threshold algorithm.
- S100 According to the second lumen region, identify a second lumen region image containing the lumen region of the fundus choroidal blood vessels from the fundus image to be identified.
- Marking is performed on the fundus image to be identified according to the coordinate positions of the second lumen region, so that the second lumen region image is obtained; the second lumen region image is the fundus image with the lumen region of the fundus choroidal blood vessels marked. In this way, the lumen region of the fundus choroidal blood vessels can be determined more accurately.
- This application thus receives the fundus lumen recognition request and obtains the fundus image to be recognized; inputs the image into the U-Net-based fundus segmentation model, which performs choroid feature extraction and edge segmentation to obtain a fundus segmented image; performs image recognition on the fundus segmented image through the fundus fovea recognition model, identifies the fovea region, and cuts out the first fundus choroid image according to that region; binarizes the first fundus choroid image with the Niblack local threshold algorithm to obtain the first choroid binary image and extracts the first lumen region; identifies the first lumen region image containing the lumen region of the fundus choroidal blood vessels from the fundus image to be identified; then grayscales the fundus image to be identified, normalizes the fundus grayscale image with the lumen adaptive threshold to obtain the first fundus image, cuts the second fundus choroid image out of the first fundus image, binarizes it again with the Niblack local threshold algorithm to obtain the second choroid binary image, extracts the second lumen region, and identifies the second lumen region image. The U-Net-based fundus segmentation model, the fundus fovea recognition model, two rounds of Niblack local threshold processing, and the adaptive threshold method together correct the fundus image to be identified and recognize the lumen region of the fundus choroidal blood vessels more accurately, thus further improving the accuracy and reliability of recognition.
- In an embodiment, after step S100, that is, after the second lumen region image containing the lumen region of the fundus choroidal blood vessels is identified from the fundus image to be identified according to the second lumen region, the method includes:
- S110 Calculate the area of the luminal area of the fundus choroidal blood vessel in the second luminal area image to obtain the area of the luminal area, and calculate the area of the first fundus choroidal image to obtain the area of the choroidal area;
- S120 Calculate the ratio of the area of the lumen region to the area of the choroid region to obtain a choroidal blood vessel index.
- The lumen region area of the region containing the lumen of the fundus choroidal blood vessels is calculated, the choroid region area is calculated at the same time, and the ratio of the lumen region area to the choroid region area is computed to obtain the choroidal blood vessel index, which provides a data indicator related to the fundus choroid on which the doctor can base the next medical action.
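- A minimal sketch of this area-ratio computation from the binary masks produced in the preceding steps; counting mask pixels as the area measure is an assumption, and the masks themselves are hypothetical inputs:

```python
import numpy as np

def choroidal_vessel_index(lumen_mask: np.ndarray, choroid_mask: np.ndarray) -> float:
    """Ratio of the lumen region area to the choroid region area (steps S110-S120)."""
    lumen_area = np.count_nonzero(lumen_mask)      # S110: lumen region area
    choroid_area = np.count_nonzero(choroid_mask)  # S110: choroid region area
    return lumen_area / choroid_area               # S120: choroidal blood vessel index
```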
- a device for identifying the lumen region of choroidal blood vessels is provided.
- the device for identifying the lumen region of choroidal blood vessels corresponds to the method for identifying the lumen region of choroidal blood vessels in the above-mentioned embodiment.
- the device for identifying the lumen region of choroidal blood vessels includes a receiving module 11, an input module 12, an intercepting module 13, a binary module 14 and an identifying module 15. The detailed description of each functional module is as follows:
- the receiving module 11 is configured to receive the fundus lumen identification request, and obtain the fundus image to be identified in the fundus lumen identification request;
- the input module 12 is configured to input the fundus image to be recognized into a U-Net-based fundus segmentation model, and perform choroid feature extraction and edge segmentation on the fundus image to be recognized through the fundus segmentation model to obtain a fundus segmented image;
- the intercepting module 13 is configured to perform image recognition on the fundus segmented image through the fundus fovea recognition model, identify the fovea region in the fundus segmented image, and cut out the first fundus choroid image from the fundus segmented image according to the fovea region;
- the binary module 14 is used to binarize the first fundus choroid image by using the Niblack local threshold algorithm to obtain a first choroid binary image, and to extract the first lumen region from the first choroid binary image;
- the recognition module 15 is configured to recognize, from the fundus image to be recognized, a first lumen area image containing a lumen area of the fundus choroidal blood vessel according to the first lumen area.
- the various modules in the above-mentioned choroidal vessel lumen region recognition device can be implemented in whole or in part by software, hardware, and a combination thereof.
- The above-mentioned modules may be embedded, in hardware form, in or independently of the processor of the computer device, or may be stored, in software form, in the memory of the computer device, so that the processor can call and execute the operations corresponding to the above-mentioned modules.
- a computer device is provided.
- the computer device may be a server, and its internal structure diagram may be as shown in FIG. 9.
- the computer equipment includes a processor, a memory, a network interface, and a database connected through a system bus.
- the processor of the computer device is used to provide calculation and control capabilities.
- the memory of the computer device includes a readable storage medium and an internal memory.
- the readable storage medium stores an operating system, computer readable instructions, and a database.
- the internal memory provides an environment for the operation of the operating system and computer readable instructions in the readable storage medium.
- the network interface of the computer device is used to communicate with an external terminal through a network connection. When the computer-readable instructions are executed by the processor, a method for identifying the lumen region of choroidal blood vessels is realized.
- the readable storage medium provided in this embodiment includes a non-volatile readable storage medium and a volatile readable storage medium.
- A computer device is provided, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor; when the processor executes the computer-readable instructions, the method for identifying the lumen region of choroidal blood vessels in the above-mentioned embodiment is implemented.
- one or more readable storage media storing computer readable instructions are provided.
- The readable storage media provided in this embodiment include non-volatile readable storage media and volatile readable storage media; the readable storage medium stores computer-readable instructions, and when the computer-readable instructions are executed by one or more processors, the one or more processors implement the method for identifying the lumen region of choroidal blood vessels in the above-mentioned embodiment.
- Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
- Volatile memory may include random access memory (RAM) or external cache memory.
- RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and memory-bus dynamic RAM (RDRAM), etc.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Geometry (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
- Eye Examination Apparatus (AREA)
Abstract
The present application relates to the field of artificial intelligence and provides a method and apparatus for recognizing a lumen region in choroidal blood vessels, a device, and a medium. The method comprises the steps of: acquiring a fundus image to be recognized; inputting the image into a U-Net (fully convolutional network)-based fundus segmentation model and, by means of the fundus segmentation model, performing choroid feature extraction and edge segmentation on the fundus image to be recognized to obtain a fundus segmented image; by means of a fundus fovea recognition model, recognizing a fovea region in the fundus segmented image and, on the basis of the fovea region, cutting a first fundus choroid image out of the fundus segmented image; by means of a Niblack local threshold algorithm, binarizing the first fundus choroid image to obtain a first choroid binary image and extracting a first lumen region from the first choroid binary image; and recognizing a first lumen region image. The present application enables automatic recognition of a lumen region of fundus choroidal blood vessels in a fundus image. It is suitable for fields such as smart medicine and can further promote the construction of smart cities.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010761238.1 | 2020-07-31 | ||
CN202010761238.1A CN111899247B (zh) | 2020-07-31 | 2020-07-31 | Method, apparatus, device and medium for identifying the lumen region of choroidal blood vessels
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021120753A1 (fr) | 2021-06-24
Family
ID=73184124
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/116743 WO2021120753A1 (fr) | 2020-07-31 | 2020-09-22 | Method and apparatus for recognizing a lumen region in choroidal blood vessels, device and medium
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111899247B (fr) |
WO (1) | WO2021120753A1 (fr) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN112541924B (zh) * | 2020-12-08 | 2023-07-18 | 北京百度网讯科技有限公司 | Fundus image generation method, apparatus, device, and storage medium |
- CN112529906B (zh) * | 2021-02-07 | 2021-05-14 | 南京景三医疗科技有限公司 | Software-level lumen segmentation method and device for intravascular OCT three-dimensional images |
- CN112949585A (zh) * | 2021-03-30 | 2021-06-11 | 北京工业大学 | Method, device, electronic equipment, and storage medium for recognizing blood vessels in fundus images |
- CN116309549B (zh) * | 2023-05-11 | 2023-10-03 | 爱尔眼科医院集团股份有限公司 | Fundus region detection method, device, equipment, and readable storage medium |
- CN117994509B (zh) * | 2023-12-26 | 2024-07-12 | 徐州市第一人民医院 | Interactive intelligent recognition method for non-perfusion regions in fundus images |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN104768446A (zh) * | 2012-09-10 | 2015-07-08 | 俄勒冈健康科学大学 | Quantifying local circulation with optical coherence tomography angiography |
- US20160278627A1 (en) * | 2015-03-25 | 2016-09-29 | Oregon Health & Science University | Optical coherence tomography angiography methods |
- CN106599804A (zh) * | 2016-11-30 | 2017-04-26 | 哈尔滨工业大学 | Retinal fovea detection method based on a multi-feature model |
- CN109509178A (zh) * | 2018-10-24 | 2019-03-22 | 苏州大学 | OCT image choroid segmentation method based on an improved U-Net network |
- CN110599480A (zh) * | 2019-09-18 | 2019-12-20 | 上海鹰瞳医疗科技有限公司 | Multi-source-input fundus image classification method and device |
- CN111345775A (zh) * | 2018-12-21 | 2020-06-30 | 伟伦公司 | Evaluation of fundus images |
- CN111402243A (zh) * | 2020-03-20 | 2020-07-10 | 林晨 | Macular fovea recognition method and terminal |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN106683080B (zh) * | 2016-12-15 | 2019-09-27 | 广西师范大学 | Retinal fundus image preprocessing method |
- CN109472781B (zh) * | 2018-10-29 | 2022-02-11 | 电子科技大学 | Diabetic retinopathy detection system based on serial-structure segmentation |
- CN111292338B (zh) * | 2020-01-22 | 2023-04-21 | 苏州大学 | Method and system for segmenting choroidal neovascularization from fundus OCT images |
- 2020-07-31: CN application CN202010761238.1A filed; granted as patent CN111899247B (status: active)
- 2020-09-22: PCT application PCT/CN2020/116743 filed, published as WO2021120753A1 (status: application filing)
Also Published As
Publication number | Publication date |
---|---|
CN111899247B (zh) | 2024-05-24 |
CN111899247A (zh) | 2020-11-06 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 20903553; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | EP: PCT application non-entry in European phase | Ref document number: 20903553; Country of ref document: EP; Kind code of ref document: A1