WO2021120753A1 - Method and apparatus for recognition of luminal area in choroidal vessels, device, and medium - Google Patents

Method and apparatus for recognition of luminal area in choroidal vessels, device, and medium

Info

Publication number
WO2021120753A1
Authority
WO
WIPO (PCT)
Prior art keywords
fundus
image
lumen
choroid
region
Application number
PCT/CN2020/116743
Other languages
French (fr)
Chinese (zh)
Inventor
周侠
王玥
张成奋
吕彬
吕传峰
Original Assignee
平安科技(深圳)有限公司
Application filed by 平安科技(深圳)有限公司
Publication of WO2021120753A1

Classifications

    • G06T Image data processing or generation, in general (G Physics; G06 Computing; calculating or counting)
    • G06T 7/0012 Biomedical image inspection (G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06T 7/12 Edge-based segmentation (G06T 7/10 Segmentation; edge detection)
    • G06T 7/136 Segmentation involving thresholding (G06T 7/10 Segmentation; edge detection)
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume (G06T 7/60 Analysis of geometric attributes)
    • G06T 2207/10101 Optical tomography; optical coherence tomography [OCT] (G06T 2207/10 Image acquisition modality; G06T 2207/10072 Tomographic images)
    • G06T 2207/20081 Training; learning (G06T 2207/20 Special algorithmic details)
    • G06T 2207/20084 Artificial neural networks [ANN] (G06T 2207/20 Special algorithmic details)
    • G06T 2207/30041 Eye; retina; ophthalmic (G06T 2207/30004 Biomedical image processing)
    • G06T 2207/30101 Blood vessel; artery; vein; vascular (G06T 2207/30004 Biomedical image processing)

Definitions

  • This application relates to the field of artificial intelligence image processing, and in particular to a method, device, equipment and medium for identifying the lumen region of choroidal blood vessels.
  • The fundus choroid is located between the retina and the sclera. It is a soft, smooth, elastic, blood-vessel-rich brown membrane that begins at the serrated edge (ora serrata) at the front and ends around the optic nerve at the back. Its inner surface is joined to the pigment epithelial layer of the retina by a very smooth glassy membrane, and its outer surface borders the sclera across a potential gap; fine fibrous lamellae of the perichoroidal layer extend into and blend with the brown plate (lamina fusca) of the sclera, and blood vessels and nerves pass through this layer.
  • the choroid is mainly composed of blood vessels, which provide oxygen and blood to the retina.
  • The inventor realizes that, in the field of medicine, doctors often need to rely on experience to manually identify the lumen area of the fundus choroidal blood vessels in collected fundus photographs, determine the characteristics of the fundus choroid from it, and then carry out other medical actions based on the identified characteristics. Because manual recognition is affected by objective factors such as the high demands it places on a doctor's experience, the low resolution of acquisition equipment, and light ghosting, the manual recognition of the characteristics of the fundus choroidal vessel lumen region is biased and its accuracy is low.
  • This application provides a method, device, computer equipment, and storage medium for identifying the lumen region of choroidal blood vessels, which realizes automatic identification of the lumen region of the fundus choroidal blood vessels in fundus images.
  • This application is applicable to fields such as smart transportation and smart healthcare; it can further promote the construction of smart cities, reduce the cost of manual identification, and improve the accuracy and reliability of identification.
  • a method for identifying the lumen region of choroidal blood vessels including:
  • the first lumen area image including the lumen area of the fundus choroidal blood vessel is identified from the fundus image to be identified.
  • a device for identifying the lumen region of choroidal blood vessels comprising:
  • the receiving module is configured to receive the fundus lumen identification request, and obtain the fundus image to be identified in the fundus lumen identification request;
  • An input module for inputting the fundus image to be recognized into a U-Net-based fundus segmentation model, and performing choroid feature extraction and edge segmentation on the fundus image to be recognized through the fundus segmentation model to obtain a fundus segmentation image;
  • the interception module is used to perform image recognition on the fundus segmented image through the fundus fovea recognition model, identify the fovea area in the fundus segmented image, and cut out the first fundus choroid image from the fundus segmented image according to the fovea area;
  • the binary module is used to binarize the first fundus choroid image by the Niblack local threshold algorithm to obtain the first choroidal binary image, and to extract the first lumen region from the first choroidal binary image;
  • the recognition module is used to recognize the first lumen area image containing the lumen area of the fundus choroidal blood vessels from the fundus image to be identified according to the first lumen area.
  • a computer device includes a memory, a processor, and computer-readable instructions that are stored in the memory and can run on the processor, the processor implementing the following steps when executing the computer-readable instructions:
  • the first lumen area image including the lumen area of the fundus choroidal blood vessel is identified from the fundus image to be identified.
  • One or more readable storage media storing computer readable instructions, when the computer readable instructions are executed by one or more processors, the one or more processors execute the following steps:
  • the first lumen area image including the lumen area of the fundus choroidal blood vessel is identified from the fundus image to be identified.
  • The method, device, computer equipment, and storage medium for identifying the lumen region of choroidal blood vessels receive a fundus lumen identification request and acquire the fundus image to be identified in the request; input the fundus image to be identified into the U-Net-based fundus segmentation model, which performs choroid feature extraction and edge segmentation on the image to obtain a fundus segmented image; perform image recognition on the fundus segmented image through the fundus fovea recognition model, identify the foveal area in the fundus segmented image, and extract the first fundus choroid image from the fundus segmented image according to the foveal area; binarize the first fundus choroid image with the Niblack local threshold algorithm to obtain a first choroidal binary image and extract the first lumen area from it; and, according to the first lumen area, identify from the fundus image to be identified the first lumen area image containing the lumen area of the fundus choroidal blood vessels. Automatic recognition of the lumen region of the fundus choroidal blood vessels in the fundus image is therefore realized: the U-Net-based fundus segmentation model, the fundus fovea recognition model, and the Niblack local threshold algorithm together identify the luminal area of the fundus choroidal blood vessels quickly and accurately so as to determine the characteristics of the fundus choroid, reducing the cost of manual identification and improving the accuracy and reliability of recognition.
  • FIG. 1 is a schematic diagram of the application environment of the method for identifying the lumen region of choroidal blood vessels in an embodiment of the present application
  • FIG. 2 is a flowchart of a method for identifying the lumen region of choroidal blood vessels in an embodiment of the present application
  • FIG. 3 is a flowchart of a method for identifying the lumen region of choroidal blood vessels in another embodiment of the present application
  • FIG. 4 is a flowchart of step S20 of the method for identifying the lumen region of choroidal blood vessels in an embodiment of the present application;
  • FIG. 5 is a flowchart of step S203 of the method for identifying the lumen region of choroidal blood vessels in an embodiment of the present application;
  • FIG. 6 is a flowchart of step S30 of the method for identifying the lumen region of choroidal blood vessels in an embodiment of the present application;
  • FIG. 7 is a flowchart of step S70 of the method for identifying the lumen region of the choroidal blood vessel in an embodiment of the present application
  • Fig. 8 is a schematic block diagram of a device for recognizing the lumen region of choroidal blood vessels in an embodiment of the present application
  • Fig. 9 is a schematic diagram of a computer device in an embodiment of the present application.
  • the method for identifying the lumen region of choroidal blood vessels can be applied in the application environment as shown in FIG. 1, in which the client (computer equipment) communicates with the server through the network.
  • the client includes, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, cameras, and portable wearable devices.
  • the server can be implemented as an independent server or a server cluster composed of multiple servers.
  • a method for identifying the lumen region of choroidal blood vessels is provided, and the technical solution mainly includes the following steps S10-S50:
  • a fundus lumen identification request is received, and a fundus image to be identified in the fundus lumen identification request is acquired.
  • Understandably, the fundus image to be identified is an OCT scan of the fundus collected by an OCT device; an OCT scan collected in the enhanced mode of the OCT device captures more morphological features of the fundus choroid.
  • The fundus lumen recognition request is triggered after acquisition; the request includes the fundus image to be recognized, which is the captured OCT scan of the fundus in which the luminal area of the fundus choroidal blood vessels needs to be recognized.
  • The trigger mode can be set according to requirements, for example triggering automatically after the fundus image to be recognized is collected, or triggering by clicking an OK button after all fundus images have been collected.
  • the fundus image to be recognized is a multi-channel color fundus photograph or a black-and-white fundus photograph.
  • Optionally, acquiring the fundus image to be recognized in the fundus lumen recognition request includes preprocessing the acquired OCT scan of the fundus (filter denoising and/or image enhancement, such as Gaussian filter denoising, gamma transform correction, or Laplace correction) and taking the preprocessed OCT scan as the fundus image to be recognized; in this way the image better reflects the blood vessel information of the fundus choroid.
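  • A minimal preprocessing sketch of the denoising and enhancement named above, assuming OpenCV and NumPy; the 5×5 kernel and the gamma value are illustrative assumptions rather than parameters from this application:

```python
import cv2
import numpy as np

def preprocess_oct(oct_image: np.ndarray, gamma: float = 0.8) -> np.ndarray:
    """Denoise and enhance a fundus OCT scan before lumen recognition.

    Gaussian filtering suppresses speckle noise; a gamma transform
    brightens the darker choroidal layers. Kernel size and gamma are
    illustrative defaults, not values from the application.
    """
    # Gaussian filter denoising
    denoised = cv2.GaussianBlur(oct_image, (5, 5), sigmaX=0)
    # Gamma transform correction on the [0, 1] normalized image
    normalized = denoised.astype(np.float32) / 255.0
    corrected = np.power(normalized, gamma)
    return (corrected * 255).astype(np.uint8)
```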
  • S20 Input the fundus image to be recognized into a U-Net-based fundus segmentation model, and perform choroid feature extraction and edge segmentation on the fundus image to be recognized through the fundus segmentation model to obtain a fundus segmentation image.
  • Understandably, the fundus segmentation model is a trained convolutional neural network model based on the U-Net model; that is, the network structure of the fundus segmentation model includes, or is improved on the basis of, the network structure of the U-Net model. The U-Net structure is well suited to image segmentation and needs a smaller training set to achieve end-to-end training. The fundus segmentation model performs choroid feature extraction on the fundus image to be recognized; the choroid features are the texture and shape information of the choroid layer and its surroundings in the fundus choroid. Extracting the choroid features means using successive convolution and pooling down-sampling layers to extract the feature information in the fundus image to be recognized and gradually map it to higher dimensions, obtaining the feature vector array of the highest dimension and richest feature information corresponding to the fundus image to be recognized, that is, a high-dimensional feature map. The edge segmentation process then continuously deconvolves and up-samples the high-dimensional feature map back toward the resolution of the fundus image to be identified, from which the fundus segmented image is obtained.
  • Before step S20, that is, before inputting the fundus image to be recognized into the U-Net-based fundus segmentation model, the method includes:
  • S201 Obtain a fundus image sample; the fundus image sample is associated with an edge line label and an area label.
  • Understandably, a fundus image sample is a collected historical OCT scan containing the fundus choroid layer, or such an OCT scan after preprocessing. Each fundus image sample is associated with one edge line label, which is the set of manually labeled coordinate positions of the points on the upper and lower edge lines of the fundus choroid layer contained in the sample, and with one area label, which is the set of manually labeled coordinate positions covering the area of the fundus choroid layer contained in the sample.
  • S202 Input the fundus image sample into a U-Net-based convolutional neural network model containing initial parameters.
  • Understandably, the fundus image sample is input into the convolutional neural network model, which is constructed on the basis of the U-Net model and contains the initial parameters; the initial parameters include the network structure of the U-Net model.
  • Optionally, the initial parameters may be obtained through transfer learning (Transfer Learning, TL).
  • The choroidal features in the fundus image sample are extracted through the convolutional neural network model. The convolutional neural network model includes at least four down-sampling layers, each consisting of a convolutional layer and a pooling layer; a down-sampling layer is a level that extracts the choroidal features in multiple dimensions through different convolution kernels and pooling parameters, the convolution kernels of the convolutional layers differing between down-sampling layers, as do the pooling parameters of the pooling layers. Each down-sampling layer outputs a choroid feature map corresponding to that layer. The choroid feature map output by the last down-sampling layer is convolved to obtain the high-dimensional feature map, that is, the feature vector array of the highest dimension and richest feature information corresponding to the fundus image to be recognized; the choroid feature maps corresponding to each down-sampling layer, together with the high-dimensional feature map, are determined as the fundus choroid feature map corresponding to the fundus image sample.
  • The convolutional neural network model includes up-sampling layers corresponding to the down-sampling layers, that is, the number of down-sampling layers and the number of up-sampling layers in the convolutional neural network model are equal. The fundus choroid feature map is subjected to continuous deconvolution, that is, up-sampling, until the fundus output image is produced; each up-sampling layer outputs a fundus feature map. The fundus feature map output by the last up-sampling layer is determined to be the fundus feature vector map, and the fundus feature maps output by consecutive up-sampling layers are fused to obtain the fused feature vector maps.
  • The region recognition result includes the recognition probability value corresponding to each pixel in the fundus output image; all pixels whose recognition probability value is greater than the preset probability threshold are marked in the fundus output image to obtain the recognition area, and the recognition area together with all the recognition probability values is determined as the region recognition result.
  • The extraction of the choroidal features in the fundus image sample by the convolutional neural network model, to obtain the region recognition result, fundus feature vector map, and fused feature vector map output according to the choroid features, includes:
  • S2031 Extract the choroidal feature from the fundus image sample by using the convolutional neural network model to obtain a fundus choroidal feature map.
  • The convolutional neural network model includes a first down-sampling layer, a second down-sampling layer, a third down-sampling layer, and a fourth down-sampling layer. Each of these down-sampling layers includes two convolutional layers with 3×3 convolution kernels, two activation layers, and a pooling layer with a 2×2 max-pooling parameter. The fundus image sample is input into the first down-sampling layer for convolution to obtain a 64-dimensional first choroid feature map; the first choroid feature map is input into the second down-sampling layer for convolution to obtain a 128-dimensional second choroid feature map; the second choroid feature map is input into the third down-sampling layer for convolution to obtain a 256-dimensional third choroid feature map; and the third choroid feature map is input into the fourth down-sampling layer for convolution to obtain a 512-dimensional fourth choroid feature map.
  • S2032 Up-sampling and splicing the fundus choroid feature map through the convolutional neural network model to obtain the fundus feature vector map.
  • Up-sampling and splicing means deconvolving the fundus choroid feature map to generate an intermediate fundus feature map of the same dimension as the adjacent choroid feature map and splicing the two together. After splicing, the dimension becomes twice the original dimension, so the result must be convolved again to obtain the fundus feature map, ensuring that the dimension of the processed fundus feature map is the same as before the splicing operation. The next up-sampling and splicing is then performed, until the output has the same size as the fundus image sample; in this way the fundus choroid feature map is repeatedly up-sampled and spliced, and the fundus feature vector map is finally output.
  • For example, the convolutional neural network model includes a first up-sampling layer, a second up-sampling layer, a third up-sampling layer, and a fourth up-sampling layer. The fourth up-sampling layer deconvolves the high-dimensional feature map to obtain a 512-dimensional fourth intermediate fundus feature map, which is spliced with the 512-dimensional fourth choroid feature map and then passed through a convolutional layer with a 3×3 convolution kernel and an activation layer to obtain a 512-dimensional fourth fundus feature map. The third up-sampling layer deconvolves the fourth fundus feature map to obtain a 256-dimensional third intermediate fundus feature map, which is spliced with the 256-dimensional third choroid feature map and passed through a 3×3 convolutional layer and an activation layer to obtain a 256-dimensional third fundus feature map. The second up-sampling layer deconvolves the third fundus feature map to obtain a 128-dimensional second intermediate fundus feature map, which is spliced with the 128-dimensional second choroid feature map and passed through a 3×3 convolutional layer and an activation layer to obtain a 128-dimensional second fundus feature map; the first up-sampling layer processes the second fundus feature map in the same way with the 64-dimensional first choroid feature map to obtain a 64-dimensional first fundus feature map.
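  • A minimal PyTorch sketch of this encoder-decoder structure, following the 64/128/256/512 channel progression and the 3×3-convolution, 2×2-max-pooling blocks described above. The 1024-dimensional bottleneck, the two convolutions per decoder block, the single-channel input, and the sigmoid output head are illustrative assumptions rather than specifics of this application:

```python
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    # Two 3x3 convolutions, each followed by an activation layer,
    # as described for the down-sampling layers above.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class FundusUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.pool = nn.MaxPool2d(2)               # 2x2 max pooling
        self.down1 = conv_block(1, 64)            # 64-dim first choroid feature map
        self.down2 = conv_block(64, 128)          # 128-dim second map
        self.down3 = conv_block(128, 256)         # 256-dim third map
        self.down4 = conv_block(256, 512)         # 512-dim fourth map
        self.bottleneck = conv_block(512, 1024)   # high-dimensional map (1024 assumed)
        self.up4 = nn.ConvTranspose2d(1024, 512, kernel_size=2, stride=2)
        self.dec4 = conv_block(1024, 512)         # after splicing with 512-dim skip
        self.up3 = nn.ConvTranspose2d(512, 256, kernel_size=2, stride=2)
        self.dec3 = conv_block(512, 256)
        self.up2 = nn.ConvTranspose2d(256, 128, kernel_size=2, stride=2)
        self.dec2 = conv_block(256, 128)
        self.up1 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec1 = conv_block(128, 64)
        self.head = nn.Conv2d(64, 1, kernel_size=1)  # per-pixel choroid probability

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        c1 = self.down1(x)
        c2 = self.down2(self.pool(c1))
        c3 = self.down3(self.pool(c2))
        c4 = self.down4(self.pool(c3))
        h = self.bottleneck(self.pool(c4))                    # high-dimensional map
        f4 = self.dec4(torch.cat([self.up4(h), c4], dim=1))   # 512-dim fourth fundus map
        f3 = self.dec3(torch.cat([self.up3(f4), c3], dim=1))  # 256-dim third fundus map
        f2 = self.dec2(torch.cat([self.up2(f3), c2], dim=1))  # 128-dim second fundus map
        f1 = self.dec1(torch.cat([self.up1(f2), c1], dim=1))  # 64-dim first fundus map
        return torch.sigmoid(self.head(f1))                   # recognition probability map
```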
  • S2033 Perform choroidal region recognition on the fundus feature vector map through the convolutional neural network model to obtain the region recognition result, and at the same time perform fusion processing on the fundus feature vector map to obtain the fused feature vector map.
  • Specifically, the recognition probability value corresponding to each pixel in the fundus output image is calculated, all pixels whose recognition probability value is greater than the preset probability threshold are marked in the fundus output image to obtain the recognition area in the region recognition result, and the recognition area together with all the recognition probability values is determined as the region recognition result.
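  • A sketch of this marking step, assuming the model's output is available as a NumPy probability map; the 0.5 default stands in for the unspecified preset probability threshold:

```python
import numpy as np

def region_recognition_result(prob_map: np.ndarray, threshold: float = 0.5):
    """Mark every pixel whose recognition probability exceeds the preset
    probability threshold; the marked mask is the recognition area, and the
    mask plus the full probability map form the region recognition result.
    """
    recognition_area = prob_map > threshold  # boolean mask of marked pixels
    return recognition_area, prob_map
```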
  • Specifically, the first fundus feature map and the second fundus feature map are superimposed to obtain a first feature map to be fused, and the second fundus feature map and the third fundus feature map are superimposed to obtain a second feature map to be fused.
  • The fusion is feature fusion, that is, the extracted feature information is analyzed, processed, and integrated to obtain a fused feature vector map in which the common features are very pronounced; the size of the fused feature vector map is consistent with the size of the fundus output image.
  • The present application thus extracts the choroidal features from the fundus image sample through the convolutional neural network model to obtain the fundus choroid feature map, up-samples and splices the fundus choroid feature map to obtain the fundus feature vector map, and performs choroidal region recognition on the fundus feature vector map to obtain the region recognition result while fusing the fundus feature vector maps to obtain the fused feature vector map; this improves the accuracy of recognition, reduces the number of training iterations, and improves training efficiency.
  • S204 Perform edge detection on the fundus feature vector map through the convolutional neural network model to obtain an edge result, and simultaneously perform region segmentation on the fused feature vector map to obtain a region segmentation result.
  • Edge detection identifies points in the fundus feature vector map where the feature vectors change markedly. By performing edge detection on the fundus feature vector map, the coordinate points of the upper edge line and of the lower edge line in the map can be identified; the probability value of each coordinate point belonging to the upper edge line (confirmed as the upper-edge-line probability value) and the probability value of each coordinate point belonging to the lower edge line (confirmed as the lower-edge-line probability value) are calculated, and the coordinate points of the upper and lower edge lines together with their probability values are determined as the edge result. At the same time, the fused feature vector map is segmented: based on the feature vector corresponding to each pixel in the fused feature vector map, the probability that the coordinate of each pixel belongs to the area of the fundus choroid layer is calculated, the coordinates confirmed to belong to the fundus choroid layer are segmented out, and the region segmentation result is obtained.
  • S205: Determine a classification loss value according to the region recognition result and the area label; determine an edge loss value according to the edge result and the edge line label; and determine a segmentation loss value according to the region segmentation result and the area label.
  • Specifically, the region recognition result and the area label are input into the classification loss function in the convolutional neural network model, and the classification loss value is calculated by the classification loss function; the classification loss function can be set according to requirements, for example as a cross-entropy loss function. The edge result and the edge line label are input into the edge loss function in the convolutional neural network model, and the edge loss value is calculated by the edge loss function. The region segmentation result and the area label are input into the segmentation loss function in the convolutional neural network model, and the segmentation loss value is calculated by the segmentation loss function.
  • S206 Determine a total loss value according to the classification loss value, the edge loss value, and the segmentation loss value.
  • Specifically, the classification loss value, the edge loss value, and the segmentation loss value are input into the total loss function in the convolutional neural network model, and the total loss value is calculated by the total loss function, wherein the total loss value is:
  • L = ω1 × L1 + ω2 × L2 + ω3 × L3
  • where ω1 is the weight of the classification loss value L1; ω2 is the weight of the edge loss value L2; and ω3 is the weight of the segmentation loss value L3.
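  • A direct transcription of this weighted sum as code; the equal default weights are an illustrative assumption, since the application leaves ω1, ω2, ω3 configurable:

```python
def total_loss(l_cls: float, l_edge: float, l_seg: float,
               w1: float = 1.0, w2: float = 1.0, w3: float = 1.0) -> float:
    """Total loss L = w1*L1 + w2*L2 + w3*L3 combining the classification,
    edge, and segmentation losses; the weights are illustrative defaults."""
    return w1 * l_cls + w2 * l_edge + w3 * l_seg
```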
  • The convergence condition may be that the total loss value is small and no longer drops after 1000 calculations, that is, training is stopped when the total loss value is small and does not decrease after 1000 further calculations. The convergence condition may also be that the total loss value is less than a set threshold, that is, training is stopped when the total loss value falls below the set threshold; the converged convolutional neural network model is then recorded as the trained fundus segmentation model.
  • When the total loss value has not reached the convergence condition, the initial parameters of the convolutional neural network model are iteratively updated, and the step of extracting the choroidal features in the fundus image sample through the convolutional neural network model to obtain the region recognition result, fundus feature vector map, and fused feature vector map is triggered again; in this way the model keeps moving closer to the accurate result, and the accuracy of recognition becomes higher and higher.
  • S30: Perform image recognition on the fundus segmented image using the fundus fovea recognition model, identify the fovea area in the fundus segmented image, and cut out the first fundus choroid image from the fundus segmented image according to the fovea area.
  • the fundus fovea recognition model is a trained neural network model.
  • The network structure of the fundus fovea recognition model can be set according to requirements; for example, it may adopt the network structure of the YOLO (You Only Look Once) model or of the SSD (Single Shot MultiBox Detector) model. Because the fovea area occupies only a small region of the fundus segmented image, the network structure of the fundus fovea recognition model is preferably that of the SSD model, whose structure favors the detection of small objects. According to the identified foveal area, the first fundus choroid image is intercepted from the fundus segmented image according to preset size parameters. The size parameters can be set according to requirements, for example a region 6000 μm long and 1500 μm wide centered on the fovea.
  • The image recognition performed on the fundus segmented image through the fundus fovea recognition model, identifying the fovea area in the fundus segmented image and extracting the first fundus choroid image from the fundus segmented image according to the fovea area, includes:
  • S301 Input the segmented image of the fundus into an SSD-based fundus fovea recognition model.
  • the fundus fovea recognition model is a trained neural network model based on an SSD model, and the fundus segmented image is input into the fundus fovea recognition model.
  • S302 Using the SSD algorithm, extract the fovea feature through the fundus fovea recognition model, and perform target detection according to the fovea feature to obtain the fovea area.
  • Specifically, the SSD algorithm extracts the fovea features of the fundus segmented image through feature maps of different scales; the fovea features are the characteristics of the foveal area of the fundus choroid layer. Target detection is then performed according to the fovea features, that is, candidate regions with clearly distinguished aspect ratios are predicted from the extracted fovea features, and finally the foveal area containing the fovea of the fundus choroid is identified.
  • The size parameter can be set according to requirements; in this embodiment the size parameter is 6000×1500. The first fundus choroid image is cut from the fundus segmented image according to the preset size parameter with the fovea region as the center, for example taking a region 6000 μm long and 1500 μm wide centered on the fovea region to intercept the first fundus choroid image from the fundus segmented image.
  • This application realizes that, by inputting the fundus segmented image into an SSD-based fundus fovea recognition model, extracting the fovea features through the SSD algorithm, performing target detection according to the fovea features to obtain the foveal area, and intercepting the first fundus choroid image from the fundus segmented image according to preset size parameters centered on the foveal area, the foveal area can be identified automatically and fundus choroid images of consistent size can be cut out, which facilitates subsequent identification and improves its accuracy.
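  • A cropping sketch assuming the scan's micrometers-per-pixel scale is known from the OCT device metadata (a single isotropic scale is a simplification, since OCT axial and lateral resolutions generally differ); the 6000×1500 μm defaults match the example size parameters above, and the function and parameter names are hypothetical:

```python
import numpy as np

def crop_around_fovea(image: np.ndarray, fovea_xy: tuple, um_per_px: float,
                      width_um: float = 6000.0, height_um: float = 1500.0) -> np.ndarray:
    """Cut a fixed physical-size region centered on the detected fovea.

    `um_per_px` must come from the OCT device metadata; 6000x1500 um
    matches the example size parameters in the text.
    """
    cx, cy = fovea_xy
    half_w = int(round(width_um / um_per_px / 2))
    half_h = int(round(height_um / um_per_px / 2))
    y0, y1 = max(0, cy - half_h), min(image.shape[0], cy + half_h)
    x0, x1 = max(0, cx - half_w), min(image.shape[1], cx + half_w)
    return image[y0:y1, x0:x1]
```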
  • S40 Binarize the first fundus choroid image by using the Niblack local threshold algorithm to obtain a first choroid binary image, and extract the first lumen region from the first choroid binary image.
  • Binarization is performed on the first fundus choroid image to obtain the first choroidal binary image, in which the value of each pixel is 0 or 1; the image can also be displayed in the two colors white (corresponding to 0) and black (corresponding to 1). The black area is extracted from the first choroidal binary image to obtain the first lumen area.
  • The binarization processing includes performing a binarization calculation on each pixel in the first fundus choroid image with the Niblack local threshold algorithm, which binarizes each pixel in the image by comparing it with a threshold computed over its local area.
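  • A sketch of this binarization using scikit-image's Niblack implementation, in which each pixel is compared with the local threshold T(x, y) = m(x, y) + k * s(x, y), with m and s the local window mean and standard deviation; the window size and k below are illustrative, as the application does not publish its parameter values:

```python
import numpy as np
from skimage.filters import threshold_niblack

def binarize_choroid(choroid_img: np.ndarray, window_size: int = 25, k: float = -0.2):
    """Binarize the fundus choroid image with Niblack's local threshold
    and extract the dark (lumen) pixels, following the black-is-lumen
    convention described above. Parameter values are illustrative.
    """
    thresh = threshold_niblack(choroid_img, window_size=window_size, k=k)
    binary = choroid_img > thresh   # True = bright stroma, False = dark lumen
    lumen_region = ~binary          # dark pixels form the vessel lumen area
    return binary, lumen_region
```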
  • S50 identify a first lumen area image containing a lumen area of the fundus choroidal blood vessel from the fundus image to be identified.
  • Specifically, the first lumen area is marked on the fundus image to be identified according to its coordinate positions, so that the first lumen area image can be obtained; the first lumen area image is the image in which the lumen area of the fundus choroidal blood vessels is marked.
  • This application thus receives the fundus lumen recognition request and obtains the fundus image to be recognized; inputs the fundus image to be recognized into the U-Net-based fundus segmentation model, which performs choroid feature extraction and edge segmentation to obtain a fundus segmented image; performs image recognition on the fundus segmented image through the fundus fovea recognition model, identifies the fovea area, and extracts the first fundus choroid image from the fundus segmented image according to the fovea area; binarizes the first fundus choroid image with the Niblack local threshold algorithm to obtain the first choroidal binary image and extracts the first lumen area from it; and, according to the first lumen area, identifies from the fundus image to be identified the first lumen area image containing the lumen area of the fundus choroidal blood vessels. Automatic recognition of the lumen region of the fundus choroidal blood vessels in the fundus image is therefore realized; the U-Net-based fundus segmentation model, the fundus fovea recognition model, and the Niblack local threshold algorithm recognize the luminal area of the fundus choroidal blood vessels quickly and accurately so as to determine the characteristics of the fundus choroid, reducing the cost of manual identification and improving the accuracy and reliability of identification.
  • the method further includes:
  • S60 Perform grayscale processing on the fundus image to be identified to obtain a grayscale image of the fundus.
  • The multi-channel fundus image to be identified is subjected to grayscale processing to obtain the single-channel fundus grayscale image. The fundus image to be identified includes an RGB (Red, Green, Blue) three-channel image; the values corresponding to the same pixel across the channel images are converted into a single gray value for that pixel, yielding a one-channel image, which is the fundus grayscale image.
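  • A conversion sketch assuming OpenCV's standard luminance weighting; the application does not specify how the three channel values are combined:

```python
import cv2
import numpy as np

def to_gray(fundus_rgb: np.ndarray) -> np.ndarray:
    """Collapse the three RGB channel values of each pixel into a single
    gray value, producing the one-channel fundus grayscale image."""
    return cv2.cvtColor(fundus_rgb, cv2.COLOR_RGB2GRAY)
```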
  • The adaptive threshold method uses local thresholds within an image, instead of a single global threshold, for image calculation; it is aimed in particular at images with excessively large variations in light and shadow, or images whose color differences are not obvious within a range.
  • The normalization process calculates, by the adaptive threshold method, the average of all gray values of the pixels in the first lumen area and records it as the lumen adaptive threshold; obtains the preset maximum grayscale value, which can be set according to requirements and is preferably 255; then calculates, according to the gray normalization function in the adaptive threshold method, the normalized gray value corresponding to each pixel in the fundus grayscale image; and finally outputs the first fundus image, which has the same size as the fundus image to be identified.
  • The adaptive threshold method is used to obtain the lumen adaptive threshold from the first lumen region, and the fundus grayscale image is normalized with the lumen adaptive threshold to obtain the first fundus image; this includes:
  • S701 Acquire the adaptive threshold of the lumen through an adaptive threshold method.
  • the adaptive threshold method is a method that uses local thresholds in the image to replace global thresholds for image calculation, and calculates the average value of all gray values corresponding to pixels in the first lumen region, Record this as the adaptive threshold of the lumen.
  • S702 Acquire a gray value corresponding to each pixel in the fundus gray image and a preset maximum gray value.
  • the fundus gray-scale image includes a gray-scale value corresponding to each pixel, and a preset maximum gray-scale value is obtained.
  • the maximum gray-scale value can be set according to requirements, and is preferably 255.
  • the gray-level normalization model includes a gray-level normalization function, and the gray-level normalization value corresponding to each pixel point can be calculated by the gray-level normalization function.
  • obtaining the normalized gray value corresponding to each pixel includes:
  • f(x, y) is the gray value corresponding to the pixel with the coordinate (x, y) in the fundus gray image
  • F(x, y) is the normalized gray value corresponding to the pixel with the coordinate (x, y) in the fundus gray image
  • A is the adaptive threshold of the lumen
  • B is the maximum gray value.
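  • The normalization function itself does not survive in this text, so the sketch below assumes a common linear rescaling consistent with the listed symbols, F(x, y) = min(f(x, y) × B / A, B); this is an assumption, not the application's published formula:

```python
import numpy as np

def normalize_gray(fundus_gray: np.ndarray, lumen_threshold: float,
                   max_gray: float = 255.0) -> np.ndarray:
    """Normalize the fundus gray image with the lumen adaptive threshold.

    Assumed form: F(x, y) = min(f(x, y) * B / A, B), with A the lumen
    adaptive threshold and B the preset maximum gray value (255).
    """
    f = fundus_gray.astype(np.float32)
    normalized = np.minimum(f * max_gray / lumen_threshold, max_gray)
    return normalized.astype(np.uint8)
```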
  • S704 Join all the normalized gray values according to the positions of the pixels to obtain the first fundus image.
  • Specifically, the normalized gray values are spliced together according to the positions of their corresponding pixels to form a new image, and that image is determined as the first fundus image, thereby correcting the fundus grayscale image.
  • The present application realizes that, through the adaptive threshold method, the fundus grayscale image can be corrected so that the lumen area of the fundus choroidal blood vessels becomes more prominent and easier to identify, improving recognition accuracy and reliability.
  • The second fundus choroid image can be obtained by intercepting the first fundus image according to the coordinate area of the fovea region.
  • S90 Perform binarization processing on the second fundus choroid image according to the Niblack local threshold method to obtain a second choroid binary image, and extract a second lumen region from the second choroid binary image.
  • The second fundus choroid image is binarized to obtain the second choroidal binary image, in which the value of each pixel is 0 or 1 and can also be displayed in the two colors white (corresponding to 0) and black (corresponding to 1); the black area in the second choroidal binary image is extracted to obtain the second lumen area.
  • the binarization processing further includes a processing operation of performing binarization calculation on each pixel in the second fundus choroid image by the Niblack local threshold algorithm.
  • S100 identify a second lumen area image containing the lumen area of the fundus choroidal blood vessel from the fundus image to be identified.
  • Specifically, the second lumen area is marked on the fundus image to be identified according to its coordinate positions, so that the second lumen area image can be obtained; the second lumen area image is the image in which the lumen area of the fundus choroidal blood vessels is marked. In this way, the lumen area of the fundus choroidal blood vessels can be determined more accurately.
  • This application thus receives the fundus lumen recognition request and obtains the fundus image to be recognized; performs choroid feature extraction and edge segmentation through the U-Net-based fundus segmentation model to obtain a fundus segmented image; identifies the fovea area through the fundus fovea recognition model and cuts out the first fundus choroid image according to the fovea region; binarizes the first fundus choroid image with the Niblack local threshold algorithm and extracts the first lumen area; identifies the first lumen region image according to the first lumen area; then performs grayscale processing on the fundus image to be identified, normalizes the grayscale image with the adaptive threshold method, and repeats the interception and Niblack binarization on the corrected image to obtain the second lumen area. The U-Net-based fundus segmentation model, the fundus fovea recognition model, the two rounds of Niblack local threshold processing, and the adaptive threshold method correct the fundus image to be identified so that the lumen region of the fundus choroidal blood vessels is recognized more accurately, further improving recognition accuracy and reliability.
  • After step S100, that is, after the second lumen area image containing the lumen area of the fundus choroidal blood vessels is identified from the fundus image to be identified according to the second lumen area, the method includes:
  • S110 Calculate the area of the luminal area of the fundus choroidal blood vessel in the second luminal area image to obtain the area of the luminal area, and calculate the area of the first fundus choroidal image to obtain the area of the choroidal area;
  • S120 Calculate the ratio of the area of the lumen region to the area of the choroid region to obtain a choroidal blood vessel index.
  • The lumen region area of the region containing the lumen of the fundus choroidal blood vessels is calculated, the choroid region area is calculated at the same time, and the ratio of the lumen region area to the choroid region area is computed to obtain the choroidal blood vessel index; this provides data indicators related to the fundus choroid on which the doctor can base the next medical action.
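  • A sketch of this ratio, assuming the lumen and choroid regions are available as boolean pixel masks over the same fovea-centered crop (the mask names are hypothetical):

```python
import numpy as np

def choroidal_vessel_index(lumen_mask: np.ndarray, choroid_mask: np.ndarray) -> float:
    """Choroidal blood vessel index: lumen region area / choroid region area.

    Areas are pixel counts over the same crop, so any micrometers-per-pixel
    scale factor cancels out of the ratio.
    """
    lumen_area = float(np.count_nonzero(lumen_mask))
    choroid_area = float(np.count_nonzero(choroid_mask))
    return lumen_area / choroid_area
```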
  • a device for identifying the lumen region of choroidal blood vessels is provided.
  • the device for identifying the lumen region of choroidal blood vessels corresponds to the method for identifying the lumen region of choroidal blood vessels in the above-mentioned embodiment.
  • the device for identifying the lumen region of choroidal blood vessels includes a receiving module 11, an input module 12, an intercepting module 13, a binary module 14 and an identifying module 15. The detailed description of each functional module is as follows:
  • the receiving module 11 is configured to receive the fundus lumen identification request, and obtain the fundus image to be identified in the fundus lumen identification request;
  • the input module 12 is configured to input the fundus image to be recognized into a U-Net-based fundus segmentation model, and perform choroid feature extraction and edge segmentation on the fundus image to be recognized through the fundus segmentation model to obtain a fundus segmented image;
  • the intercepting module 13 is configured to perform image recognition on the fundus segmented image through the fundus fovea recognition model, identify the fovea area in the fundus segmented image, and intercept the fundus segmented image according to the fovea area Get the first fundus choroid image;
  • the binary module 14 is used to binarize the first fundus choroid image by using the Niblack local threshold algorithm to obtain a first choroidal binary image, and to extract the first lumen region from the first choroidal binary image;
  • the recognition module 15 is configured to recognize, from the fundus image to be recognized, a first lumen area image containing a lumen area of the fundus choroidal blood vessel according to the first lumen area.
  • the various modules in the above-mentioned choroidal vessel lumen region recognition device can be implemented in whole or in part by software, hardware, and a combination thereof.
  • the above-mentioned modules may be embedded in the form of hardware or independent of the processor in the computer equipment, or may be stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above-mentioned modules.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure diagram may be as shown in FIG. 9.
  • the computer equipment includes a processor, a memory, a network interface, and a database connected through a system bus.
  • the processor of the computer device is used to provide calculation and control capabilities.
  • the memory of the computer device includes a readable storage medium and an internal memory.
  • the readable storage medium stores an operating system, computer readable instructions, and a database.
  • the internal memory provides an environment for the operation of the operating system and computer readable instructions in the readable storage medium.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection. When the computer-readable instructions are executed by the processor, a method for identifying the lumen region of choroidal blood vessels is realized.
  • the readable storage medium provided in this embodiment includes a non-volatile readable storage medium and a volatile readable storage medium.
  • a computer device including a memory, a processor, and computer-readable instructions stored in the memory and running on the processor.
  • When the processor executes the computer-readable instructions, the method for identifying the lumen region of choroidal blood vessels in the above-mentioned embodiment is implemented.
  • one or more readable storage media storing computer readable instructions are provided.
  • The readable storage media provided in this embodiment include non-volatile readable storage media and volatile readable storage media; the readable storage medium stores computer readable instructions, and when the computer readable instructions are executed by one or more processors, the one or more processors implement the method for identifying the lumen region of the choroidal blood vessels in the above-mentioned embodiment.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.

Abstract

The present application relates to the field of artificial intelligence, and provided in the application are a method and an apparatus for recognition of a luminal area in choroidal vessels, a device, and a medium, the method comprising: acquiring a fundus image to be recognized; inputting the image into a U-Net based fundus segmentation model, and by means of the fundus segmentation model, performing choroid feature extraction and edge segmentation on the fundus image to be recognized, to obtain a segmented fundus image; by means of a fundus fovea centralis recognition model, recognizing a fovea centralis area in the segmented fundus image, and on the basis of the fovea centralis area, cutting a first fundus choroid image from the segmented fundus image; by means of a Niblack local threshold algorithm, performing binarization processing on the first fundus choroid image, to obtain a first choroid binary image, and extracting a first luminal area from the first choroid binary image; and recognizing a first luminal area image. In the present application, automatic recognition of a luminal area in fundus choroidal vessels in a fundus image is made possible. The present application is suitable for such fields as smart medicine, and is able to further promote the construction of smart cities.

Description

Method, device, equipment and medium for identifying the lumen region of choroidal blood vessels
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on July 31, 2020, with application number 202010761238.1 and the invention title "Method, Device, Equipment, and Medium for Identifying the Lumen Region of Choroidal Blood Vessels", the entire content of which is incorporated in this application by reference.
Technical field
This application relates to the field of artificial intelligence image processing, and in particular to a method, device, equipment, and medium for identifying the lumen region of choroidal blood vessels.
Background
The fundus choroid is located between the retina and the sclera. It is a soft, smooth, elastic, blood-vessel-rich brown membrane that begins at the serrated edge (ora serrata) at the front and ends around the optic nerve at the back. Its inner surface is joined to the pigment epithelial layer of the retina by a very smooth glassy membrane, and its outer surface borders the sclera across a potential gap; fine fibrous lamellae of the perichoroidal layer extend into and blend with the brown plate (lamina fusca) of the sclera, and blood vessels and nerves pass through it. The choroid is mainly composed of blood vessels, which provide oxygen and blood to the retina.
The inventor realizes that, in the field of medicine, doctors often need to rely on experience to manually identify the lumen area of the fundus choroidal blood vessels in collected fundus photographs, determine the characteristics of the fundus choroid from it, and then carry out other medical actions based on the identified characteristics. Because manual recognition is affected by objective factors such as the high demands it places on a doctor's experience, the low resolution of acquisition equipment, and light ghosting, the manual recognition of the characteristics of the fundus choroidal vessel lumen region is biased and its accuracy is low.
Summary of the invention
This application provides a method, device, computer equipment, and storage medium for identifying the lumen region of choroidal blood vessels, realizing automatic identification of the lumen region of the fundus choroidal blood vessels in fundus images. This application is applicable to fields such as smart transportation and smart healthcare, can further promote the construction of smart cities, reduces the cost of manual identification, and improves the accuracy and reliability of identification.
A method for identifying the lumen region of choroidal blood vessels, including:
receiving a fundus lumen identification request, and acquiring the fundus image to be identified in the fundus lumen identification request;
inputting the fundus image to be recognized into a U-Net-based fundus segmentation model, and performing choroid feature extraction and edge segmentation on the fundus image to be recognized through the fundus segmentation model to obtain a fundus segmented image;
performing image recognition on the fundus segmented image through a fundus fovea recognition model, identifying the fovea area in the fundus segmented image, and cutting out the first fundus choroid image from the fundus segmented image according to the fovea area;
binarizing the first fundus choroid image through the Niblack local threshold algorithm to obtain a first choroidal binary image, and extracting the first lumen region from the first choroidal binary image;
according to the first lumen region, identifying, from the fundus image to be identified, a first lumen region image containing the lumen region of the fundus choroidal blood vessels.
A device for identifying the lumen region of choroidal blood vessels, comprising:
a receiving module, configured to receive a fundus lumen identification request and obtain the fundus image to be identified in the fundus lumen identification request;
an input module, configured to input the fundus image to be recognized into a U-Net-based fundus segmentation model and perform choroid feature extraction and edge segmentation on the fundus image to be recognized through the fundus segmentation model to obtain a fundus segmented image;
an interception module, configured to perform image recognition on the fundus segmented image through a fundus fovea recognition model, identify the fovea area in the fundus segmented image, and cut out the first fundus choroid image from the fundus segmented image according to the fovea area;
a binary module, configured to binarize the first fundus choroid image through the Niblack local threshold algorithm to obtain a first choroidal binary image, and extract the first lumen region from the first choroidal binary image;
a recognition module, configured to recognize, from the fundus image to be identified according to the first lumen region, a first lumen region image containing the lumen region of the fundus choroidal blood vessels.
A computer device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, the processor implementing the following steps when executing the computer-readable instructions:
receiving a fundus lumen identification request, and acquiring the fundus image to be identified in the fundus lumen identification request;
inputting the fundus image to be recognized into a U-Net-based fundus segmentation model, and performing choroid feature extraction and edge segmentation on the fundus image to be recognized through the fundus segmentation model to obtain a fundus segmented image;
performing image recognition on the fundus segmented image through a fundus fovea recognition model, identifying the fovea area in the fundus segmented image, and cutting out the first fundus choroid image from the fundus segmented image according to the fovea area;
binarizing the first fundus choroid image through the Niblack local threshold algorithm to obtain a first choroidal binary image, and extracting the first lumen region from the first choroidal binary image;
according to the first lumen region, identifying, from the fundus image to be identified, a first lumen region image containing the lumen region of the fundus choroidal blood vessels.
One or more readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:
receiving a fundus lumen recognition request, and acquiring a to-be-recognized fundus image from the fundus lumen recognition request;
inputting the to-be-recognized fundus image into a U-Net-based fundus segmentation model, and performing choroid feature extraction and edge segmentation on the to-be-recognized fundus image through the fundus segmentation model to obtain a fundus segmented image;
performing image recognition on the fundus segmented image through a fundus fovea recognition model, recognizing a fovea region in the fundus segmented image, and cropping a first fundus choroid image out of the fundus segmented image according to the fovea region;
binarizing the first fundus choroid image through a Niblack local threshold algorithm to obtain a first choroid binary image, and extracting a first lumen region from the first choroid binary image; and
recognizing, according to the first lumen region, a first lumen region image containing the lumen region of the fundus choroidal vessels from the to-be-recognized fundus image.
With the method, apparatus, computer device, and storage medium for recognizing the lumen region of choroidal vessels provided by this application, a fundus lumen recognition request is received and the to-be-recognized fundus image in the request is acquired; the to-be-recognized fundus image is input into a U-Net-based fundus segmentation model, and choroid feature extraction and edge segmentation are performed on it through the fundus segmentation model to obtain a fundus segmented image; image recognition is performed on the fundus segmented image through a fundus fovea recognition model, the fovea region in the fundus segmented image is recognized, and a first fundus choroid image is cropped out of the fundus segmented image according to the fovea region; the first fundus choroid image is binarized through the Niblack local threshold algorithm to obtain a first choroid binary image, and a first lumen region is extracted from the first choroid binary image; and, according to the first lumen region, a first lumen region image containing the lumen region of the fundus choroidal vessels is recognized from the to-be-recognized fundus image. Automatic recognition of the lumen region of the fundus choroidal vessels in a fundus image is thereby realized: through the U-Net-based fundus segmentation model, the fundus fovea recognition model, and the Niblack local threshold algorithm, the lumen region of the fundus choroidal vessels can be recognized quickly and accurately so as to determine the characteristics of the fundus choroid, which reduces the cost of manual recognition and improves recognition accuracy and reliability.
The details of one or more embodiments of the present application are set forth in the drawings and the description below; other features and advantages of the present application will become apparent from the description, the drawings, and the claims.

Description of the Drawings

To explain the technical solutions of the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of an application environment of the method for recognizing the lumen region of choroidal vessels in an embodiment of the present application;
FIG. 2 is a flowchart of the method for recognizing the lumen region of choroidal vessels in an embodiment of the present application;
FIG. 3 is a flowchart of the method for recognizing the lumen region of choroidal vessels in another embodiment of the present application;
FIG. 4 is a flowchart of step S20 of the method for recognizing the lumen region of choroidal vessels in an embodiment of the present application;
FIG. 5 is a flowchart of step S203 of the method for recognizing the lumen region of choroidal vessels in an embodiment of the present application;
FIG. 6 is a flowchart of step S30 of the method for recognizing the lumen region of choroidal vessels in an embodiment of the present application;
FIG. 7 is a flowchart of step S70 of the method for recognizing the lumen region of choroidal vessels in an embodiment of the present application;
FIG. 8 is a schematic block diagram of the apparatus for recognizing the lumen region of choroidal vessels in an embodiment of the present application;
FIG. 9 is a schematic diagram of a computer device in an embodiment of the present application.
Detailed Description

The technical solutions in the embodiments of the present application will be described clearly and completely below in conjunction with the accompanying drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of this application.

The method for recognizing the lumen region of choroidal vessels provided in this application can be applied in the application environment shown in FIG. 1, in which a client (computer device) communicates with a server through a network. The client (computer device) includes, but is not limited to, personal computers, notebook computers, smart phones, tablet computers, cameras, and portable wearable devices. The server can be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in FIG. 2, a method for recognizing the lumen region of choroidal vessels is provided, whose technical solution mainly includes the following steps S10-S50:

S10: a fundus lumen recognition request is received, and the to-be-recognized fundus image in the fundus lumen recognition request is acquired.

Understandably, an OCT scan image of the fundus is acquired by an OCT device; acquiring the OCT scan image in the enhanced mode of the OCT device captures more morphological features of the fundus choroid. After the OCT scan image of the fundus has been acquired, and when the lumen region of the fundus choroidal vessels needs to be recognized in that image, the fundus lumen recognition request is triggered. The fundus lumen recognition request contains the to-be-recognized fundus image, which is the acquired OCT scan image of the fundus in which the lumen region of the fundus choroidal vessels is to be recognized. The trigger mode can be set as required, for example triggering automatically after the to-be-recognized fundus image has been acquired, or triggering by clicking a confirm button after the acquisition.
The to-be-recognized fundus image is a multi-channel color fundus photograph or a black-and-white fundus photograph. In one embodiment, acquiring the to-be-recognized fundus image in the fundus lumen recognition request includes preprocessing the acquired OCT scan image of the fundus (filtering for denoising and/or image enhancement), for example Gaussian filter denoising, gamma transform correction, or Laplacian correction, and then taking the preprocessed OCT scan image of the fundus as the to-be-recognized fundus image; this better brings out the vessel information of the fundus choroid.
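For illustration, a minimal preprocessing sketch using OpenCV with Gaussian denoising followed by a gamma transform; the kernel size and gamma value are illustrative assumptions, not values fixed by this application.

```python
import cv2
import numpy as np

def preprocess_fundus(img: np.ndarray, gamma: float = 0.8) -> np.ndarray:
    """Denoise an 8-bit OCT fundus scan and enhance it with a gamma transform.

    The 5x5 kernel and gamma=0.8 are illustrative defaults, not values
    prescribed by this application.
    """
    # Gaussian filter denoising
    denoised = cv2.GaussianBlur(img, (5, 5), 0)
    # Gamma transform correction via a lookup table
    table = np.array([((i / 255.0) ** gamma) * 255 for i in range(256)],
                     dtype=np.uint8)
    return cv2.LUT(denoised, table)
```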
S20: the to-be-recognized fundus image is input into a U-Net-based fundus segmentation model, and choroid feature extraction and edge segmentation are performed on the to-be-recognized fundus image through the fundus segmentation model to obtain a fundus segmented image.

Understandably, the fundus segmentation model is a trained convolutional neural network model based on the U-Net model; that is, the network structure of the fundus segmentation model includes, and improves on, the network structure of the U-Net model. The U-Net model is well suited to image segmentation and can be trained end to end with a relatively small training set. The fundus segmentation model extracts the choroid features of the to-be-recognized fundus image, the choroid features being features of the texture and shape information of the choroid layer and its surroundings in the fundus choroid. Extracting the choroid features means using successive down-sampling layers of convolution and pooling to extract the feature information in the to-be-recognized fundus image and progressively map it to higher dimensions, obtaining the feature vector array with the highest dimension and the richest feature information corresponding to the to-be-recognized fundus image, that is, a high-dimensional feature map. The edge segmentation process is as follows: first, the high-dimensional feature map is up-sampled through successive deconvolution up-sampling layers to a fundus output image of the same size as the to-be-recognized fundus image; second, edge detection is added during the up-sampling and the edge feature information is strengthened; finally, image segmentation is performed on the fundus output image with strengthened edge feature information (which increases segmentation precision) to obtain the fundus segmented image.
In one embodiment, as shown in FIG. 4, before step S20, that is, before the to-be-recognized fundus image is input into the U-Net-based fundus segmentation model, the method includes:

S201: fundus image samples are obtained; each fundus image sample is associated with one edge line label and one region label.

Understandably, the fundus image samples are collected historical OCT scan images containing the fundus choroid layer, or such OCT scan images after preprocessing. Each fundus image sample is associated with one edge line label, which is a manually annotated set of coordinate positions of points corresponding to the upper edge line and the lower edge line of the fundus choroid layer contained in the fundus image sample. Each fundus image sample is also associated with one region label, which is a manually annotated set of coordinate positions corresponding to the extent of the fundus choroid layer region contained in the fundus image sample.
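For illustration, one way such a labeled sample might be represented in code; the field names and coordinates here are illustrative assumptions, not a format fixed by this application.

```python
# Illustrative structure for one labeled sample; all names are assumptions.
sample = {
    "image_path": "fundus_oct_0001.png",
    "edge_line_label": {
        "upper_edge": [(x, 120) for x in range(0, 512, 8)],  # (x, y) points
        "lower_edge": [(x, 260) for x in range(0, 512, 8)],
    },
    # Coordinates falling inside the annotated choroid layer region:
    "region_label": [(130, 140), (131, 140), (132, 141)],
}
```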
S202: the fundus image samples are input into a U-Net-based convolutional neural network model containing initial parameters.

Understandably, the fundus image samples are input into the convolutional neural network model, which is a model built on the U-Net model and includes the initial parameters; the initial parameters include the network structure of the U-Net model. In one embodiment, transfer learning is used. Transfer learning (TL) applies the parameters of a model already trained in another field to the task in this field: all parameters of a trained U-Net model are obtained and used as the initial parameters, which reduces the number of model iterations, simplifies the training process, and improves training efficiency.

S203: the choroid features in the fundus image samples are extracted through the convolutional neural network model, and the region recognition result, fundus feature vector maps, and fused feature vector map output by the convolutional neural network model according to the choroid features are obtained.
Understandably, the choroid features in the fundus image sample are extracted through the convolutional neural network model. The convolutional neural network model includes at least four down-sampling layers, each consisting of convolution layers and a pooling layer. The down-sampling layers extract the choroid features at multiple scales through different convolution kernels and pooling parameters: the convolution kernels of each down-sampling layer differ, as do the pooling parameters of each down-sampling layer's pooling layer. Each down-sampling layer outputs a choroid feature map corresponding to that layer; the choroid feature map output by the last down-sampling layer is convolved to obtain the high-dimensional feature map, that is, the feature vector array with the highest dimension and the richest feature information corresponding to the fundus image sample. The choroid feature maps corresponding to the down-sampling layers, together with the high-dimensional feature map, are determined as the fundus choroid feature maps corresponding to the fundus image sample.

The convolutional neural network model includes up-sampling layers in one-to-one correspondence with the down-sampling layers, that is, the model contains the same number of down-sampling layers and up-sampling layers. The fundus choroid feature maps are repeatedly deconvolved, that is, up-sampled, until the fundus output image is output. Each up-sampling layer outputs a fundus feature map, and the fundus feature maps output by the up-sampling layers are determined as the fundus feature vector maps; the fundus feature maps output by two successive up-sampling layers are fused to obtain a fused feature vector map. Choroid region recognition is performed on the fundus feature vector maps to obtain a region recognition result, which includes the recognition probability value corresponding to each pixel in the fundus output image. The pixels whose recognition probability values exceed a preset probability threshold are marked in the fundus output image, yielding the recognition region of the region recognition result, and the recognition region together with all the recognition probability values is determined as the region recognition result.
In one embodiment, as shown in FIG. 5, step S203, that is, extracting the choroid features in the fundus image sample through the convolutional neural network model and obtaining the region recognition result, the fundus feature vector maps, and the fused feature vector map output by the convolutional neural network model according to the choroid features, includes:

S2031: the choroid features are extracted from the fundus image sample through the convolutional neural network model to obtain fundus choroid feature maps.

Understandably, the convolutional neural network model includes a first down-sampling layer, a second down-sampling layer, a third down-sampling layer, and a fourth down-sampling layer, each of which includes two convolution layers with 3×3 kernels, two activation layers, and one max-pooling layer with 2×2 pooling parameters. The fundus image sample is input into the first down-sampling layer and convolved to obtain a 64-dimensional first choroid feature map; the first choroid feature map is input into the second down-sampling layer and convolved to obtain a 128-dimensional second choroid feature map; the second choroid feature map is input into the third down-sampling layer and convolved to obtain a 256-dimensional third choroid feature map; and the third choroid feature map is input into the fourth down-sampling layer and convolved to obtain a 512-dimensional fourth choroid feature map. The fourth choroid feature map is passed through one convolution layer with a 3×3 kernel and one activation layer to obtain the 1024-dimensional high-dimensional feature map. The first choroid feature map, the second choroid feature map, the third choroid feature map, the fourth choroid feature map, and the high-dimensional feature map are determined as the fundus choroid feature maps.
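For illustration, a minimal PyTorch sketch of this down-sampling path, assuming single-channel input and ReLU activation layers; it is a sketch of the described structure, not the filed implementation.

```python
import torch
import torch.nn as nn

def double_conv(in_ch: int, out_ch: int) -> nn.Sequential:
    # Two 3x3 convolution layers, each followed by an activation layer.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class Encoder(nn.Module):
    """Down-sampling path: 64 -> 128 -> 256 -> 512 dims, then a 1024-dim map."""
    def __init__(self, in_ch: int = 1):
        super().__init__()
        self.down1 = double_conv(in_ch, 64)
        self.down2 = double_conv(64, 128)
        self.down3 = double_conv(128, 256)
        self.down4 = double_conv(256, 512)
        self.pool = nn.MaxPool2d(2)                      # 2x2 max pooling
        self.bottleneck = nn.Sequential(                 # 3x3 conv + activation
            nn.Conv2d(512, 1024, kernel_size=3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        c1 = self.down1(x)                 # 64-dim first choroid feature map
        c2 = self.down2(self.pool(c1))     # 128-dim second choroid feature map
        c3 = self.down3(self.pool(c2))     # 256-dim third choroid feature map
        c4 = self.down4(self.pool(c3))     # 512-dim fourth choroid feature map
        high = self.bottleneck(self.pool(c4))  # 1024-dim high-dimensional map
        return c1, c2, c3, c4, high
```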
S2032: the fundus choroid feature maps are up-sampled and spliced through the convolutional neural network model to obtain the fundus feature vector maps.

Understandably, the up-sampling and splicing deconvolve the fundus choroid feature maps to generate intermediate fundus feature maps of the same dimension as the adjacent choroid feature maps, which are then spliced with those choroid feature maps. Because splicing doubles the dimension, a further convolution is needed to obtain the fundus feature map and to ensure that the dimension of the processed fundus feature map equals the dimension before the splicing operation, so that the next round of up-sampling and splicing can proceed until the dimension finally equals that of the fundus image sample. The fundus choroid feature maps are thus up-sampled and spliced, and the fundus feature vector maps are finally output.

The convolutional neural network model includes a first up-sampling layer, a second up-sampling layer, a third up-sampling layer, and a fourth up-sampling layer. The fourth up-sampling layer deconvolves the high-dimensional feature map to obtain a 512-dimensional fourth intermediate fundus feature map, which is spliced with the 512-dimensional fourth choroid feature map and then passed through one convolution layer with a 3×3 kernel and one activation layer to obtain a 512-dimensional fourth fundus feature map. The third up-sampling layer deconvolves the fourth fundus feature map to obtain a 256-dimensional third intermediate fundus feature map, which is spliced with the 256-dimensional third choroid feature map and then passed through one convolution layer with a 3×3 kernel and one activation layer to obtain a 256-dimensional third fundus feature map. The second up-sampling layer deconvolves the third fundus feature map to obtain a 128-dimensional second intermediate fundus feature map, which is spliced with the 128-dimensional second choroid feature map and then passed through one convolution layer with a 3×3 kernel and one activation layer to obtain a 128-dimensional second fundus feature map. The first up-sampling layer deconvolves the second fundus feature map to obtain a 64-dimensional first intermediate fundus feature map, which is spliced with the 64-dimensional first choroid feature map and then passed through one convolution layer with a 3×3 kernel and one activation layer to obtain a 64-dimensional first fundus feature map. The first fundus feature map is convolved through one convolution layer with a 1×1 kernel to obtain the fundus output image, and the fourth fundus feature map, the third fundus feature map, the second fundus feature map, the first fundus feature map, and the fundus output image are determined as the fundus feature vector maps.
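For illustration, a matching PyTorch sketch of this up-sampling path, usable with the Encoder above; transposed convolutions stand in for the deconvolutions and ReLU for the activation layers, which are assumptions rather than the filed design.

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Up-sampling path: deconvolve, splice with the skip map, then 3x3 conv."""
    def __init__(self):
        super().__init__()
        self.up4 = nn.ConvTranspose2d(1024, 512, kernel_size=2, stride=2)
        self.conv4 = nn.Sequential(nn.Conv2d(1024, 512, 3, padding=1), nn.ReLU(True))
        self.up3 = nn.ConvTranspose2d(512, 256, kernel_size=2, stride=2)
        self.conv3 = nn.Sequential(nn.Conv2d(512, 256, 3, padding=1), nn.ReLU(True))
        self.up2 = nn.ConvTranspose2d(256, 128, kernel_size=2, stride=2)
        self.conv2 = nn.Sequential(nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(True))
        self.up1 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.conv1 = nn.Sequential(nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(True))
        self.out = nn.Conv2d(64, 1, kernel_size=1)  # 1x1 conv -> fundus output

    def forward(self, c1, c2, c3, c4, high):
        f4 = self.conv4(torch.cat([self.up4(high), c4], dim=1))  # 512-dim
        f3 = self.conv3(torch.cat([self.up3(f4), c3], dim=1))    # 256-dim
        f2 = self.conv2(torch.cat([self.up2(f3), c2], dim=1))    # 128-dim
        f1 = self.conv1(torch.cat([self.up1(f2), c1], dim=1))    # 64-dim
        return f1, f2, f3, f4, self.out(f1)
```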
S2033: choroid region recognition is performed on the fundus feature vector maps through the convolutional neural network model to obtain the region recognition result, and at the same time fusion processing is performed on the fundus feature vector maps to obtain the fused feature vector map.

Understandably, the recognition probability value corresponding to each pixel in the fundus output image is calculated according to the fundus feature vector maps; the pixels whose recognition probability values exceed the preset probability threshold are marked in the fundus output image, yielding the recognition region of the region recognition result, and the recognition region together with all the recognition probability values is determined as the region recognition result. For the fusion, the first fundus feature map and the second fundus feature map are superimposed to obtain a first to-be-fused feature map; the second fundus feature map and the third fundus feature map are superimposed to obtain a second to-be-fused feature map; the third fundus feature map and the fourth fundus feature map are superimposed to obtain a third to-be-fused feature map; and the first to-be-fused feature map, the second to-be-fused feature map, the third to-be-fused feature map, and the fourth fundus feature map are fused to obtain the fused feature vector map. The fusion is feature fusion, that is, the extracted feature information is analyzed, processed, and integrated to obtain a fused feature vector map whose common features are very pronounced; the size of the fused feature vector map is consistent with the size of the fundus output image.
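For illustration, a minimal PyTorch sketch of this pairwise superposition and fusion; because the application does not fix how maps of different dimensions are superimposed, the 1×1 projections and bilinear resizing used below to reconcile channel counts and spatial sizes are assumptions.

```python
import torch.nn as nn
import torch.nn.functional as F

class Fusion(nn.Module):
    """Superimpose successive fundus feature maps and fuse them.

    The 1x1 projections to a common channel count and the bilinear resizing
    to a common spatial size are assumptions, not details fixed by the filing.
    """
    def __init__(self, chs=(64, 128, 256, 512), out_ch=64):
        super().__init__()
        self.proj = nn.ModuleList([nn.Conv2d(c, out_ch, 1) for c in chs])

    def forward(self, f1, f2, f3, f4):
        size = f1.shape[2:]                       # match the output size
        p1, p2, p3, p4 = [
            F.interpolate(proj(f), size=size, mode="bilinear",
                          align_corners=False)
            for proj, f in zip(self.proj, (f1, f2, f3, f4))]
        t1, t2, t3 = p1 + p2, p2 + p3, p3 + p4    # to-be-fused feature maps
        return t1 + t2 + t3 + p4                  # fused feature vector map
```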
In this way, the present application extracts the choroid features from the fundus image sample through the convolutional neural network model to obtain the fundus choroid feature maps; up-samples and splices the fundus choroid feature maps through the convolutional neural network model to obtain the fundus feature vector maps; and performs choroid region recognition on the fundus feature vector maps through the convolutional neural network model to obtain the region recognition result while fusing the fundus feature vector maps to obtain the fused feature vector map. This improves recognition accuracy, reduces the number of training iterations, and improves training efficiency.

S204: edge detection is performed on the fundus feature vector maps through the convolutional neural network model to obtain an edge result, and at the same time region segmentation is performed on the fused feature vector map to obtain a region segmentation result.

Understandably, edge detection identifies the points in the fundus feature vector maps where the feature vectors change markedly. By performing edge detection on the fundus feature vector maps, the coordinate points of the upper edge line and of the lower edge line in the fundus feature vector maps can be identified, and the probability value of each coordinate point belonging to the upper edge line and the probability value of each coordinate point belonging to the lower edge line are calculated. The coordinate points of the upper and lower edge lines, together with these probability values, are determined as the edge result. At the same time, region segmentation is performed on the fused feature vector map: according to the feature vector corresponding to each pixel in the fused feature vector map, the probability that the coordinate of each pixel falls within the fundus choroid layer region is calculated, and the coordinates confirmed as belonging to the fundus choroid layer region are segmented out to obtain the region segmentation result.

S205: a classification loss value is determined according to the region recognition result and the region label; an edge loss value is determined according to the edge result and the edge line label; and a segmentation loss value is determined according to the region segmentation result and the region label.

Understandably, the region recognition result and the region label are input into the classification loss function of the convolutional neural network model, which calculates the classification loss value; the classification loss function can be set as required, for example a cross-entropy loss function. The edge result and the edge line label are input into the edge loss function of the convolutional neural network model, which calculates the edge loss value, and the region segmentation result and the region label are input into the segmentation loss function of the convolutional neural network model, which calculates the segmentation loss value.

S206: a total loss value is determined according to the classification loss value, the edge loss value, and the segmentation loss value.
Understandably, the classification loss value, the edge loss value, and the segmentation loss value are input into the total loss function of the convolutional neural network model, and the total loss value is calculated through the total loss function. The total loss value is:

L = λ₁L₁ + λ₂L₂ + λ₃L₃

where λ₁ is the weight of the classification loss value; L₁ is the classification loss value; λ₂ is the weight of the edge loss value; L₂ is the edge loss value; λ₃ is the weight of the segmentation loss value; and L₃ is the segmentation loss value.
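For illustration, the weighted total loss as a one-line function; the weights shown are illustrative, not values fixed by this application.

```python
def total_loss(l_cls, l_edge, l_seg, w=(1.0, 0.5, 1.0)):
    """L = λ1*L1 + λ2*L2 + λ3*L3; the default weights are illustrative."""
    return w[0] * l_cls + w[1] * l_edge + w[2] * l_seg
```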
S207: when the total loss value has not reached a preset convergence condition, the initial parameters of the convolutional neural network model are iteratively updated until the total loss value reaches the preset convergence condition, whereupon the converged convolutional neural network model is recorded as the fundus segmentation model.

Understandably, the convergence condition can be that the total loss value is very small and no longer decreases after 1000 calculations; that is, when the total loss value is very small and no longer decreases after 1000 calculations, training is stopped and the converged convolutional neural network model is recorded as the trained fundus segmentation model. The convergence condition can also be that the total loss value is less than a set threshold; that is, when the total loss value is less than the set threshold, training is stopped and the converged convolutional neural network model is recorded as the trained fundus segmentation model. In this way, while the total loss value has not reached the preset convergence condition, the initial parameters of the convolutional neural network model are updated iteratively, and the step of extracting the choroid features from the fundus image samples through the convolutional neural network model and obtaining the region recognition result, the fundus feature vector maps, and the fused feature vector map is triggered again, so that the results keep approaching the accurate results and the recognition accuracy grows higher and higher.
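For illustration, a compact training-loop sketch with a threshold-based stopping rule; the loss callables, data loader, weights, and threshold are placeholders for whatever an implementation supplies, so the whole loop is a sketch under those assumptions.

```python
def train(model, loader, optimizer, losses, weights=(1.0, 0.5, 1.0),
          threshold=1e-3, max_epochs=100):
    """Iteratively update parameters until L = Σ λi·Li falls below a threshold
    (one possible form of the preset convergence condition).

    `losses` is a (classification, edge, segmentation) triple of callables;
    every name here is illustrative, not the filed implementation.
    """
    cls_fn, edge_fn, seg_fn = losses
    loss = None
    for epoch in range(max_epochs):
        for image, region_label, edge_label in loader:
            optimizer.zero_grad()
            region_out, edge_out, seg_out = model(image)
            loss = (weights[0] * cls_fn(region_out, region_label)
                    + weights[1] * edge_fn(edge_out, edge_label)
                    + weights[2] * seg_fn(seg_out, region_label))
            loss.backward()
            optimizer.step()
        if loss is not None and loss.item() < threshold:
            break   # preset convergence condition reached
    return model
```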
S30: image recognition is performed on the fundus segmented image through a fundus fovea recognition model, the fovea region in the fundus segmented image is recognized, and a first fundus choroid image is cropped out of the fundus segmented image according to the fovea region.

Understandably, image recognition is performed on the fundus segmented image through the fundus fovea recognition model, which is a trained neural network model. The network structure of the fundus fovea recognition model can be set as required; for example, it can be the network structure of a YOLO (You Only Look Once) model or of an SSD (Single Shot MultiBox Detector) model. Because the fovea region occupies a small area in the fundus segmented image, the network structure of the fundus fovea recognition model is preferably that of an SSD (Single Shot MultiBox Detector) model, since the SSD network structure is well suited to recognizing small objects. The first fundus choroid image is cropped from the fundus segmented image according to the recognized fovea region and preset size parameters; the size parameters can be set as required, for example a length of 6000 μm and a width of 1500 μm centered on the fovea region.

In one embodiment, as shown in FIG. 6, step S30, that is, performing image recognition on the fundus segmented image through the fundus fovea recognition model, recognizing the fovea region in the fundus segmented image, and cropping the first fundus choroid image out of the fundus segmented image according to the fovea region, includes:
S301: the fundus segmented image is input into an SSD-based fundus fovea recognition model.

Understandably, the fundus fovea recognition model is a trained neural network model based on an SSD model, and the fundus segmented image is input into the fundus fovea recognition model.

S302: fovea features are extracted through the fundus fovea recognition model by means of the SSD algorithm, and target detection is performed according to the fovea features to obtain the fovea region.

Understandably, the SSD (Single Shot MultiBox Detector) algorithm extracts the fovea features from the fundus segmented image through feature maps of different scales, the fovea features being features of the fovea region of the fundus choroid layer. Target detection is then performed according to the fovea features, that is, candidate regions are predicted from the extracted fovea features using clearly distinguished aspect ratios, and the fovea region containing the fundus choroid fovea is finally recognized.
S303: taking the fovea region as the center, the first fundus choroid image is cropped from the fundus segmented image according to preset size parameters.

Understandably, the size parameters can be set as required, for example 6000×1500. Taking the fovea region as the center, the first fundus choroid image is cropped from the fundus segmented image according to the preset size parameters, for instance a window of 6000 μm in length and 1500 μm in width centered on the fovea region.
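For illustration, a minimal cropping sketch; the microns-per-pixel scale used to convert the 6000 μm × 1500 μm window into pixels is an assumption, as is the pixel fovea center passed in.

```python
import numpy as np

def crop_around_fovea(img: np.ndarray, center_xy, um_per_px: float = 11.7,
                      width_um: int = 6000, height_um: int = 1500) -> np.ndarray:
    """Crop a width_um x height_um window centered on the fovea.

    um_per_px is an illustrative scan resolution, not a value fixed here.
    """
    cx, cy = center_xy
    half_w = int(round(width_um / um_per_px / 2))
    half_h = int(round(height_um / um_per_px / 2))
    x0, x1 = max(cx - half_w, 0), min(cx + half_w, img.shape[1])
    y0, y1 = max(cy - half_h, 0), min(cy + half_h, img.shape[0])
    return img[y0:y1, x0:x1]
```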
In this way, the present application inputs the fundus segmented image into the SSD-based fundus fovea recognition model, extracts the fovea features through the fundus fovea recognition model by means of the SSD algorithm, performs target detection according to the fovea features to obtain the fovea region, and crops the first fundus choroid image from the fundus segmented image according to the preset size parameters with the fovea region as the center. The fovea region can therefore be recognized automatically, and fundus choroid images of the same size can be cropped out, which facilitates subsequent recognition and improves recognition accuracy.
S40: the first fundus choroid image is binarized through the Niblack local threshold algorithm to obtain a first choroid binary image, and a first lumen region is extracted from the first choroid binary image.

Understandably, the first fundus choroid image is binarized to obtain the values of the first choroid binary image; the value of each pixel in the first choroid binary image is 0 or 1, which can also be displayed as white (corresponding to 0) or black (corresponding to 1). The black region in the first choroid binary image is extracted to obtain the first lumen region.

The binarization includes the operation of computing a binary value for each pixel in the first fundus choroid image through the Niblack local threshold algorithm, which binarizes by comparing each pixel of the image with a threshold calculated over a local region.
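For illustration, a minimal sketch of Niblack binarization using the standard local threshold T = m + k·s over a sliding window; the window size and k below are common illustrative defaults, not values fixed by this application.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def niblack_binarize(gray: np.ndarray, window: int = 25, k: float = -0.2):
    """Binarize with the Niblack local threshold T = mean + k * std.

    Pixels darker than the local threshold (vessel lumina) map to 1 (black),
    the rest to 0 (white). window=25 and k=-0.2 are illustrative defaults.
    """
    g = gray.astype(np.float64)
    mean = uniform_filter(g, size=window)
    sq_mean = uniform_filter(g * g, size=window)
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
    threshold = mean + k * std
    return (g < threshold).astype(np.uint8)   # 1 = lumen (black), 0 = white
```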
S50: according to the first lumen region, a first lumen region image containing the lumen region of the fundus choroidal vessels is recognized from the to-be-recognized fundus image.

Understandably, the to-be-recognized fundus image is marked according to the coordinate positions of the first lumen region, so that the first lumen region image can be obtained; the first lumen region image is an image in which the lumen region of the fundus choroidal vessels is marked.

In this way, the present application receives a fundus lumen recognition request and acquires the to-be-recognized fundus image in the request; inputs the to-be-recognized fundus image into the U-Net-based fundus segmentation model, and performs choroid feature extraction and edge segmentation on it through the fundus segmentation model to obtain the fundus segmented image; performs image recognition on the fundus segmented image through the fundus fovea recognition model, recognizes the fovea region in the fundus segmented image, and crops the first fundus choroid image out of the fundus segmented image according to the fovea region; binarizes the first fundus choroid image through the Niblack local threshold algorithm to obtain the first choroid binary image, and extracts the first lumen region from the first choroid binary image; and recognizes, according to the first lumen region, the first lumen region image containing the lumen region of the fundus choroidal vessels from the to-be-recognized fundus image. Automatic recognition of the lumen region of the fundus choroidal vessels in a fundus image is thereby realized: through the U-Net-based fundus segmentation model, the fundus fovea recognition model, and the Niblack local threshold algorithm, the lumen region of the fundus choroidal vessels can be recognized quickly and accurately so as to determine the characteristics of the fundus choroid, which reduces the cost of manual recognition and improves recognition accuracy and reliability.
In one embodiment, as shown in FIG. 3, after step S50, that is, after the first lumen region has been extracted from the first choroid binary image, the method further includes:

S60: grayscale processing is performed on the to-be-recognized fundus image to obtain a fundus grayscale image.

Understandably, the multi-channel to-be-recognized fundus image is converted to grayscale to obtain the single-channel fundus grayscale image. For example, the to-be-recognized fundus image includes three RGB (Red, Green, Blue) channel images; the values corresponding to the same pixel in the channel images are converted into a single grayscale value for that pixel, yielding a one-channel image, namely the fundus grayscale image.
S70: a lumen adaptive threshold in the first lumen region is obtained through an adaptive threshold method, and the fundus grayscale image is normalized according to the lumen adaptive threshold to obtain a first fundus image.

Understandably, the adaptive threshold method is a method of image calculation that replaces a global threshold with local thresholds in the image, aimed specifically at images with excessive light-and-shadow variation or images whose color differences within a range are not obvious. The normalization process is as follows: through the adaptive threshold method, the average of all grayscale values corresponding to the pixels in the first lumen region is calculated and recorded as the lumen adaptive threshold; a preset maximum grayscale value is obtained, which can be set as required and is preferably 255; then the grayscale normalized value corresponding to each pixel in the fundus grayscale image is calculated according to the grayscale normalization function of the adaptive threshold method; and finally the first fundus image is output, the first fundus image being the same size as the to-be-recognized fundus image.
In one embodiment, as shown in FIG. 7, step S70, that is, obtaining the lumen adaptive threshold in the first lumen region through the adaptive threshold method and normalizing the fundus grayscale image according to the lumen adaptive threshold to obtain the first fundus image, includes:

S701: the lumen adaptive threshold is obtained through the adaptive threshold method.

Understandably, the adaptive threshold method replaces a global threshold with local thresholds in the image for image calculation; the average of all grayscale values corresponding to the pixels in the first lumen region is calculated and recorded as the lumen adaptive threshold.

S702: the grayscale value corresponding to each pixel in the fundus grayscale image and a preset maximum grayscale value are obtained.

Understandably, the fundus grayscale image includes the grayscale value corresponding to each pixel, and a preset maximum grayscale value is obtained; the maximum grayscale value can be set as required and is preferably 255.
S703: the lumen adaptive threshold, the grayscale value corresponding to each pixel, and the maximum grayscale value are input into a grayscale normalization model to obtain the grayscale normalized value corresponding to each pixel.

Understandably, the grayscale normalization model includes a grayscale normalization function through which the grayscale normalized value corresponding to each pixel can be calculated.

In one embodiment, step S703, that is, inputting the lumen adaptive threshold, the grayscale value corresponding to each pixel, and the maximum grayscale value into the grayscale normalization model to obtain the grayscale normalized value corresponding to each pixel, includes:

S7031: the lumen adaptive threshold, the grayscale value corresponding to each pixel, and the maximum grayscale value are input into a grayscale normalization function to obtain the grayscale normalized value corresponding to each pixel; the grayscale normalization function is:
F(x, y) = … [the normalization function is reproduced in the original filing only as an embedded image, PCTCN2020116743-appb-000001]
where:
f(x, y) is the grayscale value corresponding to the pixel with coordinates (x, y) in the fundus grayscale image;
F(x, y) is the grayscale normalized value corresponding to the pixel with coordinates (x, y) in the fundus grayscale image;
A is the lumen adaptive threshold; and
B is the maximum grayscale value.
S704: all the grayscale normalized values are stitched together according to the positions of the pixels to obtain the first fundus image.

Understandably, the grayscale normalized values are stitched together according to the positions of their corresponding pixels to compose a new image, which is determined as the first fundus image; the fundus grayscale image is thereby corrected.
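For illustration, a sketch of this per-pixel normalization; since the filing reproduces the normalization function only as an embedded image, the linear rescaling F(x, y) = min(B, f(x, y)·B/A) used below is purely an assumed stand-in for the actual formula.

```python
import numpy as np

def normalize_gray(gray: np.ndarray, lumen_mask: np.ndarray, B: int = 255):
    """Normalize a fundus grayscale image by the lumen adaptive threshold.

    A is the mean gray value over the first lumen region (lumen_mask aligned
    with gray is assumed); the rescaling F = min(B, f * B / A) is an assumed
    stand-in for the filed formula.
    """
    A = float(gray[lumen_mask.astype(bool)].mean())  # lumen adaptive threshold
    F = np.minimum(gray.astype(np.float64) * B / A, B)
    return F.astype(np.uint8)                        # first fundus image
```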
In this way, through the adaptive threshold method the present application can correct the fundus grayscale image, and with it the lumen region of the fundus choroidal vessels, making that lumen region stand out more clearly; this facilitates recognition and improves recognition accuracy and reliability.

S80: a second fundus choroid image is extracted from the first fundus image according to the fovea region.

Understandably, the second fundus choroid image can be obtained by extracting from the first fundus image according to the coordinate region of the fovea region.
S90: the second fundus choroid image is binarized according to the Niblack local threshold method to obtain a second choroid binary image, and a second lumen region is extracted from the second choroid binary image.

Understandably, the second fundus choroid image is binarized through the Niblack local threshold method to obtain the values of the second choroid binary image; the value of each pixel in the second choroid binary image is 0 or 1, which can also be displayed as white (corresponding to 0) or black (corresponding to 1). The black region in the second choroid binary image is extracted to obtain the second lumen region.

The binarization further includes the operation of computing a binary value for each pixel in the second fundus choroid image through the Niblack local threshold algorithm.

S100: according to the second lumen region, a second lumen region image containing the lumen region of the fundus choroidal vessels is recognized from the to-be-recognized fundus image.

Understandably, the to-be-recognized fundus image is marked according to the coordinate positions of the second lumen region, so that the second lumen region image can be obtained; the second lumen region image is an image in which the lumen region of the fundus choroidal vessels is marked. In this way, the lumen region of the fundus choroidal vessels can be determined more accurately.
The present application receives a fundus lumen recognition request and acquires the to-be-recognized fundus image in the request; inputs the to-be-recognized fundus image into the U-Net-based fundus segmentation model, and performs choroid feature extraction and edge segmentation on it through the fundus segmentation model to obtain the fundus segmented image; performs image recognition on the fundus segmented image through the fundus fovea recognition model, recognizes the fovea region in the fundus segmented image, and crops the first fundus choroid image out of the fundus segmented image according to the fovea region; binarizes the first fundus choroid image through the Niblack local threshold algorithm to obtain the first choroid binary image, and extracts the first lumen region from the first choroid binary image; recognizes, according to the first lumen region, the first lumen region image containing the lumen region of the fundus choroidal vessels from the to-be-recognized fundus image; performs grayscale processing on the to-be-recognized fundus image to obtain the fundus grayscale image; obtains the lumen adaptive threshold in the first lumen region through the adaptive threshold method, and normalizes the fundus grayscale image according to the lumen adaptive threshold to obtain the first fundus image; extracts the second fundus choroid image from the first fundus image according to the fovea region; binarizes the second fundus choroid image according to the Niblack local threshold method to obtain the second choroid binary image, and extracts the second lumen region from the second choroid binary image; and recognizes, according to the second lumen region, the second lumen region image containing the lumen region of the fundus choroidal vessels from the to-be-recognized fundus image. Through the U-Net-based fundus segmentation model, the fundus fovea recognition model, two passes of Niblack local threshold processing, and the adaptive threshold method, the to-be-recognized fundus image can thus be corrected so that the lumen region of the fundus choroidal vessels is recognized more accurately, further improving recognition accuracy and reliability.
In one embodiment, after step S100, that is, after identifying from the fundus image to be recognized the second lumen region image containing the lumen region of the fundus choroidal vessels according to the second lumen region, the method includes:
S110: Calculate the area of the lumen region of the fundus choroidal vessels in the second lumen region image to obtain the lumen region area, and at the same time calculate the area of the first fundus choroid image to obtain the choroid region area.
S120: Calculate the ratio of the lumen region area to the choroid region area to obtain the choroidal vascularity index.
In this way, the lumen region area of the region containing the fundus choroidal vessels is calculated from the identified second lumen region image, the choroid region area is calculated at the same time, and the ratio of the two yields the choroidal vascularity index. This provides a data indicator related to the fundus choroid that supports the doctor's subsequent medical decisions.
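A minimal sketch of steps S110 and S120, assuming the lumen region and the choroid region are available as binary masks so that each area is a pixel count:

```python
# Choroidal vascularity index: lumen region area / choroid region area.
import numpy as np

def choroidal_vascularity_index(lumen_mask: np.ndarray,
                                choroid_mask: np.ndarray) -> float:
    lumen_area = int(np.count_nonzero(lumen_mask))      # S110: lumen region area
    choroid_area = int(np.count_nonzero(choroid_mask))  # S110: choroid region area
    if choroid_area == 0:
        raise ValueError("empty choroid region")
    return lumen_area / choroid_area                    # S120: the index
```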
In one embodiment, a device for identifying the lumen region of choroidal vessels is provided, which corresponds one-to-one to the method for identifying the lumen region of choroidal vessels in the above embodiments. As shown in FIG. 8, the device includes a receiving module 11, an input module 12, a cropping module 13, a binarization module 14 and a recognition module 15. The functional modules are described in detail as follows:
The receiving module 11 is configured to receive a fundus lumen recognition request and obtain the fundus image to be recognized from the request.
The input module 12 is configured to input the fundus image to be recognized into a U-Net-based fundus segmentation model, and to perform choroid feature extraction and edge segmentation on the fundus image to be recognized through the fundus segmentation model to obtain a fundus segmented image.
The cropping module 13 is configured to perform image recognition on the fundus segmented image through a fundus fovea recognition model, identify the fovea region in the fundus segmented image, and crop a first fundus choroid image out of the fundus segmented image according to the fovea region.
The binarization module 14 is configured to binarize the first fundus choroid image through the Niblack local threshold algorithm to obtain a first choroid binary image, and to extract a first lumen region from the first choroid binary image.
The recognition module 15 is configured to identify, according to the first lumen region, a first lumen region image containing the lumen region of the fundus choroidal vessels from the fundus image to be recognized.
For the specific limitations of the device for identifying the lumen region of choroidal vessels, reference may be made to the above limitations of the method for identifying the lumen region of choroidal vessels, which will not be repeated here. Each module of the above device may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in or independent of a processor in a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in FIG. 9. The computer device includes a processor, a memory, a network interface and a database connected through a system bus. The processor provides computing and control capabilities. The memory includes a readable storage medium and an internal memory. The readable storage medium stores an operating system, computer-readable instructions and a database. The internal memory provides an environment for running the operating system and the computer-readable instructions on the readable storage medium. The network interface communicates with external terminals through a network connection. When executed by the processor, the computer-readable instructions implement a method for identifying the lumen region of choroidal vessels. The readable storage medium provided in this embodiment includes non-volatile and volatile readable storage media.
In one embodiment, a computer device is provided, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor. When the processor executes the computer-readable instructions, the method for identifying the lumen region of choroidal vessels in the above embodiments is implemented.
In one embodiment, one or more readable storage media storing computer-readable instructions are provided; the readable storage media provided in this embodiment include non-volatile and volatile readable storage media. The readable storage media store computer-readable instructions that, when executed by one or more processors, cause the one or more processors to implement the method for identifying the lumen region of choroidal vessels in the above embodiments.
A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by instructing the relevant hardware through computer-readable instructions, which may be stored in a non-volatile or volatile computer-readable storage medium. When executed, the computer-readable instructions may include the processes of the above method embodiments. Any reference to memory, storage, database or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division into the functional units and modules described above is merely illustrative. In practical applications, the above functions may be assigned to different functional units and modules as needed; that is, the internal structure of the device may be divided into different functional units or modules to complete all or part of the functions described above.
The above embodiments are only used to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application, and shall all be included within the scope of protection of this application.

Claims (21)

  1. A method for identifying the lumen region of choroidal vessels, comprising:
    receiving a fundus lumen recognition request, and obtaining the fundus image to be recognized from the fundus lumen recognition request;
    inputting the fundus image to be recognized into a U-Net-based fundus segmentation model, and performing choroid feature extraction and edge segmentation on the fundus image to be recognized through the fundus segmentation model to obtain a fundus segmented image;
    performing image recognition on the fundus segmented image through a fundus fovea recognition model, identifying the fovea region in the fundus segmented image, and cropping a first fundus choroid image out of the fundus segmented image according to the fovea region;
    binarizing the first fundus choroid image through the Niblack local threshold algorithm to obtain a first choroid binary image, and extracting a first lumen region from the first choroid binary image;
    identifying, according to the first lumen region, a first lumen region image containing the lumen region of the fundus choroidal vessels from the fundus image to be recognized.
  2. The method for identifying the lumen region of choroidal vessels according to claim 1, wherein after the first lumen region is extracted from the first choroid binary image, the method further comprises:
    performing grayscale processing on the fundus image to be recognized to obtain a fundus grayscale image;
    obtaining a lumen adaptive threshold in the first lumen region through an adaptive threshold method, and normalizing the fundus grayscale image according to the lumen adaptive threshold to obtain a first fundus image;
    extracting a second fundus choroid image from the first fundus image according to the fovea region;
    binarizing the second fundus choroid image according to the Niblack local threshold method to obtain a second choroid binary image, and extracting a second lumen region from the second choroid binary image;
    identifying, according to the second lumen region, a second lumen region image containing the lumen region of the fundus choroidal vessels from the fundus image to be recognized.
  3. The method for identifying the lumen region of choroidal vessels according to claim 1, wherein before the fundus image to be recognized is input into the U-Net-based fundus segmentation model, the method comprises:
    obtaining a fundus image sample, the fundus image sample being associated with an edge line label and a region label;
    inputting the fundus image sample into a U-Net-based convolutional neural network model containing initial parameters;
    extracting the choroid features in the fundus image sample through the convolutional neural network model, and obtaining the region recognition result, the fundus feature vector map and the fused feature vector map output by the convolutional neural network model according to the choroid features;
    performing edge detection on the fundus feature vector map through the convolutional neural network model to obtain an edge result, and at the same time performing region segmentation on the fused feature vector map to obtain a region segmentation result;
    determining a classification loss value according to the region recognition result and the region label; determining an edge loss value according to the edge result and the edge line label; and determining a segmentation loss value according to the region segmentation result and the region label;
    determining a total loss value according to the classification loss value, the edge loss value and the segmentation loss value;
    iteratively updating the initial parameters of the convolutional neural network model when the total loss value does not reach a preset convergence condition, until the total loss value reaches the preset convergence condition, and then recording the converged convolutional neural network model as the fundus segmentation model.
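As a sketch of how the three loss values of this claim could be combined: the claim only states that the total loss is determined from the three parts, so the weighted sum and the default weights below are assumptions for illustration:

```python
# Hypothetical combination of the three training losses from claim 3.
def total_loss(classification_loss: float, edge_loss: float,
               segmentation_loss: float,
               w_cls: float = 1.0, w_edge: float = 1.0,
               w_seg: float = 1.0) -> float:
    return (w_cls * classification_loss
            + w_edge * edge_loss
            + w_seg * segmentation_loss)
```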
  4. The method for identifying the lumen region of choroidal vessels according to claim 3, wherein extracting the choroid features in the fundus image sample through the convolutional neural network model, and obtaining the region recognition result, the fundus feature vector map and the fused feature vector map output by the convolutional neural network model according to the choroid features, comprises:
    extracting the choroid features from the fundus image sample through the convolutional neural network model to obtain a fundus choroid feature map;
    upsampling and concatenating the fundus choroid feature map through the convolutional neural network model to obtain the fundus feature vector map;
    performing choroid region recognition on the fundus feature vector map through the convolutional neural network model to obtain the region recognition result, and at the same time performing fusion processing on the fundus feature vector map to obtain the fused feature vector map.
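A sketch of the upsample-and-concatenate step in the spirit of a U-Net decoder stage; the bilinear interpolation mode is an assumption, since the claim does not fix how the upsampling is performed:

```python
# Upsample a decoder feature map to the resolution of an encoder (skip)
# feature map, then concatenate along the channel dimension (PyTorch).
import torch
import torch.nn.functional as F

def upsample_and_concat(decoder_feat: torch.Tensor,
                        skip_feat: torch.Tensor) -> torch.Tensor:
    up = F.interpolate(decoder_feat, size=skip_feat.shape[2:],
                       mode="bilinear", align_corners=False)
    return torch.cat([up, skip_feat], dim=1)
```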
  5. The method for identifying the lumen region of choroidal vessels according to claim 1, wherein performing image recognition on the fundus segmented image through the fundus fovea recognition model, identifying the fovea region in the fundus segmented image, and cropping the first fundus choroid image out of the fundus segmented image according to the fovea region, comprises:
    inputting the fundus segmented image into an SSD-based fundus fovea recognition model;
    extracting fovea features through the fundus fovea recognition model using the SSD algorithm, and performing target detection according to the fovea features to obtain the fovea region;
    cropping the first fundus choroid image out of the fundus segmented image according to a preset size parameter, with the fovea region as the center.
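A sketch of this fovea-centered crop, where size stands for the preset size parameter; clamping the window at the image border is an assumption the claim does not specify:

```python
# Cut a fixed-size window centered on the detected fovea, clamped so the
# window stays inside the image.
import numpy as np

def crop_around_fovea(image: np.ndarray, fovea_rc: tuple,
                      size: tuple) -> np.ndarray:
    h, w = size
    r0 = max(0, min(fovea_rc[0] - h // 2, image.shape[0] - h))
    c0 = max(0, min(fovea_rc[1] - w // 2, image.shape[1] - w))
    return image[r0:r0 + h, c0:c0 + w]
```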
  6. The method for identifying the lumen region of choroidal vessels according to claim 2, wherein obtaining the lumen adaptive threshold in the first lumen region through the adaptive threshold method, and normalizing the fundus grayscale image according to the lumen adaptive threshold to obtain the first fundus image, comprises:
    obtaining the lumen adaptive threshold through the adaptive threshold method;
    obtaining the gray value corresponding to each pixel in the fundus grayscale image and a preset maximum gray value;
    inputting the lumen adaptive threshold, the gray value corresponding to each pixel and the maximum gray value into a gray normalization model to obtain the normalized gray value corresponding to each pixel;
    stitching all the normalized gray values together according to the positions of the pixels to obtain the first fundus image.
  7. The method for identifying the lumen region of choroidal vessels according to claim 6, wherein inputting the lumen adaptive threshold, the gray value corresponding to each pixel and the maximum gray value into the gray normalization model to obtain the normalized gray value corresponding to each pixel, comprises:
    inputting the lumen adaptive threshold, the gray value corresponding to each pixel and the maximum gray value into a gray normalization function to obtain the normalized gray value corresponding to each pixel, the gray normalization function being:
    [gray normalization function: formula image PCTCN2020116743-appb-100001, not reproduced here]
    wherein:
    f(x, y) is the gray value corresponding to the pixel with coordinates (x, y) in the fundus grayscale image;
    F(x, y) is the normalized gray value corresponding to the pixel with coordinates (x, y) in the fundus grayscale image;
    A is the lumen adaptive threshold;
    B is the maximum gray value.
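Because the gray normalization function itself is given only as a formula image in the original publication, a faithful sketch keeps it abstract and merely wires f(x, y), A and B together without guessing its form:

```python
# Apply an externally supplied per-pixel normalization F = norm_fn(f, A, B)
# over the whole fundus grayscale image; norm_fn stands in for the formula
# that is not reproduced here.
from typing import Callable
import numpy as np

def normalize_fundus(gray: np.ndarray, A: float, B: float,
                     norm_fn: Callable[[np.ndarray, float, float], np.ndarray]
                     ) -> np.ndarray:
    return norm_fn(gray.astype(np.float64), A, B)
```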
  8. A device for identifying the lumen region of choroidal vessels, comprising:
    a receiving module, configured to receive a fundus lumen recognition request and obtain the fundus image to be recognized from the fundus lumen recognition request;
    an input module, configured to input the fundus image to be recognized into a U-Net-based fundus segmentation model, and to perform choroid feature extraction and edge segmentation on the fundus image to be recognized through the fundus segmentation model to obtain a fundus segmented image;
    a cropping module, configured to perform image recognition on the fundus segmented image through a fundus fovea recognition model, identify the fovea region in the fundus segmented image, and crop a first fundus choroid image out of the fundus segmented image according to the fovea region;
    a binarization module, configured to binarize the first fundus choroid image through the Niblack local threshold algorithm to obtain a first choroid binary image, and to extract a first lumen region from the first choroid binary image;
    a recognition module, configured to identify, according to the first lumen region, a first lumen region image containing the lumen region of the fundus choroidal vessels from the fundus image to be recognized.
  9. A computer device, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor implements the following steps when executing the computer-readable instructions:
    receiving a fundus lumen recognition request, and obtaining the fundus image to be recognized from the fundus lumen recognition request;
    inputting the fundus image to be recognized into a U-Net-based fundus segmentation model, and performing choroid feature extraction and edge segmentation on the fundus image to be recognized through the fundus segmentation model to obtain a fundus segmented image;
    performing image recognition on the fundus segmented image through a fundus fovea recognition model, identifying the fovea region in the fundus segmented image, and cropping a first fundus choroid image out of the fundus segmented image according to the fovea region;
    binarizing the first fundus choroid image through the Niblack local threshold algorithm to obtain a first choroid binary image, and extracting a first lumen region from the first choroid binary image;
    identifying, according to the first lumen region, a first lumen region image containing the lumen region of the fundus choroidal vessels from the fundus image to be recognized.
  10. The computer device according to claim 9, wherein after the first lumen region is extracted from the first choroid binary image, the processor further implements the following steps when executing the computer-readable instructions:
    performing grayscale processing on the fundus image to be recognized to obtain a fundus grayscale image;
    obtaining a lumen adaptive threshold in the first lumen region through an adaptive threshold method, and normalizing the fundus grayscale image according to the lumen adaptive threshold to obtain a first fundus image;
    extracting a second fundus choroid image from the first fundus image according to the fovea region;
    binarizing the second fundus choroid image according to the Niblack local threshold method to obtain a second choroid binary image, and extracting a second lumen region from the second choroid binary image;
    identifying, according to the second lumen region, a second lumen region image containing the lumen region of the fundus choroidal vessels from the fundus image to be recognized.
  11. The computer device according to claim 9, wherein before the fundus image to be recognized is input into the U-Net-based fundus segmentation model, the processor further implements the following steps when executing the computer-readable instructions:
    obtaining a fundus image sample, the fundus image sample being associated with an edge line label and a region label;
    inputting the fundus image sample into a U-Net-based convolutional neural network model containing initial parameters;
    extracting the choroid features in the fundus image sample through the convolutional neural network model, and obtaining the region recognition result, the fundus feature vector map and the fused feature vector map output by the convolutional neural network model according to the choroid features;
    performing edge detection on the fundus feature vector map through the convolutional neural network model to obtain an edge result, and at the same time performing region segmentation on the fused feature vector map to obtain a region segmentation result;
    determining a classification loss value according to the region recognition result and the region label; determining an edge loss value according to the edge result and the edge line label; and determining a segmentation loss value according to the region segmentation result and the region label;
    determining a total loss value according to the classification loss value, the edge loss value and the segmentation loss value;
    iteratively updating the initial parameters of the convolutional neural network model when the total loss value does not reach a preset convergence condition, until the total loss value reaches the preset convergence condition, and then recording the converged convolutional neural network model as the fundus segmentation model.
  12. The computer device according to claim 11, wherein extracting the choroid features in the fundus image sample through the convolutional neural network model, and obtaining the region recognition result, the fundus feature vector map and the fused feature vector map output by the convolutional neural network model according to the choroid features, comprises:
    extracting the choroid features from the fundus image sample through the convolutional neural network model to obtain a fundus choroid feature map;
    upsampling and concatenating the fundus choroid feature map through the convolutional neural network model to obtain the fundus feature vector map;
    performing choroid region recognition on the fundus feature vector map through the convolutional neural network model to obtain the region recognition result, and at the same time performing fusion processing on the fundus feature vector map to obtain the fused feature vector map.
  13. The computer device according to claim 11, wherein performing image recognition on the fundus segmented image through the fundus fovea recognition model, identifying the fovea region in the fundus segmented image, and cropping the first fundus choroid image out of the fundus segmented image according to the fovea region, comprises:
    inputting the fundus segmented image into an SSD-based fundus fovea recognition model;
    extracting fovea features through the fundus fovea recognition model using the SSD algorithm, and performing target detection according to the fovea features to obtain the fovea region;
    cropping the first fundus choroid image out of the fundus segmented image according to a preset size parameter, with the fovea region as the center.
  14. The computer device according to claim 10, wherein obtaining the lumen adaptive threshold in the first lumen region through the adaptive threshold method, and normalizing the fundus grayscale image according to the lumen adaptive threshold to obtain the first fundus image, comprises:
    obtaining the lumen adaptive threshold through the adaptive threshold method;
    obtaining the gray value corresponding to each pixel in the fundus grayscale image and a preset maximum gray value;
    inputting the lumen adaptive threshold, the gray value corresponding to each pixel and the maximum gray value into a gray normalization model to obtain the normalized gray value corresponding to each pixel;
    stitching all the normalized gray values together according to the positions of the pixels to obtain the first fundus image.
  15. The computer device according to claim 14, wherein inputting the lumen adaptive threshold, the gray value corresponding to each pixel and the maximum gray value into the gray normalization model to obtain the normalized gray value corresponding to each pixel, comprises:
    inputting the lumen adaptive threshold, the gray value corresponding to each pixel and the maximum gray value into a gray normalization function to obtain the normalized gray value corresponding to each pixel, the gray normalization function being:
    [gray normalization function: formula image PCTCN2020116743-appb-100002, not reproduced here]
    wherein:
    f(x, y) is the gray value corresponding to the pixel with coordinates (x, y) in the fundus grayscale image;
    F(x, y) is the normalized gray value corresponding to the pixel with coordinates (x, y) in the fundus grayscale image;
    A is the lumen adaptive threshold;
    B is the maximum gray value.
  16. One or more readable storage media storing computer-readable instructions, wherein when the computer-readable instructions are executed by one or more processors, the one or more processors execute the following steps:
    receiving a fundus lumen recognition request, and obtaining the fundus image to be recognized from the fundus lumen recognition request;
    inputting the fundus image to be recognized into a U-Net-based fundus segmentation model, and performing choroid feature extraction and edge segmentation on the fundus image to be recognized through the fundus segmentation model to obtain a fundus segmented image;
    performing image recognition on the fundus segmented image through a fundus fovea recognition model, identifying the fovea region in the fundus segmented image, and cropping a first fundus choroid image out of the fundus segmented image according to the fovea region;
    binarizing the first fundus choroid image through the Niblack local threshold algorithm to obtain a first choroid binary image, and extracting a first lumen region from the first choroid binary image;
    identifying, according to the first lumen region, a first lumen region image containing the lumen region of the fundus choroidal vessels from the fundus image to be recognized.
  17. The readable storage medium according to claim 16, wherein after the first lumen region is extracted from the first choroid binary image, the computer-readable instructions, when executed by one or more processors, cause the one or more processors to further execute the following steps:
    performing grayscale processing on the fundus image to be recognized to obtain a fundus grayscale image;
    obtaining a lumen adaptive threshold in the first lumen region through an adaptive threshold method, and normalizing the fundus grayscale image according to the lumen adaptive threshold to obtain a first fundus image;
    extracting a second fundus choroid image from the first fundus image according to the fovea region;
    binarizing the second fundus choroid image according to the Niblack local threshold method to obtain a second choroid binary image, and extracting a second lumen region from the second choroid binary image;
    identifying, according to the second lumen region, a second lumen region image containing the lumen region of the fundus choroidal vessels from the fundus image to be recognized.
  18. The readable storage medium according to claim 16, wherein before the fundus image to be recognized is input into the U-Net-based fundus segmentation model, the computer-readable instructions, when executed by one or more processors, cause the one or more processors to further execute the following steps:
    obtaining a fundus image sample, the fundus image sample being associated with an edge line label and a region label;
    inputting the fundus image sample into a U-Net-based convolutional neural network model containing initial parameters;
    extracting the choroid features in the fundus image sample through the convolutional neural network model, and obtaining the region recognition result, the fundus feature vector map and the fused feature vector map output by the convolutional neural network model according to the choroid features;
    performing edge detection on the fundus feature vector map through the convolutional neural network model to obtain an edge result, and at the same time performing region segmentation on the fused feature vector map to obtain a region segmentation result;
    determining a classification loss value according to the region recognition result and the region label; determining an edge loss value according to the edge result and the edge line label; and determining a segmentation loss value according to the region segmentation result and the region label;
    determining a total loss value according to the classification loss value, the edge loss value and the segmentation loss value;
    iteratively updating the initial parameters of the convolutional neural network model when the total loss value does not reach a preset convergence condition, until the total loss value reaches the preset convergence condition, and then recording the converged convolutional neural network model as the fundus segmentation model.
  19. The readable storage medium according to claim 18, wherein extracting the choroid features in the fundus image sample through the convolutional neural network model, and obtaining the region recognition result, the fundus feature vector map and the fused feature vector map output by the convolutional neural network model according to the choroid features, comprises:
    extracting the choroid features from the fundus image sample through the convolutional neural network model to obtain a fundus choroid feature map;
    upsampling and concatenating the fundus choroid feature map through the convolutional neural network model to obtain the fundus feature vector map;
    performing choroid region recognition on the fundus feature vector map through the convolutional neural network model to obtain the region recognition result, and at the same time performing fusion processing on the fundus feature vector map to obtain the fused feature vector map.
  20. The readable storage medium according to claim 18, wherein performing image recognition on the fundus segmented image through the fundus fovea recognition model, identifying the fovea region in the fundus segmented image, and cropping the first fundus choroid image out of the fundus segmented image according to the fovea region, comprises:
    inputting the fundus segmented image into an SSD-based fundus fovea recognition model;
    extracting fovea features through the fundus fovea recognition model using the SSD algorithm, and performing target detection according to the fovea features to obtain the fovea region;
    cropping the first fundus choroid image out of the fundus segmented image according to a preset size parameter, with the fovea region as the center.
  21. The readable storage medium according to claim 18, wherein obtaining the lumen adaptive threshold in the first lumen region through the adaptive threshold method, and normalizing the fundus grayscale image according to the lumen adaptive threshold to obtain the first fundus image, comprises:
    obtaining the lumen adaptive threshold through the adaptive threshold method;
    obtaining the gray value corresponding to each pixel in the fundus grayscale image and a preset maximum gray value;
    inputting the lumen adaptive threshold, the gray value corresponding to each pixel and the maximum gray value into a gray normalization model to obtain the normalized gray value corresponding to each pixel;
    stitching all the normalized gray values together according to the positions of the pixels to obtain the first fundus image.
PCT/CN2020/116743 2020-07-31 2020-09-22 Method and apparatus for recognition of luminal area in choroidal vessels, device, and medium WO2021120753A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010761238.1 2020-07-31
CN202010761238.1A CN111899247A (en) 2020-07-31 2020-07-31 Method, device, equipment and medium for identifying lumen region of choroidal blood vessel

Publications (1)

Publication Number Publication Date
WO2021120753A1 true WO2021120753A1 (en) 2021-06-24

Family

ID=73184124

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/116743 WO2021120753A1 (en) 2020-07-31 2020-09-22 Method and apparatus for recognition of luminal area in choroidal vessels, device, and medium

Country Status (2)

Country Link
CN (1) CN111899247A (en)
WO (1) WO2021120753A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112541924B (en) * 2020-12-08 2023-07-18 北京百度网讯科技有限公司 Fundus image generation method, fundus image generation device, fundus image generation apparatus, and fundus image storage medium
CN112529906B (en) * 2021-02-07 2021-05-14 南京景三医疗科技有限公司 Software-level intravascular oct three-dimensional image lumen segmentation method and device
CN116309549B (en) * 2023-05-11 2023-10-03 爱尔眼科医院集团股份有限公司 Fundus region detection method, fundus region detection device, fundus region detection equipment and readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104768446A (en) * 2012-09-10 2015-07-08 俄勒冈健康科学大学 Quantification of local circulation with OCT angiography
US20160278627A1 (en) * 2015-03-25 2016-09-29 Oregon Health & Science University Optical coherence tomography angiography methods
CN106599804A (en) * 2016-11-30 2017-04-26 哈尔滨工业大学 Retina fovea centralis detection method based on multi-feature model
CN109509178A (en) * 2018-10-24 2019-03-22 苏州大学 A kind of OCT image choroid dividing method based on improved U-net network
CN110599480A (en) * 2019-09-18 2019-12-20 上海鹰瞳医疗科技有限公司 Multi-source input fundus image classification method and device
CN111345775A (en) * 2018-12-21 2020-06-30 伟伦公司 Evaluation of fundus images
CN111402243A (en) * 2020-03-20 2020-07-10 林晨 Macular fovea identification method and terminal

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106683080B (en) * 2016-12-15 2019-09-27 广西师范大学 A kind of retinal fundus images preprocess method
CN111292338B (en) * 2020-01-22 2023-04-21 苏州大学 Method and system for segmenting choroidal neovascularization from fundus OCT image

Also Published As

Publication number Publication date
CN111899247A (en) 2020-11-06

Legal Events

121 (EP): the EPO has been informed by WIPO that EP was designated in this application. Ref document number: 20903553; Country of ref document: EP; Kind code of ref document: A1.
NENP: non-entry into the national phase. Ref country code: DE.
122 (EP): PCT application non-entry in European phase. Ref document number: 20903553; Country of ref document: EP; Kind code of ref document: A1.