WO2020211530A1 - Model training method and apparatus for detection on fundus image, method and apparatus for detection on fundus image, computer device, and medium - Google Patents

Model training method and apparatus for detection on fundus image, method and apparatus for detection on fundus image, computer device, and medium Download PDF

Info

Publication number
WO2020211530A1
WO2020211530A1 (PCT/CN2020/076501, CN2020076501W)
Authority
WO
WIPO (PCT)
Prior art keywords
superpixels
fundus
pixel
network
neural network
Application number
PCT/CN2020/076501
Other languages
French (fr)
Chinese (zh)
Inventor
张梦蕾
Original Assignee
京东方科技集团股份有限公司
Application filed by 京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Publication of WO2020211530A1

Classifications

    • G06T7/0012 Biomedical image inspection (G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
    • G06T7/10 Segmentation; Edge detection
    • G06V40/197 Matching; Classification (G06V40/18 Eye characteristics, e.g. of the iris)
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30041 Eye; Retina; Ophthalmic (G06T2207/30004 Biomedical image processing)

Definitions

  • The present disclosure relates to the field of computer vision, and in particular to a model training method for detecting fundus pictures, and a method and device for detecting fundus pictures.
  • The fundus is the tissue at the back of the eyeball; a picture of the fundus is called a fundus picture.
  • Fundus pictures can be used to diagnose fundus diseases such as glaucoma and macular degeneration, and can also provide a reference for the diagnosis of diseases such as diabetes and hypertension.
  • the embodiments of the present disclosure provide a model training method for detecting fundus pictures, a method and device for detecting fundus pictures.
  • An embodiment of the present disclosure provides a model training method for detecting fundus pictures, including: dividing each of the N fundus pictures in the fundus picture training set into M superpixels, where N and M are both positive integers; training a first network model according to the M × N superpixels, the first network model being used to identify each input superpixel as a key pixel or a background pixel at output; and training a second network model according to the superpixels that are key pixels among the M × N superpixels, the second network model being used to identify each input superpixel as a lesion pixel or a non-lesion pixel at output.
  • Training the first network model based on the M × N superpixels includes: constructing a deep neural network; selecting at least one of the M × N superpixels each time and inputting it into the deep neural network, where each of the M × N superpixels has been pre-marked as a key pixel or a background pixel; and comparing the output of the deep neural network with the pre-marked results of the superpixels and training the network parameters of the deep neural network until the rate at which the deep neural network correctly identifies superpixels as key pixels or background pixels is greater than or equal to a first threshold, thereby obtaining the first network model.
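The train-until-threshold loop above can be sketched in plain NumPy. This is a minimal illustration, not the patent's DBN: a toy logistic classifier stands in for the deep neural network, and the function name, learning rate, and toy features are all assumptions for the sketch. What it shows is the described control flow: feed batches of pre-labelled superpixel features, compare the model output with the labels, and keep updating parameters until the key-pixel/background accuracy reaches the first threshold.

```python
import numpy as np

def train_until_accuracy(features, labels, threshold=0.9, lr=0.1, max_epochs=1000):
    """Sketch of the described loop: update the model until its
    key-pixel/background classification accuracy meets the first threshold."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=features.shape[1])
    b = 0.0
    for _ in range(max_epochs):
        # forward pass: probability that each superpixel is a key pixel
        p = 1.0 / (1.0 + np.exp(-(features @ w + b)))
        pred = (p >= 0.5).astype(int)
        accuracy = (pred == labels).mean()
        if accuracy >= threshold:   # stop once the first threshold is reached
            return w, b, accuracy
        # gradient step on the logistic loss (stand-in for the DBN's training)
        grad = features.T @ (p - labels) / len(labels)
        w -= lr * grad
        b -= lr * (p - labels).mean()
    return w, b, accuracy

# toy "superpixel features": key pixels have a larger mean response
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (50, 4)), rng.normal(2.0, 0.5, (50, 4))])
y = np.array([0] * 50 + [1] * 50)   # 0 = background pixel, 1 = key pixel
w, b, acc = train_until_accuracy(X, y, threshold=0.9)
```

The same loop shape applies to the second network model, with a loss-value threshold in place of the accuracy threshold.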
  • the deep neural network is a deep belief network.
  • Training a second network model according to the superpixels that are key pixels among the M × N superpixels includes: constructing a convolutional neural network; selecting, each time, at least one superpixel from among all the superpixels that are key pixels among the M × N superpixels and inputting it into the convolutional neural network, where each superpixel that is a key pixel has been pre-marked as a lesion pixel or a non-lesion pixel; and comparing the output of the convolutional neural network with the pre-marked results of the superpixels that are key pixels and training the network parameters of the convolutional neural network until the loss value of the convolutional neural network is less than or equal to a second threshold, thereby obtaining the second network model; the output of the convolutional neural network identifies each superpixel as a lesion pixel or a non-lesion pixel.
  • the convolutional neural network is a combination of a residual network and an Inception network.
  • The model training method for detecting fundus pictures further includes performing preprocessing on the fundus pictures in the training set; the preprocessing includes at least one of rotation, cropping, distortion, scaling, adjusting color difference, and reducing resolution.
  • The embodiments of the present disclosure also provide a method for detecting fundus pictures, including: dividing the fundus picture to be detected into P superpixels and obtaining the addresses corresponding one-to-one to the P superpixels; inputting the P superpixels into the first network model to obtain the P superpixels identified as key pixels or background pixels; inputting the superpixels identified as key pixels into the second network model to obtain the superpixels identified as key pixels further identified as lesion pixels or non-lesion pixels; and, according to the addresses corresponding to the superpixels identified as key pixels and lesion pixels, finding the positions of those superpixels in the fundus picture to be detected and marking the positions on the fundus picture to be detected.
  • In some embodiments, the method for detecting the fundus picture further includes pre-processing the fundus picture to be detected; the preprocessing includes at least one of cropping and scaling.
  • The first network model is obtained through the following training process: constructing a deep neural network; selecting at least one of the M × N superpixels each time and inputting it into the deep neural network, where the M × N superpixels are obtained by dividing each of the N fundus pictures in the fundus picture training set into M superpixels and each of the M × N superpixels has been pre-marked as a key pixel or a background pixel; and comparing the output of the deep neural network with the pre-marked results of the superpixels and training the network parameters of the deep neural network until the rate at which the deep neural network correctly identifies superpixels as key pixels or background pixels is greater than or equal to a first threshold, thereby obtaining the first network model.
  • the deep neural network is a deep belief network.
  • The second network model is obtained through the following training process: constructing a convolutional neural network; selecting, each time, at least one superpixel from among the superpixels that are key pixels among the M × N superpixels and inputting it into the convolutional neural network, where the M × N superpixels are obtained by dividing each of the N fundus pictures in the fundus picture training set into M superpixels and each superpixel that is a key pixel has been pre-marked as a lesion pixel or a non-lesion pixel; and comparing the output of the convolutional neural network with the pre-marked results of the superpixels that are key pixels and training the network parameters of the convolutional neural network until the loss value of the convolutional neural network is less than or equal to a second threshold, thereby obtaining the second network model; the output of the convolutional neural network identifies each superpixel as a lesion pixel or a non-lesion pixel.
  • the convolutional neural network is a combination of a residual network and an Inception network.
  • The embodiments of the present disclosure also provide a computer device, including a memory and a processor; the memory stores a computer program that can be run on the processor; when the processor executes the computer program, it implements the aforementioned model training method for detecting fundus pictures or the aforementioned method for detecting fundus pictures.
  • embodiments of the present disclosure also provide a computer device, including a processor, which implements the aforementioned model training method for detecting fundus pictures or the aforementioned method for detecting fundus pictures when the processor executes a computer program.
  • The embodiments of the present disclosure also provide a computer-readable medium storing a computer program that, when executed by a processor, implements the aforementioned model training method for detecting fundus pictures or the aforementioned method for detecting fundus pictures.
  • An embodiment of the present disclosure also provides a model training device for detecting fundus pictures, including: a segmentation module configured to divide each of the N fundus pictures in the fundus picture training set into M superpixels, where N and M are both positive integers; and a training module configured to train a first network model according to the M × N superpixels, the first network model being used to identify each input superpixel as a key pixel or a background pixel at output; the training module is further configured to train a second network model according to the superpixels marked as key pixels, the second network model being used to identify each input superpixel as a lesion pixel or a non-lesion pixel at output.
  • An embodiment of the present disclosure also provides a detection device for a fundus picture, including: a segmentation module configured to divide the fundus picture to be detected into P superpixels; an acquisition module configured to acquire the addresses corresponding one-to-one to the P superpixels; the acquisition module is further configured to input the P superpixels into the first network model to obtain the P superpixels identified as key pixels or background pixels, and to input the superpixels identified as key pixels into the second network model to obtain the superpixels identified as key pixels further identified as lesion pixels or non-lesion pixels; and an identification module configured to find, according to the addresses corresponding to the superpixels identified as key pixels and lesion pixels, the positions of those superpixels in the fundus picture to be detected, and to mark the positions on the fundus picture to be detected.
  • FIG. 1 is a flowchart of a model training method for detecting fundus pictures according to an embodiment of the disclosure
  • FIG. 2 is a flowchart of yet another model training method for detecting fundus pictures according to an embodiment of the disclosure
  • FIG. 3 is a flowchart of another model training method for detecting fundus pictures according to an embodiment of the disclosure
  • FIG. 4 is a flowchart of yet another model training method for detecting fundus pictures provided by an embodiment of the disclosure
  • FIG. 5 is a flowchart of another model training method for detecting fundus pictures according to an embodiment of the disclosure.
  • FIG. 6 is a schematic structural diagram of a model training device for detecting fundus pictures according to an embodiment of the disclosure
  • FIG. 7 is a flowchart of a method for detecting fundus pictures according to an embodiment of the disclosure.
  • FIG. 8 is a flowchart of yet another method for detecting fundus pictures according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic structural diagram of a detection device for fundus pictures provided by an embodiment of the disclosure.
  • FIG. 10 is a schematic structural diagram of a computer device provided by an embodiment of the disclosure.
  • the embodiment of the present disclosure provides a model training method for detecting fundus pictures, as shown in FIG. 1, including:
  • the fundus image training set refers to a collection of multiple fundus images used to train the model.
  • the number of fundus pictures in the fundus picture training set can be set as needed.
  • Superpixels are irregular blocks of adjacent pixels with similar texture, color, brightness and other characteristics, which carry a certain visual significance.
  • a small number of super pixels can replace a large number of pixels to express the characteristics of the fundus picture, which reduces the complexity of subsequent processing of the fundus picture.
  • a fundus picture is divided into M superpixels, that is, a large number of pixels in a fundus picture are replaced with M superpixels, which reduces the complexity of the fundus picture.
  • Each fundus picture is divided into M superpixels, so that the complexity of all fundus pictures is reduced to the same degree.
  • the method of dividing each fundus picture into M superpixels is called superpixel division.
  • The superpixel segmentation method is based on a clustering algorithm; that is, a clustering algorithm is used to segment the fundus pictures.
  • M pixels are uniformly selected as the initial cluster centers; each remaining pixel is then assigned to the closest cluster according to its distance to these cluster centers, following the nearest-neighbor principle.
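The clustering steps above can be sketched as a minimal SLIC-style routine in NumPy. This is an illustrative assumption, not the patent's exact algorithm: the function name, the (row, col, intensity) feature space, and the fixed iteration count are all choices made for the sketch.

```python
import numpy as np

def superpixel_segment(image, m, n_iters=5):
    """Minimal sketch of the described division: pick m cluster centers on a
    uniform grid, then assign every pixel to the nearest center in combined
    (row, col, intensity) feature space, iterating a few times."""
    h, w = image.shape
    # uniformly spaced initial cluster centers
    side = int(np.ceil(np.sqrt(m)))
    rows = np.linspace(0, h - 1, side).astype(int)
    cols = np.linspace(0, w - 1, side).astype(int)
    centers = np.array([(r, c, image[r, c]) for r in rows for c in cols],
                       dtype=float)[:m]
    rr, cc = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    feats = np.stack([rr.ravel(), cc.ravel(), image.ravel()], axis=1).astype(float)
    for _ in range(n_iters):
        # nearest-neighbor assignment of every pixel to a cluster center
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # recompute each center as the mean of its assigned pixels
        for k in range(len(centers)):
            if (labels == k).any():
                centers[k] = feats[labels == k].mean(axis=0)
    return labels.reshape(h, w)

img = np.random.default_rng(0).random((32, 32))
seg = superpixel_segment(img, m=9)   # one label per pixel, at most 9 clusters
```

A production implementation would weight spatial distance against color distance and restrict the search window around each center, as SLIC does.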
  • a first network model is obtained through training; the first network model is used to identify each input superpixel as a key pixel or a background pixel during output.
  • The M × N superpixels need to be input in batches, and the number of superpixels input each time can be set as required.
  • The superpixels that best reflect the content of the fundus picture are called key pixels, and the remaining superpixels are called background pixels. Distinguishing key pixels from background pixels for all superpixels eliminates the interference of background pixels in the fundus picture, better matches the user's detection intention, and is beneficial to detection performance.
  • A second network model is obtained by training; the second network model is used to identify each input superpixel as a lesion pixel or a non-lesion pixel at output.
  • When training the second network model, key pixels also need to be input in batches, and the number of key pixels input each time can be set as needed.
  • The key pixels that best reflect the information of the fundus lesion are marked as lesion pixels, and the remaining key pixels are marked as non-lesion pixels. Distinguishing lesion key pixels from non-lesion key pixels eliminates the interference of non-lesion superpixels in fundus pictures and realizes the user's detection intention.
  • the embodiment of the present application provides a model training method for detecting fundus pictures.
  • The fundus pictures in the fundus picture training set are divided into multiple superpixels, and the superpixels are used to train the first network model so that it can identify superpixels as key pixels or background pixels.
  • The key pixels are then used to train the second network model so that, in subsequent applications, it can identify key pixels as lesion or non-lesion pixels. With this simple training method, the trained model recognizes lesions quickly and with a high accuracy rate.
  • The above-mentioned model training method for detecting fundus pictures further includes performing a first preprocessing on the fundus pictures; the first preprocessing includes at least one of rotation, cropping, distortion, scaling, adjusting color difference, and reducing resolution.
  • Rotation randomly rotates the fundus picture by a certain angle around its center or a vertex; cropping randomly selects a part of the image; distortion applies a random four-point perspective transformation to the image; scaling unifies the size of the fundus pictures; adjusting color difference randomly alters the hue and saturation of the fundus picture.
  • Performing the first preprocessing on the fundus pictures before training the model expands the fundus picture training set, so that the trained model can process images taken under various shooting conditions, which improves recognition accuracy.
  • The trained model therefore has a more accurate recognition effect in the actual detection of fundus lesions.
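Toy versions of a few of the listed preprocessing operations can be written for a 2-D grayscale array using only NumPy. This sketch is illustrative: the function name is an assumption, rotation is limited to multiples of 90 degrees, and distortion and color adjustment (which need color channels and interpolation) are omitted.

```python
import numpy as np

def augment(image, rng):
    """Apply simplified versions of rotation, cropping, and resolution
    reduction to a square 2-D grayscale array."""
    # rotation: turn the picture by a random multiple of 90 degrees
    image = np.rot90(image, k=rng.integers(0, 4))
    # cropping: randomly select a part of the image (half the side length)
    h, w = image.shape
    top, left = rng.integers(0, h // 4), rng.integers(0, w // 4)
    image = image[top: top + h // 2, left: left + w // 2]
    # reducing resolution: keep every second pixel
    return image[::2, ::2]

rng = np.random.default_rng(0)
out = augment(np.random.default_rng(1).random((32, 32)), rng)
```

Each call produces a different variant of the same picture, which is how the training set is expanded.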
  • the model training method for detecting fundus pictures also includes:
  • the background pixels output by the first network model are deleted, and only key pixels are retained for subsequent processing, which reduces the amount of calculation and can increase the calculation speed.
  • In some embodiments, the first network model is obtained in S20 by training according to the M × N superpixels, as shown in FIG. 4, which includes:
  • The deep neural network in S201 is a deep belief network (DBN).
  • The deep belief network includes multiple stacked restricted Boltzmann machines (RBMs).
  • The structural principle of the restricted Boltzmann machine comes from the Boltzmann distribution in physics. Each restricted Boltzmann machine has two layers of neurons: one layer, called the visible layer, is composed of visible units and is used for input; the other layer, called the hidden layer, is composed of hidden units and is used for feature detection. Both visible and hidden units are binary variables, i.e. their states take the value 0 or 1. There are no connections within a layer, and the two layers are fully connected to each other.
  • The hidden layer of a lower restricted Boltzmann machine serves as the visible layer of the next higher restricted Boltzmann machine; that is, it provides the input data for the upper restricted Boltzmann machine.
  • the number of restricted Boltzmann machines stacked into a deep belief network can be set as required, which is not limited in the present disclosure.
  • each superpixel may be pre-marked by manual marking.
  • Taking the case where the deep neural network is a deep belief network as an example, the following provides a method of training the first network model based on the M × N superpixels, to clearly describe its implementation process.
  • The hidden layer of the first restricted Boltzmann machine is used as the visible layer of the second restricted Boltzmann machine; features are extracted and the weights are updated, and so on for each subsequent machine.
  • The training of each restricted Boltzmann machine is unsupervised: the data input to the visible layer does not need to be manually labeled.
  • The main steps of the contrastive divergence (CD) method are: set the states of the visible units of the restricted Boltzmann machine according to the superpixel, and compute the hidden states using the conditional probability of the hidden layer given the visible layer; once the hidden states are determined, compute the visible states according to the conditional probability of the visible layer given the hidden layer, thereby reconstructing the visible layer; and repeat this sampling until the model parameters converge.
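The CD steps above can be sketched for a single binary RBM in NumPy. This is a minimal CD-1 illustration under stated assumptions: the function name, learning rate, and layer sizes are chosen for the sketch, and a full DBN would stack several such machines and train them greedily.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, b_hid, lr=0.05, rng=None):
    """One contrastive-divergence (CD-1) update, following the described
    steps: set the visible layer from the data, sample the hidden layer from
    p(h|v), reconstruct the visible layer from p(v|h), and move the weights
    toward the data statistics and away from the reconstruction."""
    rng = rng or np.random.default_rng(0)
    # hidden states from the conditional probability p(h|v)
    ph0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # reconstruct the visible layer from p(v|h)
    pv1 = sigmoid(h0 @ W.T + b_vis)
    ph1 = sigmoid(pv1 @ W + b_hid)
    # parameter update: data term minus reconstruction term
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b_vis += lr * (v0 - pv1).mean(axis=0)
    b_hid += lr * (ph0 - ph1).mean(axis=0)
    return W, b_vis, b_hid, pv1

rng = np.random.default_rng(1)
v = (rng.random((16, 6)) < 0.5).astype(float)   # binary visible data
W = rng.normal(scale=0.1, size=(6, 4))          # 6 visible x 4 hidden units
b_v, b_h = np.zeros(6), np.zeros(4)
for _ in range(100):
    W, b_v, b_h, recon = cd1_step(v, W, b_v, b_h, rng=rng)
err = ((v - recon) ** 2).mean()                 # reconstruction error
```

Repeating the sampling until the parameters converge, as the text says, corresponds to running many such updates.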
  • The output of the deep belief network is compared with the manual labeling results, and the rate at which superpixels are correctly identified as key pixels or background pixels by the deep belief network is calculated.
  • If the accuracy is low, the error back-propagation (BP) algorithm can be used to calculate the mean square error of the deep belief network and continuously adjust the network parameters until the mean square error of the deep belief network is less than or equal to a set value.
  • In some embodiments, the second network model is obtained in S30 by training according to the superpixels that are key pixels among the M × N superpixels, as shown in FIG. 5, which includes:
  • A convolutional neural network is a multi-layer learning model that exploits the relative spatial positions of pixels and weight sharing to reduce the number of network weights, which improves the training of complex networks.
  • When a convolutional neural network is trained, it learns under supervision; that is, it is a supervised machine-learning model.
  • the convolutional neural network is a combination of the residual network and the Inception network.
  • The residual network, constructed with skip connections, breaks the convention of traditional neural networks that the output of the (S−1)-th layer can only be input to the S-th layer, so that the output of a given layer can skip several layers and directly serve as the input of a later layer.
  • the stacking of multiple residual networks can reduce the number of network parameters, reduce the amount of calculation, and increase the calculation speed.
  • The Inception network is a network with a parallel structure. Through an asymmetric convolution kernel structure, it reduces the amount of calculation and increases the calculation speed while keeping the information loss small enough.
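The two ideas above can be shown in a few lines. This is a conceptual sketch, not the patent's network: the residual block is reduced to its defining equation, and the asymmetric-kernel saving is shown as a parameter count rather than an actual convolution.

```python
import numpy as np

def residual_block(x, f):
    """Skip connection as described: the block's output is f(x) + x, so the
    input can cross the inner layers and feed a later layer directly."""
    return f(x) + x

# with a zero inner transform, the block initially passes x through unchanged
x = np.arange(4.0)
y = residual_block(x, lambda v: np.zeros_like(v))
assert np.allclose(y, x)

# Inception-style asymmetric factorization: replacing one n x n kernel with a
# 1 x n kernel followed by an n x 1 kernel cuts the weights from n*n to 2*n.
n = 3
params_full = n * n        # 9 weights for a 3x3 kernel
params_factored = 2 * n    # 6 weights for 1x3 followed by 3x1
```

The skip connection is what lets many residual blocks be stacked without the gradient vanishing, and the factorization is where the claimed reduction in calculation comes from.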
  • At least one superpixel among all the superpixels that are key pixels is selected each time from the M × N superpixels and input into the convolutional neural network; each superpixel that is a key pixel has been pre-marked as a lesion pixel or a non-lesion pixel.
  • each superpixel belonging to a key pixel may be pre-marked by manual marking.
  • The output of the convolutional neural network identifies each superpixel as a lesion pixel or a non-lesion pixel.
  • Taking the case where the convolutional neural network is a combination of the residual network and the Inception network as an example, the following provides a method for training the second network model based on the superpixels that are key pixels among the M × N superpixels, to clearly describe its implementation process.
  • the number of residual networks included in the convolutional neural network and the number of Inception networks can be set as required, and the present disclosure does not limit this.
  • The output of the convolutional neural network is compared with the manual labeling results, and the loss value over all superpixels that are key pixels is calculated.
  • If the loss value is large, back propagation can be used to adjust the network parameters until the loss value is less than or equal to the second threshold, thereby obtaining the second network model.
  • The main function of the convolutional neural network is to classify the superpixels that are key pixels as lesion or non-lesion; it serves as a classification model.
  • The loss function for calculating the loss value is the cross-entropy loss: L = −Σᵢ yᵢ log(yᵢ′), where yᵢ represents the probability distribution of the manual labeling results and yᵢ′ represents the probability distribution of the output of the convolutional neural network.
  • Cross entropy describes the distance between two probability distributions: the larger the cross entropy, the greater the difference between the two; the smaller the cross entropy, the closer the two are.
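The distance property of cross entropy is easy to verify numerically. A minimal sketch, with the function name and example distributions chosen for illustration:

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Cross entropy between the labeled distribution y_i and the network
    output y_i': L = -sum_i y_i * log(y_i')."""
    y_pred = np.clip(y_pred, eps, 1.0)  # avoid log(0)
    return -np.sum(y_true * np.log(y_pred))

labels = np.array([1.0, 0.0])   # superpixel labeled "lesion"
close = np.array([0.9, 0.1])    # output close to the label distribution
far = np.array([0.2, 0.8])      # output far from the label distribution

# the closer the two distributions, the smaller the cross entropy
assert cross_entropy(labels, close) < cross_entropy(labels, far)
```

Minimizing this loss therefore pulls the network's output distribution toward the labeled distribution.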
  • The embodiment of the present disclosure also provides a computer device, as shown in FIG. 10, including a memory 100 and a processor 200; the memory 100 stores a computer program that can run on the processor 200; when the processor 200 executes the computer program, the above model training method for detecting fundus pictures is implemented.
  • Memory may include, but is not limited to, disk drives, optical storage devices, solid-state storage devices, floppy disks, flexible disks, hard disks, tapes or any other magnetic media, compact disks or any other optical media, ROM (read-only memory), RAM (random-access memory), cache memory and/or any other memory chip or cartridge, and/or any other medium from which the processor can read data, instructions and/or code.
  • the processor may be any type of processor, and may include, but is not limited to, one or more general-purpose processors and/or one or more special-purpose processors (such as special-purpose processing chips).
  • the computer device may not include the memory 100.
  • Computer equipment can retrieve computer programs by accessing external or remote storage.
  • the embodiment of the present disclosure also provides a computer-readable medium storing a computer program, and the computer program is executed by a processor to implement the above-mentioned model training method for detecting fundus pictures.
  • the embodiment of the present invention also provides a model training device for detecting fundus pictures, as shown in FIG. 6, including:
  • the dividing module 10 is configured to divide each fundus picture in the N fundus pictures in the fundus picture training set into M superpixels; N and M are both positive integers.
  • The training module 20 is configured to train a first network model according to the M × N superpixels; the first network model is used to identify each input superpixel as a key pixel or a background pixel at output.
  • The training module 20 is also configured to train a second network model according to the superpixels that have been marked as key pixels; the second network model is used to identify each input superpixel as a lesion pixel or a non-lesion pixel at output.
  • The embodiment of the present application provides a model training device for detecting fundus pictures. The fundus pictures in the fundus picture training set are divided into multiple superpixels by the segmentation module, and the training module then trains the first network model with the superpixels so that it can recognize superpixels as key pixels or background pixels; the training module further trains the second network model with the key pixels so that it can recognize key pixels as lesion or non-lesion pixels. A model that quickly recognizes lesions in fundus pictures, with good recognition effect and high accuracy, can thus be trained.
  • the embodiment of the present disclosure also provides a method for detecting fundus pictures, as shown in FIG. 7, including:
  • S200: Input the P superpixels into the first network model obtained by the above model training method for detecting fundus pictures, so as to obtain the superpixels identified as key pixels.
  • S300: Input the superpixels identified as key pixels into the second network model obtained by the above model training method for detecting fundus pictures, so as to obtain the superpixels identified as key pixels that are also identified as lesion pixels.
  • S400: Find the positions of the superpixels identified as key pixels and lesion pixels in the fundus picture to be detected according to their corresponding addresses, and mark the positions on the fundus picture to be detected.
  • The address L corresponding to a superpixel identified as a key pixel with a lesion is used as a seed pixel. Starting from address L, it is checked whether the superpixels at the adjacent addresses L−1 and L+1 are also key pixels with lesions.
  • The superpixel at address L−1 or L+1 then serves as the new seed pixel, and the superpixels at its neighboring addresses are checked in the same way, and so on, until none of the superpixels at the neighboring addresses is a key pixel with a lesion. One search then ends, and all the adjacent key-pixel lesion superpixels found so far are merged and identified as one position. The traversal then continues with the next unvisited superpixel that is a key pixel with a lesion.
  • When marking the location of a lesion on the fundus picture to be detected, the mark can be a circle, a dot, a check mark, etc., as long as the human eye can distinguish it on the fundus picture; its shape and color are not limited in the present disclosure.
  • The embodiments of the present disclosure provide a method for detecting fundus pictures.
  • the first network model obtained by training is used to identify the superpixels, and the key pixels are obtained.
  • The key pixels are input into the trained second network model, which identifies the lesion superpixels among them; the positions of these superpixels are then found from their addresses and marked in the fundus picture. The above method can therefore quickly and accurately detect pathological changes in a fundus picture, and in application it can assist the doctor in rapid diagnosis and reduce the probability of misdiagnosis and missed diagnosis.
  • In some embodiments, the method for detecting the fundus picture further includes:
  • a second preprocessing is performed on the fundus picture to unify the size of the fundus picture, reduce adverse effects, and improve the accuracy of detection.
  • the embodiment of the present disclosure also provides a computer device, as shown in FIG. 10, including a memory 100 and a processor 200; the memory 100 stores a computer program that can run on the processor 200; the processor 200 executes the computer program to realize the above Detection method of fundus pictures.
  • the embodiment of the present disclosure also provides a computer-readable medium storing a computer program, and the computer program is executed by a processor to implement the above-mentioned method for detecting fundus pictures.
  • An embodiment of the present invention also provides a device for detecting fundus pictures, as shown in FIG. 9, including:
  • The segmentation module 10 is configured to segment the fundus picture to be detected into P superpixels.
  • the obtaining module 30 is configured to obtain the addresses corresponding to the P superpixels one to one.
  • the obtaining module 30 is further configured to input the P superpixels into the first network model obtained by the above-mentioned model training method for detecting fundus pictures, so as to obtain superpixels identified as key pixels.
  • the acquiring module 30 is further configured to input the superpixels identified as key pixels into the second network model obtained by the above-mentioned model training method for detecting fundus pictures, so as to acquire the superpixels identified as key pixels and having lesions. Pixels.
  • the identification module 40 is configured to find the position of the super pixel in the fundus image to be detected according to the address corresponding to the super pixel identified as a key pixel and the lesion, and to identify the position on the fundus image to be detected.

Abstract

The present application relates to a model training method and apparatus for detection on a fundus image, a method and apparatus for detection on a fundus image, a computer device, and a medium. The model training method for detection on a fundus image comprises: dividing each of N fundus images in a fundus image training set into M superpixels, wherein N and M are both positive integers; performing training to obtain a first network model according to the M×N superpixels, the first network model being used for identifying each inputted superpixel as a key pixel or a background pixel during outputting; and performing training to obtain a second network model according to superpixels which are key pixels among the M×N superpixels, the second network model being used for identifying each inputted superpixel as pathological or non-pathological during outputting.

Description

Model training method and device for detecting fundus pictures, method and device for detecting fundus pictures, computer device and medium

Technical Field

The present disclosure relates to the field of computer vision information, and in particular to a model training method for detecting fundus pictures, and a method and device for detecting fundus pictures.

Background Art

The fundus is the tissue at the inner back of the eyeball, and a picture of the fundus is a fundus picture. Fundus pictures can be used to diagnose fundus diseases such as glaucoma and macular degeneration of the fundus, and can also provide a reference basis for diagnosing diseases such as diabetes and hypertension.

At present, a doctor's process of identifying and diagnosing fundus lesions is lengthy, and small early-stage lesions are easily misdiagnosed or missed. Analyzing fundus pictures by computer vision technology can not only assist doctors in rapid diagnosis, but also reduce the probability of misdiagnosis and missed diagnosis.

Summary of the Invention

The embodiments of the present disclosure provide a model training method for detecting fundus pictures, and a method and device for detecting fundus pictures.
In one aspect, an embodiment of the present disclosure provides a model training method for detecting fundus pictures, including: dividing each of the N fundus pictures in a fundus picture training set into M superpixels, N and M both being positive integers; training a first network model according to the M×N superpixels, the first network model being used to identify each input superpixel, at output, as a key pixel or a background pixel; and training a second network model according to the superpixels among the M×N superpixels that are key pixels, the second network model being used to identify each input superpixel, at output, as lesion or non-lesion.

Optionally, training the first network model according to the M×N superpixels includes: constructing a deep neural network; each time selecting at least one of the M×N superpixels and inputting it into the deep neural network, where each of the M×N superpixels has been pre-marked as a key pixel or a background pixel; and comparing the output of the deep neural network with the pre-marked results of the superpixels and training the network parameters of the deep neural network, until the rate at which the deep neural network, when outputting a superpixel, correctly identifies it as a key pixel or a background pixel is greater than or equal to a first threshold, thereby obtaining the first network model.

Optionally, the deep neural network is a deep belief network.

Optionally, training the second network model according to the superpixels among the M×N superpixels that are key pixels includes: constructing a convolutional neural network; each time selecting, among the M×N superpixels, at least one of all the superpixels that are key pixels and inputting it into the convolutional neural network, where each superpixel that is a key pixel has been pre-marked as lesion or non-lesion; and comparing the output of the convolutional neural network with the pre-marked results of the key-pixel superpixels and training the network parameters of the convolutional neural network until the loss value of the convolutional neural network is less than or equal to a second threshold, thereby obtaining the second network model, the output of the convolutional neural network identifying each superpixel as lesion or non-lesion.

Optionally, the convolutional neural network is a combination of a residual network and an Inception network.

Optionally, before dividing each of the N fundus pictures in the fundus picture training set into M superpixels, the model training method for detecting fundus pictures further includes: preprocessing the fundus pictures, the preprocessing including at least one of rotation, cropping, distortion, scaling, color adjustment, and resolution reduction.
In another aspect, the embodiments of the present disclosure further provide a method for detecting fundus pictures, including: dividing a fundus picture to be detected into P superpixels and acquiring addresses in one-to-one correspondence with the P superpixels; inputting the P superpixels into a first network model to obtain the P superpixels identified as key pixels or background pixels; inputting the superpixels identified as key pixels into a second network model to obtain the superpixels, among those identified as key pixels, identified as lesion pixels or non-lesion pixels; and, according to the address corresponding to each superpixel identified as a key pixel and as a lesion pixel, finding the position of the superpixel in the fundus picture to be detected and marking that position on the fundus picture to be detected.
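The detection flow above can be sketched in a few lines. In this sketch, `segment`, `first_model`, and `second_model` are hypothetical placeholder callables standing in for the superpixel segmentation and the two trained network models; they are not APIs defined by this disclosure:

```python
def detect_fundus_image(image, segment, first_model, second_model):
    """End-to-end sketch of the detection flow described above.

    `segment` returns (superpixels, addresses); `first_model` and
    `second_model` are stand-ins for the two trained classifiers.
    Returns the addresses of superpixels that are key pixels with lesions.
    """
    superpixels, addresses = segment(image)      # P superpixels + their addresses
    marked = []
    for sp, addr in zip(superpixels, addresses):
        if first_model(sp) != "key":             # background pixel: discard
            continue
        if second_model(sp) == "lesion":         # key pixel containing a lesion
            marked.append(addr)                  # record its position for marking
    return marked
```

The returned addresses would then be used to mark the lesion positions on the fundus picture to be detected.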
Optionally, before dividing the fundus picture to be detected into P superpixels and acquiring the addresses in one-to-one correspondence with the P superpixels, the method for detecting fundus pictures further includes: preprocessing the fundus picture to be detected, the preprocessing including at least one of cropping and scaling.

Optionally, the first network model is obtained through the following training process: constructing a deep neural network; each time selecting at least one of M×N superpixels and inputting it into the deep neural network, where the M×N superpixels are obtained by dividing each of the N fundus pictures in a fundus picture training set into M superpixels, and each of the M×N superpixels has been pre-marked as a key pixel or a background pixel; and comparing the output of the deep neural network with the pre-marked results of the superpixels and training the network parameters of the deep neural network until the rate at which the deep neural network, when outputting a superpixel, correctly identifies it as a key pixel or a background pixel is greater than or equal to a first threshold, thereby obtaining the first network model.

Optionally, the deep neural network is a deep belief network.

Optionally, the second network model is obtained through the following training process: constructing a convolutional neural network; each time selecting, among M×N superpixels, at least one of all the superpixels that are key pixels and inputting it into the convolutional neural network, where the M×N superpixels are obtained by dividing each of the N fundus pictures in a fundus picture training set into M superpixels, and each superpixel that is a key pixel has been pre-marked as a lesion pixel or a non-lesion pixel; and comparing the output of the convolutional neural network with the pre-marked results of the key-pixel superpixels and training the network parameters of the convolutional neural network until the loss value of the convolutional neural network is less than or equal to a second threshold, thereby obtaining the second network model, the output of the convolutional neural network identifying each superpixel as a lesion pixel or a non-lesion pixel.

Optionally, the convolutional neural network is a combination of a residual network and an Inception network.

In another aspect, the embodiments of the present disclosure further provide a computer device including a memory and a processor, the memory storing a computer program that can run on the processor, and the processor, when executing the computer program, implementing the above model training method for detecting fundus pictures or the above method for detecting fundus pictures.

In yet another aspect, the embodiments of the present disclosure further provide a computer device including a processor which, when executing a computer program, implements the above model training method for detecting fundus pictures or the above method for detecting fundus pictures. In yet another aspect, the embodiments of the present disclosure further provide a computer-readable medium storing a computer program which, when executed by a processor, implements the above model training method for detecting fundus pictures or the above method for detecting fundus pictures.

In yet another aspect, an embodiment of the present invention further provides a model training device for detecting fundus pictures, including: a segmentation module configured to divide each of the N fundus pictures in a fundus picture training set into M superpixels, N and M both being positive integers; and a training module configured to train a first network model according to the M×N superpixels, the first network model being used to identify each input superpixel, at output, as a key pixel or a background pixel, the training module being further configured to train a second network model according to the superpixels marked as key pixels, the second network model being used to identify each input superpixel, at output, as lesion or non-lesion.

In yet another aspect, an embodiment of the present invention further provides a detection device for fundus pictures, including: a segmentation module configured to divide a fundus picture to be detected into P superpixels; an acquisition module configured to acquire addresses in one-to-one correspondence with the P superpixels, the acquisition module being further configured to input the P superpixels into a first network model to obtain the P superpixels identified as key pixels or background pixels, and further configured to input the superpixels identified as key pixels into a second network model to obtain the superpixels, among those identified as key pixels, identified as lesion pixels or non-lesion pixels; and an identification module configured to find, according to the address corresponding to each superpixel identified as a key pixel and as a lesion pixel, the position of the superpixel in the fundus picture to be detected, and to mark that position on the fundus picture to be detected.
Brief Description of the Drawings

In order to explain the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present disclosure; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative work.

FIG. 1 is a flowchart of a model training method for detecting fundus pictures according to an embodiment of the present disclosure;

FIG. 2 is a flowchart of another model training method for detecting fundus pictures according to an embodiment of the present disclosure;

FIG. 3 is a flowchart of another model training method for detecting fundus pictures according to an embodiment of the present disclosure;

FIG. 4 is a flowchart of yet another model training method for detecting fundus pictures according to an embodiment of the present disclosure;

FIG. 5 is a flowchart of yet another model training method for detecting fundus pictures according to an embodiment of the present disclosure;

FIG. 6 is a schematic structural diagram of a model training device for detecting fundus pictures according to an embodiment of the present disclosure;

FIG. 7 is a flowchart of a method for detecting fundus pictures according to an embodiment of the present disclosure;

FIG. 8 is a flowchart of another method for detecting fundus pictures according to an embodiment of the present disclosure;

FIG. 9 is a schematic structural diagram of a detection device for fundus pictures according to an embodiment of the present disclosure;

FIG. 10 is a schematic structural diagram of a computer device according to an embodiment of the present disclosure.
Detailed Description of the Embodiments

The technical solutions in the embodiments of the present disclosure will be described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present disclosure.

An embodiment of the present disclosure provides a model training method for detecting fundus pictures, as shown in FIG. 1, including:

S10: dividing each of the N fundus pictures in a fundus picture training set into M superpixels, N and M both being positive integers.

The fundus picture training set refers to a set of multiple fundus pictures used to train the models. The number of fundus pictures in the training set can be set as needed.

A superpixel is an irregular block of pixels with some visual significance, formed by adjacent pixels with similar texture, color, brightness, and other characteristics. A small number of superpixels can replace a large number of pixels to express the features of a fundus picture, reducing the complexity of its subsequent processing.

For example, a fundus picture is divided into M superpixels; that is, M superpixels replace the large number of pixels in the picture, reducing its complexity. On this basis, each fundus picture is divided into M superpixels, so that the complexity of all fundus pictures is reduced, and reduced to the same degree; when training the models, training is easier and the error is smaller.

The method of dividing each fundus picture into M superpixels is called superpixel segmentation. The principle of superpixel segmentation is based on a clustering algorithm; that is, a clustering algorithm is applied to the segmentation of the fundus picture.

Based on the above description, taking one fundus picture as an example, a method of dividing the fundus picture into M superpixels is provided below to clearly describe the process:

First, the number M of superpixels is set. In the fundus picture, M pixels are uniformly selected as initial cluster centers; each of the remaining pixels is then assigned, according to its distance to these cluster centers and by the nearest-neighbor principle, to the closest cluster.

Second, the cluster center of each newly obtained cluster (the mean of all pixels in the cluster) is recalculated, and this process is repeated until the cluster centers barely change, at which point the superpixel segmentation is complete.
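The two steps above can be sketched as a k-means-style loop over pixel coordinates and gray levels. This is a simplified illustrative stand-in (it assumes a single-channel square-grid image and plain Euclidean distance over position and intensity), not the exact segmentation procedure of the disclosure:

```python
import numpy as np

def superpixel_segment(image, M, n_iter=10):
    """Simplified clustering-based superpixel segmentation (sketch).

    image: 2-D array of gray levels; M: desired number of superpixels.
    Returns a label map assigning every pixel to one of at most M clusters.
    """
    h, w = image.shape
    # Step 1: uniformly place M initial cluster centers on a grid.
    grid = int(np.ceil(np.sqrt(M)))
    ys = np.linspace(0, h - 1, grid).astype(int)
    xs = np.linspace(0, w - 1, grid).astype(int)
    centers = np.array([(y, x, image[y, x])
                        for y in ys for x in xs], dtype=float)[:M]

    # Each pixel is a feature vector (row, column, intensity).
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.stack([yy.ravel(), xx.ravel(), image.ravel()], axis=1).astype(float)

    for _ in range(n_iter):
        # Assign every pixel to the nearest center (nearest-neighbor principle).
        d = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Step 2: recompute each center as the mean of its assigned pixels.
        new_centers = centers.copy()
        for k in range(len(centers)):
            members = feats[labels == k]
            if len(members):
                new_centers[k] = members.mean(axis=0)
        if np.allclose(new_centers, centers):
            break  # centers barely change: segmentation is complete
        centers = new_centers
    return labels.reshape(h, w)
```

Each connected group of same-label pixels then plays the role of one superpixel in the later training and detection steps.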
S20: training a first network model according to the M×N superpixels, the first network model being used to identify each input superpixel, at output, as a key pixel or a background pixel.

When training the first network model, the M×N superpixels need to be input in batches, and the number of superpixels input each time can be set as needed.

In a fundus picture, the superpixels that best reflect the content of the picture are called key pixels, and the remaining superpixels are called background pixels. Distinguishing key pixels from background pixels among all the superpixels eliminates the interference of background pixels in the fundus picture, comes closer to the user's detection intention, and helps improve detection performance.

S30: training a second network model according to the superpixels among the M×N superpixels that are key pixels, the second network model being used to identify each input superpixel, at output, as lesion or non-lesion.

When training the second network model, the key pixels also need to be input in batches, and the number of key pixels input each time can be set as needed.

Among the key pixels, those that best reflect fundus lesion information are identified as lesions, and the remaining key pixels are identified as non-lesions. Distinguishing lesions from non-lesions among the key pixels eliminates the interference of non-lesion superpixels in the fundus picture and realizes the user's detection intention.

The embodiments of the present application provide a model training method for detecting fundus pictures. By dividing the fundus pictures in the training set into multiple superpixels and using the superpixels to train the first network model, the first network model can, in subsequent applications, identify a superpixel as a key pixel or a background pixel. On this basis, the key pixels are used to train the second network model, so that in subsequent applications it can identify a key pixel as lesion or non-lesion. Through this simple training method, the trained models identify lesions quickly and with high accuracy.

Optionally, before dividing each of the N fundus pictures in the fundus picture training set into M superpixels in S10, as shown in FIG. 2, the model training method for detecting fundus pictures further includes:

S40: performing a first preprocessing on the fundus pictures.

The first preprocessing includes at least one of rotation, cropping, distortion, scaling, color adjustment, and resolution reduction.

Rotation rotates the fundus picture by a random angle about its center or a vertex; cropping randomly selects a part of the image; distortion applies a random four-point perspective transformation to the image; scaling unifies the sizes of the fundus pictures; color adjustment randomly alters the hue, saturation, and other attributes of the fundus picture.
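A minimal sketch of some of these operations, using numpy-only stand-ins: rotation is restricted here to multiples of 90 degrees and resizing is nearest-neighbor, whereas a real pipeline would use an image library for arbitrary-angle rotation and perspective distortion. The function assumes a square single-channel image:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Apply simplified versions of the first-preprocessing operations.

    img: 2-D float array (square, gray levels in 0..255).
    Returns a randomly rotated, cropped, brightness-adjusted,
    downsampled copy.
    """
    # Rotation (simplified to a random multiple of 90 degrees).
    img = np.rot90(img, k=rng.integers(0, 4))
    # Random crop of a sub-window, then nearest-neighbor resize back.
    h, w = img.shape
    top, left = rng.integers(0, h // 4), rng.integers(0, w // 4)
    crop = img[top: top + 3 * h // 4, left: left + 3 * w // 4]
    ys = np.linspace(0, crop.shape[0] - 1, h).astype(int)
    xs = np.linspace(0, crop.shape[1] - 1, w).astype(int)
    img = crop[np.ix_(ys, xs)]
    # Color adjustment: random brightness scaling, clipped to valid range.
    img = np.clip(img * rng.uniform(0.8, 1.2), 0, 255)
    # Resolution reduction: keep every second pixel.
    return img[::2, ::2]
```

Applying such transformations to each training picture effectively enlarges the training set, as the following paragraph explains.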
Performing the first preprocessing on the fundus pictures before training corrects the content of the fundus pictures and can serve to enlarge the fundus picture training set, so that the trained models can process images taken under various shooting conditions, improving the accuracy of model recognition.

In addition, when the first preprocessing includes resolution reduction, training the models with low-resolution fundus pictures makes the recognition of the trained models more accurate in the actual process of detecting fundus lesions.

Optionally, after training the first network model according to the M×N superpixels in S20 and before training the second network model in S30 according to the superpixels among the M×N superpixels that are key pixels, as shown in FIG. 3, the model training method for detecting fundus pictures further includes:

S50: deleting the background pixels output by the first network model.

Deleting the background pixels output by the first network model and keeping only the key pixels for subsequent processing reduces the amount of computation and can increase the computation speed.

Optionally, training the first network model according to the M×N superpixels in S20, as shown in FIG. 4, includes:

S201: constructing a deep neural network.

Optionally, the deep neural network in S201 is a Deep Belief Network (DBN).

A deep belief network includes multiple stacked Restricted Boltzmann Machines (RBMs). The structural principle of the RBM derives from the Boltzmann distribution in physics. Each RBM has two layers of neurons: one, called the visible layer, is composed of visible units and used for input; the other, called the hidden layer, is composed of hidden units and used for detection. Both visible and hidden units are binary variables; that is, their states take the value 0 or 1. Within each layer of neurons there are no connections, while the layers are fully connected to each other.

When multiple stacked RBMs form a deep belief network, the hidden layer of a lower RBM serves as the visible layer of the RBM one level above, providing the input data for that higher RBM.

The number of RBMs stacked into the deep belief network can be set as needed, which is not limited in the present disclosure.

S202: each time selecting at least one superpixel among the M×N superpixels and inputting it into the deep neural network, where each of the M×N superpixels has been pre-marked as a key pixel or a background pixel.

For example, each superpixel may be pre-marked manually.

S203: comparing the output of the deep neural network with the pre-marked results of the superpixels and training the network parameters of the deep neural network, until the rate at which the deep neural network, when outputting a superpixel, correctly identifies it as a key pixel or a background pixel is greater than or equal to the first threshold, thereby obtaining the first network model.

Based on the above description, when the deep neural network is a deep belief network, a method of training the first network model according to the M×N superpixels is provided below to clearly describe the implementation.

First, a deep belief network is constructed, set to be formed by stacking Q RBMs, Q being a positive integer.

Second, at least one superpixel at a time is input into the visible layer of the first RBM in the deep belief network for unsupervised training; the features of the superpixels are extracted and the weights are updated by contrastive divergence. The hidden layer of the first RBM then serves as the visible layer of the second RBM, from which features are again extracted and weights updated, and so on. The hidden layer of the (Q−1)-th RBM serves as the visible layer of the Q-th RBM, where label neurons respectively representing key pixels and background pixels are also set; features continue to be extracted and weights updated. The hidden layer of the Q-th RBM is connected to the output layer for output.

It should be noted that unsupervised training means that, for each RBM, the data input into the visible layer during the training stage does not need to be labeled manually.

The main steps of Contrastive Divergence (CD) are: setting the visible-layer state of the RBM according to the superpixels; computing the hidden-layer state using the conditional probability of the hidden layer given the visible layer; after the states of the hidden units are determined, computing the next visible-layer state using the conditional probability of the visible layer given the hidden layer, thereby reconstructing the visible layer; and repeating the sampling until the model parameters converge.

Then, the output of the deep belief network is compared with the manual labels, and the rate at which all superpixels are correctly identified as key pixels or background pixels by the deep belief network is computed. When this rate is low, the Error Back Propagation (BP) algorithm can be used to compute the mean square error of the deep belief network, and the network parameters are adjusted continuously until the mean square error of the deep belief network is less than or equal to a set third threshold, thereby obtaining the deep belief network.
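The contrastive-divergence update described above can be sketched for a single binary RBM as a CD-1 step: one positive phase on the data, one reconstruction of the visible layer, and a gradient estimated as the difference of the two statistics. Layer sizes and the learning rate are illustrative, not values from the disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, b_hid, lr=0.1):
    """One CD-1 weight update for a binary RBM (illustrative sketch).

    v0: batch of visible vectors, shape (batch, n_vis); W: (n_vis, n_hid).
    Returns the updated parameters.
    """
    # Positive phase: hidden probabilities and samples given the data.
    ph0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: reconstruct the visible layer, then the hidden layer again.
    pv1 = sigmoid(h0 @ W.T + b_vis)
    ph1 = sigmoid(pv1 @ W + b_hid)
    # Gradient estimate: data statistics minus reconstruction statistics.
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b_vis += lr * (v0 - pv1).mean(axis=0)
    b_hid += lr * (ph0 - ph1).mean(axis=0)
    return W, b_vis, b_hid
```

Repeating this step until the parameters converge corresponds to the per-RBM training described above; the greedy layer-wise stacking then feeds each RBM's hidden activities to the next RBM as its visible input.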
可选地,上述S30中根据M×N个超像素中属于关键像素的超像素,训练得到第二网络模型,如图5所示,包括:Optionally, the second network model is obtained by training according to superpixels belonging to key pixels among the M×N superpixels in S30, as shown in FIG. 5, which includes:
S301、构建卷积神经网络。S301. Construct a convolutional neural network.
A convolutional neural network model is a multi-layer learning architecture that exploits the relative spatial positions and shared weights in an image to reduce the number of network weights, thereby improving training performance on complex networks.
During training, a convolutional neural network is a machine learning model that learns under supervision.
可选地,卷积神经网络为残差网络和Inception网络的结合。Optionally, the convolutional neural network is a combination of the residual network and the Inception network.
A residual network, built with skip connections, breaks the convention of traditional neural networks that the output of layer S-1 can only feed layer S as input: the output of a given layer can skip several layers and serve directly as the input of a later layer. Stacking multiple residual blocks reduces the number of network parameters, lowers the computational cost, and speeds up computation.
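The skip connection can be illustrated with a minimal 1-D NumPy sketch; this is not the patent's network, and the kernel names are assumptions.

```python
import numpy as np

def conv_same(x, k):
    """'same'-padded 1-D convolution standing in for a conv layer."""
    return np.convolve(x, k, mode="same")

def residual_block(x, k1, k2):
    """y = ReLU(x + F(x)): the identity shortcut lets the input skip the
    two convolutions and feed a later layer directly, so the block only
    has to learn the residual F."""
    h = np.maximum(conv_same(x, k1), 0.0)   # first conv + ReLU
    h = conv_same(h, k2)                    # second conv
    return np.maximum(x + h, 0.0)           # add the skipped input back
```

With zero kernels the residual branch contributes nothing and the block reduces to the identity on non-negative inputs, which is exactly the degenerate case the shortcut guarantees.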
An Inception network is a network with a parallel structure; by using asymmetric convolution kernels it reduces the computational cost and increases speed while keeping the information loss sufficiently small.
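The asymmetric-kernel idea can be shown by parameter counts: replacing an n×n kernel with a 1×n kernel followed by an n×1 kernel cuts the weights from n² to 2n. A NumPy sketch (an assumption for illustration, not the patent's exact architecture):

```python
import numpy as np

def sep_conv2d(img, k_row, k_col):
    """Apply a 1xn kernel along each row, then an nx1 kernel along each
    column. For separable kernels this matches the effect of a full nxn
    convolution while using 2n weights instead of n**2."""
    rows = np.apply_along_axis(
        lambda r: np.convolve(r, k_row, mode="same"), 1, img)
    return np.apply_along_axis(
        lambda c: np.convolve(c, k_col, mode="same"), 0, rows)
```

With length-1 kernels equal to 1 the operation is the identity, which makes the shape-preserving behavior easy to check.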
S302. Each time, at least one superpixel among all the superpixels belonging to key pixels in the M×N superpixels is selected and input into the convolutional neural network, where each superpixel belonging to a key pixel has been pre-marked as lesion or non-lesion.
For example, each superpixel belonging to a key pixel may be pre-marked manually.
S303. Compare the output of the convolutional neural network with the pre-marked results of the superpixels belonging to key pixels, and train the network parameters of the convolutional neural network until its loss value is less than or equal to a second threshold, obtaining the second network model; the output of the convolutional neural network includes identifying each superpixel as lesion or non-lesion.
Based on the above description, when the convolutional neural network is a combination of a residual network and an Inception network, a method for training the second network model from the superpixels belonging to key pixels among the M×N superpixels is provided below to describe the implementation clearly.
首先,构建卷积神经网络。First, build a convolutional neural network.
卷积神经网络中包括的残差网络的个数、Inception网络的个数可以根据需要进行设定,本公开对此不进行限定。The number of residual networks included in the convolutional neural network and the number of Inception networks can be set as required, and the present disclosure does not limit this.
Second, each time at least one superpixel belonging to a key pixel is fed into the input layer of the convolutional neural network and convolved by the residual network and the Inception network; the convolution result is passed to the fully connected layer, which identifies the superpixel as lesion or non-lesion before output.
Then, the output of the convolutional neural network is compared with the manual labels, and the loss value over all superpixels belonging to key pixels is computed. When the loss value is large, back propagation can be used to adjust the network parameters until the loss value is less than or equal to the second threshold, yielding the trained convolutional neural network.
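The compare-adjust-repeat cycle just described can be sketched generically; `step` is an assumed placeholder callable (one forward/backward pass returning the batch loss), not the patent's code.

```python
def train_until_threshold(step, batches, threshold, max_epochs=100):
    """Repeat parameter updates until every batch's loss is at or below
    the threshold (the 'second threshold' above), or a safety cap is hit.

    step(batch) -> float : assumed callable that runs one forward pass,
    back-propagates, updates the parameters, and returns the batch loss.
    """
    for _ in range(max_epochs):
        losses = [step(b) for b in batches]
        if max(losses) <= threshold:
            return True   # loss threshold reached; training stops
    return False          # cap hit before the threshold was reached
```

The same loop shape applies to the first network model, with the accuracy test of S203 in place of the loss test.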
Here, the main role of the convolutional neural network is to classify the superpixels belonging to key pixels as lesion or non-lesion, i.e., it is used as a classification model; in this case, the loss value is computed with the cross-entropy (Cross Entropy Loss) loss function.
The cross-entropy loss function is:

Loss = −Σ_i y_i · log(y_i′)

where y_i denotes the probability distribution of the manual labels, and y_i′ denotes the probability distribution of the output of the convolutional neural network.
Cross entropy describes the distance between two probability distributions: the larger the cross entropy, the greater the difference between the two; the smaller the cross entropy, the closer they are.
It should be noted that, before the loss value is computed with the cross-entropy loss function, since the output of the convolutional neural network is not itself a probability distribution, Softmax regression must first be used to normalize the output into the (0, 1) interval so that it becomes a probability distribution.
For example, the M×N superpixels are fed into the convolutional neural network in batches and output after convolution. Suppose the output of one batch contains K superpixels, where 1 ≤ K ≤ M×N and K is a positive integer, and i denotes the i-th superpixel. Then, by Softmax regression, the probability distribution of the network's output for the i-th superpixel is:

y_i′ = e^{a_i} / Σ_{j=1}^{K} e^{a_j}

where a_i denotes the network's raw output (logit) for the i-th superpixel.
An embodiment of the present disclosure further provides a computer device, as shown in FIG. 10, including a memory 100 and a processor 200; the memory 100 stores a computer program that can run on the processor 200, and the processor 200 executes the computer program to implement the above model training method for detecting fundus pictures. The memory may include, but is not limited to, disk drives, optical storage devices, solid-state storage devices, floppy disks, flexible disks, hard disks, tapes or any other magnetic media, compact disks or any other optical media, ROM (read-only memory), RAM (random-access memory), cache memory, and/or any other memory chip or cartridge, and/or any other medium from which the processor can read data, instructions, and/or code. The processor may be any type of processor, including but not limited to one or more general-purpose processors and/or one or more special-purpose processors (such as special-purpose processing chips). In some embodiments of the present disclosure, the computer device may not include the memory 100; the computer device may retrieve the computer program by accessing external or remote storage.
本公开的实施例还提供一种计算机可读介质,其存储有计算机程序,计算机程序被处理器执行时实现上述的用于检测眼底图片的模型训练方法。本发明的实施例还提供一种用于检测眼底图片的模型训练装置,如图6所示,包括:The embodiment of the present disclosure also provides a computer-readable medium storing a computer program, and the computer program is executed by a processor to implement the above-mentioned model training method for detecting fundus pictures. The embodiment of the present invention also provides a model training device for detecting fundus pictures, as shown in FIG. 6, including:
The segmentation module 10 is configured to divide each of the N fundus pictures in the fundus picture training set into M superpixels; both N and M are positive integers.
训练模块20,配置为根据M×N个超像素,训练得到第一网络模型;第一网络模型用于将输入的每个所述超像素,在输出时标识为关键像素或背景像素。The training module 20 is configured to train a first network model according to M×N superpixels; the first network model is used to identify each input superpixel as a key pixel or a background pixel when outputting.
训练模块20,还配置为根据已标记为关键像素的超像素,训练得到第二网络模型;第二网络模型用于将输入的每个超像素,在输出时标识为病变或非病变。The training module 20 is also configured to train to obtain a second network model according to the superpixels that have been marked as key pixels; the second network model is used to identify each superpixel input as a diseased or non-pathological when outputting.
An embodiment of the present application provides a model training device for detecting fundus pictures. The segmentation module divides the fundus pictures in the fundus picture training set into multiple superpixels; the training module then trains the first network model with these superpixels so that it can identify each superpixel as a key pixel or a background pixel, and further trains the second network model with the key pixels so that it can identify each key pixel as lesion or non-lesion. With this simple training device, a model can be trained that quickly identifies lesions in fundus pictures with good recognition performance and high accuracy.
本公开的实施例还提供一种眼底图片的检测方法,如图7所示,包括:The embodiment of the present disclosure also provides a method for detecting fundus pictures, as shown in FIG. 7, including:
S100、将待检测眼底图片分割为P个超像素,并获取P个超像素一一对应的地址。S100. Divide the fundus image to be detected into P superpixels, and obtain addresses corresponding to the P superpixels one-to-one.
S200、将该P个超像素输入由上述的用于检测眼底图片的模型训练方法得到的第一网络模型中,从而获取标识为关键像素的超像素。S200: Input the P superpixels into the first network model obtained by the above-mentioned model training method for detecting fundus pictures, so as to obtain superpixels identified as key pixels.
S300、将标识为关键像素的超像素输入由上述的用于检测眼底图片的模型训练方法得到的第二网络模型中,从而获取标识为关键像素且病变的超像素。S300: Input superpixels identified as key pixels into the second network model obtained by the above-mentioned model training method for detecting fundus pictures, so as to obtain superpixels identified as key pixels and with lesions.
S400、根据标识为关键像素且病变的超像素对应的地址,在待检测眼底图片找到该超像素的位置,并在待检测眼底图片上标识出该位置。S400: Find the position of the superpixel in the fundus picture to be detected according to the address corresponding to the superpixel identified as the key pixel and the lesion, and mark the position on the fundus picture to be detected.
Based on this, the addresses of the superpixels identified as key pixels and lesions can be traversed in turn: superpixels at adjacent addresses that are identified as key pixels and lesions are merged, and the merged position is then marked on the fundus picture to be detected.
For example, take the address L corresponding to a superpixel identified as a key pixel and a lesion as a seed pixel; according to the address L, check whether the superpixels at the adjacent addresses L-1 and L+1 are likewise key pixels and lesions.
若否,则单独对其位置进行标识。If not, the location is identified separately.
If at least one of them is, the superpixel at address L-1 or L+1 is taken as the new seed pixel, and the superpixels at addresses adjacent to that seed are checked in the same way, and so on, until no superpixel at an adjacent address is both a key pixel and a lesion. One search then ends, and all the adjacent key-pixel lesion superpixels found so far are merged and their combined position is marked. Traversal then continues with the next unmarked superpixel that is a key pixel and a lesion.
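Because the addresses are linear, the seed-expansion traversal above amounts to grouping consecutive lesion addresses into runs. A sketch (the function name and data shapes are assumptions):

```python
def merge_lesion_runs(lesion_addrs):
    """Group consecutive superpixel addresses (key pixels flagged as
    lesions) into inclusive (start, end) runs, so each run can be marked
    as one merged region on the fundus picture.

    lesion_addrs: iterable of integer addresses, e.g. {3, 4, 5, 9}
    returns: list of runs, e.g. [(3, 5), (9, 9)]
    """
    addrs = sorted(set(lesion_addrs))
    runs = []
    for a in addrs:
        if runs and a == runs[-1][1] + 1:   # adjacent to the current run
            runs[-1] = (runs[-1][0], a)     # grow the run (seed expansion)
        else:
            runs.append((a, a))             # start a new run at this seed
    return runs
```

An isolated lesion address yields a run of length one, matching the case where no adjacent address qualifies and the position is marked individually.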
In addition, when the lesion position is marked on the fundus picture to be detected, the mark may be a circle, a dot, a check mark, or the like, as long as the human eye can distinguish it from the fundus picture; the present disclosure does not limit its shape or color.
The embodiments of the present disclosure provide a method for detecting fundus pictures: the fundus picture to be detected is divided into multiple superpixels; the trained first network model identifies the superpixels and extracts the key pixels among them; the key pixels are then fed into the trained second network model, which identifies the lesion superpixels; finally, according to each such superpixel's address, its position is found and marked in the fundus image. This method can detect lesions in fundus pictures quickly and accurately and, in application, can assist doctors in rapid diagnosis and reduce the probability of misdiagnosis and missed diagnosis.
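The detection flow of S100 to S400 can be sketched as a two-stage pipeline; `segment`, `model1`, and `model2` are assumed stand-in callables, not the patent's implementations.

```python
def detect_fundus_image(image, segment, model1, model2):
    """Two-stage detection sketch: stage 1 keeps superpixels the first
    model flags as key pixels; stage 2 keeps those the second model
    flags as lesions; the returned addresses locate the lesions on the
    original picture."""
    superpixels = segment(image)                 # [(address, patch), ...]
    key = [(a, p) for a, p in superpixels if model1(p) == "key"]
    return [a for a, p in key if model2(p) == "lesion"]
```

Only the key-pixel superpixels reach the second model, which is what makes the two-stage design faster than classifying every superpixel as lesion or non-lesion directly.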
Optionally, before S100 divides the fundus picture to be detected into P superpixels and obtains the addresses corresponding one-to-one to the P superpixels, as shown in FIG. 8, the method for detecting fundus pictures further includes:
S500、对待检测眼底图片进行第二预处理;第二预处理,包括:剪切和缩放中的至少一种。S500. Perform a second preprocessing on the fundus image to be detected; the second preprocessing includes at least one of cropping and zooming.
在检测眼底图片之前,对眼底图片进行第二预处理,统一眼底图片的尺寸,降低不良影响,提高检测的准确度。Before detecting the fundus picture, a second preprocessing is performed on the fundus picture to unify the size of the fundus picture, reduce adverse effects, and improve the accuracy of detection.
An embodiment of the present disclosure further provides a computer device, as shown in FIG. 10, including a memory 100 and a processor 200; the memory 100 stores a computer program that can run on the processor 200, and the processor 200 executes the computer program to implement the above method for detecting fundus pictures.
本公开的实施例还提供一种计算机可读介质,其存储有计算机程序,计算机程序被处理器执行时实现上述的眼底图片的检测方法。The embodiment of the present disclosure also provides a computer-readable medium storing a computer program, and the computer program is executed by a processor to implement the above-mentioned method for detecting fundus pictures.
本发明的实施例还提供一种眼底图片的检测装置,如图9所示,包括:An embodiment of the present invention also provides a device for detecting fundus pictures, as shown in FIG. 9, including:
分割模块10,配置为将待检测眼底图片分割为M个超像素。The segmentation module 10 is configured to segment the fundus image to be detected into M superpixels.
获取模块30,配置为获取P个超像素一一对应的地址。The obtaining module 30 is configured to obtain the addresses corresponding to the P superpixels one to one.
获取模块30,还配置为将该P个超像素输入由上述的用于检测眼底图片的模 型训练方法得到的第一网络模型中,从而获取标识为关键像素的超像素。The obtaining module 30 is further configured to input the P superpixels into the first network model obtained by the above-mentioned model training method for detecting fundus pictures, so as to obtain superpixels identified as key pixels.
The acquisition module 30 is further configured to input the superpixels identified as key pixels into the second network model obtained by the above model training method for detecting fundus pictures, so as to obtain the superpixels identified as key pixels and as lesions.
标识模块40,配置为根据标识为关键像素且病变的所述超像素对应的地址,在所述待检测眼底图片找到该超像素的位置,并在所述待检测眼底图片上标识出该位置。The identification module 40 is configured to find the position of the super pixel in the fundus image to be detected according to the address corresponding to the super pixel identified as a key pixel and the lesion, and to identify the position on the fundus image to be detected.
The above are only specific implementations of the present disclosure, but the protection scope of the present disclosure is not limited thereto; any changes or substitutions readily conceivable by a person skilled in the art within the technical scope disclosed herein shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (17)

  1. 一种用于检测眼底图片的模型训练方法,包括:A model training method for detecting fundus pictures includes:
    将眼底图片训练集的N个眼底图片中的每个所述眼底图片,分割为M个超像素;N和M均为正整数;Divide each of the N fundus pictures in the fundus picture training set into M superpixels; N and M are both positive integers;
    根据M×N个所述超像素,训练得到第一网络模型;所述第一网络模型用于将输入的每个所述超像素,在输出时标识为关键像素或背景像素;Training to obtain a first network model according to the M×N superpixels; the first network model is used to identify each input superpixel as a key pixel or a background pixel during output;
    training a second network model according to the superpixels belonging to key pixels among the M×N superpixels, the second network model being used to identify each input superpixel as lesion or non-lesion at the output.
  2. 根据权利要求1所述的用于检测眼底图片的模型训练方法,其中,根据M×N个所述超像素,训练得到第一网络模型,包括:The model training method for detecting fundus pictures according to claim 1, wherein training the first network model according to the M×N superpixels comprises:
    构建深层神经网络;Build a deep neural network;
    selecting, each time, at least one of the M×N superpixels and inputting it into the deep neural network, wherein each of the M×N superpixels has been pre-marked as a key pixel or a background pixel;
    comparing the output of the deep neural network with the pre-marked results of the superpixels, and training the network parameters of the deep neural network until the rate at which the deep neural network correctly identifies the superpixels as key pixels or background pixels at the output is greater than or equal to a first threshold, to obtain the first network model.
  3. 根据权利要求2所述的用于检测眼底图片的模型训练方法,其中,所述深层神经网络为深度信念网络。The model training method for detecting fundus pictures according to claim 2, wherein the deep neural network is a deep belief network.
  4. The model training method for detecting fundus pictures according to any one of claims 1-3, wherein training the second network model according to the superpixels belonging to key pixels among the M×N superpixels includes:
    构建卷积神经网络;Build a convolutional neural network;
    selecting, each time, at least one of all the superpixels belonging to key pixels among the M×N superpixels and inputting it into the convolutional neural network, wherein each superpixel belonging to a key pixel has been pre-marked as lesion or non-lesion;
    comparing the output of the convolutional neural network with the pre-marked results of the superpixels belonging to key pixels, and training the network parameters of the convolutional neural network until the loss value of the convolutional neural network is less than or equal to a second threshold, to obtain the second network model; the output of the convolutional neural network includes identifying the superpixels as lesion or non-lesion.
  5. 根据权利要求4所述的用于检测眼底图片的模型训练方法,其中,所述卷积神经网络为残差网络和Inception网络的结合。The model training method for detecting fundus pictures according to claim 4, wherein the convolutional neural network is a combination of a residual network and an Inception network.
  6. 根据权利要求1所述的用于检测眼底图片的模型训练方法,其中,将眼底图片训练集的N个眼底图片中的每个所述眼底图片,分割为M个超像素之前,所述用于检测眼底图片的模型训练方法还包括:The model training method for detecting fundus pictures according to claim 1, wherein before dividing each of the N fundus pictures in the fundus picture training set into M superpixels, the The model training method for detecting fundus pictures also includes:
    对所述眼底图片进行预处理;Preprocessing the fundus picture;
    所述预处理,包括:旋转、剪切、扭曲、缩放、调整色差、降低分辨率中的至少一种。The preprocessing includes at least one of rotation, shearing, distortion, scaling, adjusting color difference, and reducing resolution.
  7. 一种眼底图片的检测方法,包括:A method for detecting fundus pictures includes:
    将待检测眼底图片分割为P个超像素,并获取所述P个超像素一一对应的地址;Dividing the fundus image to be detected into P superpixels, and obtaining addresses corresponding to the P superpixels one-to-one;
    将所述P个超像素输入第一网络模型中,从而获取标识为关键像素或背景像素的所述P个超像素;Input the P superpixels into the first network model to obtain the P superpixels identified as key pixels or background pixels;
    将标识为关键像素的所述超像素输入第二网络模型中,从而获取标识为病变像素或非病变像素的标识为关键像素的所述超像素;Input the superpixels identified as key pixels into the second network model, thereby obtaining the superpixels identified as diseased pixels or non-pathological pixels as key pixels;
    根据标识为关键像素且标识为病变像素的所述超像素对应的地址,在所述待检测眼底图片找到所述超像素的位置,并在所述待检测眼底图片上标识出该位置。According to the address corresponding to the super pixel identified as a key pixel and identified as a diseased pixel, the position of the super pixel is found in the fundus picture to be detected, and the position is marked on the fundus picture to be detected.
  8. 根据权利要求7所述的眼底图片的检测方法,其中,将待检测眼底图片分割为P个超像素,并获取P个所述超像素一一对应的地址之前,所述眼底图片的检测方法还包括:The method for detecting a fundus picture according to claim 7, wherein before dividing the fundus picture to be detected into P superpixels, and obtaining the addresses corresponding to the P superpixels one-to-one, the detection method for the fundus picture further include:
    对所述待检测眼底图片进行预处理;Preprocessing the fundus image to be detected;
    所述预处理,包括:剪切和缩放中的至少一种。The preprocessing includes at least one of cropping and scaling.
  9. 根据权利要求7所述的眼底图片的检测方法,其中,第一网络模型是通过如下训练处理获得的:The method for detecting fundus pictures according to claim 7, wherein the first network model is obtained through the following training process:
    构建深层神经网络;Build a deep neural network;
    selecting, each time, at least one of M×N superpixels and inputting it into the deep neural network, wherein the M×N superpixels are obtained by dividing each of the N fundus pictures in a fundus picture training set into M superpixels, and each of the M×N superpixels has been pre-marked as a key pixel or a background pixel;
    comparing the output of the deep neural network with the pre-marked results of the superpixels, and training the network parameters of the deep neural network until the rate at which the deep neural network correctly identifies the superpixels as key pixels or background pixels at the output is greater than or equal to a first threshold, to obtain the first network model.
  10. 根据权利要求9所述的眼底图片的检测方法,其中,所述深层神经网络为深度信念网络。The method for detecting fundus pictures according to claim 9, wherein the deep neural network is a deep belief network.
  11. The method for detecting fundus pictures according to claim 7, wherein the second network model is obtained through the following training process:
    构建卷积神经网络;Build a convolutional neural network;
    selecting, each time, at least one of all the superpixels belonging to key pixels among M×N superpixels and inputting it into the convolutional neural network, wherein the M×N superpixels are obtained by dividing each of the N fundus pictures in a fundus picture training set into M superpixels, and each superpixel belonging to a key pixel has been pre-marked as a lesion pixel or a non-lesion pixel;
    comparing the output of the convolutional neural network with the pre-marked results of the superpixels belonging to key pixels, and training the network parameters of the convolutional neural network until the loss value of the convolutional neural network is less than or equal to a second threshold, to obtain the second network model; the output of the convolutional neural network includes identifying the superpixels as lesion pixels or non-lesion pixels.
  12. 根据权利要求11所述的眼底图片的检测方法,其中,所述卷积神经网络为残差网络和Inception网络的结合。The method for detecting fundus pictures according to claim 11, wherein the convolutional neural network is a combination of a residual network and an Inception network.
  13. A computer device, including a memory and a processor; the memory stores a computer program that can run on the processor; when executing the computer program, the processor implements the model training method for detecting fundus pictures according to any one of claims 1-6 or the method for detecting fundus pictures according to any one of claims 7-8.
  14. A computer device, including a processor, wherein, when executing a computer program, the processor implements the model training method for detecting fundus pictures according to any one of claims 1-6 or the method for detecting fundus pictures according to any one of claims 7-8.
  15. A computer-readable medium storing a computer program which, when executed by a processor, implements the model training method for detecting fundus pictures according to any one of claims 1-6 or the method for detecting fundus pictures according to any one of claims 7-8.
  16. 一种用于检测眼底图片的模型训练装置,包括:A model training device for detecting fundus pictures includes:
    分割模块,配置为将眼底图片训练集的N个眼底图片中的每个所述眼底图片,分割为M个超像素;N和M均为正整数;A segmentation module, configured to divide each of the N fundus pictures in the fundus picture training set into M superpixels; N and M are both positive integers;
    a training module configured to train a first network model according to the M×N superpixels, the first network model being used to identify each input superpixel as a key pixel or a background pixel at the output;
    the training module being further configured to train a second network model according to the superpixels that have been marked as key pixels, the second network model being used to identify each input superpixel as lesion or non-lesion at the output.
  17. 一种眼底图片的检测装置,包括:A detection device for fundus pictures includes:
    a segmentation module configured to divide a fundus picture to be detected into P superpixels;
    获取模块,配置为获取所述P个超像素一一对应的地址;An obtaining module configured to obtain the addresses corresponding to the P superpixels one-to-one;
    获取模块,还配置为将所述P个超像素输入第一网络模型中,从而获取标识为关键像素或背景像素的所述P个超像素;The obtaining module is further configured to input the P superpixels into the first network model, so as to obtain the P superpixels identified as key pixels or background pixels;
    获取模块,还配置为将标识为关键像素的所述超像素输入第二网络模型中,从而获取标识为病变像素或非病变像素的标识为关键像素的所述超像素;The acquiring module is further configured to input the superpixels identified as key pixels into the second network model, so as to acquire the superpixels identified as diseased pixels or non-pathological pixels as key pixels;
    an identification module configured to find, according to the address corresponding to the superpixel identified as a key pixel and as a lesion pixel, the position of that superpixel in the fundus picture to be detected, and to mark the position on the fundus picture to be detected.
PCT/CN2020/076501 2019-04-19 2020-02-25 Model training method and apparatus for detection on fundus image, method and apparatus for detection on fundus image, computer device, and medium WO2020211530A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910320422.X 2019-04-19
CN201910320422.XA CN110070531B (en) 2019-04-19 2019-04-19 Model training method for detecting fundus picture, and fundus picture detection method and device

Publications (1)

Publication Number Publication Date
WO2020211530A1 true WO2020211530A1 (en) 2020-10-22

Family

ID=67368200

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/076501 WO2020211530A1 (en) 2019-04-19 2020-02-25 Model training method and apparatus for detection on fundus image, method and apparatus for detection on fundus image, computer device, and medium

Country Status (2)

Country Link
CN (1) CN110070531B (en)
WO (1) WO2020211530A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112926596A (en) * 2021-02-10 2021-06-08 北京邮电大学 Real-time superpixel segmentation method and system based on recurrent neural network
CN114693670A (en) * 2022-04-24 2022-07-01 西京学院 Ultrasonic detection method for weld defects of longitudinal submerged arc welded pipe based on multi-scale U-Net

Families Citing this family (4)

Publication number Priority date Publication date Assignee Title
CN110070531B (en) * 2019-04-19 2021-05-07 京东方科技集团股份有限公司 Model training method for detecting fundus picture, and fundus picture detection method and device
CN111046835A (en) * 2019-12-24 2020-04-21 杭州求是创新健康科技有限公司 Eyeground illumination multiple disease detection system based on regional feature set neural network
CN111402246A (en) * 2020-03-20 2020-07-10 北京工业大学 Eye ground image classification method based on combined network
CN111716368A (en) * 2020-06-29 2020-09-29 重庆市柏玮熠科技有限公司 Intelligent matching checking robot

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140050391A1 (en) * 2012-08-17 2014-02-20 Nec Laboratories America, Inc. Image segmentation for large-scale fine-grained recognition
US9443314B1 (en) * 2012-03-29 2016-09-13 Google Inc. Hierarchical conditional random field model for labeling and segmenting images
CN106599805A (en) * 2016-12-01 2017-04-26 华中科技大学 Supervised data-driven monocular video depth estimation method
CN106934816A (en) * 2017-03-23 2017-07-07 中南大学 Retinal blood vessel segmentation method for fundus images based on ELM
CN110070531A (en) * 2019-04-19 2019-07-30 京东方科技集团股份有限公司 Model training method for detecting fundus pictures, and fundus picture detection method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040122672A1 (en) * 2002-12-18 2004-06-24 Jean-Francois Bonastre Gaussian model-based dynamic time warping system and method for speech processing
CN104517116A (en) * 2013-09-30 2015-04-15 北京三星通信技术研究有限公司 Device and method for confirming object region in image
CN107016677B (en) * 2017-03-24 2020-01-17 北京工业大学 Cloud picture segmentation method based on FCN and CNN
CN107194929B (en) * 2017-06-21 2020-09-15 太原理工大学 Method for tracking region of interest of lung CT image

Also Published As

Publication number Publication date
CN110070531B (en) 2021-05-07
CN110070531A (en) 2019-07-30

Similar Documents

Publication Publication Date Title
WO2020211530A1 (en) Model training method and apparatus for detection on fundus image, method and apparatus for detection on fundus image, computer device, and medium
CN110232383B (en) Focus image recognition method and focus image recognition system based on deep learning model
CN110599448B (en) Transfer-learning lung lesion tissue detection system based on Mask Scoring R-CNN network
CN109325942B (en) Fundus image structure segmentation method based on full convolution neural network
CN111985536B (en) Gastroscopic pathology image classification method based on weakly supervised learning
WO2020087960A1 (en) Image recognition method and device, terminal apparatus, and medical system
JP2022528539A (en) Quality evaluation in video endoscopy
CN109886946B (en) Deep learning-based early senile maculopathy weakening supervision and classification method
CN112396605B (en) Network training method and device, image recognition method and electronic equipment
CN113012163A (en) Retina blood vessel segmentation method, equipment and storage medium based on multi-scale attention network
Zhang et al. Attention-based multi-model ensemble for automatic cataract detection in B-scan eye ultrasound images
CN114372951A (en) Nasopharyngeal carcinoma positioning and segmenting method and system based on image segmentation convolutional neural network
CN114092450A (en) Real-time image segmentation method, system and device based on gastroscopy video
CN115375711A (en) Image segmentation method of global context attention network based on multi-scale fusion
CN113643261B (en) Lung disease diagnosis method based on frequency attention network
CN113781489B (en) Polyp image semantic segmentation method and device
Miao et al. Classification of Diabetic Retinopathy Based on Multiscale Hybrid Attention Mechanism and Residual Algorithm
Guergueb et al. A Review of Deep Learning Techniques for Glaucoma Detection
CN112634291A (en) Automatic burn wound area segmentation method based on neural network
CN117036288A (en) Tumor subtype diagnosis method for full-slice pathological image
Lin et al. Res-UNet based optic disk segmentation in retinal image
CN116091446A (en) Method, system, medium and equipment for detecting abnormality of esophageal endoscope image
Jayachandran et al. Retinal vessels segmentation of colour fundus images using two stages cascades convolutional neural networks
CN112734769B (en) Medical image segmentation and quantitative analysis method based on interactive information guided deep learning method, computer device and storage medium
CN114708236B (en) Thyroid nodule benign and malignant classification method based on TSN and SSN in ultrasonic image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20792180

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20792180

Country of ref document: EP

Kind code of ref document: A1


32PN Ep: public notification in the ep bulletin as the address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10.05.2022)
