CN114140381A - Vitreous opacity grading screening method and device based on MDP-net - Google Patents
- Publication number
- CN114140381A (application CN202111232478.3A)
- Authority
- CN
- China
- Prior art keywords
- mdp
- net
- vitreous
- turbid
- screening device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/155—Segmentation; Edge detection involving morphological operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
Abstract
In the MDP-net-based vitreous opacity grading screening method and device, an adaptive segmentation model for ocular vitreous ultrasound images and a severity classification algorithm use deep learning to segment the lens, the trapezoid-like vitreous cavity and the opacity focus region in an ocular vitreous ultrasound image. The ratio of the trapezoid-like vitreous cavity to the opacity focus area is then calculated, so that after training on images the opacity severity of a patient's vitreous can be judged and screened rapidly and accurately; the obtained result is fast, objective, accurate and stable.
Description
Technical Field
The invention relates to image processing and deep learning, and in particular to a vitreous opacity grading screening method and device based on MDP-net (Multi-output Dense Pyramid network).
Background
The specific condition of vitreous opacity can be represented by the shape and size of the opacity focus in the vitreous cavity in an image captured by an ophthalmic B-scan ultrasound instrument, but existing instruments cannot provide the quantitative index required for diagnosing vitreous opacity. The known method for grading the severity of vitreous opacity is for a doctor to evaluate and analyze the vitreous B-scan image based on clinical experience and professional knowledge. This method depends strongly on the doctor's clinical experience and has low diagnostic efficiency: it is difficult to obtain accurate and reliable screening results in a short time, and when the number of images is large it consumes considerable effort and time. Because the judgment is purely visual, its accuracy is low.
Disclosure of Invention
The invention overcomes the defects of the prior art by providing a vitreous opacity grading screening method and device based on MDP-net, which automatically segments and grades vitreous opacity to replace the traditional manual method, and can quickly, objectively, accurately and clearly determine the severity of a patient's vitreous opacity.
In order to solve the technical problems, the invention is realized by the following technical scheme:
a vitreous opacity classification screening method based on MDP-net is characterized by comprising the following steps:
1) the establishment of the MDP-net network model comprises the following steps: marking three parts of a crystalline lens, a trapezoid-like vitreous body cavity and a turbid stove region of an existing previously shot eye vitreous body turbid B-ultrasonic image, and training to form a label set in a training data set; inputting the training data set into the MDP-net network, updating the parameters of the network by adopting a random gradient descent method, and iterating for multiple times to obtain an MDP-net network model;
2) the matlab software inputs eyeball vitreous body pictures shot by a B-type ultrasonic diagnostic apparatus, and preprocesses the pictures, including bilateral filtering and maximum and minimum normalization algorithms, to obtain a preprocessed eyeball B ultrasonic map;
3) taking the preprocessed B-ultrasonic diagram of the eyeball as an input MDP-net network model, and outputting and dividing the crystalline lens, the trapezoid-like vitreous body cavity, the turbid focus area and the background part of the shot picture by the MDP-net network model;
4) performing morphology and thresholding post-treatment on the cavity of the trapezoid-like vitreous body and the turbid stove region, and removing scattered isolated turbid pixel points and low-intensity low-echo turbid pixel points;
5) and automatically grading the turbidity severity according to the ratio of the calculated trapezoid-like vitreous body cavity to the turbid focus area, and displaying a grading result.
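The preprocessing in step 2), bilateral filtering followed by max-min normalization, can be sketched as below. This is an illustrative numpy version, not the matlab implementation the patent describes, and the window radius and sigma values are assumptions:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Naive bilateral filter: spatial Gaussian weighted by an intensity
    (range) Gaussian, so edges are preserved while speckle is smoothed."""
    h, w = img.shape
    pad = np.pad(img.astype(np.float64), radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    out = np.empty((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_r ** 2))
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

def min_max_normalize(img):
    """Max-min normalization: rescale intensities to [0, 1]."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-12)

def preprocess(img):
    """Step 2) of the method: filter, then normalize."""
    return min_max_normalize(bilateral_filter(img))
```

In practice a library routine (e.g. an optimized bilateral filter) would replace the double loop; the sketch only shows the two operations the step names.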
Further, the MDP-net network structure is as follows: a 256 × 256 × 1 image is input, where 1 is the grayscale channel of the image; the convolution kernel size is 3 × 3; pooling uses average pooling with a 2 × 2 window; the stride of all convolution kernels is 1 and the pooling stride is 2; an excitation layer follows every convolutional layer and fully connected layer, with relu (rectified linear unit) as the activation function; the tasks of MDP-net are locating in the image and segmenting the trapezoid-like vitreous cavity and the turbid region. For the image segmentation task the loss function is the cross-entropy loss, and the keypoint position regression task uses the smooth L1 loss, so the loss function of the whole network is the weighted sum of these two losses.
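The combined loss described above (cross-entropy for segmentation plus smooth L1 for keypoint regression, summed with weights) can be sketched in numpy; the weights `alpha` and `beta` are illustrative assumptions, since the patent does not state their values:

```python
import numpy as np

def cross_entropy(probs, onehot, eps=1e-12):
    """Pixel-wise cross-entropy; probs and onehot have shape (N, C)."""
    return -np.mean(np.sum(onehot * np.log(probs + eps), axis=1))

def smooth_l1(pred, target):
    """Smooth L1: quadratic below 1, linear above, so large
    deviations contribute less than with a squared loss."""
    d = np.abs(pred - target)
    return np.mean(np.where(d < 1.0, 0.5 * d ** 2, d - 0.5))

def total_loss(probs, onehot, kp_pred, kp_true, alpha=1.0, beta=1.0):
    """Weighted sum of the segmentation and keypoint regression losses."""
    return alpha * cross_entropy(probs, onehot) + beta * smooth_l1(kp_pred, kp_true)
```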
An MDP-net based vitreous opacity grading screening device includes a processor, a memory, and a computer program stored in the memory and executable on the processor.
Further, the processor executes the computer program, which: runs the python software to perform the deep-learning network training with the built MDP-net network code to obtain a model, inputs a picture into the MDP-net network model with the code, and runs the matlab software to apply bilateral filtering and max-min normalization preprocessing to the captured image.
Further, the computer program may be partitioned into one or more modules/units, which are stored in the memory and executed by the processor.
Further, the vitreous opacity grading screening device is a desktop computer, a notebook computer, a palm computer or a cloud server.
Further, the processor is a central processing unit, general purpose processor, digital signal processor, application specific integrated circuit, off-the-shelf programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware component.
Further, the storage is an internal storage element of the vitreous opacity grading screening device, and is a hard disk or a memory of an automatic grading screening device.
Further, the storage is an external storage device of the automatic grading screening device, and the external storage device is a plug-in hard disk, an intelligent memory card, a secure digital card or a flash memory card.
Compared with the prior art, the invention has the beneficial effects that:
according to the MDP-net-based vitreous opacity grading screening method and device, the self-adaptive segmentation model of the eye vitreous ultrasound image and the severity classification algorithm can segment lens, trapezoid-like vitreous cavity and opacity focus area in the eye vitreous ultrasound image through deep learning, then the ratio of the trapezoid-like vitreous cavity to the opacity focus area is calculated, and then the opacity severity of the vitreous opacity of a patient can be rapidly and accurately judged and screened after image training by applying the deep learning technology, and the obtained result is rapid, objective, accurate and stable.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and constitute a part of this specification; they illustrate the invention and are not intended to limit it to the embodiments shown, in which:
FIG. 1a is a diagram of a neural network architecture;
FIG. 1b shows a DenseBlock (dense block) of the dense network;
FIG. 1c explains the symbols used in FIGS. 1a and 1b;
fig. 2 is a flowchart of an adaptive segmentation model and a severity classification algorithm for an ultrasound image of a vitreous eye according to embodiment 1 of the present invention;
fig. 3 is a schematic composition diagram of an algorithm processing device provided in embodiment 2 of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Referring to fig. 1a, 1b and 1c, the MDP-net network building includes:
Encoding part (downsampling)
1. The input image size (read in by the python software) is 256 × 256;
2. the image first passes through a standard convolutional layer with 96 convolution kernels of size 3 × 3;
3. a DenseBlock operation is then applied 5 times; each DenseBlock is built from several convolutional layers combined with a Concatenate cascade layer, with the specific structure shown in the figure;
4. 5 downsampling operations are likewise used in total: between blocks, a LeakyReLU (leaky rectified linear unit) activation layer, a convolutional layer with 1 × 1 kernels and an average-pooling layer with 2 × 2 kernels alternate to halve the image size and realize downsampling.
Note: the number of convolution kernels in the downsampling convolutional layers (step 4) is half the number of input channels. Each DenseBlock consists of several convolutions; the outputs of all previous convolutional layers are concatenated into the input of each subsequent layer. The number of convolution kernels is 48 with size 3 × 3; a 1 × 1 convolution is inserted before each 3 × 3 convolution to reduce the number of input feature maps, and Batch Normalization and LeakyReLU are applied as the activation stage before each convolution in the DenseBlock. Deeper features better express complex regions and thus improve segmentation accuracy of the target region [24,25]; therefore the number of channels doubles as downsampling proceeds, the 5 DenseBlocks in the encoder containing 3, 6, 12, 24 and 16 layers respectively from bottom to top, finally yielding a feature map of size 8 × 8 × 1536.
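The dense connectivity described above (each layer sees the concatenation of all previous outputs and contributes 48 new channels) can be illustrated at the shape level; the zero arrays stand in for real convolutions, so only the channel bookkeeping is shown:

```python
import numpy as np

GROWTH = 48  # convolution kernels per layer inside a DenseBlock, per the text

def dense_block(x, n_layers):
    """Shape-level sketch of a DenseBlock: each layer's input is the
    channel-wise concatenation of the block input and all previous
    layer outputs, and each layer adds GROWTH channels."""
    feats = [x]
    for _ in range(n_layers):
        inp = np.concatenate(feats, axis=-1)  # dense connectivity
        # a real layer would be BN -> LeakyReLU -> 1x1 conv -> 3x3 conv;
        # here we model only the resulting channel count
        new = np.zeros(inp.shape[:-1] + (GROWTH,))
        feats.append(new)
    return np.concatenate(feats, axis=-1)

# e.g. a 6-layer block grows a 96-channel input to 96 + 6*48 = 384 channels
```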
Decoding section
The decoding part accepts the features extracted by the encoding part and fuses them, at multiple scales, with the features already upsampled. The upsampling step size is 2, and the number of convolution kernels in each upsampling equals the number of feature channels copied over from the encoding part after upsampling, forming a symmetric encoder-decoder pyramid feature structure. The 5 encoder channels of different scales are fused by copying them to the matching decoder scale after upsampling, and a 3 × 3 convolution follows each fusion to reduce upsampling aliasing. The image is finally restored to the original size of 256 × 256. It is desirable to encourage the classifier to discriminate turbidity at various scales, which gives more accurate turbidity location information while strengthening the returned gradient signal and adding regularization. Therefore, following the design of the FPN (Feature Pyramid Networks) network, a softmax function divides the result into four categories on output maps at 5 scales, from 16 × 16 to 256 × 256. Note that these intermediate outputs are not restored to 256 × 256, because direct upsampling to the original size would distort the image pixels too much and inflate the network loss; the 8 × 8 features, in turn, are too few to carry much information, so no prediction output is taken there. In the experiments, the weight of the 5th output loss is 1 and the others are 0.3, and the final segmentation result uses the final 256 × 256 output.
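The five-scale deep supervision just described, with the final output weighted 1 and the other four weighted 0.3 as stated in the text, can be sketched as:

```python
import numpy as np

SCALES = [16, 32, 64, 128, 256]       # side lengths of the 5 supervised outputs
WEIGHTS = [0.3, 0.3, 0.3, 0.3, 1.0]   # per-output loss weights from the text

def softmax(logits):
    """Numerically stable softmax over the last (class) axis."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multiscale_loss(per_scale_losses):
    """Weighted sum of the 5 per-scale losses (deep supervision)."""
    return sum(w * l for w, l in zip(WEIGHTS, per_scale_losses))
```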
Establishing the MDP-net network model: deep learning is used to automatically grade and screen turbid B-scan ultrasound images of the eye vitreous. In the training stage, the lens, the trapezoid-like vitreous cavity and the turbid focus are first annotated on previously captured turbid B-mode ultrasound images of the eye vitreous, forming the label set of a training data set; the training data set is then input into the MDP-net network, the network parameters are updated by stochastic gradient descent, and after multiple iterations the MDP-net network model is obtained.
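The stochastic-gradient-descent parameter update used in training can be written generically; the learning rate here is an assumed illustrative value, not one taken from the patent:

```python
import numpy as np

def sgd_step(params, grads, lr=0.01):
    """One SGD update per parameter array: theta <- theta - lr * dL/dtheta.
    In training this is applied once per mini-batch, for many iterations."""
    return [p - lr * g for p, g in zip(params, grads)]
```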
In the present embodiment, as shown in fig. 2, the MDP-net convolutional neural network takes 256 × 256 × 1 images as input, where 1 is the grayscale channel of the image; the convolution kernel size is 3 × 3; pooling uses average pooling with a 2 × 2 window; the stride of all convolution kernels in the network structure diagram is 1 and the pooling stride is 2; every convolutional layer and fully connected layer is followed by an excitation layer with relu as the activation function. The MDP-net locates in the image and segments the trapezoid-like vitreous cavity and the turbid focus part. For the image segmentation task the loss function is the cross-entropy loss, and, to reduce sensitivity to abnormal samples and prevent gradient explosion, the keypoint position regression task uses the smooth L1 loss, so the loss function of the whole network is the weighted sum of these two losses.
The MDP-net takes 2 kinds of data: the first is the grayscale image data map, and the second is the three segmentation target label maps of the image, covering the lens, the trapezoid-like vitreous cavity and the turbid focus. Because the data volume is large, convolutional layers are used to reduce the data and the Dense layers only concentrate the neurons, so that training quality is met while the network stays complex enough; training is faster and under-fitting is prevented.
(1) Data set preparation
A number of initial captured images to be processed are prepared; these can come from different positions and gains, after which image preprocessing transforms are applied to them. The more varied the data set, the stronger the generalization ability of the model obtained by deep-learning training.
(2) Training and prediction
The network framework uses the MDP-net network. After setting the training parameters, training is run with a batch number of 100 (the number of training iterations varies with the data: too many batches cause overfitting, so the batch number is reduced; too few cause under-fitting, so it is increased), yielding a model (file format HDF5). After the model is read, any initial picture (the grayscale image of the captured picture) can be predicted directly, giving the lens S1, trapezoid-like vitreous cavity S2, turbid focus region S3 and background S4 of the initial picture. The 4 classes of the segmentation map are rendered in 4 different colors: the network marks the lens region S1 as (255, 0, 0), S2 as (255, 163, 0), S3 as (255, 155, 0), and S4 as (128, 128, 128).
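Rendering the four predicted classes with the colors listed above (the RGB triples are as stated in the text; the class-index assignment 0 through 3 is an assumption for illustration) might look like:

```python
import numpy as np

# class index -> RGB color, per the description:
# S1 lens, S2 trapezoid-like vitreous cavity, S3 turbid focus, S4 background
PALETTE = {
    0: (255, 0, 0),      # S1
    1: (255, 163, 0),    # S2
    2: (255, 155, 0),    # S3
    3: (128, 128, 128),  # S4
}

def colorize(label_map):
    """Turn an (H, W) class-index map into an (H, W, 3) RGB image."""
    rgb = np.zeros(label_map.shape + (3,), dtype=np.uint8)
    for cls, color in PALETTE.items():
        rgb[label_map == cls] = color
    return rgb
```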
Referring to fig. 2, the method for automatically classifying and screening vitreous opacity mainly includes the following steps:
101. An eyeball vitreous-body picture captured by a B-mode ultrasonic diagnostic apparatus is input and preprocessed, including bilateral filtering, max-min normalization and similar algorithms, to obtain a preprocessed eyeball B-scan image;
102. the preprocessed eyeball B-scan picture is used as the input of the automatic vitreous opacity screening and grading model, which outputs the segmentation of the lens, the trapezoid-like vitreous cavity, the turbid focus region and the background of the captured picture;
103. morphological, thresholding and similar post-processing is applied to the trapezoid-like vitreous cavity and the turbid focus region, removing scattered isolated turbid pixels and low-intensity, low-echo turbid pixels;
104. the turbidity severity is automatically graded from the post-processed ratio of the trapezoid-like vitreous cavity to the turbid focus area, and the grading result is displayed.
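Steps 103 and 104 (dropping small isolated opacity components, then grading by the area ratio) can be sketched as below. The ratio is taken here as opacity area over cavity area, and the grade thresholds are illustrative assumptions, since the patent does not list the grading scale at this point:

```python
import numpy as np
from collections import deque

def remove_small_components(mask, min_size=10):
    """Drop isolated opacity blobs smaller than min_size pixels
    (4-connectivity flood fill); a morphological stand-in for step 103."""
    mask = mask.astype(bool)
    out = np.zeros_like(mask)
    seen = np.zeros_like(mask)
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                comp, q = [], deque([(i, j)])
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) >= min_size:
                    for y, x in comp:
                        out[y, x] = True
    return out

def grade_opacity(opacity_mask, cavity_mask):
    """Severity from area(opacity) / area(cavity); thresholds assumed."""
    ratio = opacity_mask.sum() / max(cavity_mask.sum(), 1)
    if ratio < 0.05:
        return "mild"
    if ratio < 0.20:
        return "moderate"
    return "severe"
```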
Referring to fig. 3, the automatic vitreous opacity grading screening system provided in this embodiment includes a processor 111, a memory 112, and a computer program 113 stored in the memory 112 and operable on the processor 111, such as an automatic vitreous opacity grading screening program. When executing the computer program 113, the processor 111 implements the steps of embodiment 1 described above, for example the steps shown in fig. 1 (the computer program runs the python software to perform the deep-learning network training with the built MDP-net neural network code to obtain a model, inputs a picture into the network model with the code, and runs the matlab software to apply bilateral filtering, max-min normalization and similar preprocessing algorithms to the captured image).
Illustratively, the computer program 113 may be divided into one or more modules/units, which are stored in the memory 112 (the memory, also called the hard disk, stores the code files, the environment programs needed to run them, such as python and matlab, and even the Windows system files for booting the computer and the drivers for hardware such as the GPU graphics card used for training and the CPU processor) and executed by the processor 111 to carry out the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which describe the execution of the computer program 113 in the automatic turbidity grading screening apparatus.
The automatic picture grading and screening device can be computing equipment such as a desktop computer, a notebook computer, a palm computer or a cloud server. The automatic grading screening apparatus may include, but is not limited to, a processor 111 and a memory 112. Those skilled in the art will appreciate that more or fewer components than those shown may be included, or certain components may be combined, or different components may be included; for example, the grading screening apparatus may also include input/output devices, network access devices, buses, etc.
The Processor 111 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf programmable Gate Array (FPGA) or other programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The storage 112 may be an internal storage element of the automated hierarchical screening device, such as a hard disk or memory of the automated hierarchical screening device. The memory 112 may also be an external storage device of the automatic hierarchical screening apparatus, such as a plug-in hard disk, a Smart Memory Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), etc. provided on the automatic hierarchical screening apparatus. Further, the storage 112 may also include both an internal storage unit and an external storage device of the automated hierarchical screening apparatus. The memory 112 is used to store the computer program and other programs and data required by the automated hierarchical screening apparatus. The memory 112 may also be used to temporarily store data that has been output or is to be output.
A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method of embodiment 1.
The computer-readable medium can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Further, the computer readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
Finally, it should be noted that although the present invention has been described in detail with reference to the embodiments, it will be apparent to those skilled in the art that modifications may be made to the embodiments or to portions thereof without departing from the spirit and scope of the invention.
Claims (9)
1. A vitreous opacity grading screening method based on MDP-net, characterized by comprising the following steps:
1) establishing the MDP-net network model: the lens, the trapezoid-like vitreous cavity and the turbid focus region are annotated on previously captured B-scan ultrasound images of turbid eye vitreous bodies, forming the label set of a training data set; the training data set is input into the MDP-net network, the network parameters are updated by stochastic gradient descent, and after multiple iterations the MDP-net network model is obtained;
2) an eyeball vitreous-body picture captured by a B-mode ultrasonic diagnostic apparatus is read into the matlab software and preprocessed, including bilateral filtering and max-min normalization, to obtain a preprocessed eyeball B-scan image;
3) the preprocessed eyeball B-scan image is fed as input to the MDP-net network model, which outputs the segmentation of the lens, the trapezoid-like vitreous cavity, the turbid focus region and the background of the captured picture;
4) morphological and thresholding post-processing is applied to the trapezoid-like vitreous cavity and the turbid focus region, removing scattered isolated turbid pixels and low-intensity, low-echo turbid pixels;
5) the turbidity severity is automatically graded from the calculated ratio of the trapezoid-like vitreous cavity to the turbid focus area, and the grading result is displayed.
2. The MDP-net-based vitreous opacity grading screening method as claimed in claim 1, wherein the MDP-net network structure is: a 256 × 256 × 1 image is input, where 1 is the grayscale channel of the image; the convolution kernel size is 3 × 3; pooling uses average pooling with a 2 × 2 window; the stride of all convolution kernels is 1 and the pooling stride is 2; an excitation layer follows every convolutional layer and fully connected layer, with relu as the activation function; the tasks of MDP-net are locating in the image and segmenting the trapezoid-like vitreous cavity and the turbid focus part; for the image segmentation task the loss function is the cross-entropy loss, and the keypoint position regression task uses the smooth L1 loss, so the loss function of the whole network is the weighted sum of these two losses.
3. The MDP-net based vitreous opacity classification screening device according to claim 1, comprising a processor, a memory and a computer program stored in the memory and executable on the processor.
4. The MDP-net based vitreous opacity classification screening device according to claim 3, wherein the processor executes the computer program, the computer program: running the python software to perform the deep-learning network training with the built MDP-net network code to obtain a model, inputting a picture into the MDP-net network model with the code, and running the matlab software to apply bilateral filtering and max-min normalization preprocessing to the captured image.
5. The MDP-net based vitreous opacity classification screening device according to claim 3, wherein the computer program can be divided into one or more modules/units, which are stored in the memory and executed by the processor.
6. The MDP-net based vitreous opacity classification screening device according to claim 3, wherein the vitreous opacity classification screening device is a desktop computer, a notebook, a palm computer or a cloud server.
7. The MDP-net based vitreous opacity grading screening device according to claim 3, wherein the processor is a central processing unit, a general purpose processor, a digital signal processor, an application specific integrated circuit, an off-the-shelf programmable gate array or other programmable logic device, discrete gate or transistor logic device, discrete hardware component.
8. The MDP-net-based vitreous opacity grading screening device of claim 3, wherein the memory is an internal storage element of the vitreous opacity grading screening device, such as a hard disk or internal memory of the automatic grading screening device.
9. The MDP-net-based vitreous opacity grading screening device of claim 3, wherein the memory is an external storage device of the automatic grading screening device, the external storage device being a plug-in hard disk, a smart media card, a secure digital (SD) card, or a flash memory card.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111232478.3A CN114140381A (en) | 2021-10-22 | 2021-10-22 | Vitreous opacity grading screening method and device based on MDP-net |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114140381A true CN114140381A (en) | 2022-03-04 |
Family
ID=80394682
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111232478.3A Pending CN114140381A (en) | 2021-10-22 | 2021-10-22 | Vitreous opacity grading screening method and device based on MDP-net |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114140381A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114998353A (en) * | 2022-08-05 | 2022-09-02 | 汕头大学·香港中文大学联合汕头国际眼科中心 | System for automatically detecting vitreous opacity spot fluttering range |
CN114998353B (en) * | 2022-08-05 | 2022-10-25 | 汕头大学·香港中文大学联合汕头国际眼科中心 | System for automatically detecting vitreous opacity spot fluttering range |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110399929B (en) | Fundus image classification method, fundus image classification apparatus, and computer-readable storage medium | |
CN110378381B (en) | Object detection method, device and computer storage medium | |
CN110120047B (en) | Image segmentation model training method, image segmentation method, device, equipment and medium | |
EP4105877A1 (en) | Image enhancement method and image enhancement apparatus | |
US20220198230A1 (en) | Auxiliary detection method and image recognition method for rib fractures based on deep learning | |
WO2023070447A1 (en) | Model training method, image processing method, computing processing device, and non-transitory computer readable medium | |
CN113205524B (en) | Blood vessel image segmentation method, device and equipment based on U-Net | |
CN113240655B (en) | Method, storage medium and device for automatically detecting type of fundus image | |
CN112862805B (en) | Automatic auditory neuroma image segmentation method and system | |
Zhang et al. | Multi-scale neural networks for retinal blood vessels segmentation | |
CN112836653A (en) | Face privacy method, device and apparatus and computer storage medium | |
Tran et al. | Fully convolutional neural network with attention gate and fuzzy active contour model for skin lesion segmentation | |
Wang et al. | SERR‐U‐Net: Squeeze‐and‐Excitation Residual and Recurrent Block‐Based U‐Net for Automatic Vessel Segmentation in Retinal Image | |
Wu et al. | Continuous refinement-based digital pathology image assistance scheme in medical decision-making systems | |
Shamrat et al. | An advanced deep neural network for fundus image analysis and enhancing diabetic retinopathy detection | |
Gulati et al. | Comparative analysis of deep learning approaches for the diagnosis of diabetic retinopathy | |
Khattar et al. | Computer assisted diagnosis of skin cancer: a survey and future recommendations | |
CN114140381A (en) | Vitreous opacity grading screening method and device based on MDP-net | |
CN117253034A (en) | Image semantic segmentation method and system based on differentiated context | |
CN112862089B (en) | Medical image deep learning method with interpretability | |
CN115410032A (en) | OCTA image classification structure training method based on self-supervision learning | |
Desiani et al. | Multi-Stage CNN: U-Net and Xcep-Dense of Glaucoma Detection in Retinal Images | |
CN113052012A (en) | Eye disease image identification method and system based on improved D-S evidence | |
CN112734701A (en) | Fundus focus detection method, fundus focus detection device and terminal equipment | |
CN111179226A (en) | Visual field map identification method and device and computer storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | ||
Country or region after: China
Address after: 528225 Foshan Institute of Science and Technology, Xianxi Reservoir West Road, Shishan Town, Nanhai District, Foshan City, Guangdong Province
Applicant after: Foshan University
Address before: 528225 Foshan Institute of Science and Technology, Xianxi Reservoir West Road, Shishan Town, Nanhai District, Foshan City, Guangdong Province
Applicant before: FOSHAN University
Country or region before: China