CN116630324B - Method for automatically evaluating adenoid hypertrophy by MRI (magnetic resonance imaging) image based on deep learning - Google Patents


Info

Publication number
CN116630324B
CN116630324B
Authority
CN
China
Prior art keywords
layer
convolution
image
enters
adenoid
Prior art date
Legal status
Active
Application number
CN202310912201.8A
Other languages
Chinese (zh)
Other versions
CN116630324A (en)
Inventor
周柚
贺梓泠
安光辉
宋磊
王昌龙
孙铭蔚
王鏐璞
杜伟
Current Assignee
Jilin University
Original Assignee
Jilin University
Priority date
Filing date
Publication date
Application filed by Jilin University
Priority to CN202310912201.8A
Publication of CN116630324A
Application granted
Publication of CN116630324B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/0455 Auto-encoder networks; Encoder-decoder networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of medical image processing and provides a method for automatically evaluating adenoid hypertrophy from MRI images based on deep learning, comprising the following steps: converting the acquired DICOM-format MRI images into PNG format, selecting from the sagittal image sequence the frame showing the nasal septum together with several neighboring frames as the data set, and preprocessing the images; applying image enhancement to expand the data set; making labels; extracting features from the images; and automatically locating four landmarks in each image and calculating the ratio of adenoid thickness to nasopharyngeal airway space (the A/N ratio) to evaluate whether the image corresponds to a patient with adenoid hypertrophy. By automatically locating the four landmarks and computing the A/N ratio, the invention automatically evaluates whether a patient has adenoid hypertrophy, reduces the repetitive and time-consuming measurement work of physicians, and assists them in evaluating and diagnosing adenoid hypertrophy.

Description

Method for automatically evaluating adenoid hypertrophy by MRI (magnetic resonance imaging) image based on deep learning
Technical Field
The invention belongs to the technical field of medical image processing, and particularly relates to a method for automatically evaluating adenoid hypertrophy by using MRI images based on deep learning.
Background
The adenoids are a mass of lymphoid tissue located in the posterior wall of the nasopharyngeal roof. The adenoids may be physiologically enlarged between the ages of 2 and 10, but local infection and inflammatory stimulation can cause pathological hypertrophy. In children, adenoid hypertrophy encroaches on the airway and leaves a relatively small nasopharyngeal volume, increasing the frequency of upper respiratory tract infections. Other problems that adenoid hypertrophy may cause include maxillofacial dysplasia, excessive daytime sleepiness, impaired cognitive function, and poor learning performance. Depending on the symptoms and course of the disease, mild cases can be managed with symptomatic conservative treatment that inhibits further hypertrophy and allows the adenoids to atrophy on their own; when the adenoids are severely hypertrophic, however, surgical excision may be required. Adenoid hypertrophy is highly harmful to children and its incidence has risen markedly in recent years, so timely detection and diagnosis are of great significance for early treatment and control of the disease course.
At present, the examination of childhood adenoid hypertrophy relies mainly on lateral nasopharyngeal radiographs and flexible nasopharyngoscopy. The invasiveness of flexible nasopharyngoscopy makes it difficult for many children to cooperate with doctors during preoperative adenoid assessment, which limits its use in clinical diagnosis. Lateral nasopharyngeal imaging is therefore the most commonly used examination for children with suspected adenoid hypertrophy: the adenoid-to-nasopharyngeal-cavity (A/N) ratio is measured on the lateral image to judge the degree of adenoid hypertrophy and nasopharyngeal obstruction, providing a basis for targeted treatment. Manually measuring the various indices on the image is difficult, which leads to substantial error and inter-observer variability; the accuracy of landmark identification depends largely on the clinical experience of the doctor, inaccurate identification may lead to incorrect evaluation results, and such evaluation is time-consuming and repetitive. For this reason, we propose a method for automatically assessing adenoid hypertrophy from MRI images based on deep learning.
Disclosure of Invention
The invention aims to provide a method for automatically evaluating adenoid hypertrophy from MRI images based on deep learning, so as to solve the problems described in the background art.
In order to achieve the above purpose, the present invention provides the following technical solutions:
a method for automatically assessing adenoid hypertrophy based on deep learning MRI images, comprising the steps of:
step A, converting all acquired MRI images from a DICOM format to a PNG format, selecting one frame of a nasal septum and a plurality of left and right frames of images thereof from a coronal bit sequence, carrying out gray scale normalization preprocessing, unifying pixel values to a [0,1] interval, cutting into a picture of 560pix x 640pix, and marking a data set;
step B, enhancing the image processed in the step A, expanding a data set, and dividing the expanded data set into three parts, namely a training set, a verification set and a test set;
step C, constructing an adenoidet adeno sample evaluation network model based on a convolutional neural network;
step D, training the Adenoidenet by using the training set and the verification set obtained in the step B to generate a training model;
and E, testing the training model generated in the step D by using the test set obtained in the step B.
Further, in step B, the data set is expanded as follows:
the image data are flipped horizontally, rotated by different angles, and scaled.
Further, in step C, the CNN-based AdenoidNet adenoid evaluation network model comprises:
an encoder module for extracting local features by convolution;
a decoder module for recovering image resolution and capturing long-range dependencies;
and a landmark detection module for landmark localization.
Further, the specific operation of step C is as follows: the input PNG image passes through the encoder module and the decoder module, with skip connections compensating for the information loss caused by downsampling; the final features are extracted and fed into the landmark detection module to obtain the coordinates of the four predicted landmarks, and the A/N ratio is finally calculated.
Further, in step C:
in the encoder, the input data first passes through two convolution layers, each with a 3×3 kernel and a stride of 1; the output of each convolution layer enters a BN layer and a ReLU layer, and the output of the last ReLU layer enters a pooling layer that applies max pooling with a window size of 2; three repeated sub-modules based on depthwise separable convolution follow, each comprising a convolution layer, an LN layer, and two fully connected layers; the convolution layer has a 7×7 kernel and a stride of 1, its output enters the LN layer and then passes in turn through the fully connected layers and GELU layers, and the output of the last GELU layer in each sub-module enters a pooling layer that applies max pooling with a window size of 2, finally yielding the local features; the local features are then input into the decoder;
in the decoder, the input features pass through three sub-modules based on adaptive convolution, each comprising a convolution layer, a BN layer, a ReLU layer, an adaptive convolution layer, and an upsampling layer; the convolution layer has a 3×3 kernel and a stride of 1, and its output enters the BN layer and the ReLU layer before entering the adaptive convolution layer, which comprises two branches: one branch is a standard 3×3 convolution layer followed by a BN layer and a ReLU layer, and the other branch is a deformable convolution; the outputs of the two branches are added to obtain the final output, which enters the upsampling layer; upsampling uses a transposed convolution with a 3×3 kernel and a stride of 1; the output of the last upsampling layer enters the landmark detection module;
in the landmark detection module, the features pass through a convolution layer with a 1×1 kernel to produce a heat map, and the convolution layer's output then enters a DSNT layer to obtain the final coordinate values; the A/N ratio is finally calculated from the coordinates.
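As a minimal illustration (not part of the patent text), the final ratio computation might look as follows in Python; which two landmarks delimit the adenoid thickness (A) and which delimit the nasopharyngeal space (N) is an assumption of this sketch, since the description only states that four landmarks are located and the A/N ratio is computed from their coordinates.

```python
import numpy as np

def an_ratio(landmarks: np.ndarray) -> float:
    """Compute the A/N ratio from four landmark coordinates.

    `landmarks` is a (4, 2) array of (x, y) pixel coordinates.
    The pairing below (landmarks 0-1 for A, 2-3 for N) is an
    assumed convention for illustration only.
    """
    a = np.linalg.norm(landmarks[0] - landmarks[1])  # adenoid thickness
    n = np.linalg.norm(landmarks[2] - landmarks[3])  # nasopharyngeal space
    return float(a / n)
```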
Further, the specific operation of step D is as follows:
AdenoidNet is trained with the training set obtained in step B; the initial learning rate is set to 0.001, the batch_size to 16, the loss function to the mean-square loss, and the optimizer to the Adam optimizer.
Further, the specific operation of step E is as follows:
the trained model is tested with the test set obtained in step B; the coordinates of the four landmarks are predicted, the A/N ratio is calculated, and whether the image indicates adenoid hypertrophy is judged.
Compared with the prior art, the invention has the following beneficial effects:
in the method for automatically evaluating adenoid hypertrophy from MRI images based on deep learning, the encoder module of the CNN-based adenoid evaluation network model efficiently extracts local features through convolution, the decoder module recovers image resolution and captures long-range dependencies, and AdenoidNet fuses the local and global features with each other to obtain richer, multi-scale features, finally achieving accurate landmark localization in the landmark detection module. The method achieves accurate landmark localization and evaluation of adenoid hypertrophy, reduces the cost of manual involvement, and provides a powerful auxiliary means for the evaluation of adenoid hypertrophy.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a block diagram of the AdenoidNet of the invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Specific implementations of the invention are described in detail below in connection with specific embodiments.
The invention provides a method for automatically evaluating adenoid hypertrophy from MRI images based on deep learning, which comprises the following steps:
step A, converting all acquired MRI images from DICOM format to PNG format, selecting from the coronal sequence the frame showing the nasal septum together with several frames on either side of it, performing gray-scale normalization preprocessing to map pixel values to the [0,1] interval, cropping the images to 560 px × 640 px, and labeling the data set;
step B, enhancing the images processed in step A to expand the data set, and dividing the expanded data set into three parts: a training set, a validation set, and a test set;
step C, constructing AdenoidNet, an adenoid evaluation network model based on a convolutional neural network;
step D, training AdenoidNet with the training set and validation set obtained in step B to generate a trained model;
and step E, testing the trained model generated in step D with the test set obtained in step B.
As a preferred embodiment of the present invention, in step B, the data set is expanded as follows:
all image data are flipped horizontally, rotated by different angles, and scaled.
As a preferred embodiment of the present invention, in step C, the CNN-based AdenoidNet adenoid evaluation network model comprises:
an encoder module for extracting local features by convolution;
a decoder module for recovering image resolution and capturing long-range dependencies;
and a landmark detection module for landmark localization.
As a preferred embodiment of the present invention, the specific operation of step C is as follows: the input PNG image passes through an encoder module and a decoder module, with skip connections compensating for the information loss caused by downsampling; the final features are extracted and fed into the landmark detection module to obtain the final predicted landmark coordinates, and the A/N ratio is calculated.
As a preferred embodiment of the present invention, in step C:
in the encoder, the input data first passes through two convolution layers, each with a 3×3 kernel and a stride of 1; the output of each convolution layer enters a BN layer and a ReLU layer, and the output of the last ReLU layer enters a pooling layer that applies max pooling with a window size of 2; three repeated sub-modules based on depthwise separable convolution follow, each comprising a convolution layer, an LN layer, and two fully connected layers; the convolution layer has a 7×7 kernel and a stride of 1, its output enters the LN layer and then passes in turn through the fully connected layers and GELU layers, and the output of the last GELU layer in each sub-module enters a pooling layer that applies max pooling with a window size of 2, finally yielding the local features; the local features are then input into the decoder;
in the decoder, the input features pass through three sub-modules based on adaptive convolution, each comprising a convolution layer, a BN layer, a ReLU layer, an adaptive convolution layer, and an upsampling layer; the convolution layer has a 3×3 kernel and a stride of 1, and its output enters the BN layer and the ReLU layer before entering the adaptive convolution layer, which comprises two branches: one branch is a standard 3×3 convolution layer followed by a BN layer and a ReLU layer, and the other branch is a deformable convolution; the outputs of the two branches are added to obtain the final output, which enters the upsampling layer; upsampling uses a transposed convolution with a 3×3 kernel and a stride of 1; the output of the last upsampling layer enters the landmark detection module;
in the landmark detection module, the features pass through a convolution layer with a 1×1 kernel to produce a heat map, and the convolution layer's output then enters a DSNT layer to obtain the final coordinate values; the A/N ratio is finally calculated from the coordinates.
In the embodiment of the invention, preferably, the deformable convolution can shift the standard sampling positions by arbitrary offsets, enlarging the sampling area and capturing deformation features.
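To illustrate this two-branch design, a minimal PyTorch sketch of the adaptive convolution layer follows. It assumes torchvision's DeformConv2d for the deformable branch and uses an illustrative channel count; it is a sketch of the idea, not the patented implementation.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class AdaptiveConv(nn.Module):
    """Two-branch adaptive convolution: standard conv + deformable conv,
    with the branch outputs summed, as described above."""

    def __init__(self, channels: int):
        super().__init__()
        # Branch 1: standard 3x3 convolution followed by BN and ReLU.
        self.standard = nn.Sequential(
            nn.Conv2d(channels, channels, 3, stride=1, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Branch 2: deformable 3x3 convolution; the offset field
        # (2 offsets per kernel position = 18 channels) is predicted
        # by a plain convolution.
        self.offset = nn.Conv2d(channels, 2 * 3 * 3, 3, stride=1, padding=1)
        self.deform = DeformConv2d(channels, channels, 3, stride=1, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sum the outputs of the two branches to obtain the final output.
        return self.standard(x) + self.deform(x, self.offset(x))
```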
As a preferred embodiment of the present invention, the specific operation of step D is as follows:
AdenoidNet is trained with the training set obtained in step B; the initial learning rate is set to 0.001, the batch_size to 16, the loss function to the mean-square loss, and the optimizer to the Adam optimizer.
As a preferred embodiment of the present invention, the specific operation of step E is as follows:
the trained model is tested with the test set obtained in step B; the coordinates of the four landmarks are predicted, the A/N ratio is calculated, and whether the image indicates adenoid hypertrophy is judged.
Embodiment 1: a method for automatically evaluating adenoid hypertrophy from MRI images based on deep learning according to an embodiment of the present invention comprises the following steps:
A. image preprocessing
300 samples are collected, the coronal sequence of each sample containing between 96 and 134 frames. The samples are first batch-converted from DICOM format to PNG images via the pydicom library in Python, and the frame showing the nasal septum together with the frames on either side of it is selected (image size 737 px × 901 px). The images are first gray-scale normalized so that pixel values lie in the [0,1] interval and are then cropped to 560 px × 640 px. Poorly imaged samples are removed, and the remaining samples are divided into training, validation, and test sets at a 7:1:2 ratio, finally yielding a training set of 350 images, a validation set of 50 images, and a test set of 100 images.
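A minimal sketch of this conversion and preprocessing step is given below. The frame selection itself is done by hand, and the choice of a center crop is an assumption of this sketch; the embodiment only specifies the crop size.

```python
import numpy as np
import pydicom
from PIL import Image

def dicom_to_png(dicom_path: str, png_path: str,
                 crop_w: int = 560, crop_h: int = 640) -> None:
    """Convert one DICOM frame to a normalized, cropped PNG."""
    pixels = pydicom.dcmread(dicom_path).pixel_array.astype(np.float32)
    # Gray-scale normalization to the [0, 1] interval.
    pixels = (pixels - pixels.min()) / (pixels.max() - pixels.min() + 1e-8)
    img = Image.fromarray((pixels * 255).astype(np.uint8))
    # Crop from e.g. 737 x 901 px down to 560 x 640 px (center crop assumed).
    left = (img.width - crop_w) // 2
    top = (img.height - crop_h) // 2
    img.crop((left, top, left + crop_w, top + crop_h)).save(png_path)
```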
Enhancement of the original data: (a) rotation by a random angle between −10° and 10° with probability 0.5; (b) horizontal flipping with probability 0.5; (c) random scaling to 90% of the original length and width with probability 0.5.
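These three augmentations can be sketched with torchvision transforms as below. This is an image-only sketch; in the landmark setting the same geometric transform must also be applied to the four annotated coordinates (e.g. with a keypoint-aware library such as albumentations).

```python
from torchvision import transforms

augment = transforms.Compose([
    # (a) rotate by a random angle in [-10, 10] degrees, p = 0.5
    transforms.RandomApply([transforms.RandomRotation(degrees=10)], p=0.5),
    # (b) horizontal flip, p = 0.5
    transforms.RandomHorizontalFlip(p=0.5),
    # (c) scale to 90% of the original length and width, p = 0.5
    transforms.RandomApply(
        [transforms.RandomAffine(degrees=0, scale=(0.9, 0.9))], p=0.5),
])
```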
B. Network construction and training
A CNN-based network is constructed. The input to AdenoidNet is a PNG image; the input image passes through the encoder module and the decoder module, with skip connections compensating for the information loss caused by downsampling, the final features are extracted, and the features are then sent to the landmark detection module to obtain the final predicted landmark coordinates;
in the encoder, the input data first passes through two convolution layers, each with a 3×3 kernel and a stride of 1; the output of each convolution layer enters a BN layer and a ReLU layer, and the output of the last ReLU layer enters a pooling layer that applies max pooling with a window size of 2; three repeated sub-modules based on depthwise separable convolution follow, each comprising a convolution layer, an LN layer, and two fully connected layers; the convolution layer has a 7×7 kernel and a stride of 1, its output enters the LN layer and then passes in turn through the fully connected layers and GELU layers, and the output of the last GELU layer in each sub-module enters a pooling layer that applies max pooling with a window size of 2, finally yielding the local features; the local features are then input into the decoder;
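For illustration, a minimal PyTorch sketch of one depthwise-separable encoder sub-module follows (the two-layer 3×3 stem described above is omitted, and the channel widths are illustrative assumptions):

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """Depthwise-separable sub-module: 7x7 depthwise conv -> LN ->
    FC -> GELU -> FC -> GELU -> 2x2 max pooling, as described above."""

    def __init__(self, dim: int):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, 7, stride=1, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)        # applied channels-last
        self.fc1 = nn.Linear(dim, 4 * dim)   # first fully connected layer
        self.fc2 = nn.Linear(4 * dim, dim)   # second fully connected layer
        self.act = nn.GELU()
        self.pool = nn.MaxPool2d(2)          # max pooling, window size 2

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)            # NCHW -> NHWC for LN/Linear
        x = self.act(self.fc1(self.norm(x)))
        x = self.act(self.fc2(x))            # output of the last GELU
        x = x.permute(0, 3, 1, 2)            # back to NCHW
        return self.pool(x)
```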
in the decoder, the input features pass through three sub-modules based on adaptive convolution, each comprising a convolution layer, a BN layer, a ReLU layer, an adaptive convolution layer, and an upsampling layer; the convolution layer has a 3×3 kernel and a stride of 1, and its output enters the BN layer and the ReLU layer before entering the adaptive convolution layer, which comprises two branches: one branch is a standard 3×3 convolution layer followed by a BN layer and a ReLU layer, and the other branch is a deformable convolution; the outputs of the two branches are added to obtain the final output, which enters the upsampling layer; upsampling uses a transposed convolution with a 3×3 kernel and a stride of 1; the output of the last upsampling layer enters the landmark detection module;
in the landmark detection module, the features pass through a convolution layer with a 1×1 kernel to produce a heat map; the convolution layer's output then enters a DSNT layer to obtain the final coordinate values of the predicted landmarks. The A/N ratio is finally calculated from the coordinates, and whether adenoid hypertrophy is present is evaluated.
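The DSNT (differentiable spatial-to-numerical transform) step can be sketched as a spatial softmax followed by an expectation over a normalized coordinate grid. The sketch below follows the common formulation of DSNT; the exact grid normalization used by the patented network is not specified, so the [-1, 1] linspace grid is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DSNT(nn.Module):
    """Turn each landmark heat map into an (x, y) coordinate as the
    expectation of a spatial softmax over a normalized grid."""

    def forward(self, heatmaps: torch.Tensor) -> torch.Tensor:
        n, k, h, w = heatmaps.shape            # k heat maps, one per landmark
        probs = F.softmax(heatmaps.view(n, k, -1), dim=-1).view(n, k, h, w)
        xs = torch.linspace(-1, 1, w, device=heatmaps.device)
        ys = torch.linspace(-1, 1, h, device=heatmaps.device)
        x = (probs.sum(dim=2) * xs).sum(dim=-1)  # expectation over columns
        y = (probs.sum(dim=3) * ys).sum(dim=-1)  # expectation over rows
        return torch.stack([x, y], dim=-1)       # (n, k, 2) coordinates
```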
C. Training of the network
AdenoidNet is trained with the training set obtained in A; the initial learning rate is set to 0.001 and the batch_size to 16. The loss function is the mean-square loss and the optimizer is the Adam optimizer. Training runs for 100 epochs with a step-wise learning-rate decay: as training progresses, the learning rate is reduced to [0.0001, 0.00001, 0.000001] at epochs [25, 50, 75], respectively. Validation is performed once per training epoch, and training stops when performance on the validation set converges.
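A sketch of this training schedule in PyTorch is given below; `model` (the AdenoidNet), `train_loader`, and `val_loader` are assumed to be provided by the caller, with the loaders yielding (image, landmark) batches of size 16.

```python
from torch import nn, optim

def train(model: nn.Module, train_loader, val_loader, num_epochs: int = 100):
    """Training loop matching the schedule described above."""
    criterion = nn.MSELoss()                 # mean-square loss on coordinates
    optimizer = optim.Adam(model.parameters(), lr=0.001)
    # Learning rate drops to 1e-4 / 1e-5 / 1e-6 at epochs 25 / 50 / 75.
    scheduler = optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[25, 50, 75], gamma=0.1)
    for epoch in range(num_epochs):
        model.train()
        for images, landmarks in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), landmarks)
            loss.backward()
            optimizer.step()
        scheduler.step()
        # Validate once per epoch here, and stop early once the
        # validation error converges.
```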
D. Verifying the trained model with the test data and determining the test performance
Landmark positions are predicted for the test set. The model and weights saved during the training stage are loaded, and the test data are fed into the trained model to obtain the test results, which include the coordinates of each landmark and the A/N ratio. The mean radial error of the predicted landmarks and the A/N-ratio error are then calculated from the predicted values and the ground-truth values.
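The two test metrics can be sketched as follows; treating the mean radial error as the average Euclidean distance between predicted and ground-truth landmarks, and the A/N error as a mean absolute error, are standard readings of these terms rather than definitions given in the patent.

```python
import numpy as np

def mean_radial_error(pred: np.ndarray, true: np.ndarray) -> float:
    """Average Euclidean distance between predicted and ground-truth
    landmark coordinates, in pixels; shapes are (num_images, 4, 2)."""
    return float(np.linalg.norm(pred - true, axis=-1).mean())

def an_error(pred_ratios: np.ndarray, true_ratios: np.ndarray) -> float:
    """Mean absolute error of the predicted A/N ratios."""
    return float(np.abs(pred_ratios - true_ratios).mean())
```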
The foregoing is merely a preferred embodiment of the present invention. It should be noted that those skilled in the art can make modifications and improvements without departing from the spirit of the present invention, and such modifications and improvements, which do not affect the effect of implementing the invention or the utility of the patent, should also be regarded as falling within the scope of the present invention.

Claims (5)

1. A method for automatically assessing adenoid hypertrophy from MRI images based on deep learning, comprising the following steps:
step A, converting all acquired MRI images from DICOM format to PNG format, selecting from the coronal sequence the frame showing the nasal septum together with several frames on either side of it, performing gray-scale normalization preprocessing to map pixel values to the [0,1] interval, cropping the images to 560 px × 640 px, and labeling the data set;
step B, enhancing the images processed in step A to expand the data set, and dividing the expanded data set into three parts: a training set, a validation set, and a test set;
step C, constructing AdenoidNet, an adenoid evaluation network model based on a convolutional neural network;
step D, training AdenoidNet with the training set and validation set obtained in step B to generate a trained model;
step E, testing the trained model generated in step D with the test set obtained in step B;
in step C, the CNN-based AdenoidNet adenoid evaluation network model comprises:
an encoder module for extracting local features by convolution;
a decoder module for recovering image resolution and capturing long-range dependencies;
and a landmark detection module for landmark localization;
in step C:
in the encoder, the input data first passes through two convolution layers, each with a 3×3 kernel and a stride of 1; the output of each convolution layer enters a BN layer and a ReLU layer, and the output of the last ReLU layer enters a pooling layer that applies max pooling with a window size of 2; three repeated sub-modules based on depthwise separable convolution follow, each comprising a convolution layer, an LN layer, and two fully connected layers; the convolution layer has a 7×7 kernel and a stride of 1, its output enters the LN layer and then passes in turn through the fully connected layers and GELU layers, and the output of the last GELU layer in each sub-module enters a pooling layer that applies max pooling with a window size of 2, finally yielding the local features; the local features are then input into the decoder;
in the decoder, the input features pass through three sub-modules based on adaptive convolution, each comprising a convolution layer, a BN layer, a ReLU layer, an adaptive convolution layer, and an upsampling layer; the convolution layer has a 3×3 kernel and a stride of 1, and its output enters the BN layer and the ReLU layer before entering the adaptive convolution layer, which comprises two branches: one branch is a standard 3×3 convolution layer followed by a BN layer and a ReLU layer, and the other branch is a deformable convolution; the outputs of the two branches are added to obtain the final output, which enters the upsampling layer; upsampling uses a transposed convolution with a 3×3 kernel and a stride of 1; the output of the last upsampling layer enters the landmark detection module;
in the landmark detection module, the features pass through a convolution layer with a 1×1 kernel to produce a heat map, and the convolution layer's output then enters a DSNT layer to obtain the final coordinate values; the A/N ratio is finally calculated from the coordinates.
2. The method for automatically assessing adenoid hypertrophy from MRI images based on deep learning as claimed in claim 1, wherein in step B the data set is expanded as follows:
the image data are flipped horizontally, rotated by different angles, and scaled.
3. The method for automatically assessing adenoid hypertrophy from MRI images based on deep learning as claimed in claim 1, wherein the specific operation of step C is as follows: the input PNG image passes through the encoder module and the decoder module, with skip connections compensating for the information loss caused by downsampling; the final features are extracted and fed into the landmark detection module to obtain the coordinates of the four predicted landmarks, and the A/N ratio is finally calculated.
4. The method for automatically assessing adenoid hypertrophy from MRI images based on deep learning as claimed in claim 1, wherein the specific operation of step D is as follows:
AdenoidNet is trained with the training set obtained in step B; the initial learning rate is set to 0.001, the batch_size to 16, the loss function to the mean-square loss, and the optimizer to the Adam optimizer.
5. The method for automatically assessing adenoid hypertrophy from MRI images based on deep learning as claimed in claim 1, wherein the specific operation of step E is as follows:
the trained model is tested with the test set obtained in step B; the coordinates of the four landmarks are predicted, the A/N ratio is calculated, and whether the image indicates adenoid hypertrophy is judged.
CN202310912201.8A 2023-07-25 2023-07-25 Method for automatically evaluating adenoid hypertrophy by MRI (magnetic resonance imaging) image based on deep learning Active CN116630324B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310912201.8A CN116630324B (en) 2023-07-25 2023-07-25 Method for automatically evaluating adenoid hypertrophy by MRI (magnetic resonance imaging) image based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310912201.8A CN116630324B (en) 2023-07-25 2023-07-25 Method for automatically evaluating adenoid hypertrophy by MRI (magnetic resonance imaging) image based on deep learning

Publications (2)

Publication Number Publication Date
CN116630324A CN116630324A (en) 2023-08-22
CN116630324B true CN116630324B (en) 2023-10-13

Family

ID=87592504

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310912201.8A Active CN116630324B (en) 2023-07-25 2023-07-25 Method for automatically evaluating adenoid hypertrophy by MRI (magnetic resonance imaging) image based on deep learning

Country Status (1)

Country Link
CN (1) CN116630324B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117726624B (en) * 2024-02-07 2024-05-28 北京长木谷医疗科技股份有限公司 Method and device for intelligently identifying and evaluating adenoid lesions in real time under video stream
CN118053192A (en) * 2024-03-01 2024-05-17 中国人民解放军空军军医大学 Adenoid hypertrophy recognition system based on multi-angle face image


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111798462A (en) * 2020-06-30 2020-10-20 电子科技大学 Automatic delineation method for nasopharyngeal carcinoma radiotherapy target area based on CT image
CN114187331A (en) * 2021-12-10 2022-03-15 哈尔滨工程大学 Unsupervised optical flow estimation method based on Transformer feature pyramid network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Mingmin Bi et al., "MIB-ANet: A novel multi-scale deep network for nasal endoscopy-based adenoid hypertrophy grading", Frontiers in Medicine, 2023, pp. 1-9. *
Tingting Zhao et al., "Automated Adenoid Hypertrophy Assessment with Lateral Cephalometry in Children Based on Artificial Intelligence", MDPI, 2021, pp. 1-11. *

Also Published As

Publication number Publication date
CN116630324A (en) 2023-08-22

Similar Documents

Publication Publication Date Title
CN116630324B (en) Method for automatically evaluating adenoid hypertrophy by MRI (magnetic resonance imaging) image based on deep learning
KR102125127B1 (en) Method of brain disorder diagnosis via deep learning
CN111798416B (en) Intelligent glomerulus detection method and system based on pathological image and deep learning
CN112529894B (en) Thyroid nodule diagnosis method based on deep learning network
JP6746027B1 (en) Artificial intelligence based Parkinson's disease diagnostic device and method
CN112365464B (en) GAN-based medical image lesion area weak supervision positioning method
CN111553892B (en) Lung nodule segmentation calculation method, device and system based on deep learning
CN111462201B (en) Follow-up analysis system and method based on novel coronavirus pneumonia CT image
CN113298831B (en) Image segmentation method and device, electronic equipment and storage medium
CN116703901B (en) Lung medical CT image segmentation and classification device and equipment
CN114119516B (en) Virus focus segmentation method based on migration learning and cascade self-adaptive cavity convolution
CN110974306A (en) System for discernment and location pancreas neuroendocrine tumour under ultrasonic endoscope
CN117876402B (en) Intelligent segmentation method for temporomandibular joint disorder image
CN111311626A (en) Skull fracture automatic detection method based on CT image and electronic medium
CN111445443B (en) Early acute cerebral infarction detection method and device
CN114972266A (en) Lymphoma ultrasonic image semantic segmentation method based on self-attention mechanism and stable learning
CN114581459A (en) Improved 3D U-Net model-based segmentation method for image region of interest of preschool child lung
CN113066549B (en) Clinical effectiveness evaluation method and system of medical instrument based on artificial intelligence
CN113902738A (en) Heart MRI segmentation method and system
CN116309522B (en) Panorama piece periodontitis intelligent grading system based on two-stage deep learning model
Dai et al. More reliable AI solution: Breast ultrasound diagnosis using multi-AI combination
Liu et al. Automatic fetal ultrasound image segmentation of first trimester for measuring biometric parameters based on deep learning
CN116168029A (en) Method, device and medium for evaluating rib fracture
Zhang et al. Atlas-driven lung lobe segmentation in volumetric X-ray CT images
CN116091522A (en) Medical image segmentation method, device, equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant