CN116384448A - CD severity grading system based on hybrid high-order asymmetric convolution network - Google Patents

CD severity grading system based on hybrid high-order asymmetric convolution network

Info

Publication number
CN116384448A
CN116384448A
Authority
CN
China
Prior art keywords
order
feature map
convolution
module
layer
Prior art date
Legal status
Granted
Application number
CN202310375935.7A
Other languages
Chinese (zh)
Other versions
CN116384448B (en)
Inventor
戚婧
魏艳玲
粘永健
Current Assignee
Third Military Medical University TMMU
Original Assignee
Third Military Medical University TMMU
Priority date
Filing date
Publication date
Application filed by Third Military Medical University TMMU
Priority to CN202310375935.7A
Publication of CN116384448A
Application granted
Publication of CN116384448B
Active legal status
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a CD severity grading system based on a hybrid high-order asymmetric convolution network, comprising: an input module for acquiring a CT image to be graded; a preprocessing module for preprocessing the CT image to obtain an intestinal wall image to be graded; a grading module that inputs the intestinal wall image into a trained grading model, which outputs a grading result, namely a CD severity grade; and an output module that outputs the grading result. The grading model comprises asymmetric convolution modules and a hybrid high-order module connected in series, the order of the hybrid high-order module being greater than or equal to 2. The grading model adopts asymmetric convolutions to capture as much effective transverse or longitudinal information as possible, which helps model long-range relations, and uses the hybrid high-order module to capture and fuse information of different orders, improving performance indexes of the grading model such as accuracy and validity, and thereby the accuracy and application value of CD severity grading.

Description

CD severity grading system based on hybrid high-order asymmetric convolution network
Technical Field
The invention relates to the technical field of intelligent medicine, and in particular to a CD severity grading system based on a hybrid high-order asymmetric convolution network.
Background
CD is the abbreviation of Crohn's disease. The conventional clinical scales for evaluating severity on CD endoscopic images are CDEIS and SES-CD; given the segmental character of CD lesions, their scoring rules involve bowel-segment position information or the number of stenosed segments, which is very unfriendly to retrospective data labeling. Moreover, CD-induced inflammation and ulcers are typically transmural: endoscopy only provides mucosal information from within the bowel lumen, whereas CT imaging provides information on the intestinal wall itself. CT enterography (CTE) therefore plays an increasingly important role in the diagnosis and evaluation of CD. CTE images present more comprehensive intestinal wall features, helping a deep learning model capture more features, so exploring CD severity grading based on CTE images promises better grading results. With the rapid development of artificial intelligence, many mature classification network structures have appeared, such as the classical networks VGG19, ResNet50, DenseNet121 and GoogLeNet; however, these classical networks lack designs tailored to CTE image characteristics, and their performance indexes such as accuracy still leave room for improvement when grading CD severity from CTE images.
Disclosure of Invention
The invention aims to solve at least the technical problems existing in the prior art by providing a CD severity grading system based on a hybrid high-order asymmetric convolution network.
To achieve the above object, the present invention provides a CD severity grading system based on a hybrid high-order asymmetric convolution network, comprising: an input module for acquiring a CT image to be graded; a preprocessing module for preprocessing the CT image to obtain an intestinal wall image to be graded; a grading module that inputs the intestinal wall image into a trained grading model, which outputs a grading result, namely a CD severity grade; and an output module that outputs the grading result. The grading model comprises asymmetric convolution modules and a hybrid high-order module connected in series, the order of the hybrid high-order module being greater than or equal to 2.
The invention builds the grading model around the image characteristics of the intestinal wall in CT images. It adopts asymmetric convolutions to capture as much effective transverse or longitudinal intestinal wall information as possible, which helps model long-range relations, and uses the hybrid high-order module to capture and fuse information of different orders. This improves performance indexes of the grading model such as accuracy and validity, and thereby the accuracy and application value of CD severity grading.
Drawings
FIG. 1 is a system block diagram of a CD severity grading system based on a hybrid high-order asymmetric convolution network in a preferred embodiment of the present invention;
FIG. 2 is a schematic illustration of delineating and extracting the intestinal wall image from a CT image in a preferred embodiment of the present invention;
FIG. 3 shows an intestinal wall image before and after enlargement according to the present invention;
FIG. 4 is a schematic diagram of the network structure of the grading model in a preferred embodiment of the present invention;
FIG. 5 is a schematic diagram of the asymmetric convolution module in a preferred embodiment of the present invention;
FIG. 6 is a schematic diagram of the mixed third-order module in a preferred embodiment of the present invention;
FIG. 7 is a schematic diagram of the generative adversarial network in a preferred embodiment of the present invention;
FIG. 8 is a graph of the relationship between the mixed order number and grading model performance in the present invention;
FIG. 9 is a graph of the data augmentation experiment of the present invention;
FIG. 10 is a graph of the stress test results of the grading model of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention.
In the description of the present invention, it should be understood that the terms "longitudinal," "transverse," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships based on the orientation or positional relationships shown in the drawings, merely to facilitate describing the present invention and simplify the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and therefore should not be construed as limiting the present invention.
In the description of the present invention, unless otherwise specified and defined, the terms "mounted," "connected," and "coupled" are to be construed broadly: a connection may, for example, be mechanical or electrical, and two elements may be connected directly or indirectly through intermediaries or may communicate with each other; those skilled in the art will understand the specific meaning of these terms according to context.
The invention discloses a CD severity grading system based on a mixed high-order asymmetric convolution network, which in a preferred embodiment, as shown in FIG. 1, comprises:
and the input module is used for acquiring CT images to be classified. The input module is preferably, but not limited to, a data interface for reading CT images to be classified from a storage medium storing the CT images or for reading CT images of a patient from a host computer of the CT apparatus.
And the preprocessing module, which preprocesses the CT image to be graded to obtain the intestinal wall image to be graded. The abdominal cavity contains many organs and presents a complex scene, while the CTE signs of CD are concentrated on the intestinal wall; feeding the original CT image to a deep-learning grading model would therefore be very unfavorable for feature extraction when the data volume is small. This application accordingly uses only the intestinal wall portion as the input of the grading model's neural network, and the preprocessing includes extracting the intestinal wall image from the CT image to be graded. Preferably, the preprocessing module comprises an intestinal wall delineation unit and an extraction unit; the delineation unit may be Labelme software running on a host computer, and the extraction unit extracts the intestinal wall portion of the image through a mask after the delineation unit has outlined the inner and outer rings of the intestinal wall. As shown in FIG. 2, the first image from the left is the original CT image; the second shows the intestinal wall portion delineated in the CT image, with the outer and inner rings outlined separately (e.g. with Labelme); the third is the intestinal wall image obtained by generating the mask with an existing algorithm.
As can be seen from FIG. 2, the delineated intestinal wall occupies only a small part of the image. If the image were simply cropped to the preset input size of the grading model (e.g. 224×224), it would contain a large amount of invalid information with black 0-valued pixels, which would hurt performance indexes of the grading model such as accuracy and validity. The extracted intestinal wall region is therefore cropped and enlarged to the preset input size; FIG. 3 shows an intestinal wall image before and after enlargement.
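As a concrete reading of this preprocessing step, the following sketch masks a CT slice with the delineated intestinal wall ring, crops to its bounding box, and enlarges the crop to the network input size. It is a minimal illustration only: the function name, the use of OpenCV, and the choice of bilinear interpolation are assumptions, not details given in the patent.

```python
import cv2
import numpy as np

def extract_and_enlarge(ct_slice: np.ndarray, ring_mask: np.ndarray,
                        out_size: int = 224) -> np.ndarray:
    """Mask out everything but the delineated intestinal wall, crop to the
    wall's bounding box, and enlarge the crop to the network input size."""
    wall = ct_slice * (ring_mask > 0)                 # keep only wall pixels
    ys, xs = np.nonzero(ring_mask)
    if ys.size == 0:
        raise ValueError("the mask contains no intestinal wall pixels")
    crop = wall[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # enlarging the crop removes most of the invalid black 0-pixel background
    return cv2.resize(crop.astype(np.float32), (out_size, out_size),
                      interpolation=cv2.INTER_LINEAR)
```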
The grading module inputs the intestinal wall image to be graded into a trained grading model, which outputs the grading result, namely a CD severity grade. The grading model comprises asymmetric convolution modules and a hybrid high-order module connected in series, the order of the hybrid high-order module being greater than or equal to 2.
And the output module is used for outputting the grading result. The output module is preferably, but not limited to, a display or a data output interface.
In this embodiment, as shown in FIG. 4, the grading model preferably comprises, connected in sequence: a first convolution-normalization layer Conv k7 s2 p3 + BN1, a first max-pooling layer MaxPool1 k3 s2, a second convolution-normalization layer Conv k1 s1 + BN2, a third convolution-normalization layer Conv k3 s1 p1 + BN3, a second max-pooling layer MaxPool2 k3 s2, a first asymmetric convolution module ACM(a), a second asymmetric convolution module ACM(b), a third max-pooling layer MaxPool3 k3 s2, a third asymmetric convolution module ACM(c), a hybrid high-order module H²OM, a fourth asymmetric convolution module ACM(d), a fifth asymmetric convolution module ACM(e), a fourth max-pooling layer MaxPool4 k2 s2, a sixth asymmetric convolution module ACM(f), a seventh asymmetric convolution module ACM(g), an adaptive average pooling layer Adaptive AvgPool, a Dropout layer, and a linear layer Linear. A convolution-normalization layer is a convolution layer followed by a batch normalization layer. Since the image data input to the grading model contains only the intestinal wall, which geometrically presents as a set of "rings", asymmetric convolution, compared with classical symmetric convolution, captures as much effective information in the transverse or longitudinal direction as possible while helping to model long-range relations. In the network structure of the grading model, intestinal wall features are captured by asymmetric convolution modules (Asymmetric Convolution Module, ACM), and higher-order information is extracted with the hybrid high-order module (Hybrid High-Order Module, H²OM).
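To make the layer sequence concrete, the following PyTorch skeleton mirrors the order of layers just listed. It is a sketch under stated assumptions: the channel widths, single-channel input, Dropout rate and class count are placeholders (the per-layer values live in Table 1, which survives only as an image; the text states only that 512 channels enter the H²OM), and ACM and H²OM are stubbed here, with fuller sketches given after the FIG. 5 and FIG. 6 descriptions below.

```python
import torch.nn as nn

# ACM and H2OM are stubbed only so this skeleton is self-contained.
class ACM(nn.Module):
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.proj = nn.Conv2d(c_in, c_out, 1)  # stand-in for the 4-branch module
    def forward(self, x):
        return self.proj(x)

H2OM = nn.Identity  # stand-in for the hybrid high-order module (order 3)

def build_h2o_acn(num_classes: int = 2) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(1, 64, 7, stride=2, padding=3), nn.BatchNorm2d(64),  # Conv k7 s2 p3 + BN1
        nn.MaxPool2d(3, stride=2),                                      # MaxPool1 k3 s2
        nn.Conv2d(64, 64, 1), nn.BatchNorm2d(64),                       # Conv k1 s1 + BN2
        nn.Conv2d(64, 192, 3, padding=1), nn.BatchNorm2d(192),          # Conv k3 s1 p1 + BN3
        nn.MaxPool2d(3, stride=2),                                      # MaxPool2 k3 s2
        ACM(192, 256), ACM(256, 480),                                   # ACM(a), ACM(b)
        nn.MaxPool2d(3, stride=2),                                      # MaxPool3 k3 s2
        ACM(480, 512),                                                  # ACM(c), 512 channels out
        H2OM(),                                                         # H2OM between ACM(c) and ACM(d)
        ACM(512, 512), ACM(512, 528),                                   # ACM(d), ACM(e)
        nn.MaxPool2d(2, stride=2),                                      # MaxPool4 k2 s2
        ACM(528, 832), ACM(832, 1024),                                  # ACM(f), ACM(g)
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Dropout(0.5),         # AvgPool + Dropout
        nn.Linear(1024, num_classes),                                   # Linear
    )
```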
In this embodiment, in the grading model shown in FIG. 4, multiple convolution layers are placed before the asymmetric convolution modules. A 7×7 convolution with a larger receptive field performs the preliminary extraction of image information, preserving the original information of the image as much as possible and avoiding the extraction of overly local features. The multi-layer convolutions together with max pooling realize downsampling, so that the feature map entering the asymmetric convolution modules is not too large; more channels can then be used, giving more diverse feature extraction.
In this embodiment, in the grading model shown in FIG. 4, the asymmetric convolution modules ACM and the hybrid high-order module H²OM are connected in series to match the characteristics of the intestinal wall image: the ACM matches the ring-like geometric form of the intestinal wall and extracts as much relevant information as possible, while the H²OM represents higher-order information and captures subtle differences between classes. The H²OM is placed after the third asymmetric convolution module ACM(c), which favors a high-order representation of mid-level abstract features; the four subsequent asymmetric convolution modules then extract higher-level features, realizing deep modeling of high-level information. Using seven asymmetric convolution modules in the grading model guarantees richness of feature extraction while avoiding the excessive computation that too many parameters would bring, and thus avoids overfitting on this task.
In this embodiment, in the grading model shown in FIG. 4, the third and fourth max-pooling layers are inserted among the serially connected asymmetric convolution modules, so that concentrated information is added during abstract feature extraction and recognition bias from over-reliance on detail features is avoided.
In this embodiment, preferably, as shown in FIG. 5, the asymmetric convolution module comprises an input unit Input and a connection unit Concat, with four parallel branches connecting them. The first branch comprises, in sequence, a first branch convolution Conv k1 s1 and a first BN-activation layer BatchNorm+ReLU1. The second branch comprises, in sequence, a first asymmetric convolution layer Conv k(3×1) s1 p(1,0), a second BN-activation layer BatchNorm+ReLU2, a second asymmetric convolution layer Conv k(7×3) s1 p(3,1), and a third BN-activation layer BatchNorm+ReLU3. The third branch comprises, in sequence, a third asymmetric convolution layer Conv k(1×3) s1 p(0,1), a fourth BN-activation layer BatchNorm+ReLU4, a fourth asymmetric convolution layer Conv k(3×7) s1 p(1,3), and a fifth BN-activation layer BatchNorm+ReLU5. The fourth branch comprises, in sequence, a max-pooling layer MaxPool k3 s1 p1, a convolution layer Conv k1 s1, and a sixth BN-activation layer BatchNorm+ReLU6. The ACM is thus composed of four branches, of which the second and third are asymmetric convolution branches; the convolution kernels of the first through fourth asymmetric convolution layers are 3×1, 7×3, 1×3 and 3×7, respectively. The ACM convolves with 1×3 and 3×7 kernels in the transverse direction and with 3×1 and 7×3 kernels in the longitudinal direction. The other two branches use 1×1 convolutions for channel dimension reduction, one of them adding max pooling to concentrate information.
In this embodiment, let the input and output feature maps of the asymmetric convolution module ACM be $\mathcal{X}_{in}\in\mathbb{R}^{C_{in}\times H\times W}$ and $\mathcal{X}_{out}\in\mathbb{R}^{C_{out}\times H\times W}$, where $C_{in}$ and $C_{out}$ denote the channel dimensions of the ACM input and output feature maps, respectively. The output dimension of the first branch in FIG. 5 is $C_{conv}$; the hidden dimension of the two asymmetric convolution branches is $C_{hid}$ and their output dimension is $C_{asy}$; the output dimension of the fourth branch is $C_{maxc}$. The feature maps extracted by the different branches are concatenated (Concat) along the channel dimension, giving $C_{out}=C_{conv}+C_{maxc}+2C_{asy}$.
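Putting FIG. 5 and the channel bookkeeping above together, an ACM can be sketched in PyTorch as follows. Kernel sizes, strides and paddings are taken from the text; the per-branch channel widths are left as constructor arguments because Table 1 is only reproduced as an image.

```python
import torch
import torch.nn as nn

def bn_relu(c: int) -> nn.Sequential:
    return nn.Sequential(nn.BatchNorm2d(c), nn.ReLU(inplace=True))

class ACM(nn.Module):
    """Sketch of the asymmetric convolution module of FIG. 5;
    C_out = c_conv + c_maxc + 2 * c_asy after concatenation."""
    def __init__(self, c_in, c_conv, c_hid, c_asy, c_maxc):
        super().__init__()
        # branch 1: 1x1 convolution for channel dimension reduction
        self.b1 = nn.Sequential(nn.Conv2d(c_in, c_conv, 1), bn_relu(c_conv))
        # branch 2: longitudinal asymmetric convs, k(3x1) p(1,0) then k(7x3) p(3,1)
        self.b2 = nn.Sequential(
            nn.Conv2d(c_in, c_hid, (3, 1), 1, (1, 0)), bn_relu(c_hid),
            nn.Conv2d(c_hid, c_asy, (7, 3), 1, (3, 1)), bn_relu(c_asy))
        # branch 3: transverse asymmetric convs, k(1x3) p(0,1) then k(3x7) p(1,3)
        self.b3 = nn.Sequential(
            nn.Conv2d(c_in, c_hid, (1, 3), 1, (0, 1)), bn_relu(c_hid),
            nn.Conv2d(c_hid, c_asy, (3, 7), 1, (1, 3)), bn_relu(c_asy))
        # branch 4: max pooling to concentrate information, then 1x1 conv
        self.b4 = nn.Sequential(nn.MaxPool2d(3, 1, 1),
                                nn.Conv2d(c_in, c_maxc, 1), bn_relu(c_maxc))
    def forward(self, x):
        # concatenate the four branches along the channel dimension
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)
```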
In this embodiment, the detailed channel settings of the ACM modules in the grading model (H²O-ACN) are shown in Table 1.
Table 1: Feature-mapping details of H²O-ACN
(The table is reproduced only as an image in the original publication.)
In this embodiment, to model the interaction of higher-order information above the second order, a linear polynomial predictor $A(x)$ is defined for $x\in\mathbb{R}^{C}$, the one-dimensional tensor at spatial location $(m,n)$ of the feature map $\chi\in\mathbb{R}^{C\times H\times W}$ input to the H²OM:

$$A(x)=\sum_{r=1}^{R}\left\langle w^{r},\;\otimes^{r}x\right\rangle \tag{1}$$

where $\langle\cdot,\cdot\rangle$ is the inner product of two tensors of the same size, $\otimes^{r}x$ is the $r$-th order tensor product of $x$, containing all degree-$r$ monomials of $x$, $w^{r}$ is the learnable $r$-th order tensor containing the weights of the degree-$r$ variable combinations in $x$, and $R$ is the order. When $r$ is large, $w^{r}$ introduces excessive parameters and causes overfitting; when $r>1$, $w^{r}$ can therefore be approximated by tensor decomposition as a sum of $D_{r}$ rank-1 tensors:

$$w^{r}=\sum_{d=1}^{D_{r}}\alpha_{r,d}\;u_{r,d}^{1}\otimes\cdots\otimes u_{r,d}^{r} \tag{2}$$

where $u_{r,d}^{s}\in\mathbb{R}^{C}$ is an intermediate vector, $\otimes$ is the outer product, and $\alpha_{r,d}$ is the weight of the $d$-th rank-1 tensor. Formula (1) can then be expressed as:

$$A(x)=\left\langle w^{1},x\right\rangle+\sum_{r=2}^{R}\sum_{d=1}^{D_{r}}\alpha_{r,d}\prod_{s=1}^{r}\left\langle u_{r,d}^{s},x\right\rangle \tag{3}$$

Let $\mathbf{1}^{T}$ be an all-ones row vector and $\circ$ the Hadamard product; formula (3) can then be simplified to formula (4), where auxiliary matrices $P^{r,s}=[u_{r,1}^{s},\ldots,u_{r,D_{r}}^{s}]\in\mathbb{R}^{C\times D_{r}}$ are introduced to obtain $z^{r}$, as shown in formula (5):

$$A(x)=\left\langle w^{1},x\right\rangle+\sum_{r=2}^{R}(\alpha^{r})^{T}z^{r} \tag{4}$$

$$z^{r}=\circ_{s=1}^{r}\left[(P^{r,s})^{T}x\right] \tag{5}$$

where $\alpha^{r}=[\alpha_{r,1},\ldots,\alpha_{r,D_{r}}]^{T}$ is a weight vector and, when $r>1$, $z^{r}\in\mathbb{R}^{D_{r}}$.
Since $P^{r,s}$, $w^{1}$ and $\alpha^{r}$ are all learnable parameters, $\{P^{1},w^{1}\}$ can be merged by matrix algebra into a new matrix $W^{1}$ and $\{P^{r,s},\alpha^{r}\}$ into $W^{r,s}$, so that formula (5) can be expressed as formula (6). Provided that $W^{r,s}$ can be approximated by the multiplication of two matrices, i.e. $W^{r,s}\approx A^{r,s}B^{r,s}$, formula (6) can be simplified to the more general form of formula (7):

$$A(x)=\sum_{r=1}^{R}\mathbf{1}^{T}\left\{\circ_{s=1}^{r}\left[(W^{r,s})^{T}x\right]\right\} \tag{6}$$

$$A(x)=\sum_{r=1}^{R}\mathbf{1}^{T}\left\{\circ_{s=1}^{r}\left[(B^{r,s})^{T}(A^{r,s})^{T}x\right]\right\} \tag{7}$$

With $A(x)$ modeled in this way, a nonlinear relationship is introduced by the ReLU function and a higher-order mapping is realized through the Sigmoid function, giving the order-$r$ output $\hat{A}^{r}(x)$ as follows:

$$\hat{A}^{r}(x)=\mathrm{Sigmoid}\!\left((W_{o}^{r})^{T}\,\mathrm{ReLU}(z^{r})\right) \tag{8}$$

where $W_{o}^{r}$ is a learnable output projection.
$A(x)$ is defined on a spatially local one-dimensional tensor; generalizing formula (8) over the three-dimensional space, the weights of $A(\cdot)$ are shared across spatial locations, i.e. $A(\chi)=\{A(x^{(1,1)}),\ldots,A(x^{(H,W)})\}$. In the construction of the H²OM, when $r=1$ the two factors of $W^{1}$ are implemented by 1×1 convolutions with output channels $D_{r}$ and $C$, respectively; when $R>1$ and $r>1$, the output projection is likewise implemented with a 1×1 convolution, and $\{W^{r,s}\}_{s=1}^{r}$ is a series of 1×1 convolutions with output channels $D_{r}$, which generate a series of feature maps $\{z^{r,s}\}$ that are combined by element-level multiplication into $z^{r}$.
In a preferred embodiment, the order of the hybrid high-order module is 3; the performance of the grading model is best when the order is 3.
In this embodiment, preferably, as shown in FIG. 6, when the order of the hybrid high-order module is 3, the hybrid high-order module comprises:
a first-order branch, which performs: a 1×1 convolution on the input feature map of the hybrid high-order module to obtain a first feature map; ReLU activation and a 1×1 convolution on the first feature map to obtain a second feature map; and a Sigmoid function on the second feature map to obtain the first-order branch feature map;
a second-order branch, which performs: two 1×1 convolutions on the input feature map of the hybrid high-order module to obtain first and second second-order convolution results; element-level multiplication of the two results to obtain a third feature map; ReLU activation and a 1×1 convolution on the third feature map to obtain a fourth feature map; and a Sigmoid function on the fourth feature map to obtain the second-order branch feature map;
a third-order branch, which performs: three 1×1 convolutions on the input feature map of the hybrid high-order module to obtain first, second and third third-order convolution results; element-level multiplication of the three results to obtain a fifth feature map; ReLU activation and a 1×1 convolution on the fifth feature map to obtain a sixth feature map; and a Sigmoid function on the sixth feature map to obtain the third-order branch feature map;
a first-order jumper, which delivers the input feature map of the hybrid high-order module to the first-order channel fusion unit;
a first-order channel fusion unit, which performs element-level multiplication of the first-order branch feature map and the input feature map of the hybrid high-order module to obtain the first-order channel feature map;
a second-order jumper, which delivers the input feature map of the hybrid high-order module to the second-order channel fusion unit;
a second-order channel fusion unit, which performs: element-level addition of the first-order and second-order branch feature maps to obtain a seventh feature map; a Sigmoid function on the seventh feature map to obtain an eighth feature map; and element-level multiplication of the eighth feature map and the input feature map of the hybrid high-order module to obtain the second-order channel feature map;
a third-order jumper, which delivers the input feature map of the hybrid high-order module to the third-order channel fusion unit;
a third-order channel fusion unit, which performs: element-level addition of the first-order, second-order and third-order branch feature maps to obtain a ninth feature map; a Sigmoid function on the ninth feature map to obtain a tenth feature map; and element-level multiplication of the tenth feature map and the input feature map of the hybrid high-order module to obtain the third-order channel feature map;
and a total fusion unit, which performs element-level addition of the first-order, second-order and third-order channel feature maps to obtain the output feature map of the hybrid high-order module.
The higher-order information $A(\chi)$ acquired by this mixed third-order structure is realized with simple 1×1 convolutions, which greatly reduces computation and speeds up execution; weight sharing across spatial positions avoids generating a large number of parameters, and the information of different orders is fused by element-level addition. The H²OM does not change the size of its input-output feature maps, and the captured high-order information still needs further modeling, so the H²OM is placed in the middle of the overall H²O-ACN. Preferably, the channel number C of the feature map input to the H²OM is 512, and $D_{r}$ is set to 128.
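The three order branches, jumpers and channel-fusion units described above compose into the following PyTorch sketch of the mixed third-order module, with C = 512 and D_r = 128 as stated; the wiring is read from the branch description and may differ in minor details from FIG. 6.

```python
import torch
import torch.nn as nn

class H2OM(nn.Module):
    """Sketch of the mixed third-order module of FIG. 6."""
    def __init__(self, c: int = 512, d_r: int = 128):
        super().__init__()
        # order-r branch: r parallel 1x1 convs combined by element-level
        # multiplication, then ReLU -> 1x1 conv back to C channels -> Sigmoid
        self.order_convs = nn.ModuleList([
            nn.ModuleList([nn.Conv2d(c, d_r, 1) for _ in range(r)])
            for r in (1, 2, 3)])
        self.out_convs = nn.ModuleList([nn.Conv2d(d_r, c, 1) for _ in range(3)])

    def forward(self, x):
        branch = []
        for convs, out in zip(self.order_convs, self.out_convs):
            z = convs[0](x)
            for conv in convs[1:]:
                z = z * conv(x)               # element-level multiplication
            branch.append(torch.sigmoid(out(torch.relu(z))))
        a1, a2, a3 = branch
        y1 = a1 * x                            # first-order channel feature map
        y2 = torch.sigmoid(a1 + a2) * x        # second-order channel fusion
        y3 = torch.sigmoid(a1 + a2 + a3) * x   # third-order channel fusion
        return y1 + y2 + y3                    # total element-level fusion
```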
In this embodiment, the system preferably further comprises a data enhancement module for enhancing the training sample image set of the grading model: the data enhancement module applies random horizontal flipping and random vertical flipping to the original sample images to obtain new sample images, with the execution probability set to 0.3. Data enhancement increases the diversity of the samples, so that the trained grading model is more robust.
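As a sketch, the described data enhancement maps directly onto torchvision transforms, with each flip applied independently at the stated probability of 0.3:

```python
import torchvision.transforms as T

# random horizontal and vertical flips, each executed with probability 0.3
train_augment = T.Compose([
    T.RandomHorizontalFlip(p=0.3),
    T.RandomVerticalFlip(p=0.3),
])
```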
In this embodiment, to compensate for the insufficient feature information in each CD severity grade caused by the small number of collected samples, the overall feature information contained in the training set is substantially enriched to avoid overfitting of the grading model. Preferably, the grading system further comprises a generative adversarial network (GAN) module for augmenting the grading-model training sample images, the module comprising: a generator that generates simulated intestinal wall images from random noise; and a discriminator that learns to distinguish the real intestinal wall images in the sample set from the simulated images produced by the generator. During training of the GAN module, the generator continuously generates simulated images from random noise while the discriminator learns to distinguish real images from the generator's simulated images, until the discriminator can no longer tell them apart, at which point training of the GAN module is complete.
In this embodiment, further preferably, as shown in FIG. 7, the generator consists of multiple transposed convolution layers in series; the last transposed convolution layer of the generator uses a Tanh activation function, and all other transposed convolution layers use BatchNorm and a ReLU activation function. The discriminator consists of multiple convolution layers in series; the first convolution layer uses a LeakyReLU activation function, the last convolution layer has no activation function, and the remaining convolution layers use BatchNorm and a ReLU activation function.
In a preferred embodiment, a method for training the grading model is provided, comprising the following steps.
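A DCGAN-style sketch consistent with this description follows; the layer counts, channel widths, 100-dimensional noise vector and 4×4 kernels are assumptions rather than patent values, while the activation placement (Tanh on the generator's last layer, LeakyReLU on the discriminator's first, no activation on its last, BatchNorm+ReLU elsewhere) follows the text.

```python
import torch.nn as nn

def make_generator(z_dim: int = 100, img_ch: int = 1) -> nn.Sequential:
    return nn.Sequential(
        nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
        nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
        nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
        nn.ConvTranspose2d(64, img_ch, 4, 2, 1), nn.Tanh(),  # last layer: Tanh
    )

def make_discriminator(img_ch: int = 1) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(img_ch, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),  # first: LeakyReLU
        nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
        nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.ReLU(True),
        nn.Conv2d(256, 1, 4, 1, 0),  # last layer: no activation
    )
```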
step 1, constructing a sample set, collecting a plurality of CT small intestine radiography images of the cross section of a CD patient, namely CTE images, preprocessing each CTE image to obtain an enlarged intestinal wall image, and taking the intestinal wall image as a sample.
And 2, establishing a CTE scoring rule, scoring each sample image according to the scoring rule by an expert, and marking the CD severity level according to the total score of the sample images.
While CTE is commonly used to assess CD activity and severity and appears to have a moderate correlation with endoscopy and biopsy, CTE scoring systems are currently lacking clinically. CTE image manifestations associated with CD lesions and little enteritis include thickening of the intestinal wall, thickening of the mucosa, differentiation of the wall layers, rectal vascular congestion or distension distortion, also known as pectination. The present application uses these CT signs to assess CD severity level. Table 2 shows detailed scoring details including four aspects of intestinal wall thickness, mode of reinforcement, degree of reinforcement, and presence or absence of comb. The higher the enhancement degree is, the higher the gray value is, the uniform enhancement is represented by transmural enhancement, the obvious enhancement of the mucous membrane is represented by the obvious change of the gray value of the mucous membrane layer of the intestinal wall, the layered enhancement is represented by the layered change of the intestinal wall, the inner mucous membrane ring and the outer serosa ring are obviously enhanced, and the target sign or the double halo sign is represented. And scoring each item of detail according to the severity of the patient, and finally adding the scores of each item to obtain a total score. When no single score is more than or equal to 1, the clinical remission is realized, and when the total score is 1-3, the clinical remission is slight movement; 4-6 are divided into moderate activities; 7-10 are categorized as heavy activities, i.e., CD severity grades include clinical remission, light activity, moderate activity, and heavy activity, denoted CTE0, CTE1, CTE2, and CTE3, respectively. And the expert scores the sample image based on the scoring rule, obtains the corresponding CD severity level according to the total scoring level, and associates the obtained CD severity level with the sample image to finish image marking. CD severity scales may also be combined, such as combining CTE0 and CTE1 as mild, CTE2 and CTE3 as moderate, with a classification model used for classification 2, where the number of sample sets is small.
Table 2: CTE grading score for CD activity
(The table is reproduced only as an image in the original publication; it scores intestinal wall thickness, enhancement pattern, enhancement degree, and the comb sign.)
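The grade assignment just described reduces to a small lookup. The sketch below assumes the per-item scores of Table 2 are already available as integers; the per-item scoring details themselves survive only in the table image.

```python
def cd_severity_grade(item_scores: list[int]) -> str:
    """Map per-item CTE scores to a severity grade per the rule above."""
    if all(s < 1 for s in item_scores):
        return "CTE0"  # clinical remission: no single item scores >= 1
    total = sum(item_scores)
    if total <= 3:
        return "CTE1"  # mild activity (total score 1-3)
    if total <= 6:
        return "CTE2"  # moderate activity (total score 4-6)
    return "CTE3"      # severe activity (total score 7-10)
```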
Step 3: after labeling, the whole dataset contains 304, 194, 213 and 214 images for CTE0, CTE1, CTE2 and CTE3, respectively. Because the number of images is small, a 4-class CD severity study is relatively difficult, so CTE0 and CTE1 are combined as mild and CTE2 and CTE3 as moderate-severe for a 2-class study; after combination the mild class has 498 images and the moderate-severe class 427. Preferably, a sample enhancement step is also included: the original sample images are randomly flipped horizontally and vertically to obtain new sample images, and further new sample images are generated with the generative adversarial network module.
Step 4: randomly split the sample set into a training set and a test set. The training set contains 657 images (364 mild and 293 moderate-severe); the test set contains 268 images (134 of each class).
Step 5: train the grading model with the training set, validate and test the trained model with the test set, and adjust the model parameters to complete training. During training, a cross-entropy loss function and an SGD optimizer are used, with the weight decay set to 1×10 and the momentum to 0.9. The batch size is set to 32 and the initial learning rate to 0.01; the learning rate is adjusted with an exponential decay strategy, namely $lr=0.01\times e^{-epoch/5}$, and a total of 30 epochs are trained.
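A sketch of this training configuration follows; `model` and `train_loader` are assumed to exist, and the weight-decay value is a placeholder because its exponent is garbled in the source text.

```python
import math
import torch
import torch.nn as nn

def train(model, train_loader, device: str = "cuda") -> None:
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                                momentum=0.9, weight_decay=1e-4)  # assumed value
    for epoch in range(30):                      # 30 epochs in total
        lr = 0.01 * math.exp(-epoch / 5)         # lr = 0.01 * e^(-epoch/5)
        for group in optimizer.param_groups:
            group["lr"] = lr
        for images, labels in train_loader:      # batch size 32
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```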
The following are experimental results verifying the functions of some modules and the overall performance of the grading model of the present application.
Experiment 1: benefit of intestinal wall image enlargement in the preprocessing module
To verify the effectiveness of intestinal wall enlargement and evaluate different models on the CD severity grading task, VGG-series networks, ResNet-series networks, DenseNet121 and GoogLeNet were selected for five-fold cross-validation on all data. The results are shown in Table 3, where S and L denote use of the original image and of the enlarged image, respectively.
Table 3: Five-fold cross-validation results on all data
(The table is reproduced only as images in the original publication.)
As the results in Table 3 show, most models perform better with the enlarged images, but the effect is not obvious for the VGG networks: with the original images VGG performs on par with ResNet, whereas with the enlarged images ResNet is clearly better, suggesting that VGG may be relatively poor at capturing fine information. Within the ResNet series, ResNet101 shows no depth advantage; given the simplicity of the task and the small amount of data, excessive network parameters may cause overfitting.
Experiment 2: effect of the order in the hybrid high-order module H²OM
The mixed order number of the H²OM is a hyperparameter worth exploring. The higher the order, the more complex the H²OM becomes and the more high-order information it carries, which may cause overfitting; the resulting information redundancy may also bias the feature capture of the grading model and hurt its performance. Too low an order, on the other hand, cannot effectively extract the features of the intestinal wall image and reduces grading accuracy. Choosing the mixed order is therefore important.
Orders 1-6 were tested. FIG. 8 shows how the AUC and ACC of five-fold cross-validation on all data change for the H²O-ACN model (the grading model of the present application) under different mixed orders, where order 0 means the H²OM is not used; in that case ACC is 84.97% (95% CI: 82.53-87.13) and AUC is 0.908 (95% CI: 0.887-0.928). As the mixed order increases, H²O-ACN performs best at order 3, with ACC 85.41% (95% CI: 82.98-87.53) and AUC 0.916 (95% CI: 0.898-0.934); above order 3, performance begins to drop, so the mixed order number is set to 3.
Experiment 3: model comparison
To verify the effectiveness of H²O-ACN, models that perform well among existing classification network structures (VGG19, ResNet50, DenseNet121, GoogLeNet and ViT-S) were selected for comparison; the five-fold cross-validation results are shown in Table 4. H²O-ACN (the grading model of the present application) achieves a clear advantage on every evaluation index.
Table 4: Comparison of cross-validation results of different models
(The table is reproduced only as an image in the original publication.)
Experiment 4: data augmentation
To explore the effectiveness of the GAN-based data augmentation method, GAN-generated images were added to the training set in steps of 40. Since the two classes contain 364 and 293 training images respectively, at most 320 images were added per class, at which point real images account for only about half of the training set. ResNet50, DenseNet121, GoogLeNet and the proposed H²O-ACN, which performed well in cross-validation, were selected for the augmentation experiment; FIG. 9 plots the test-set AUC of each model against the number of added images. The results in FIG. 9 show that the GAN-based augmentation scheme has some effectiveness for GoogLeNet and H²O-ACN, but adversely affects ResNet50 and DenseNet121.
Experiment 5: stress test of the grading model
The H²O-ACN model was stress-tested. Since saturation is meaningless for gray-scale images, the stress test covers only brightness and contrast; the results are shown in FIG. 10. H²O-ACN shows a certain tolerance: overall, the curve trends indicate that brightness and contrast affect the model in basically similar ways, and the model keeps its AUC above 0.8 for brightness scaling between 0.6 and 1.6 and contrast scaling between 0.5 and 1.7.
In the description of the present specification, a description of the terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.

Claims (10)

1. A CD severity classification system based on a hybrid high-order asymmetric convolutional network, comprising:
the input module is used for acquiring CT images to be classified;
the preprocessing module is used for preprocessing the CT image to be classified to obtain an intestinal wall image to be classified;
the grading module inputs the intestinal wall image to be graded into a trained grading model, and the grading model outputs a grading result which is a CD severity grade;
the output module outputs a grading result;
the hierarchical model comprises an asymmetric convolution module and a mixed high-order module which are connected in series, wherein the order number of the mixed high-order module is more than or equal to 2.
2. The CD severity classification system based on a hybrid high order asymmetric convolution network of claim 1, wherein the classification model comprises a first convolution normalization layer, a first maximum pooling layer, a second convolution normalization layer, a third convolution normalization layer, a second maximum pooling layer, a first asymmetric convolution module, a second asymmetric convolution module, a third maximum pooling layer, a third asymmetric convolution module, a hybrid high order module, a fourth asymmetric convolution module, a fifth asymmetric convolution module, a fourth maximum pooling layer, a sixth asymmetric convolution module, a seventh asymmetric convolution module, an average pooling layer, a Dropout layer, and a linear layer, connected in sequence.
3. The CD severity classification system based on a hybrid high-order asymmetric convolution network according to claim 2, wherein said asymmetric convolution module comprises an input unit and a connection unit, and four branches connected in parallel connecting the input unit and the connection unit;
wherein the first branch comprises a first branch convolution layer and a first BN-activation layer connected in sequence; the second branch comprises a first asymmetric convolution layer, a second BN-activation layer, a second asymmetric convolution layer and a third BN-activation layer connected in sequence; the third branch comprises a third asymmetric convolution layer, a fourth BN-activation layer, a fourth asymmetric convolution layer and a fifth BN-activation layer connected in sequence; the fourth branch comprises a maximum pooling layer, a convolution layer and a sixth BN-activation layer connected in sequence.
4. The CD severity classification system based on a hybrid high-order asymmetric convolution network of claim 3, wherein a convolution kernel of a first asymmetric convolution layer is 3 x 1, a convolution kernel of a second asymmetric convolution layer is 7 x 3, a convolution kernel of a third asymmetric convolution layer is 1 x 3, and a convolution kernel of a fourth asymmetric convolution layer is 3 x 7.
5. A CD severity classification system based on a hybrid high-order asymmetric convolutional network according to any of claims 1-4, wherein the order of said hybrid high-order module is 3.
6. The CD severity classification system based on a hybrid high-order asymmetric convolutional network of claim 5, wherein said hybrid high-order module comprises:
a first-order branch, which performs: a 1×1 convolution on the input feature map of the hybrid high-order module to obtain a first feature map; ReLU activation and a 1×1 convolution on the first feature map to obtain a second feature map; and a Sigmoid function on the second feature map to obtain a first-order branch feature map;
a second-order branch, which performs: two 1×1 convolutions on the input feature map of the hybrid high-order module to obtain first and second second-order convolution results; element-level multiplication of the two results to obtain a third feature map; ReLU activation and a 1×1 convolution on the third feature map to obtain a fourth feature map; and a Sigmoid function on the fourth feature map to obtain a second-order branch feature map;
a third-order branch, which performs: three 1×1 convolutions on the input feature map of the hybrid high-order module to obtain first, second and third third-order convolution results; element-level multiplication of the three results to obtain a fifth feature map; ReLU activation and a 1×1 convolution on the fifth feature map to obtain a sixth feature map; and a Sigmoid function on the sixth feature map to obtain a third-order branch feature map;
a first-order jumper, which delivers the input feature map of the hybrid high-order module to the first-order channel fusion unit;
a first-order channel fusion unit, which performs element-level multiplication of the first-order branch feature map and the input feature map of the hybrid high-order module to obtain a first-order channel feature map;
a second-order jumper, which delivers the input feature map of the hybrid high-order module to the second-order channel fusion unit;
a second-order channel fusion unit, which performs: element-level addition of the first-order and second-order branch feature maps to obtain a seventh feature map; a Sigmoid function on the seventh feature map to obtain an eighth feature map; and element-level multiplication of the eighth feature map and the input feature map of the hybrid high-order module to obtain a second-order channel feature map;
a third-order jumper, which delivers the input feature map of the hybrid high-order module to the third-order channel fusion unit;
a third-order channel fusion unit, which performs: element-level addition of the first-order, second-order and third-order branch feature maps to obtain a ninth feature map; a Sigmoid function on the ninth feature map to obtain a tenth feature map; and element-level multiplication of the tenth feature map and the input feature map of the hybrid high-order module to obtain a third-order channel feature map;
and a total fusion unit, which performs element-level addition of the first-order, second-order and third-order channel feature maps to obtain the output feature map of the hybrid high-order module.
7. The CD severity classification system based on a hybrid high-order asymmetric convolutional network of claim 1 or 2 or 3 or 4 or 6, wherein said preprocessing comprises:
and extracting an intestinal wall image from the CT image to be classified, and amplifying the extracted intestinal wall image to obtain the intestinal wall image to be classified.
8. The CD severity classification system based on a hybrid high-order asymmetric convolution network of claim 1, 2, 3, 4 or 6, further comprising a generative adversarial network module for augmenting the classification-model training sample images, said generative adversarial network module comprising:
a generator for generating simulated intestinal wall images from random noise;
a discriminator for learning to discriminate the real intestinal wall images in the training sample set from the simulated intestinal wall images generated by the generator.
9. The CD severity classification system based on a hybrid high-order asymmetric convolution network of claim 8, wherein said generator consists of multiple transposed convolution layers in series, the last transposed convolution layer of the generator uses a Tanh activation function, and the remaining transposed convolution layers except the last use BatchNorm and a ReLU activation function;
the discriminator consists of multiple convolution layers in series, the first convolution layer uses a LeakyReLU activation function, the last convolution layer has no activation function, and the remaining convolution layers except the first and last use BatchNorm and a ReLU activation function.
10. The CD severity classification system based on a hybrid high-order asymmetric convolution network of claim 8, further comprising a data enhancement module for enhancing a training sample set of a classification model, said data enhancement module performing a random horizontal flip and a random vertical flip on an original sample image to obtain a new sample image.
CN202310375935.7A 2023-04-10 2023-04-10 CD severity grading system based on hybrid high-order asymmetric convolution network Active CN116384448B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310375935.7A CN116384448B (en) 2023-04-10 2023-04-10 CD severity grading system based on hybrid high-order asymmetric convolution network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310375935.7A CN116384448B (en) 2023-04-10 2023-04-10 CD severity grading system based on hybrid high-order asymmetric convolution network

Publications (2)

Publication Number Publication Date
CN116384448A true CN116384448A (en) 2023-07-04
CN116384448B CN116384448B (en) 2023-09-12

Family

ID=86967240

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310375935.7A Active CN116384448B (en) 2023-04-10 2023-04-10 CD severity grading system based on hybrid high-order asymmetric convolution network

Country Status (1)

Country Link
CN (1) CN116384448B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190244680A1 (en) * 2018-02-07 2019-08-08 D-Wave Systems Inc. Systems and methods for generative machine learning
CN109360192A (en) * 2018-09-25 2019-02-19 郑州大学西亚斯国际学院 A kind of Internet of Things field crop leaf diseases detection method based on full convolutional network
US20220036553A1 (en) * 2018-12-14 2022-02-03 Union Strong (Beijing) Technology Co. Ltd. Cranial ct-based grading method and system
CN109977947A (en) * 2019-03-13 2019-07-05 中南大学 A kind of image characteristic extracting method and device
US10430946B1 (en) * 2019-03-14 2019-10-01 Inception Institute of Artificial Intelligence, Ltd. Medical image segmentation and severity grading using neural network architectures with semi-supervised learning techniques
CN112446423A (en) * 2020-11-12 2021-03-05 昆明理工大学 Fast hybrid high-order attention domain confrontation network method based on transfer learning
CN112446875A (en) * 2020-12-11 2021-03-05 南京泰明生物科技有限公司 AMD grading system based on macular attention mechanism and uncertainty
CN112634289A (en) * 2020-12-28 2021-04-09 华中科技大学 Rapid feasible domain segmentation method based on asymmetric void convolution
US20220382553A1 (en) * 2021-05-24 2022-12-01 Beihang University Fine-grained image recognition method and apparatus using graph structure represented high-order relation discovery
CN115410046A (en) * 2022-09-22 2022-11-29 河南科技大学 Skin disease tongue picture classification model based on deep learning, establishing method and application
CN115587979A (en) * 2022-10-10 2023-01-10 山东财经大学 Method for grading diabetic retinopathy based on three-stage attention network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GUOFENG TONG: "MA-CRNN: a multi-scale attention CRNN for Chinese text line recognition in natural scenes", https://doi.org/10.1007/s10032-019-00348-7
ZHICHAO YUAN et al.: "Object Detection in Remote Sensing Images via Multi-Feature Pyramid Network with Receptive Field Block", Remote Sensing
杨一风 et al.: "Differentiation of benign and malignant breast lesions based on multimodal MRI and deep learning", 《波谱学杂志》 (Chinese Journal of Magnetic Resonance)

Also Published As

Publication number Publication date
CN116384448B (en) 2023-09-12

Similar Documents

Publication Publication Date Title
Cadene et al. Murel: Multimodal relational reasoning for visual question answering
CN113538313B (en) Polyp segmentation method and device, computer equipment and storage medium
Wu et al. Cascaded fully convolutional networks for automatic prenatal ultrasound image segmentation
Reddy et al. A novel computer-aided diagnosis framework using deep learning for classification of fatty liver disease in ultrasound imaging
CN109544517A (en) Method and system are analysed in multi-modal ultrasound group credit based on deep learning
WO2022205502A1 (en) Image classification model construction method, image classification method, and storage medium
Zhang et al. Dual encoder fusion u-net (defu-net) for cross-manufacturer chest x-ray segmentation
CN115424104A (en) Target detection method based on feature fusion and attention mechanism
CN116543429A (en) Tongue image recognition system and method based on depth separable convolution
Raj et al. StrokeViT with AutoML for brain stroke classification
Shi et al. Automatic detection of pulmonary nodules in CT images based on 3D Res-I network
Singh et al. A comparative analysis of deep learning algorithms for skin cancer detection
CN116958535B (en) Polyp segmentation system and method based on multi-scale residual error reasoning
CN116384448B (en) CD severity grading system based on hybrid high-order asymmetric convolution network
CN116779176A (en) Method for amplifying chronic kidney disease samples after transplantation based on neural network depth characteristics
Wei et al. Genetic U-Net: automatically designing lightweight U-shaped CNN architectures using the genetic algorithm for retinal vessel segmentation
Rodrigues et al. DermaDL: advanced convolutional neural networks for automated melanoma detection
Iqbal et al. LDMRes-Net: Enabling real-time disease monitoring through efficient image segmentation
Zhang et al. Mixed‐decomposed convolutional network: A lightweight yet efficient convolutional neural network for ocular disease recognition
Jarosik et al. The feasibility of deep learning algorithms integration on a GPU-based ultrasound research scanner
Li et al. Design of an incremental music Teaching and assisted therapy system based on artificial intelligence attention mechanism
Song et al. Classification of cervical lesion images based on CNN and transfer learning
CN111275720A (en) Full end-to-end small organ image identification method based on deep learning
Ghosh et al. Improved Gastrointestinal Screening: Deep Features using Stacked Generalization
CN110969117A (en) Fundus image segmentation method based on Attention mechanism and full convolution neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant