CN113139627B - Mediastinal lump identification method, system and device

Info

Publication number
CN113139627B
Authority
CN
China
Prior art keywords
slices
image
window
attention
features
Prior art date
Legal status
Active
Application number
CN202110691215.2A
Other languages
Chinese (zh)
Other versions
CN113139627A (en)
Inventor
杜强
高泽宾
郭雨晨
聂方兴
Current Assignee
Beijing Xiao Bai Century Network Technology Co ltd
Original Assignee
Beijing Xiao Bai Century Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiao Bai Century Network Technology Co ltd
Priority to CN202110691215.2A
Publication of CN113139627A
Application granted
Publication of CN113139627B
Legal status: Active
Anticipated expiration

Classifications

    • G06F18/253 Pattern recognition; fusion techniques of extracted features
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; summarisation; mappings, e.g. subspace methods
    • G06N3/045 Neural network architectures; combinations of networks
    • G06T7/0012 Image analysis; biomedical image inspection
    • G06T7/13 Segmentation; edge detection
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06V10/30 Image preprocessing; noise filtering
    • G06T2207/10081 Image acquisition modality: computed x-ray tomography [CT]
    • G06T2207/20081 Special algorithmic details: training; learning
    • G06T2207/20084 Special algorithmic details: artificial neural networks [ANN]
    • G06T2207/30096 Subject of image: tumor; lesion
    • G06V2201/032 Recognition of patterns in medical or anatomical images of protuberances, polyps, nodules, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses a mediastinal mass identification method, system and device. The method comprises the following steps: S1, preprocessing a CT image of the mediastinal tumor; S2, taking a plurality of consecutive slices of the preprocessed CT image and processing each slice into a matrix under a plurality of window width/window level settings; and S3, before inputting the slices processed into matrices under the plurality of window width/window level settings into a 2.5D UNet with a two-level self-attention mechanism, performing grouped convolution on the slices, and then inputting them into the two-level self-attention module for fusion to obtain the recognition result. The method uses deep learning combined with an attention mechanism and exploits the advantages of both 2D and 3D, so the model achieves high accuracy and fast inference; in addition, because the training database is large, generalization performance can be ensured.

Description

Mediastinal lump identification method, system and device
Technical Field
The invention relates to the field of artificial intelligence, and in particular to a mediastinal mass identification method, system and device.
Background
Medically, the mediastinum refers to the region behind the sternum, in front of the spine, bounded above by the neck and below by the diaphragm. It contains the heart, thymus, some lymph nodes and part of the airway (trachea), among other structures, but excludes the lungs. Tumors growing in the mediastinal region can be classified into anterior, middle and posterior mediastinal tumors.
Mediastinal tumors comprise multiple disease types. Taking thymic tumors as an example, about 90% of thymic tumors are thymomas, and the rest are thymic carcinomas, lymphomas, carcinoids and so on. Mediastinal tumors are characterized by a low incidence: thymomas, for example, account for less than 1% of all adult malignant tumors. They are also characterized by their location: thymomas account for about 30% of adult anterior mediastinal tumors, and according to a National Cancer Institute report, the incidence of thymic tumors in the United States is 0.15 per 100,000. Consequently, few samples are available for study.
Imaging examination can assist the physician in diagnosing mediastinal masses. In the case of thymoma, approximately 80% of thymoma patients exhibit a mediastinal contour abnormality or bulge on the frontal chest radiograph. Contrast-enhanced chest CT is the preferred imaging examination for mediastinal tumors before diagnosis: it can display not only the size, density and margin of a lesion, but also the relationship between the lesion and the surrounding thoracic organs, including the great vessels, lungs, pericardium, heart and pleura. In an enhanced CT sequence, blood vessels and similar structures appear as high density whereas mediastinal tumors appear as low density, making them easy to identify.
Given these characteristics, CT can help doctors diagnose mediastinal tumors quickly, but two difficulties remain. First, because the incidence is low, few image samples are available for learning, and since individual subtypes account for only a small fraction of cases, samples per subtype are fewer still. Second, the disease comprises many subtypes. Diagnosing mediastinal tumors from CT is therefore a great challenge for less experienced imaging physicians, and missed diagnoses are likely to occur.
With the development of computer and digital image processing techniques, many computer image algorithms have emerged for processing CT images to assist physicians in diagnosing cancer. On one hand, a considerable portion of these algorithms are based on traditional machine learning, and the resulting diagnostic accuracy is limited; on the other hand, the low incidence of mediastinal masses makes learnable samples difficult to collect, so generalization performance is limited.
Disclosure of Invention
The invention aims to provide a mediastinal mass identification method, system and device, in order to address the problems of existing mediastinal mass identification methods.
The invention provides a mediastinal tumor identification method based on a two-level self-attention mechanism and 2.5D UNet, comprising the following steps:
S1, preprocessing the CT image of the mediastinal tumor;
S2, taking a plurality of consecutive slices of the preprocessed CT image, and processing each slice into a matrix under a plurality of window width/window level settings;
S3, before inputting the slices processed into matrices under the plurality of window width/window level settings into the 2.5D UNet with the two-level self-attention mechanism, performing grouped convolution on the slices, and then inputting them into the two-level self-attention module for fusion to obtain the recognition result.
The invention also provides a mediastinal tumor identification system based on a two-level self-attention mechanism and 2.5D UNet, comprising:
a preprocessing module: for preprocessing the CT image of the mediastinal tumor;
a slicing module: for taking a plurality of consecutive slices of the preprocessed CT image and processing each slice into a matrix under a plurality of window width/window level settings;
a fusion module: for performing grouped convolution on the slices processed into matrices under the plurality of window width/window level settings before inputting them into the 2.5D UNet with the two-level self-attention mechanism, and then inputting them into the two-level self-attention module for fusion to obtain the recognition result.
The embodiment of the invention also provides a mediastinal tumor identification system based on a two-level self-attention mechanism and 2.5D UNet, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program implementing the steps of the above method when executed by the processor.
The embodiment of the invention also provides a computer-readable storage medium on which an implementation program for information transmission is stored; when executed by a processor, the program implements the steps of the above method.
By adopting the embodiment of the invention, an algorithm that exploits both 2D and 3D information is developed in combination with an attention mechanism, realizing the segmentation of mediastinal tumors. Because deep learning is combined with an attention mechanism and the advantages of both 2D and 3D are exploited, the model achieves high accuracy and fast inference; in addition, because the training database is large, generalization performance can be ensured.
The foregoing description is only an overview of the technical solutions of the present invention. In order that the technical means of the present invention may be more clearly understood, and that the above and other objects, features and advantages of the present invention may be more readily apparent, embodiments of the invention are described below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. Obviously, the drawings in the following description show some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a flowchart of the mediastinal mass identification method based on a two-level self-attention mechanism and 2.5D UNet according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the network structure of the method;
FIG. 3 is a schematic diagram of the grouped convolution of the method;
FIG. 4 is a schematic diagram of the inter-slice position attention of the method;
FIG. 5 is a schematic diagram of the inter-slice fusion attention of the method;
FIG. 6 is a schematic diagram of the mediastinal mass identification system based on a two-level self-attention mechanism and 2.5D UNet according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of the mediastinal mass identification device based on a two-level self-attention mechanism and 2.5D UNet according to an embodiment of the present invention.
Description of reference numerals:
610: preprocessing module; 620: slicing module; 630: fusion module.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments, and it should be understood that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like, indicate orientations and positional relationships based on those shown in the drawings, and are used only for convenience of description and simplicity of description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be considered as limiting the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of the described features. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise. Furthermore, the terms "mounted", "connected" and "coupled" are to be construed broadly and may, for example, denote fixed, detachable or integral connections; mechanical or electrical connections; direct connections or indirect connections through intervening media; or internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
Method embodiment
According to an embodiment of the present invention, a mediastinal mass identification method based on a two-level self-attention mechanism and 2.5D UNet is provided. FIG. 1 is a flowchart of this method; as shown in FIG. 1, it specifically includes:
S1, preprocessing the CT image of the mediastinal tumor;
S1 specifically includes:
resampling the image to a uniform resolution, graying the resampled image, applying the Otsu algorithm to the grayscale image to obtain a binary image, applying a morphological opening operation to the binary image to remove noise, and performing edge detection on the denoised binary image to obtain the body region, thereby removing most irrelevant regions in the image.
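For illustration only, the preprocessing flow above could be sketched in Python with OpenCV as follows; the function name, kernel size and cropping strategy are assumptions made for this sketch, not details given in the patent:

```python
import cv2
import numpy as np

def extract_body_region(ct_slice_hu: np.ndarray) -> np.ndarray:
    """Sketch of the S1 flow: grayscale -> Otsu -> opening -> edge/contour -> crop."""
    # Normalize HU values to 8-bit grayscale so Otsu thresholding can be applied.
    gray = cv2.normalize(ct_slice_hu, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Otsu's algorithm picks the binarization threshold automatically.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Morphological opening removes small noise specks from the binary mask.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))  # assumed kernel size
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    # Contour extraction yields the body outline; keep the largest contour.
    contours, _ = cv2.findContours(opened, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    body = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(body)
    # Crop away most of the irrelevant area outside the body.
    return ct_slice_hu[y:y + h, x:x + w]
```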
S2, taking a plurality of consecutive slices of the preprocessed CT image, and processing each slice into a matrix under a plurality of window width/window level settings;
S2 specifically includes:
taking a plurality of consecutive slices of the preprocessed CT image and processing each slice into a matrix under 2 window width/window level settings, obtaining W × H × (2 × n) data, where W and H denote the width and height of the matrix, 2 denotes the 2 window width/window level settings, and n denotes the n consecutive slices.
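As a sketch of this step, the following code stacks n consecutive slices under two window width/window level pairs; the specific mediastinal and lung window values are assumptions, since the patent does not state them:

```python
import numpy as np

def apply_window(slice_hu: np.ndarray, center: float, width: float) -> np.ndarray:
    """Clip HU values to [center - width/2, center + width/2] and scale to [0, 1]."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return (np.clip(slice_hu, lo, hi) - lo) / (hi - lo)

def build_input(slices_hu: list) -> np.ndarray:
    """Stack n consecutive slices under 2 window settings into an H x W x (2*n) array."""
    windows = [(40.0, 350.0), (-600.0, 1500.0)]  # assumed mediastinal and lung windows
    channels = [apply_window(s, c, w) for s in slices_hu for (c, w) in windows]
    return np.stack(channels, axis=-1)
```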
S3, before inputting the slices processed into matrices under the plurality of window width/window level settings into the 2.5D UNet with the two-level self-attention mechanism, performing grouped convolution on the slices, and then inputting them into the two-level self-attention module for fusion to obtain the recognition result.
S3 specifically includes:
before inputting the slices processed into matrices under the plurality of window width/window level settings into the 2.5D UNet with the two-level self-attention mechanism, grouping the slices along the channel dimension into C groups, first feeding the grouped slices into the inter-slice position attention structure to extract features, and then feeding the result into the fusion attention structure for fusion to obtain the recognition result.
Feeding the grouped slices into the inter-slice position attention structure to extract features specifically comprises: using a strided convolution to reduce the feature map from H × W × C to N × C resolution, where N equals the smallest feature-map size in the whole network structure; then using three weight matrices to extract a Query feature vector, a Key feature vector and a Value feature vector from the reduced features; performing matrix multiplication between the Query and Key feature vectors to obtain influence factors between features at different positions; and multiplying the result, as a weighting factor, with the Value feature vector to obtain features weighted by position information.
Feeding into the fusion attention structure for fusion to obtain the recognition result specifically comprises: dividing the feature map into patches and flattening the features in each patch into 1-dimensional vectors, so that each channel yields N × N feature vectors; inputting all feature vectors into a linear mapper for mapping; feeding the vectors corresponding to patches at the same position in different channels, as one group of features, into a 1D linear mapping structure to obtain self-attention-enhanced features across the different channels at the i-th position; concatenating the obtained results into a feature map and multiplying it, as a weighting factor, with the original features, thereby fusing features between different channels; and then realizing the fusion of features between different spatial positions of the slices through the position attention structure.
In the present example, 890 mediastinal-mass cases were collected, each containing at least one contrast-enhanced sequence. They were divided into a training set and a test set:
Training set: 703 CT sequences from 703 patients;
Test set: 187 CT sequences from 187 patients.
For each case, two experts used an annotation tool to delineate the boundary of the lesion region; the annotation results were stored in JSON format and then processed into segmentation-level labels.
The working process of the mediastinal mass image segmentation method provided by the embodiment of the invention is as follows. The data are first preprocessed and augmented. Preprocessing consists of cropping and resampling: cropping extracts the lung region from the overall image, and resampling brings the different types of image data to a spatial resolution of 1 mm × 1 mm. The data augmentation methods include pixel shift, flipping up-down or left-right, and rotation by an arbitrary angle. After preprocessing, the data are input into the deep learning network of the embodiment of the invention for feature extraction. To use the 3D spatial information of CT, the input is not a single image but n consecutive slices.
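A sketch of the augmentations listed above (pixel shift, flips, arbitrary-angle rotation), assuming NumPy/SciPy; the shift range and probabilities are illustrative choices, not values from the patent:

```python
import numpy as np
from scipy import ndimage

def augment(volume: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Randomly shift, flip and rotate an H x W x C stack of windowed slices."""
    if rng.random() < 0.5:
        volume = np.flip(volume, axis=0)            # up-down flip
    if rng.random() < 0.5:
        volume = np.flip(volume, axis=1)            # left-right flip
    dy, dx = rng.integers(-10, 11, size=2)          # pixel shift (assumed range)
    volume = np.roll(volume, shift=(int(dy), int(dx)), axis=(0, 1))
    angle = rng.uniform(0.0, 360.0)                 # rotation by an arbitrary angle
    volume = ndimage.rotate(volume, angle, axes=(0, 1), reshape=False, order=1)
    return volume

# Usage: augmented = augment(stack, np.random.default_rng(0))
```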
The embodiment of the invention provides a mediastinal tumor CT image identification method based on a two-level self-attention mechanism and 2.5D UNet; the specific implementation is as follows:
image preprocessing:
Because the data span a long time period and come from different types of equipment in the same hospital, the spatial resolution of the CT images is inconsistent. For the convenience of network training, the images are first resampled to a spatial resolution of 1 mm × 1 mm, and then processed with image-graphics methods to obtain the body region. Specifically,
the image-graphics pipeline is as follows: gray the image, obtain a binary image with the Otsu algorithm, apply a morphological opening to the binary image to remove noise, and perform edge detection to obtain the body region. Most irrelevant regions in the image can be removed after this processing.
Processing an input image:
In the embodiment of the invention, in order to better exploit both the fast processing of 2D images and the information gain from the 3D spatial structure, 2D processing is combined with attention. The image input to the model therefore requires some processing, including data augmentation and multi-channel stacking. The purpose of data augmentation is to improve the performance of the model, in particular its edge segmentation. The augmentation operations include up-down flipping, left-right flipping, rotation by an arbitrary angle, pixel shift, and the like. To utilize 3D spatial information, the input comprises a plurality of consecutive slices; further, each slice is processed into a matrix under 2 window width/window level settings, so the final input data is W × H × (2 × n), where W and H denote the width and height of the image, 2 denotes the 2 window width/window level settings, and n denotes the n consecutive slices.
Extracting and segmenting image features:
UNet is a widely adopted image semantic segmentation method, and many improved versions of UNet have been proposed. Aiming at the problems that 2D UNet cannot exploit 3D spatial information, causing false-positive segmentations and misses, and that 3D UNet requires a large amount of computing resources, the embodiment of the invention designs a 2.5D UNet mediastinal mass identification method combined with a self-attention mechanism (2.5D-DLSAUNet).
FIG. 2 is a schematic diagram of the network structure of the method; as shown in FIG. 2:
The common basic unit of UNet is Conv2D-BN-ReLU-Conv2D-BN-ReLU, with skip connections fusing features and up-sampling and down-sampling extracting features. The overall framework of the embodiment of the invention is consistent with UNet, i.e., the data flow is built from up-/down-sampling, skip connections and the like, but the basic unit is GroupConv-BN-ReLU-Dual Level Self Attention-BN-ReLU. The Dual Level Self Attention module is designed in the embodiment of the invention for 3D spatial feature fusion.
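A structural sketch of this basic unit in PyTorch follows; the class name is illustrative, and the Dual Level Self Attention module is stood in by a placeholder (it is sketched after FIGS. 4 and 5 below):

```python
from typing import Optional

import torch
import torch.nn as nn

class BasicUnit(nn.Module):
    """Sketch of the GroupConv-BN-ReLU-DualLevelSelfAttention-BN-ReLU unit."""
    def __init__(self, channels: int, groups: int, attention: Optional[nn.Module] = None):
        super().__init__()
        # Grouped convolution keeps per-slice feature extraction separate (see FIG. 3);
        # channels must be divisible by groups.
        self.group_conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=groups)
        self.bn1 = nn.BatchNorm2d(channels)
        # Placeholder for the Dual Level Self Attention module described below.
        self.attention = attention if attention is not None else nn.Identity()
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.relu(self.bn1(self.group_conv(x)))
        return self.relu(self.bn2(self.attention(x)))
```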
FIG. 3 is a schematic diagram of the grouped convolution of the method; as shown in FIG. 3:
Because the input to the 2.5D-DLSAUNet is CT image data of multiple slices, directly using normal convolution would mix the features of different slices, and experimental results show that the deep learning model then cannot exploit the 3D spatial information well. The embodiment of the invention therefore adopts grouped convolution, whose structure is shown in FIG. 3. Grouped convolution groups the input features or images along the channel dimension, so that the feature extraction processes of different slices do not interfere with each other. The features extracted by grouped convolution are input into the Dual Level Attention module, which comprises an Internal Slices Location Attention and a Fusion Attention.
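In PyTorch this grouping behavior comes directly from the `groups` argument of `nn.Conv2d`; a minimal illustration, with an assumed n of 5 slices contributing 2 window channels each:

```python
import torch
import torch.nn as nn

n_slices = 5                      # n consecutive slices (assumed value)
in_ch = 2 * n_slices              # 2 window settings per slice
x = torch.randn(1, in_ch, 256, 256)

# groups=n_slices: each slice's 2 channels are convolved by its own filter bank,
# so features of different slices are not mixed before the attention module.
gconv = nn.Conv2d(in_ch, 16 * n_slices, kernel_size=3, padding=1, groups=n_slices)
print(gconv(x).shape)             # torch.Size([1, 80, 256, 256])
```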
FIG. 4 is a schematic diagram of the inter-slice position attention of the method; as shown in FIG. 4:
The Dual Level Attention module designed in the embodiment of the invention has a two-level structure. The first level is the Internal Slices Location Attention structure; the second level is the Fusion Attention structure. The Internal Slices Location Attention structure adopts the idea of the Transformer, as shown in FIG. 4. The Transformer consumes substantial computing resources when the feature map is large, so for large feature maps a strided convolution first reduces the feature map from H × W × C to N × C resolution, where N equals the smallest feature-map size in the whole network structure (i.e., N in FIG. 1). Three weight matrices then extract a Query feature vector, a Key feature vector and a Value feature vector from the reduced features. The Query and Key feature vectors are matrix-multiplied to obtain influence factors between features at different positions, and the result, used as a weighting factor, is multiplied with the Value feature vector to obtain features weighted by position information.
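A minimal PyTorch sketch of such a position attention block follows. It assumes the reduced map is N × N spatially before being flattened into position vectors, and the kernel choice and bilinear upsampling back to the input resolution are illustrative, not specified by the patent:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocationAttention(nn.Module):
    """Sketch: strided-conv downsampling followed by single-head Q/K/V self-attention."""
    def __init__(self, channels: int, in_size: int, n: int):
        super().__init__()
        stride = in_size // n  # assumes in_size is a multiple of n
        # Strided convolution reduces the map so attention cost stays bounded.
        self.down = nn.Conv2d(channels, channels, kernel_size=stride, stride=stride)
        # Three weight matrices extract Query, Key and Value vectors.
        self.q = nn.Linear(channels, channels)
        self.k = nn.Linear(channels, channels)
        self.v = nn.Linear(channels, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        z = self.down(x)                                  # B x C x N x N
        n = z.shape[-1]
        z = z.flatten(2).transpose(1, 2)                  # B x N^2 x C, one vector per position
        q, k, v = self.q(z), self.k(z), self.v(z)
        # Query x Key^T: influence factors between features at different positions.
        attn = torch.softmax(q @ k.transpose(1, 2) / c ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, c, n, n)  # position-weighted features
        # Return to the input resolution (upsampling choice is an assumption).
        return F.interpolate(out, size=(h, w), mode="bilinear", align_corners=False)
```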
Because grouped convolution is used, the features of different slices need to be fused in some manner to better utilize the 3D spatial information. Therefore, after the first-level Internal Slices Location Attention structure, the features are sent to the second-level Fusion Attention structure.
FIG. 5 is a schematic diagram of the inter-slice fusion attention of the method; as shown in FIG. 5:
Similar to the first level, in order to reduce computational resource consumption on high-resolution feature maps, the feature map is divided into patches; after the features in each patch are flattened into 1-dimensional vectors, N × N feature vectors are obtained for each channel. All feature vectors are input into a Linear Projection for mapping. To fuse 3D spatial information, the features of different channels need to be fused, so in the embodiment of the invention the vectors corresponding to the Patch at the same position in different channels are fed as one group into a 1D Transformer structure; as shown in FIG. 5, the vectors at the i-th position of channels 1, 2 and 3 form one group of inputs, yielding self-attention-enhanced features across the channels at the i-th position. The obtained results are concatenated into a feature map, which is multiplied with the original features as a weighting factor. This completes feature fusion between different channels, i.e., between different slices, exploiting the information of the 3D space; the coupling of features between different spatial positions in 3D space is then realized indirectly through the Location Attention structure.
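A sketch of this Fusion Attention level under stated assumptions: patch size and projection width are free parameters, single-head attention stands in for the 1D Transformer, and the final sigmoid normalization of the weighting map is an assumption of this sketch:

```python
import torch
import torch.nn as nn

class FusionAttention(nn.Module):
    """Attend across channels at each patch position, then reweight the original features."""
    def __init__(self, channels: int, patch: int, dim: int = 64):
        super().__init__()
        self.patch = patch
        # Linear Projection: flatten each patch to a 1D vector and map it to a common dim.
        self.proj = nn.Linear(patch * patch, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
        self.out = nn.Linear(dim, patch * patch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        p = self.patch
        n = h // p                                           # patches per side (assumes h == w)
        t = x.unfold(2, p, p).unfold(3, p, p)                # B x C x n x n x p x p
        t = t.reshape(b, c, n * n, p * p)                    # flatten each patch to a 1D vector
        t = self.proj(t)                                     # B x C x n^2 x dim
        # Group the vectors of the same patch position across channels.
        t = t.permute(0, 2, 1, 3).reshape(b * n * n, c, -1)  # (B*n^2) x C x dim
        t, _ = self.attn(t, t, t)                            # self-attention over the C channels
        # Map back to patch pixels and stitch the weighting map together.
        w_map = self.out(t).reshape(b, n, n, c, p, p)
        w_map = w_map.permute(0, 3, 1, 4, 2, 5).reshape(b, c, h, w)
        # Multiply with the original features as a weighting factor (sigmoid is assumed).
        return x * torch.sigmoid(w_map)
```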
The training hyper-parameters and strategy are set as follows:
different hyper-parameters lead to different model performance, and manual tuning depends on the experience of algorithm engineers, so the hyper-parameters are searched automatically by grid search. The searched hyper-parameters include the initial learning rate, the weight decay parameter, and the like.
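Such a grid search could be sketched as follows; the candidate values and the `train_and_evaluate` helper are hypothetical, since the patent names only the searched hyper-parameters, not their ranges:

```python
import random
from itertools import product

def train_and_evaluate(lr: float, weight_decay: float) -> float:
    """Hypothetical stand-in for a full training run; returns a validation Dice score."""
    return random.random()  # placeholder; a real run would train 2.5D-DLSAUNet here

learning_rates = [1e-4, 3e-4, 1e-3]   # assumed candidate values
weight_decays = [0.0, 1e-5, 1e-4]     # assumed candidate values

best = None
for lr, wd in product(learning_rates, weight_decays):
    score = train_and_evaluate(lr, wd)
    if best is None or score > best[0]:
        best = (score, lr, wd)
print("best (dice, lr, weight_decay):", best)
```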
A total of 100 epochs were trained, using two losses, a Dice loss (Equation 1) and a binary cross-entropy loss (Equation 2):

diceloss = 1 - 2 * Σ_i (p_i * g_i) / (Σ_i p_i + Σ_i g_i)  (Equation 1)

bceloss = -(1/M) * Σ_i [g_i * log(p_i) + (1 - g_i) * log(1 - p_i)]  (Equation 2)

where p_i denotes the predicted probability and g_i the ground-truth label of pixel i, and M is the number of pixels.
Specifically, the loss function for the first 10 epochs is:
loss = 0.001 × diceloss + bceloss  (Equation 3);
and the loss function for the last 90 epochs is:
loss = 0.1 × diceloss + bceloss  (Equation 4).
During training, the AdaBound optimizer is used.
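A sketch of this two-phase loss schedule, with Dice and BCE in their standard forms; the AdaBound usage shown in the comment assumes the third-party `adabound` package:

```python
import torch
import torch.nn as nn

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss in its standard form (Equation 1), on probabilities in [0, 1]."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

bce_loss = nn.BCELoss()  # binary cross-entropy (Equation 2)

def combined_loss(pred: torch.Tensor, target: torch.Tensor, epoch: int) -> torch.Tensor:
    # First 10 epochs: loss = 0.001 * diceloss + bceloss (Equation 3);
    # last 90 epochs:  loss = 0.1 * diceloss + bceloss (Equation 4).
    weight = 0.001 if epoch < 10 else 0.1
    return weight * dice_loss(pred, target) + bce_loss(pred, target)

# Optimizer, assuming the third-party `adabound` package:
#   import adabound
#   optimizer = adabound.AdaBound(model.parameters(), lr=1e-3, final_lr=0.1)
```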
Evaluation index
The experimental results were evaluated using the Dice score, whose formula is:

Dice = 2|X ∩ Y| / (|X| + |Y|)

where X denotes the predicted segmentation and Y the ground-truth segmentation.
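Computed on binary masks, the Dice score can be sketched as:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|X ∩ Y| / (|X| + |Y|) on binary masks (X: prediction, Y: ground truth)."""
    inter = np.logical_and(pred > 0, truth > 0).sum()
    denom = (pred > 0).sum() + (truth > 0).sum()
    return 2.0 * float(inter) / float(denom) if denom else 1.0
```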
The embodiment of the invention, combining an attention mechanism, develops an algorithm that exploits both 2D and 3D information, thereby realizing the segmentation of mediastinal masses. Because deep learning is combined with an attention mechanism and the advantages of both 2D and 3D are exploited, the model achieves high accuracy and fast inference; in addition, because the training database is large, generalization performance can be ensured.
System embodiment
According to an embodiment of the present invention, a mediastinal mass CT image recognition system based on a two-level self-attention mechanism and 2.5D UNet is provided. FIG. 6 is a schematic diagram of this system; as shown in FIG. 6, the system specifically includes:
The preprocessing module 610: for preprocessing the CT image of the mediastinal tumor.
The preprocessing module 610 is specifically configured to:
resample the image to a uniform resolution, gray the resampled image, apply the Otsu algorithm to the grayscale image to obtain a binary image, apply a morphological opening operation to the binary image to remove noise, and perform edge detection on the denoised binary image to obtain the body region, thereby removing most irrelevant regions in the image.
The slicing module 620: for taking a plurality of consecutive slices of the preprocessed CT image and processing each slice into a matrix under a plurality of window width/window level settings.
The slicing module is specifically configured to: take a plurality of consecutive slices of the preprocessed CT image and process each slice into a matrix under 2 window width/window level settings, obtaining W × H × (2 × n) data, where W and H denote the width and height of the matrix, 2 denotes the 2 window width/window level settings, and n denotes the n consecutive slices.
The fusion module 630: for performing grouped convolution on the slices processed into matrices under the plurality of window width/window level settings before inputting them into the 2.5D UNet with the two-level self-attention mechanism, and then inputting them into the two-level self-attention module for fusion to obtain the recognition result.
The fusion module 630 is specifically configured to:
before inputting the slices processed into matrices under the plurality of window width/window level settings into the 2.5D UNet with the two-level self-attention mechanism, group the slices along the channel dimension into C groups, first feed the grouped slices into the inter-slice position attention structure to extract features, and then feed the result into the fusion attention structure for fusion to obtain the recognition result.
The fusion module 630 is further configured to:
use a strided convolution to reduce the feature map from H × W × C to N × C resolution, where N equals the smallest feature-map size in the whole network structure; then use three weight matrices to extract a Query feature vector, a Key feature vector and a Value feature vector from the reduced features; perform matrix multiplication between the Query and Key feature vectors to obtain influence factors between features at different positions; and multiply the result, as a weighting factor, with the Value feature vector to obtain features weighted by position information.
The fusion module 630 is further configured to:
divide the feature map into patches and flatten the features in each patch into 1-dimensional vectors, so that each channel yields N × N feature vectors; input all feature vectors into a linear mapper for mapping; feed the vectors corresponding to patches at the same position in different channels, as one group of features, into a 1D linear mapping structure to obtain self-attention-enhanced features across the different channels at the i-th position; concatenate the obtained results into a feature map and multiply it, as a weighting factor, with the original features, thereby fusing features between different channels; and then realize the fusion of features between different spatial positions of the slices through the position attention structure.
This system embodiment corresponds to the above method embodiment; the specific operation of each module can be understood with reference to the description of the method embodiment and is not repeated here.
Device embodiment II
The embodiment of the present invention provides a mediastinal tumor CT image recognition device based on a two-level self-attention mechanism and 2.5D UNet, as shown in FIG. 7, comprising: a memory 70, a processor 72 and a computer program stored on the memory 70 and executable on the processor 72, the computer program, when executed by the processor, implementing the steps of the above method embodiment.
Device embodiment III
The embodiment of the present invention provides a computer-readable storage medium on which an implementation program for information transmission is stored; when executed by the processor 72, the program implements the steps of the above method embodiment.
The computer-readable storage medium of this embodiment includes, but is not limited to: ROM, RAM, magnetic or optical disks, and the like.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. Alternatively, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device; in some cases, the steps shown or described may be performed in an order different from that described herein. They may also be fabricated separately as individual integrated circuit modules, or multiple of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, without departing from the scope of the corresponding technical solutions.

Claims (6)

1. A mediastinal tumor identification method based on a two-level self-attention mechanism and 2.5D UNet, characterized by comprising the following steps:
S1, preprocessing the CT image of the mediastinal tumor;
S2, taking a plurality of consecutive slices of the preprocessed CT image, and processing each slice into a matrix under a plurality of window width/window level settings;
S3, before inputting the slices processed into matrices under the plurality of window width/window level settings into the 2.5D UNet with the two-level self-attention mechanism, performing grouped convolution on the slices, and then inputting them into the two-level self-attention module for fusion to obtain the recognition result;
wherein S3 specifically includes:
before inputting the slices processed into matrices under the plurality of window width/window level settings into the 2.5D UNet with the two-level self-attention mechanism, grouping the slices along the channel dimension into C groups, first feeding the grouped slices into the inter-slice position attention structure to extract features, and then feeding the result into the fusion attention structure for fusion to obtain the recognition result;
wherein feeding the grouped slices into the inter-slice position attention structure to extract features specifically comprises: using a strided convolution to reduce the feature map from H × W × C to N × C resolution, where N equals the smallest feature-map size in the whole network structure; then using three weight matrices to extract a Query feature vector, a Key feature vector and a Value feature vector from the reduced features; performing matrix multiplication between the Query and Key feature vectors to obtain influence factors between features at different positions; and multiplying the result, as a weighting factor, with the Value feature vector to obtain features weighted by position information;
and wherein the fusion comprises: dividing the feature map into patches and flattening the features in each patch into 1-dimensional vectors, so that each channel yields N × N feature vectors; inputting all feature vectors into a linear mapper for mapping; feeding the vectors corresponding to patches at the same position in different channels, as one group of features, into a 1D linear mapping structure to obtain self-attention-enhanced features across the different channels at the i-th position; concatenating the obtained results into a feature map and multiplying it, as a weighting factor, with the original features, thereby fusing features between different channels; and then realizing the fusion of features between different spatial positions of the slices through the position attention structure.
2. The mediastinal tumor identification method based on a two-level self-attention mechanism and 2.5D UNet according to claim 1, wherein S1 specifically comprises:
resampling the image to a uniform resolution, graying the resampled image, applying the Otsu algorithm to the grayscale image to obtain a binary image, applying a morphological opening operation to the binary image to remove noise, and performing edge detection on the denoised binary image to obtain the body region, thereby removing most irrelevant regions in the image.
3. The mediastinal tumor identification method based on a two-level self-attention mechanism and 2.5D UNet according to claim 1, wherein S2 specifically comprises:
taking a plurality of consecutive slices of the preprocessed CT image and processing each slice into a matrix under 2 window width/window level settings, obtaining W × H × (2 × n) data, where W and H denote the width and height of the matrix, 2 denotes the 2 window width/window level settings, and n denotes the n consecutive slices.
4. A 2.5DUNet mediastinal mass identification system based on a two-stage self-attentive mechanism, comprising:
a preprocessing module: for preprocessing the CT image of mediastinal tumors;
a slicing module: taking a plurality of continuous slices from the preprocessed CT image, and processing each slice into a matrix with a plurality of window widths and window levels;
a fusion module: before inputting a plurality of slices processed into a matrix with a plurality of window widths and window levels into a 2.5DUNet of a two-stage self-attention mechanism, carrying out grouped convolution on the plurality of slices, and then inputting the slices into a two-stage self-attention mechanism module for fusion to obtain an identification result;
the preprocessing module is specifically configured to:
resampling the image to uniform resolution, graying the resampled image, obtaining a binary image by adopting an Otsu algorithm on the image after the gray scale, carrying out open operation processing on the binary image to process noise, carrying out edge detection on the binary image after the noise is processed to obtain a body region range, and removing most irrelevant regions in the image;
the slicing module is specifically configured to: taking a plurality of continuous slices of the preprocessed CT image, and processing each slice into a matrix with 2 window-width window levels to obtain W multiplied by H (2 multiplied by n) data, wherein W and H represent the width and height of the matrix, 2 represents 2 window-width window levels, and n represents n continuous slices;
the fusion module is specifically configured to:
before inputting a plurality of slices processed into a matrix with a plurality of window widths and window levels into a 2.5DUNet of a two-stage self-attention mechanism, grouping the obtained plurality of slices according to channel dimensions to obtain C groups, inputting the grouped slices into an inter-slice position attention structure for extracting characteristics, and then inputting the inter-slice position attention structure for fusion to obtain a recognition result;
using convolution with step length to process the characteristic diagram from H multiplied by W multiplied by C to N multiplied by C resolution, wherein N is the same as the resolution of the minimum characteristic diagram size in the whole network structure, then respectively using three weights to extract a Query characteristic vector, a Key characteristic vector and a Value characteristic vector for the characteristics of the N multiplied by C resolution, carrying out matrix multiplication on the Query characteristic vector and the Key characteristic vector to obtain influence factors among the characteristics of different positions, and taking the result as a weighting factor to multiply with the Value characteristic vector to obtain the characteristics weighted by the position information;
dividing the feature map into grids, expanding the features in the grids into 1-dimensional vectors, enabling each channel to obtain N multiplied by N feature vectors, inputting all the feature vectors into a linear mapper for mapping, sending the vectors corresponding to the grids at the same positions in different channels into a 1D linear mapping structure as a group of features so as to obtain features of the ith position and different channels after self-attention enhancement, splicing the obtained results to obtain the feature map, multiplying the feature map by the original features as a weighting factor, fusing the features among different channels, and then realizing the fusion of the features among different space positions of slices through a position attention structure.
5. A mediastinal mass identification device based on a two-level self-attention mechanism and 2.5D UNet, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the mediastinal tumor identification method based on a two-level self-attention mechanism and 2.5D UNet according to any one of claims 1 to 3.
6. A computer-readable storage medium on which an implementation program for information transfer is stored, the program, when executed by a processor, implementing the steps of the mediastinal tumor identification method based on a two-level self-attention mechanism and 2.5D UNet according to any one of claims 1 to 3.
CN202110691215.2A 2021-06-22 2021-06-22 Mediastinal lump identification method, system and device Active CN113139627B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110691215.2A (CN113139627B) | 2021-06-22 | 2021-06-22 | Mediastinal lump identification method, system and device

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202110691215.2A (CN113139627B) | 2021-06-22 | 2021-06-22 | Mediastinal lump identification method, system and device

Publications (2)

Publication Number Publication Date
CN113139627A CN113139627A (en) 2021-07-20
CN113139627B true CN113139627B (en) 2021-11-05

Family

ID=76815968

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202110691215.2A (Active, CN113139627B) | Mediastinal lump identification method, system and device | 2021-06-22 | 2021-06-22

Country Status (1)

Country Link
CN (1) CN113139627B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113869443A (en) * 2021-10-09 2021-12-31 新大陆数字技术股份有限公司 Jaw bone density classification method, system and medium based on deep learning

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108615002A (en) * 2018-04-22 2018-10-02 广州麦仑信息科技有限公司 A kind of palm vein authentication method based on convolutional neural networks
CN109509178B (en) * 2018-10-24 2021-09-10 苏州大学 OCT image choroid segmentation method based on improved U-net network
CN109871909B (en) * 2019-04-16 2021-10-01 京东方科技集团股份有限公司 Image recognition method and device
CN110309734A (en) * 2019-06-14 2019-10-08 暨南大学 A kind of microcirculation blood flow velocity measurement method and measuring system based on target identification
CN110689543A (en) * 2019-09-19 2020-01-14 天津大学 Improved convolutional neural network brain tumor image segmentation method based on attention mechanism
CN111597869A (en) * 2020-03-25 2020-08-28 浙江工业大学 Human activity recognition method based on grouping residual error joint space learning
CN111832620A (en) * 2020-06-11 2020-10-27 桂林电子科技大学 Image emotion classification method based on double-attention multilayer feature fusion
CN112150442A (en) * 2020-09-25 2020-12-29 帝工(杭州)科技产业有限公司 New crown diagnosis system based on deep convolutional neural network and multi-instance learning
CN112699950B (en) * 2021-01-06 2023-03-24 腾讯科技(深圳)有限公司 Medical image classification method, image classification network processing method, device and equipment

Patent Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109598727A (en) * 2018-11-28 2019-04-09 北京工业大学 A kind of CT image pulmonary parenchyma three-dimensional semantic segmentation method based on deep neural network
CN109886321A (en) * 2019-01-31 2019-06-14 南京大学 A kind of image characteristic extracting method and device for icing image fine grit classification
CN110189342A (en) * 2019-06-27 2019-08-30 中国科学技术大学 Glioma region automatic division method
CN110807752A (en) * 2019-09-23 2020-02-18 江苏艾佳家居用品有限公司 Image attention mechanism processing method based on convolutional neural network
CN110992352A (en) * 2019-12-13 2020-04-10 北京小白世纪网络科技有限公司 Automatic infant head circumference CT image measuring method based on convolutional neural network
CN111080657A (en) * 2019-12-13 2020-04-28 北京小白世纪网络科技有限公司 CT image organ segmentation method based on convolutional neural network multi-dimensional fusion
CN111340828A (en) * 2020-01-10 2020-06-26 南京航空航天大学 Brain glioma segmentation based on cascaded convolutional neural networks
CN111402219A (en) * 2020-03-11 2020-07-10 北京深睿博联科技有限责任公司 Old cerebral infarction detection method and device
CN111681219A (en) * 2020-06-03 2020-09-18 北京小白世纪网络科技有限公司 New coronary pneumonia CT image classification method, system and equipment based on deep learning
CN111754520A (en) * 2020-06-09 2020-10-09 江苏师范大学 Deep learning-based cerebral hematoma segmentation method and system
CN111738113A (en) * 2020-06-10 2020-10-02 杭州电子科技大学 Road extraction method of high-resolution remote sensing image based on double-attention machine system and semantic constraint
CN112017191A (en) * 2020-08-12 2020-12-01 西北大学 Method for establishing and segmenting liver pathology image segmentation model based on attention mechanism
CN112017192A (en) * 2020-08-13 2020-12-01 杭州师范大学 Glandular cell image segmentation method and system based on improved U-Net network
CN111985551A (en) * 2020-08-14 2020-11-24 湖南理工学院 Stereo matching algorithm based on multiple attention networks
CN111950643A (en) * 2020-08-18 2020-11-17 创新奇智(上海)科技有限公司 Model training method, image classification method and corresponding device
CN112102324A (en) * 2020-09-17 2020-12-18 中国科学院海洋研究所 Remote sensing image sea ice identification method based on depth U-Net model
CN112686903A (en) * 2020-12-07 2021-04-20 嘉兴职业技术学院 Improved high-resolution remote sensing image semantic segmentation model
CN112541909A (en) * 2020-12-22 2021-03-23 南开大学 Lung nodule detection method and system based on three-dimensional neural network of slice perception
CN112863081A (en) * 2021-01-04 2021-05-28 西安建筑科技大学 Device and method for automatic weighing, classifying and settling vegetables and fruits
CN112819831A (en) * 2021-01-29 2021-05-18 北京小白世纪网络科技有限公司 Segmentation model generation method and device based on convolution Lstm and multi-model fusion
CN112950651A (en) * 2021-02-02 2021-06-11 广州柏视医疗科技有限公司 Automatic delineation method of mediastinal lymph drainage area based on deep learning network
CN112784924A (en) * 2021-02-08 2021-05-11 宁波大学 Rib fracture CT image classification method based on grouping aggregation deep learning model

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Position and Channel Attention for Image Inpainting by Semantic Structure; Jingjun Qiu et al.; 2020 IEEE 32nd International Conference on Tools with Artificial Intelligence (ICTAI); 2020-12-24; pp. 1290-1295 *
Research on a breast tumor segmentation method based on a multi-scale dense network and two-level residual attention; Zhao Yin; China Master's Theses Full-text Database, Medicine & Health Sciences; 2021-02-15; Vol. 2021, No. 2; E072-1657 *
Research on pig face recognition under unconstrained conditions based on machine learning; Yan Hongwen; China Doctoral Dissertations Full-text Database, Agricultural Science & Technology; 2021-06-15; Vol. 2021, No. 6; pp. 6, 8 *

Also Published As

Publication number Publication date
CN113139627A (en) 2021-07-20

Similar Documents

Publication Publication Date Title
CN109389584A (en) Multiple dimensioned rhinopharyngeal neoplasm dividing method based on CNN
US20220092789A1 (en) Automatic pancreas ct segmentation method based on a saliency-aware densely connected dilated convolutional neural network
CN110047082A (en) Pancreatic Neuroendocrine Tumors automatic division method and system based on deep learning
US20040184646A1 (en) Method, apparatus, and program for judging images
CN111553892B (en) Lung nodule segmentation calculation method, device and system based on deep learning
CN111986177A (en) Chest rib fracture detection method based on attention convolution neural network
CN113239755B (en) Medical hyperspectral image classification method based on space-spectrum fusion deep learning
US20230005140A1 (en) Automated detection of tumors based on image processing
CN116681958B (en) Fetal lung ultrasonic image maturity prediction method based on machine learning
CN111383215A (en) Focus detection model training method based on generation of confrontation network
CN112508884A (en) Comprehensive detection device and method for cancerous region
CN110570419A (en) Method and device for acquiring characteristic information and storage medium
Yang et al. A multi-stage progressive learning strategy for COVID-19 diagnosis using chest computed tomography with imbalanced data
Cai et al. Identifying architectural distortion in mammogram images via a se-densenet model and twice transfer learning
CN113139627B (en) Mediastinal lump identification method, system and device
CN115205306A (en) Medical image segmentation method based on graph convolution
Souid et al. Xception-ResNet autoencoder for pneumothorax segmentation
CN112967254A (en) Lung disease identification and detection method based on chest CT image
CN115631387B (en) Method and device for predicting lung cancer pathology high-risk factor based on graph convolution neural network
CN114821176B (en) Viral encephalitis classification system for MR (magnetic resonance) images of children brain
CN116341620A (en) Efficient neural network architecture method and system based on ERetinaNet
CN115471512A (en) Medical image segmentation method based on self-supervision contrast learning
CN113017670A (en) Mediastinal lump identification method and device based on 3D UNet and storage medium
Paul et al. Computer-Aided Diagnosis Using Hybrid Technique for Fastened and Accurate Analysis of Tuberculosis Detection with Adaboost and Learning Vector Quantization
CN117636064B (en) Intelligent neuroblastoma classification system based on pathological sections of children

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant