CN112634285B - Method for automatically segmenting abdominal CT visceral fat area - Google Patents

Method for automatically segmenting abdominal CT visceral fat area

Info

Publication number
CN112634285B
CN112634285B (application CN202011542684.XA)
Authority
CN
China
Prior art keywords
image
attention
visceral fat
net network
data set
Prior art date
Legal status
Active
Application number
CN202011542684.XA
Other languages
Chinese (zh)
Other versions
CN112634285A (en)
Inventor
彭博
左昊
贾维
张傲
Current Assignee
Southwest Petroleum University
Original Assignee
Southwest Petroleum University
Priority date
Filing date
Publication date
Application filed by Southwest Petroleum University
Priority to CN202011542684.XA
Publication of CN112634285A
Application granted
Publication of CN112634285B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses a method for automatically segmenting the visceral fat region in abdominal CT, comprising the following steps: selecting clinical abdominal CT images as a data set; preprocessing the images in the data set; extracting the final visceral fat region images; constructing an Attention U-net network; training the Attention U-net network; and preprocessing an abdominal CT image to be predicted and inputting it into the trained Attention U-net network, whose output is the segmented image. The invention speeds up segmentation of the visceral fat region in abdominal CT, simplifies the segmentation steps, supports batch segmentation, greatly improves segmentation efficiency, simplifies the preliminary work of calculating abdominal visceral fat volume, and provides a better basis for subsequent visceral fat calculation.

Description

Method for automatically segmenting abdominal CT visceral fat area
Technical Field
The invention belongs to the technical field of medical image processing, and particularly relates to a method for automatically segmenting an abdominal CT visceral fat area.
Background
An increase in overall body fat, and in abdominal fat in particular, especially fat within the abdominal cavity, is associated with cardiovascular and metabolic diseases such as diabetes, hyperlipidemia, hypertension, insulin resistance, and hyperuricemia. Quantitative analysis of abdominal fat content therefore has important clinical value for the prevention and treatment of related cardiovascular diseases. Currently, abdominal fat is most commonly measured with CT, which can accurately quantify human adipose tissue (AT), in particular areas such as the abdominal-wall subcutaneous fat area (SA) and the visceral fat area (VA) in the abdominal cavity. At present, CT-based AT measurement is applied in many fields, including clinical nutrition, geriatric medicine, epidemiology, genetics, and especially endocrine metabolism and the cardiovascular system.
When measuring AT with CT, a region of interest (ROI) must be drawn to measure SA and VA, and in many previous studies this segmentation was done manually or with conventional segmentation methods. For example, the Siemens CT workstation (VE40D) uses a watershed-based segmentation method. When the watershed method is used to segment visceral fat and the abdominal-wall muscle layer is thin or discontinuous, the visceral fat and the abdominal-wall fat layer may be segmented together. Manual adjustment is then needed to delineate the visceral fat region, so the degree of automation is insufficient, the efficiency is low, and the approach is unsuitable for large-scale screening.
Deep learning is now widely applied to medical image segmentation. Long et al. first proposed the fully convolutional network (FCN), which achieves end-to-end image segmentation by extending image-level classification to pixel-level classification and replacing the fully connected layers of a classification network with convolutional layers. However, FCN results are not fine enough: the network does not model the correlation between pixels and is not sensitive to image details. Ronneberger et al. proposed the U-net network, which fuses high-level and shallow-level image information through concatenation operations between the encoder and decoder, avoiding the loss of high-level semantic information and achieving good results on several medical image segmentation tasks; U-net also makes effective use of the training data set, easing the demand for samples. Milletari et al. proposed the voxel-based fully convolutional V-net network for three-dimensional medical image segmentation, using residual connections in the encoder to prevent vanishing or exploding gradients as the network deepens. Zhao et al. proposed the pyramid scene parsing network (PSPNet), which builds on FCN by using feature fusion to obtain more context information and aggregates the context of different regions to improve the capture of global information.
Disclosure of Invention
The invention mainly overcomes the defects in the prior art and provides a method for automatically segmenting an abdominal CT visceral fat area.
To solve this technical problem, the invention provides the following technical solution: a method for automatically segmenting the visceral fat region in abdominal CT, comprising:
S100, selecting clinical abdominal CT images of different age groups, different abdominal positions and different slice thicknesses as a data set;
S200, preprocessing the images in the data set to obtain preprocessed data set images;
S300, manually delineating a mask image of the visceral fat region from the preprocessed data set image, and performing an AND operation between corresponding pixels of the mask image and the preprocessed data set image to extract the final visceral fat region image;
S400, constructing an Attention U-net network, and inputting the preprocessed data set images and the final visceral fat region images into the constructed Attention U-net network as training and prediction data;
S500, training the Attention U-net network according to the value of the loss function and the accuracy results;
and S600, preprocessing the abdominal CT image to be predicted and inputting it into the trained Attention U-net network; the image output by the Attention U-net network is the segmented image.
A further technical feature is that in step S100, mid-abdomen images of young and middle-aged adults and of elderly adults are selected with a slice thickness of 1 mm, and upper-abdomen and lower-abdomen images of young and middle-aged adults and of elderly adults are selected with a slice thickness of 5 mm.
A further technical feature is that the preprocessing in step S200 proceeds as follows: extracting the pixel information from the DICOM file to obtain the original CT image, and binarizing the original CT image to distinguish the visceral fat region from the background region.
A further technical feature is that, in the binarization, pixels whose value lies between 874 and 974 are set to a gray value of 255, and pixels whose value is greater than 974 or less than 874 are set to 0.
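For illustration only, this binarization can be sketched as follows in Python; reading the DICOM file with pydicom and applying the 874-974 window directly to the stored pixel values are assumptions, since the patent does not name a toolkit.

```python
import numpy as np
import pydicom

def binarize_ct(dicom_path: str) -> np.ndarray:
    """Read one DICOM slice and binarize it with the 874-974 window described above."""
    ds = pydicom.dcmread(dicom_path)          # load the DICOM file
    pixels = ds.pixel_array.astype(np.int32)  # stored (raw) pixel values

    # Pixels inside the window become 255 (candidate fat); everything else becomes 0.
    binary = np.where((pixels >= 874) & (pixels <= 974), 255, 0).astype(np.uint8)
    return binary
```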
A further technical feature is that the Attention U-net network in step S500 comprises two parts: an encoding part and a decoding part;
the encoding part comprises 5 levels: each of the first 4 levels consists of two convolutional layers followed by a max-pooling layer, and the 5th level consists of two convolutional layers; all convolutional layers use relu as the activation function with a 3 x 3 convolution kernel; the number of output channels of the first level is 32, and each subsequent level has twice the output channels of the previous one;
the decoding part comprises 5 levels, each consisting of a bilinear-interpolation upsampling layer, an attention structure, three convolutional layers and a concatenation (splicing) structure; all convolutional layers use relu as the activation function, with a 3 x 3 convolution kernel.
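The patent does not name an implementation framework. The following PyTorch sketch shows only the encoder structure just described (five levels, two 3 x 3 relu convolutions per level, channels starting at 32 and doubling, max pooling after the first four levels); the input channel count, the padding choice and the class names are assumptions.

```python
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 convolutions with relu, as used in every encoder level."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class Encoder(nn.Module):
    """Five-level encoder: channels 32, 64, 128, 256, 512; pooling after the first four levels."""
    def __init__(self, in_ch: int = 1):
        super().__init__()
        self.blocks = nn.ModuleList()
        prev = in_ch
        for c in [32, 64, 128, 256, 512]:
            self.blocks.append(conv_block(prev, c))
            prev = c
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        skips = []
        for i, block in enumerate(self.blocks):
            x = block(x)
            if i < 4:                 # the fifth level has no pooling
                skips.append(x)       # saved for the decoder's skip connections
                x = self.pool(x)
        return x, skips
```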
A further technical feature is that step S600 proceeds as follows: extracting the pixel information from the DICOM file, binarizing the image according to the preprocessing method of step S200, and inputting the binarized image into the trained Attention U-net network; the image output by the Attention U-net network is the segmented image.
The invention has the following beneficial effects: the method speeds up segmentation of the visceral fat region in abdominal CT, simplifies the segmentation steps, and supports batch segmentation without manual adjustment, thereby greatly improving segmentation efficiency, simplifying the preliminary work of calculating abdominal visceral fat volume, and providing a better basis for subsequent visceral fat calculation.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention;
FIG. 2 is a diagram of an Attention U-net network model;
FIG. 3 is an attention mechanism module;
FIG. 4 is a graph comparing results of visceral fat segmentation;
FIG. 5 is a process diagram of the CT image processing.
Detailed Description
The technical solutions of the present invention will be described clearly and completely below with reference to the accompanying drawings. It should be understood that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
As shown in FIG. 1, the method for automatically segmenting the visceral fat region in abdominal CT according to the present invention comprises:
s100, selecting midriff and abdomen images of young and strong people (age is 30 +/-10, average value +/-10 standard deviation) and old people (60 +/-10), wherein the slice width is 1mm; selecting images of the upper abdomen and the lower abdomen of young and middle-aged people and old people, and taking the section width of 5mm as a data set;
s200, preprocessing an image in a data set to obtain a preprocessed data set image;
the specific process of the pretreatment comprises the following steps: firstly, extracting pixel information in a DICOM file to obtain an original CT image, and performing image binarization on the original CT image to distinguish a visceral fat area and a background area as shown in FIG. 5a, wherein in order to ensure that a set threshold value can distinguish the visceral fat area and a tissue with higher density through a pixel value; the threshold value set by the invention is as follows: the grey value between 874-974 for pixel values is set to 255 and the grey value greater than 974 or less than 874 is set to 0, the image being divided in two by a threshold value, as shown in fig. 5 b;
Step S300, manually delineating a mask image of the visceral fat region from the preprocessed data set image, as shown in FIG. 5c, and then performing an AND operation between corresponding pixels of the mask image and the preprocessed data set image to extract the final visceral fat region image, as shown in FIG. 5d;
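A minimal sketch of this AND operation, assuming both the binarized slice and the manually drawn mask are 0/255 uint8 arrays of the same size:

```python
import numpy as np

def extract_visceral_fat(binary_img: np.ndarray, mask_img: np.ndarray) -> np.ndarray:
    """Keep only pixels that are fat-valued in the binarized slice AND inside the manual mask."""
    return np.bitwise_and(binary_img, mask_img)
```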
s400, constructing an Attention U-net network, and inputting the preprocessed data set image and the final image of the visceral fat area into the constructed Attention U-net network as training and predicting data;
the AttentionU-net network includes an encoding portion and a decoding portion;
the encoding part comprises 5 levels: each of the first 4 levels consists of two convolutional layers followed by a max-pooling layer, and the 5th level consists of two convolutional layers; all convolutional layers use relu as the activation function with a 3 x 3 convolution kernel; the number of output channels of the first level is 32, and each subsequent level has twice the output channels of the previous one;
the decoding part comprises 5 levels, each consisting of a bilinear-interpolation upsampling layer, an attention structure, three convolutional layers and a concatenation (splicing) structure; all convolutional layers use relu as the activation function with a 3 x 3 convolution kernel. The attention structure (AG) is shown in FIG. 3: it takes the output of a down-sampling level and the output of an up-sampling level as inputs; each input passes through a convolutional layer (1 x 1 convolution kernel) and a batch-normalization layer; the two results are added and passed through a relu activation function, another convolutional layer (1 x 1 convolution kernel), batch normalization and a sigmoid activation function; finally, the output of the sigmoid activation function is multiplied element-wise with the down-sampling input;
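A PyTorch sketch of this attention structure is given below; the intermediate channel width and the reduction to a single-channel attention map are assumptions, since the patent specifies only the 1 x 1 kernels, and the two inputs are assumed to have the same spatial size after the bilinear upsampling.

```python
import torch.nn as nn

class AttentionGate(nn.Module):
    """Attention structure of FIG. 3: gate the skip (down-sampling) feature with the upsampled decoder feature."""
    def __init__(self, skip_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        # 1x1 convolution + batch normalization on each input
        self.theta = nn.Sequential(nn.Conv2d(skip_ch, inter_ch, kernel_size=1), nn.BatchNorm2d(inter_ch))
        self.phi = nn.Sequential(nn.Conv2d(gate_ch, inter_ch, kernel_size=1), nn.BatchNorm2d(inter_ch))
        self.relu = nn.ReLU(inplace=True)
        # second 1x1 convolution + batch normalization, squeezed to one attention channel, then sigmoid
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, kernel_size=1), nn.BatchNorm2d(1), nn.Sigmoid())

    def forward(self, skip, gate):
        # add, relu, conv + BN, sigmoid -> attention coefficients in [0, 1]
        attn = self.psi(self.relu(self.theta(skip) + self.phi(gate)))
        # multiply the attention map with the down-sampling (skip) input
        return skip * attn
```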
s500, training an Attention U-net network according to the value of the loss function and the result of the precision;
in the training stage of the whole network, in order to prevent the over-fitting problem and improve the generalization capability of the Attention U-Net model, a Dropout layer is added after two convolutions before and after the fourth down-sampling; the Dropot layer randomly subtracts some neurons in the training of each batch, and can set the probability of how many neurons are removed by each Dropot layer, when the training of the first batch is carried out, a part of neurons are removed according to the preset probability, then the training is started, and only the neurons which are not removed and the corresponding weight parameters are updated and retained; after all the parameters are updated, a part of neurons are removed again according to the corresponding probability, then training is carried out, and if the neuron which is used for training newly is trained, the parameters of the neuron are continuously updated; the neurons which are taken away for the second time, meanwhile, the parameters of the neurons which have been updated for the first time are kept and are not modified until the parameters of the neurons are not deleted when the batch carries out Dropot for the nth time; dropout needs to be added in the training stage to prevent overfitting from improving the generalization capability of the model, and a Dropout layer is not added in the testing stage; through cross validation, the effect is best when the Dropot rate is 0.5, and Dropot randomly generates the most network structures when the Dropot rate is 0.5;
During training, the Attention U-net network is trained for 120 iterations with a batch size of 2 and a learning rate of 1.0e-5; the U-net network used for comparison is trained for 120 iterations with a batch size of 4 and a learning rate of 1.0e-5; Adam is used as the optimizer for model training, and the input data tensor is 2018 x 1 x 256;
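A sketch of this training configuration; the model class, the data loader and the choice of binary cross-entropy as the loss are assumptions, since the patent mentions a loss function and accuracy but does not name them:

```python
import torch

model = AttentionUNet()                        # hypothetical model class built from the sketches above
optimizer = torch.optim.Adam(model.parameters(), lr=1.0e-5)
criterion = torch.nn.BCEWithLogitsLoss()       # assumption: the patent does not specify the loss

for epoch in range(120):                       # 120 training iterations
    for images, masks in train_loader:         # hypothetical loader: batch size 2, single-channel slices
        optimizer.zero_grad()
        loss = criterion(model(images), masks)
        loss.backward()
        optimizer.step()
```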
and S600, extracting the pixel information from the DICOM file of the abdominal CT image to be predicted, binarizing it according to the preprocessing method of S200, and inputting the binarized image into the trained Attention U-net network; the image output by the Attention U-net network is the segmented image.
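A sketch of this prediction step, reusing the binarize_ct helper from the preprocessing sketch above; scaling the binary image to [0, 1], applying a sigmoid to the network output and thresholding at 0.5 are assumptions:

```python
import numpy as np
import torch

def predict_slice(model: torch.nn.Module, dicom_path: str) -> np.ndarray:
    """Preprocess a DICOM slice as in step S200, then run it through the trained network."""
    binary = binarize_ct(dicom_path)                           # preprocessing sketch above
    x = torch.from_numpy(binary / 255.0).float()[None, None]   # shape (1, 1, H, W)
    model.eval()                                               # no Dropout at test time
    with torch.no_grad():
        prob = torch.sigmoid(model(x))                         # assumes the network outputs logits
    return (prob.squeeze().numpy() > 0.5).astype(np.uint8) * 255
```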
Examples
FIG. 4 compares the physician's manual segmentation, the U-net network and the Attention U-net network on the test set:
the segmentation Precision (SA), the under-segmentation rate (UR), the over-segmentation rate (OR), the Precision, and the Recall were calculated for the segmentation result and the golden standard image, and the calculation results are shown in table 1:
TABLE 1 comparison of visceral fat segmentation results
As the results in Table 1 show, both deep learning networks segment CT images of different populations and different abdominal positions with relatively high accuracy, and both the over-segmentation rate and the under-segmentation rate are low. Because the attention gate structure is added to the Attention U-net network, the segmentation performance of the model is effectively enhanced and its accuracy is higher than that of the U-net network. The segmentation results in FIG. 4 also show that the Attention U-net segments less redundant area than the U-net network; the OR of the Attention U-net model is 1.87 percentage points lower than that of the U-net model. As can be seen from the under-segmentation rates, both networks segment the detailed structure of the visceral fat region well. Since the purpose of the segmentation is to calculate the visceral fat area, which is computed from the number of pixels in the segmented image, segmenting fewer unnecessary regions yields more accurate results; the present method does segment some unnecessary regions, but compared with manual segmentation these extra parts remain within an acceptable error range. The method of the present invention can therefore meet the requirements of automatic segmentation of the visceral fat region.
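The patent does not give formulas for SA, UR and OR. The sketch below computes Precision and Recall in the standard way and uses one common convention for the other three metrics (pixel counts relative to the gold-standard area); these definitions are assumptions made only for illustration, and empty masks are not handled.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gold: np.ndarray) -> dict:
    """pred and gold are binary masks of the same shape."""
    p, g = pred.astype(bool), gold.astype(bool)
    tp = np.logical_and(p, g).sum()     # correctly segmented pixels
    fp = np.logical_and(p, ~g).sum()    # over-segmented pixels
    fn = np.logical_and(~p, g).sum()    # missed pixels
    gt = g.sum()                        # gold-standard area in pixels
    return {
        "UR": fn / gt,                  # assumed convention: missed pixels / gold area
        "OR": fp / gt,                  # assumed convention: extra pixels / gold area
        "SA": 1.0 - (fp + fn) / gt,     # assumed convention for segmentation accuracy
        "Precision": tp / (tp + fp),
        "Recall": tp / (tp + fn),
    }
```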
Although the present invention has been described with reference to the above embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the present invention.

Claims (3)

1. A method for automatically segmenting the visceral fat region in abdominal CT, comprising:
S100, selecting clinical abdominal CT images of different age groups, different abdominal positions and different slice thicknesses as a data set;
S200, preprocessing the images in the data set to obtain preprocessed data set images;
the preprocessing in step S200 proceeds as follows: extracting the pixel information from the DICOM file to obtain the original CT image, and binarizing the original CT image to distinguish the visceral fat region from the background region;
S300, manually delineating a mask image of the visceral fat region from the preprocessed data set image, and performing an AND operation between corresponding pixels of the mask image and the preprocessed data set image to extract the final visceral fat region image;
S400, constructing an Attention U-net network, and inputting the preprocessed data set images and the final visceral fat region images into the constructed Attention U-net network as training and prediction data;
S500, training the Attention U-net network according to the value of the loss function and the accuracy results;
the Attention U-net network comprises two parts: an encoding part and a decoding part;
the encoding part comprises 5 levels: each of the first 4 levels consists of two convolutional layers followed by a max-pooling layer, and the 5th level consists of two convolutional layers; all convolutional layers use relu as the activation function with a 3 x 3 convolution kernel; the number of output channels of the first level is 32, and each subsequent level has twice the output channels of the previous one;
the decoding part comprises 5 levels, each consisting of a bilinear-interpolation upsampling layer, an attention structure, three convolutional layers and a concatenation (splicing) structure; all convolutional layers use relu as the activation function with a 3 x 3 convolution kernel;
S600, preprocessing the abdominal CT image to be predicted and inputting it into the trained Attention U-net network, the image output by the Attention U-net network being the segmented image;
step S600 proceeds as follows: extracting the pixel information from the DICOM file, binarizing the image according to the preprocessing method of step S200, and inputting the binarized image into the trained Attention U-net network; the image output by the Attention U-net network is the segmented image.
2. The method for automatically segmenting the visceral fat region in abdominal CT according to claim 1, wherein in step S100, mid-abdomen images of young and middle-aged adults and of elderly adults are selected with a slice thickness of 1 mm, and upper-abdomen and lower-abdomen images of young and middle-aged adults and of elderly adults are selected with a slice thickness of 5 mm.
3. The method according to claim 1, wherein in the image binarization, pixels whose value lies between 874 and 974 are set to a gray value of 255, and pixels whose value is greater than 974 or less than 874 are set to 0.
CN202011542684.XA 2020-12-23 2020-12-23 Method for automatically segmenting abdominal CT visceral fat area Active CN112634285B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011542684.XA CN112634285B (en) 2020-12-23 2020-12-23 Method for automatically segmenting abdominal CT visceral fat area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011542684.XA CN112634285B (en) 2020-12-23 2020-12-23 Method for automatically segmenting abdominal CT visceral fat area

Publications (2)

Publication Number Publication Date
CN112634285A CN112634285A (en) 2021-04-09
CN112634285B true CN112634285B (en) 2022-11-22

Family

ID=75321969

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011542684.XA Active CN112634285B (en) 2020-12-23 2020-12-23 Method for automatically segmenting abdominal CT visceral fat area

Country Status (1)

Country Link
CN (1) CN112634285B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113516624A (en) * 2021-04-28 2021-10-19 武汉联影智融医疗科技有限公司 Determination of puncture forbidden zone, path planning method, surgical system and computer equipment
CN114271796B (en) * 2022-01-25 2023-03-28 泰安市康宇医疗器械有限公司 Method and device for measuring human body components by using body state density method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001054066A1 (en) * 2000-01-18 2001-07-26 The University Of Chicago Automated method and system for the segmentation of lung regions in computed tomography scans
CN114092439A (en) * 2021-11-18 2022-02-25 深圳大学 Multi-organ instance segmentation method and system
CN114219943A (en) * 2021-11-24 2022-03-22 华南理工大学 CT image organ-at-risk segmentation system based on deep learning

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110675406A (en) * 2019-09-16 2020-01-10 南京信息工程大学 CT image kidney segmentation algorithm based on residual double-attention depth network
CN111784713A (en) * 2020-07-26 2020-10-16 河南工业大学 Attention mechanism-introduced U-shaped heart segmentation method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001054066A1 (en) * 2000-01-18 2001-07-26 The University Of Chicago Automated method and system for the segmentation of lung regions in computed tomography scans
CN114092439A (en) * 2021-11-18 2022-02-25 深圳大学 Multi-organ instance segmentation method and system
CN114219943A (en) * 2021-11-24 2022-03-22 华南理工大学 CT image organ-at-risk segmentation system based on deep learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
AI Study Notes (14): Image Segmentation with CNN; Lee森; https://blog.csdn.net/qq_35813161/article/details/111145981; 2020-12-14; p. 1 *
Multilabel Region Classification and Semantic Linking for Colon Segmentation in CT Colonography; Xiaoyun Yang et al.; IEEE Transactions on Biomedical Engineering; 2014-11-24; vol. 62, no. 3; pp. 948-959 *
Abdominal multi-organ image segmentation based on V-Net; Li Qingbo et al.; Digital Technology and Application; 2019-01-25; vol. 37, no. 1; pp. 89 and 91 *

Also Published As

Publication number Publication date
CN112634285A (en) 2021-04-09

Similar Documents

Publication Publication Date Title
CN111627019B (en) Liver tumor segmentation method and system based on convolutional neural network
CN111369565B (en) Digital pathological image segmentation and classification method based on graph convolution network
CN113674253B (en) Automatic segmentation method for rectal cancer CT image based on U-transducer
CN111563902A (en) Lung lobe segmentation method and system based on three-dimensional convolutional neural network
CN112634285B (en) Method for automatically segmenting abdominal CT visceral fat area
CN113034505B (en) Glandular cell image segmentation method and glandular cell image segmentation device based on edge perception network
CN112132817A (en) Retina blood vessel segmentation method for fundus image based on mixed attention mechanism
CN107766874B (en) Measuring method and measuring system for ultrasonic volume biological parameters
CN110930378B (en) Emphysema image processing method and system based on low data demand
CN113223005B (en) Thyroid nodule automatic segmentation and grading intelligent system
CN113205537A (en) Blood vessel image segmentation method, device, equipment and medium based on deep learning
CN115908241A (en) Retinal vessel segmentation method based on fusion of UNet and Transformer
CN114419000A (en) Femoral head necrosis index prediction system based on multi-scale geometric embedded convolutional neural network
CN110992309B (en) Fundus image segmentation method based on deep information transfer network
CN111210398A (en) White blood cell recognition system based on multi-scale pooling
CN112785581A (en) Training method and device for extracting and training large blood vessel CTA (computed tomography angiography) imaging based on deep learning
CN116486156A (en) Full-view digital slice image classification method integrating multi-scale feature context
CN114140472B (en) Cross-level information fusion medical image segmentation method
CN115294151A (en) Lung CT interested region automatic detection method based on multitask convolution model
CN115019955A (en) Method and system for constructing traditional Chinese medicine breast cancer syndrome prediction model based on ultrasonic imaging omics characteristics
CN114863104A (en) Image segmentation method based on label distribution learning
CN112967269A (en) Pulmonary nodule identification method based on CT image
CN114359308A (en) Aortic dissection method based on edge response and nonlinear loss
CN114022485A (en) Computer-aided diagnosis method for colorectal cancer based on small sample learning
CN112435219A (en) Pavement crack identification method based on transposition neural network interlayer feature fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant