CN113435607B - Disease screening method based on federated learning - Google Patents
Disease screening method based on federated learning
- Publication number
- CN113435607B (application CN202110641862.2A)
- Authority
- CN
- China
- Prior art keywords
- training
- model
- client
- data set
- sets
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N20/20—Ensemble learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/70—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Medical Informatics (AREA)
- Artificial Intelligence (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Public Health (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Databases & Information Systems (AREA)
- Pathology (AREA)
- Epidemiology (AREA)
- Primary Health Care (AREA)
- Medical Treatment And Welfare Office Work (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
Abstract
Description
Technical Field
The invention belongs to the technical field of federated learning and deep learning, and specifically relates to a disease screening method based on federated learning.
Background Art
A medical image is an image that reflects the internal structure or internal function of an anatomical region; it is composed of a set of image elements, namely pixels (2D) or voxels (3D). Medical images are discrete image representations generated by sampling or reconstruction, which map numerical values to different spatial positions. The number of pixels describes the medical imaging produced under a given imaging device and is also a way of expressing anatomical and functional detail. The specific value expressed by a pixel is determined by the imaging device, the imaging protocol, image reconstruction, and post-processing. However, a single patient examination can produce as many as a thousand images, and during a concentrated disease outbreak there are even more. Relying on physician diagnosis alone is inefficient. Federated learning combined with deep learning can process the image data, utilize data comprehensively, analyze images quickly, and produce accurate diagnoses.
The techniques closest to the present invention are:
1. Deep-learning image classification algorithms: image classification distinguishes images of different categories according to their semantic information and is an important fundamental problem in computer vision. Deep learning mainly uses convolutional neural networks (CNNs) for image classification: the pixel information of an image is taken as input, features are extracted and abstracted at a high level through convolution operations, and the model output is directly the image recognition result. Common image classification CNNs include LeNet, AlexNet, the VGG series, the ResNet series, the Inception series, the DenseNet series, and GoogLeNet. However, this approach cannot jointly exploit data from hospitals in different locations, resulting in lower accuracy and poorer performance.
2. The DeCoVNet network is a weakly supervised learning algorithm. It is of considerable practical significance when disease cases are unevenly distributed and small in overall number. It is fast and needs little data labeling, but its accuracy is still low, so misdiagnosis occurs easily.
Summary of the Invention
In view of the above situation, and to overcome the defects of the prior art, the present invention provides a disease screening method based on federated learning. It combines federated learning and deep learning, makes fuller use of data from different locations, and performs disease diagnosis efficiently and accurately.
The technical scheme adopted by the present invention is as follows. The disease screening method based on federated learning of the present invention comprises the following steps:
1) The server builds a shared data set and sends the initial models (U-Net++ and DeCoVNet) to the clients participating in training;
2) When a client runs the first epoch, unsupervised learning is used to obtain labeled masks, and these masks together with the labeled training set are used to pre-train the U-Net++ model; all training data are then passed through the pre-trained U-Net++ model to obtain the masks of the whole training set; the training-set masks, the training set, and the labels are fed into the judgment model DeCoVNet for training; the test-set masks and the test set are fed into the judgment model DeCoVNet for testing; the local accuracy is computed and uploaded to the server;
3) The server collects the accuracies of the clients participating in training and computes the mean accuracy;
4) Before the second epoch is run, if a client's accuracy is below the mean accuracy, the server sends m items of data to that client, where m = i × n, i = (acc_avg - acc_i) / (acc_avg - acc_min), acc_i is that client's accuracy, acc_min is the lowest client accuracy, and n is the size of the shared data set; the data sent must not exceed the client's own data volume (an illustrative sketch of this computation follows the list);
5) From the second epoch onward, clients that have received shared data train the model on the shared data set together with their own data set until the overall model converges.
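The allocation rule in step 4 can be expressed compactly in code. The following is a minimal illustrative sketch in Python; the function and variable names are assumptions made here for clarity and are not part of the disclosed method.

```python
from typing import Sequence

def shared_data_size(acc_i: float, client_accs: Sequence[float],
                     n_shared: int, client_data_size: int) -> int:
    """Sketch of step 4: m = i * n with i = (acc_avg - acc_i) / (acc_avg - acc_min).
    Returns how many shared samples the server sends to one client."""
    acc_avg = sum(client_accs) / len(client_accs)
    acc_min = min(client_accs)
    if acc_i >= acc_avg:
        return 0                                  # clients at or above the mean receive nothing
    i = (acc_avg - acc_i) / (acc_avg - acc_min)   # lies in (0, 1] for below-mean clients
    m = int(i * n_shared)
    return min(m, client_data_size)               # never exceed the client's own data volume

# Worked example: accuracies 0.70, 0.80 and 0.90, shared set of 1000 samples.
# acc_avg = 0.80, acc_min = 0.70; the weakest client gets i = 1.0, i.e. up to 1000
# samples (capped by its own data volume), while the other two clients get nothing.
```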
The beneficial effects obtained by the present invention with the above scheme are as follows: the disease screening method based on federated learning models medical images with a deep convolutional neural network, effectively improving the accuracy of disease diagnosis; federated learning comprehensively utilizes the data of each hospital node, improving the generalization ability of the model; and a dynamic fusion strategy is proposed to improve the overall accuracy of the system.
Brief Description of the Drawings
Fig. 1 is a block diagram of the federated learning fusion method of the disease screening method based on federated learning of the present invention;
Fig. 2 is a block diagram of the medical image deep learning method of the disease screening method based on federated learning of the present invention.
The accompanying drawings are provided for a further understanding of the present invention and constitute a part of the description; together with the embodiments of the present invention, they serve to explain the present invention and do not limit it.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention; all other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
As shown in Figs. 1-2, the disease screening method based on federated learning of the present invention comprises the following steps:
1) The server builds a shared data set and sends the initial models (U-Net++ and DeCoVNet) to the clients participating in training;
2) When a client runs the first epoch, unsupervised learning is used to obtain labeled masks, and these masks together with the labeled training set are used to pre-train the U-Net++ model; all training data are then passed through the pre-trained U-Net++ model to obtain the masks of the whole training set; the training-set masks, the training set, and the labels are fed into the judgment model DeCoVNet for training; the test-set masks and the test set are fed into the judgment model DeCoVNet for testing; the local accuracy is computed and uploaded to the server (a high-level client-side sketch follows this list);
3) The server collects the accuracies of the clients participating in training and computes the mean accuracy;
4) Before the second epoch is run, if a client's accuracy is below the mean accuracy, the server sends m items of data to that client, where m = i × n, i = (acc_avg - acc_i) / (acc_avg - acc_min), n is the size of the shared data set, and the data sent must not exceed the client's own data volume;
5) From the second epoch onward, clients that have received shared data train the model on the shared data set together with their own data set until the overall model converges.
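The client-side procedure of steps 2 and 5 can be outlined as follows. This is a high-level sketch only: the helper callables (unsupervised_masks, pretrain_unetpp, segment, train_decovnet, test_decovnet, train_fn) are hypothetical placeholders for the U-Net++ and DeCoVNet routines, which the disclosure does not specify at the code level.

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional, Sequence

@dataclass
class ClientState:
    train_images: Sequence[Any]           # local training images
    train_labels: Sequence[Any]           # disease / no-disease labels
    test_images: Sequence[Any]
    test_labels: Sequence[Any]
    shared_images: Optional[list] = None  # shared data received from the server (step 4)
    shared_labels: Optional[list] = None

def run_first_epoch(client: ClientState,
                    unsupervised_masks: Callable,   # hypothetical: unsupervised mask generator
                    pretrain_unetpp: Callable,      # hypothetical: U-Net++ pre-training routine
                    segment: Callable,              # hypothetical: U-Net++ inference, returns masks
                    train_decovnet: Callable,       # hypothetical: DeCoVNet training routine
                    test_decovnet: Callable) -> float:
    """Step 2 sketch: pre-train U-Net++, generate masks, train and test DeCoVNet,
    and return the local accuracy that is uploaded to the server."""
    seed_masks = unsupervised_masks(client.train_images)        # labeled masks from unsupervised learning
    unetpp = pretrain_unetpp(client.train_images, seed_masks)   # pre-trained segmentation model
    train_masks = segment(unetpp, client.train_images)          # masks for the whole training set
    decovnet = train_decovnet(train_masks, client.train_images, client.train_labels)
    test_masks = segment(unetpp, client.test_images)
    return test_decovnet(decovnet, test_masks, client.test_images, client.test_labels)

def run_later_epoch(client: ClientState, train_fn: Callable):
    """Step 5 sketch: from the second epoch onward, a client that received shared
    data trains on its own data combined with the shared data."""
    images = list(client.train_images) + list(client.shared_images or [])
    labels = list(client.train_labels) + list(client.shared_labels or [])
    return train_fn(images, labels)
```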
In specific use, the user first sets up the shared data set. In the first epoch, after a client has performed n iterations, it uploads its accuracy to the server; the server collects the accuracies of all clients participating in training and computes the mean. If the accuracy of the i-th client is below the mean, the amount of shared data to send to it is computed (not exceeding the client's own data volume); if it is above the mean, no shared data is sent. Finally, over the remaining epochs, the clients holding shared data keep training until the model converges. Densely populated cities generate many diagnostic medical images every day, so doctors facing numerous medical images are slow to assess them, and artificial intelligence models are urgently needed to help doctors identify diseases. We propose a model that uses the U-Net++ model to segment medical images (locating the disease region) and then uses the judgment model (DeCoVNet) to judge whether disease is present in the segmented part. The above is the overall workflow of the present invention; repeat these steps for subsequent use.
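A corresponding server-side sketch of the coordination just described is given below. The helpers send_initial_models, run_local_epoch, send_shared_data, converged, and the own_data_size attribute are assumptions introduced here, and random subsampling of the shared data set is likewise an assumption rather than something specified by the disclosure.

```python
import random
from typing import Callable, Sequence

def orchestrate_training(shared_data: Sequence,
                         clients: Sequence,
                         send_initial_models: Callable,  # hypothetical: pushes U-Net++/DeCoVNet to clients
                         run_local_epoch: Callable,      # hypothetical: one local epoch, returns accuracy
                         send_shared_data: Callable,     # hypothetical: transfers samples to one client
                         converged: Callable) -> None:
    """Illustrative server loop: collect first-epoch accuracies, send shared data
    to below-average clients, then continue training until the model converges."""
    send_initial_models(clients)                                          # step 1
    accs = [(c, run_local_epoch(c, use_shared=False)) for c in clients]   # step 2 (epoch 1)
    values = [a for _, a in accs]
    acc_avg = sum(values) / len(values)                                   # step 3
    acc_min = min(values)
    for c, acc in accs:                                                   # step 4
        if acc < acc_avg:
            i = (acc_avg - acc) / (acc_avg - acc_min)
            m = min(int(i * len(shared_data)), c.own_data_size)           # own_data_size: assumed attribute
            send_shared_data(c, random.sample(list(shared_data), m))
    while not converged(clients):                                         # step 5
        for c in clients:
            run_local_epoch(c, use_shared=True)
```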
It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another and do not necessarily require or imply any actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include", or any variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus comprising a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus.
Although the embodiments of the present invention have been shown and described, a person of ordinary skill in the art will understand that various changes, modifications, substitutions, and variations can be made to these embodiments without departing from the principle and spirit of the present invention; the scope of the invention is defined by the appended claims and their equivalents.
The present invention and its embodiments have been described above, and this description is not limiting; what is shown in the drawings is only one of the embodiments of the present invention, and the actual structure is not limited to it. In short, if a person of ordinary skill in the art, inspired by the invention, devises structures and embodiments similar to this technical solution without creative design and without departing from the inventive concept of the present invention, they shall fall within the protection scope of the present invention.
Claims (1)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110641862.2A CN113435607B (en) | 2021-06-09 | 2021-06-09 | Disease screening method based on federated learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110641862.2A CN113435607B (en) | 2021-06-09 | 2021-06-09 | Disease screening method based on federated learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113435607A (en) | 2021-09-24 |
CN113435607B (en) | 2023-08-29 |
Family
ID=77755500
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110641862.2A Active CN113435607B (en) | 2021-06-09 | 2021-06-09 | Disease screening method based on federated learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113435607B (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019136946A1 (en) * | 2018-01-15 | 2019-07-18 | 中山大学 | Deep learning-based weakly supervised salient object detection method and system |
WO2019200535A1 (en) * | 2018-04-17 | 2019-10-24 | 深圳华大生命科学研究院 | Artificial intelligence-based ophthalmic disease diagnostic modeling method, apparatus, and system |
CN112116571A (en) * | 2020-09-14 | 2020-12-22 | 中国科学院大学宁波华美医院 | An automatic localization method for X-ray lung diseases based on weakly supervised learning |
CN112201342A (en) * | 2020-09-27 | 2021-01-08 | 博雅正链(北京)科技有限公司 | Federated Learning-Based Medical Aided Diagnosis Method, Apparatus, Equipment and Storage Medium |
Also Published As
Publication number | Publication date |
---|---|
CN113435607A (en) | 2021-09-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110503654B (en) | A method, system and electronic device for medical image segmentation based on generative adversarial network | |
US20220148191A1 (en) | Image segmentation method and apparatus and storage medium | |
CN110808096B (en) | Heart disease automatic detection system based on convolutional neural network | |
CN114897914B (en) | Semi-supervised CT image segmentation method based on adversarial training | |
CN108205806B (en) | Automatic analysis method for three-dimensional craniofacial structure of cone beam CT image | |
CN107730507A (en) | A kind of lesion region automatic division method based on deep learning | |
CN116110597B (en) | Digital twinning-based intelligent analysis method and device for patient disease categories | |
CN115690072A (en) | Chest radiography feature extraction and disease classification method based on multi-mode deep learning | |
CN113662664B (en) | Instrument tracking-based objective and automatic evaluation method for surgical operation quality | |
CN116245828A (en) | Chest X-ray quality evaluation method integrating knowledge in medical field | |
CN112669925A (en) | Report template for CT (computed tomography) reexamination of new coronary pneumonia and forming method | |
CN109727197A (en) | A medical image super-resolution reconstruction method | |
CN116051849A (en) | Method and device for feature extraction of brain network data | |
CN116563533A (en) | Medical image segmentation method and system based on prior information of target position | |
CN114581459A (en) | Improved 3D U-Net model-based segmentation method for image region of interest of preschool child lung | |
CN113421228A (en) | Thyroid nodule identification model training method and system based on parameter migration | |
CN115496732B (en) | A semi-supervised cardiac semantic segmentation algorithm | |
Zheng et al. | Fully convolutional neural networks for high-precision medical image analysis | |
CN115409812A (en) | CT image automatic classification method based on fusion time attention mechanism | |
CN111340794A (en) | Method and device for quantifying coronary artery stenosis | |
CN113435607B (en) | 2023-08-29 | Disease screening method based on federated learning | |
CN118692705A (en) | Physical health status monitoring method and system based on big data | |
CN110033848B (en) | A z-axis interpolation method for 3D medical images based on unsupervised learning | |
CN112102234A (en) | Ear sclerosis focus detection and diagnosis system based on target detection neural network | |
CN111798455A (en) | A Real-time Segmentation Method for Thyroid Nodules Based on Fully Convolutional Dense Hollow Networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |