WO2024060416A1 - An end-to-end weakly supervised semantic segmentation annotation method for pathological images - Google Patents

An end-to-end weakly supervised semantic segmentation annotation method for pathological images

Info

Publication number
WO2024060416A1
WO2024060416A1 (PCT/CN2022/137682)
Authority
WO
WIPO (PCT)
Prior art keywords
branch
classification
segmentation
classification branch
semantic segmentation
Prior art date
Application number
PCT/CN2022/137682
Other languages
English (en)
French (fr)
Inventor
秦文健
贾小琦
Original Assignee
深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳先进技术研究院
Publication of WO2024060416A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Definitions

  • the present invention relates to the technical field of medical image processing, and more specifically, to an end-to-end weakly supervised semantic segmentation and annotation method for pathological images.
  • Pathological diagnosis is the gold standard for clinical diagnosis and one of the medical means of evaluating a patient's cancer stage and prognosis. Pathological diagnosis is a highly repetitive task, time-consuming and labor-intensive, and strongly influenced by personal subjectivity. These problems have prompted researchers to try to automate the process, that is, to use automatic detection methods to segment abnormal regions and special histological structures, such as tumor, inflammation, and necrosis regions, from pathological tissue images, which can provide more objective and consistent information and efficiently assist doctors in diagnosis or in histopathology-related research.
  • common weak labels include bounding boxes, scribbles, points, and category labels.
  • the category label is the type of label that contains the least information, because it only represents the category of the input image, but lacks the location information of the target object. Since category labels are the least labor-intensive and easiest to obtain, this application only focuses on using image-level category labels to train a weakly supervised semantic segmentation annotation model for pathological images.
  • the existing technology mainly has the following shortcomings:
  • Model training is complex.
  • mainstream weakly supervised semantic segmentation annotation models based on image-level category labels require two or more steps of training.
  • first, the classification network is trained, and then the class activation map generated by the classification network is used as a pseudo segmentation label to train the segmentation network.
  • some algorithms also use post-processing methods to optimize the results. This multi-step training approach is not only complex to operate, but the results of the segmentation network also cannot be fed back to the classification network, which easily leads to category confusion.
  • the purpose of the present invention is to overcome the above-mentioned shortcomings of the prior art and provide an end-to-end weakly supervised semantic segmentation annotation method for pathological images, which uses an end-to-end weakly supervised learning network based on multi-task learning to automatically segment different tissues of pathological images, so as to solve the problems of complex training, category confusion, and under-segmentation of target regions that often occur in weakly supervised pathological segmentation.
  • the technical solution of the present invention is to provide an end-to-end weakly supervised semantic segmentation and annotation method for pathological images.
  • the method includes the following steps:
  • a weakly supervised semantic segmentation annotation model for pathological images including a feature extraction network, a first classification branch, a second classification branch and a segmentation branch.
  • the feature extraction network is used to extract a feature map from the original input image, which will serve as the input signal of the first classification branch, the second classification branch and the segmentation branch;
  • the first classification branch is used to obtain the category prediction of the input image;
  • the second classification branch is used to obtain the category prediction of the input image after applying noise;
  • the segmentation branch is used to perform pixel-level semantic segmentation prediction of the input image, using the fusion result of the class activation maps of the first classification branch and the second classification branch as the pseudo label for segmentation prediction;
  • the semantic segmentation annotation model is trained with the goal of optimizing a set overall loss function; for a target pathological image, the trained semantic segmentation annotation model is used for segmentation prediction.
  • the advantage of the present invention is that it provides an end-to-end weakly supervised semantic segmentation annotation model for pathological images, and only uses image-level category labels to train a multi-task convolutional neural network.
  • This model achieves end-to-end learning of weakly supervised semantic segmentation through multi-task learning that combines classification and segmentation tasks, reduces the number of hyperparameters that need to be manually set, and simplifies the training steps. Due to the joint learning of classification and segmentation tasks, the results of the segmentation network can be fed back to the classification network, and the classification network can help the segmentation network reduce the problem of category confusion.
  • the present invention designs a consistency constraint that is more suitable for pathological images. It adds disturbance noise without changing the original content and structure of the pathological image, thereby improving the accuracy of weakly supervised semantic segmentation.
  • Figure 1 is a flow chart of a method for weakly supervised semantic segmentation and annotation of pathological images according to one embodiment of the present invention
  • Figure 2 is a comparison diagram of two-stage weakly supervised semantic segmentation and the end-to-end model of the present invention
  • Figure 3 is a schematic process diagram of a method for weakly supervised semantic segmentation and annotation of pathological images according to one embodiment of the present invention
  • any specific values are to be construed as illustrative only and not as limiting. Accordingly, other examples of the exemplary embodiments may have different values.
  • the provided weakly supervised semantic segmentation annotation method for pathological images includes the following steps:
  • Step 1 Collect pathological image data sets and annotate them.
  • a data set of digital pathology images stained by hematoxylin-eosin is collected and image-level category annotation is performed on them.
  • Step 2 Preprocess the pathological image data set to construct a training set, validation set and test set.
  • step 2 includes:
  • Step 21 Obtain original digital pathology images from an open-source data set, sample on the 40x magnified digital pathology images using a sliding window with a width and height of 512 pixels and a stride of 256 pixels, and have a pathology expert assign image-level category labels to the samples.
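The patch-sampling geometry of Step 21 can be sketched as follows; the function name and the choice to keep only windows that fit entirely inside the slide are illustrative assumptions, since the patent does not state how border patches are handled.

```python
def sliding_window_coords(height, width, window=512, stride=256):
    """Top-left (y, x) coordinates of sliding-window patches over a slide.

    Only windows that lie fully inside the image are kept, so every
    patch is exactly `window` x `window` pixels (an assumed policy).
    """
    ys = range(0, max(height - window, 0) + 1, stride)
    xs = range(0, max(width - window, 0) + 1, stride)
    return [(y, x) for y in ys for x in xs]
```

For a 1024x1024 region this yields a 3x3 grid of overlapping 512-pixel patches, each shifted by the 256-pixel stride.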
  • the samples are divided into training set, validation set and test set according to the ratio of 7:1:2.
  • the verification set and test set are additionally annotated with pixel-level segmentation labels by pathological experts for model verification and performance testing in the laboratory.
  • Step 22 A pathology expert selects the digital pathology images with ideal staining from the samples as staining target images I_t, and the remaining images are the images to be normalized I_s.
  • Step 23 In order to reduce color differences between the data, in one embodiment, the Vahadane stain normalization method is used, that is, the pathological image is decomposed in an unsupervised manner into sparse and non-negative stain density maps, thereby modeling the physical phenomena that define tissue structure.
  • for example, according to the Beer-Lambert law, let I ∈ R^(m×n) be the RGB (red, green and blue) intensity of the image, I_0 ∈ R^(m×n) the illumination light intensity of the sample, W ∈ R^(m×r) the stain matrix, and H ∈ R^(r×n) the stain density map, where m = 3 represents the three RGB channels, r = 2 represents the two stains, and n is the number of pixels. I can be expressed as: I = I_0 exp(-WH)  (1)
  • let V be the relative optical density matrix; then: V = log(I_0 / I)  (2)
  • Formula (2) can be expressed as: V = WH  (3)
  • therefore, given the input image I and setting the illumination light intensity I_0 to 255 (2^8 - 1), the relative optical density matrix V can be obtained, and the corresponding stain matrix W and stain density map H can then be calculated with a dictionary learning method: min_{W,H} (1/2)||V - WH||_F^2 + λ Σ_j ||H(j,:)||_1, s.t. W ≥ 0, H ≥ 0, ||W(:,j)||_2 = 1  (4)
  • Step 24 According to the model established in step 23, for any staining target image I_t and image to be normalized I_s, the corresponding stain matrices W_t and W_s and the corresponding stain density maps H_t and H_s can be obtained. For the image I_s to be normalized, only its stain matrix is changed while its stain density map is retained.
  • the I_s normalization process can be expressed as: I_s^norm = I_0 exp(-W_t H_s^norm)  (5), where I_s^norm denotes the color-normalized result of I_s and H_s^norm denotes the normalized result of H_s.
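The optical-density conversion (formula (2)) and the recombination step (formula (5)) can be illustrated per pixel. This is a minimal sketch with assumed function names; the sparse dictionary-learning factorization of V into W and H (formula (4)) is not shown.

```python
import math

I0 = 255.0  # illumination intensity, 2**8 - 1 as in the text

def optical_density(i, i0=I0):
    """Per-channel relative optical density (formula (2)): v = log(i0 / i)."""
    return math.log(i0 / max(i, 1.0))  # clamp to avoid division by zero

def recombine(w_t, h_s, i0=I0):
    """Formula (5) for one pixel: each RGB channel is i0 * exp(-(W_t @ h_s)).

    w_t is a 3x2 target stain matrix (rows = RGB channels, cols = stains),
    h_s a length-2 stain density vector for the pixel.
    """
    return [i0 * math.exp(-sum(w * h for w, h in zip(row, h_s)))
            for row in w_t]
```

A pixel with zero stain density recombines to pure illumination (255 in each channel), matching the exponential attenuation model of formula (1).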
  • Step 25 Use color perturbation, blur, noise, and rotation to perform data augmentation on the training samples to improve the accuracy and generalization ability of subsequent model training.
  • Step 3 Construct a weakly supervised semantic segmentation annotation model for pathological images and train with the set overall loss function as the target.
  • a weakly supervised semantic segmentation and annotation model for pathological images based on multi-task learning is established, and the preprocessed pathological images are used to train the semantic segmentation and annotation model.
  • Figure 3 is an example of a weakly supervised semantic segmentation annotation model for pathological images, which is suitable for samples containing category labels.
  • the model mainly consists of three branches, namely classification branch 1, classification branch 2 and segmentation branch.
  • the three branches share the structure and parameters of the feature extraction network, but each contains unique convolutional layers.
  • classification branch 1 and classification branch 2 have the same structure, but the input image of classification branch 2 contains noise to improve the overall robustness of the model, suppress over-fitting, and discover more effective data information.
  • Hadamard product is used to fuse the class activation maps generated respectively by classification branch 1 and classification branch 2 as pseudo labels of the segmentation network.
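The Hadamard fusion of the two class activation maps amounts to an element-wise product, which keeps a location strongly activated only where both branches agree. A minimal sketch, with nested lists standing in for tensors:

```python
def fuse_cams(cam1, cam2):
    """Element-wise (Hadamard) product of two class activation maps of
    equal shape; the fused map becomes the segmentation pseudo label."""
    return [[a * b for a, b in zip(row1, row2)]
            for row1, row2 in zip(cam1, cam2)]
```

Because activations lie in [0, 1], the product suppresses regions activated by only one branch, which is the motivation for using the fused map as a cleaner pseudo label.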
  • the segmentation branch can use a skip connection decoder to obtain the segmentation prediction result.
  • the overall loss function of the model contains four parts, namely the consistency loss between class activation maps, the classification loss of classification branch 1, the classification loss of classification branch 2 and the segmentation loss of the segmentation branch.
  • the weighted result of the four is used as the overall loss value of the model, which is used for network back propagation and parameter update.
  • step 3 includes:
  • Step 31 Use the residual network as a feature extraction network for multi-task weakly supervised semantic segmentation.
  • the feature extraction network performs three pooling operations on the input image, and the length and width of the obtained feature map are 1/8 of the input image. Classification tasks and segmentation tasks share the parameters of the feature extraction network.
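The 1/8 spatial reduction follows from three stride-2 pooling operations. A small helper (illustrative; exact integer division is an assumption) makes the arithmetic explicit:

```python
def feature_map_size(h, w, n_pools=3, factor=2):
    """Spatial size after n_pools poolings, each shrinking H and W by
    `factor`; three stride-2 poolings give 1/8 of the input size."""
    s = factor ** n_pools
    return h // s, w // s
```

For the 512-pixel patches of Step 21 this gives 64x64 feature maps.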
  • Step 32 For the constructed semantic segmentation annotation model, it is assumed that the input image of classification branch 1 does not undergo any processing, while the input image of classification branch 2 undergoes processing that does not change the original structure, such as Gaussian blur and color perturbation.
  • Step 33 The classifier of the classification branch network is composed of three fully connected layers. The results of the last layer are processed by the sigmoid function to obtain the final classification result.
  • the sigmoid function is expressed as: S(x) = 1 / (1 + e^(-x))  (6)
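Formula (6) in plain Python:

```python
import math

def sigmoid(x):
    """S(x) = 1 / (1 + e^(-x)): maps a classifier logit to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))
```

The function is symmetric about 0.5, so S(x) + S(-x) = 1 for any logit x.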
  • Step 34 Calculate the multi-class cross-entropy loss value L_cls between the prediction results of classification branch 1 and classification branch 2 and the true labels, respectively: L_cls = -(1/n) Σ_{i=1}^{n} Σ_{c=1}^{m} y_ic log(p_ic)  (7)
  • where n is the total number of samples and m is the total number of categories. y_ic indicates whether the category of sample i is c: if the category of sample i is c, then y_ic takes the value 1, otherwise 0. p_ic represents the probability that the network predicts that sample i belongs to class c.
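Formula (7) can be sketched in plain Python; the `eps` clamp is an added numerical-safety assumption, not part of the patent's formula:

```python
import math

def multiclass_ce(y_true, y_pred, eps=1e-12):
    """Multi-class cross-entropy: L = -(1/n) * sum_i sum_c y_ic * log(p_ic).

    y_true: one-hot rows (y_ic); y_pred: predicted probability rows (p_ic).
    """
    n = len(y_true)
    total = 0.0
    for yi, pi in zip(y_true, y_pred):
        for y, p in zip(yi, pi):
            total += y * math.log(max(p, eps))  # clamp avoids log(0)
    return -total / n
```

A perfectly confident correct prediction gives zero loss; a uniform two-class prediction gives log 2 ≈ 0.693.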
  • Step 36 Calculate the consistency loss of the class activation maps of classification branch 1 and classification branch 2, for example using the mean squared error loss to measure the similarity of the two class activation maps.
  • the mean squared error calculation formula is as follows: L_mse = (1/n) Σ_{i=1}^{n} (C_1i - C_2i)^2  (8)
  • where n is the total number of samples, and C_1i and C_2i are the class activation maps generated by sample i in classification branch 1 and classification branch 2, respectively.
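A sketch of the consistency loss, with each class activation map flattened to a list of activations (an illustrative simplification of formula (8)):

```python
def cam_consistency(c1, c2):
    """Mean squared error between two flattened class activation maps;
    zero when the clean and perturbed branches agree exactly."""
    n = len(c1)
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) / n
```

Minimizing this term pushes the perturbed branch's activation map toward the clean branch's, which is the structure-preserving consistency constraint described above.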
  • Step 35 Extract the class activation maps of classification branch 1 and classification branch 2, then use the Hadamard product to fuse the two, and use the fusion result as the pseudo label of the segmentation branch.
  • Step 36 The decoder of the segmentation branch adopts a layer-by-layer skip connection followed by deconvolution structure.
  • the output segmentation prediction has the same height and width as the input image, and its number of channels equals the number of categories.
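The decoder's shape restoration can be made explicit: each stride-2 deconvolution doubles H and W, so three of them undo the encoder's 1/8 reduction. The class count below is illustrative, not from the patent:

```python
def decoder_output_shape(feat_h, feat_w, n_up=3, n_classes=4):
    """(channels, H, W) after n_up stride-2 deconvolutions: spatial size
    is restored to the input resolution, channels equal the category
    count (n_classes here is an assumed example value)."""
    s = 2 ** n_up
    return n_classes, feat_h * s, feat_w * s
```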
  • Step 37 Calculate the pixel-level cross-entropy loss value between the prediction result of the segmentation branch and the real label.
  • the calculation method is the same as formula (7).
  • Step 38 Set the overall loss of the model to include four parts, namely the classification loss of classification branch 1, the classification loss of classification branch 2, the class activation map consistency loss and the segmentation loss of the segmentation branch.
  • the four are added with weights of 0.5, 0.5, 1.0 and 1.0 respectively to obtain the overall loss value, which is used for network backward propagation and parameter updating.
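The weighted combination of Step 38 is a plain weighted sum; the argument order below mirrors the four parts listed above:

```python
def overall_loss(l_cls1, l_cls2, l_consist, l_seg,
                 weights=(0.5, 0.5, 1.0, 1.0)):
    """Overall model loss: 0.5 * classification loss of branch 1 +
    0.5 * classification loss of branch 2 + 1.0 * CAM consistency loss +
    1.0 * segmentation loss, per the weights stated in the text."""
    w1, w2, w3, w4 = weights
    return w1 * l_cls1 + w2 * l_cls2 + w3 * l_consist + w4 * l_seg
```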
  • Step 39 Use the training set data to train the established weakly supervised semantic segmentation annotation model until convergence.
  • model parameters are saved every 1000 iterations during training. After training, the performance of the previously saved models is verified on the validation set, and the model with the highest intersection over union (IoU) is selected as the final training result.
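Checkpoint selection by highest IoU can be sketched as follows; representing masks as flat 0/1 lists is an illustrative simplification:

```python
def iou(pred, target):
    """Intersection over union of two binary masks (flat 0/1 lists),
    the validation metric used to pick the best saved checkpoint."""
    inter = sum(1 for p, t in zip(pred, target) if p and t)
    union = sum(1 for p, t in zip(pred, target) if p or t)
    return inter / union if union else 1.0

def best_checkpoint(val_ious):
    """Index of the saved model with the highest validation IoU."""
    return max(range(len(val_ious)), key=val_ious.__getitem__)
```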
  • Step 4 Use the trained semantic segmentation annotation model to predict the segmentation of the target image.
  • the model parameters that meet the overall loss criterion, such as weights and biases, are obtained.
  • the performance of the obtained weakly supervised semantic segmentation annotation model can be verified on the test data set.
  • by using the trained semantic segmentation annotation model to segment and identify target images, regions of tumor, inflammation or necrosis in pathological tissue images can be obtained to assist doctors in diagnosis or to conduct pathological research.
  • the present invention only uses image-level category labels of pathological images to train a pixel-level semantic segmentation annotation model, improves the accuracy of weakly supervised semantic segmentation annotation of pathological images, reduces the training steps of weakly supervised semantic segmentation annotation, and can significantly reduce the workload of manual annotation in quantitative clinical pathology analysis.
  • the model training process involved in the present invention can be performed offline on a server or cloud, and real-time target image classification and recognition can be achieved by embedding the trained model into an electronic device.
  • the electronic device may be a terminal device or a server, and the terminal device includes any terminal device such as a mobile phone, a tablet computer, a personal digital assistant (PDA), a point-of-sale terminal (POS), a vehicle-mounted computer, a smart wearable device, etc.
  • Servers include but are not limited to application servers or web servers, and can be independent servers, cluster servers, cloud servers, etc.
  • Vahadane staining normalization method for pathological images can be replaced by other staining normalization methods, such as Macenko, Reinhard, etc.
  • the loss function type can also adopt other forms.
  • the weight of each item in the overall loss function can be set according to the actual application scenario.
  • Weakly supervised semantic segmentation algorithm for digital pathology images uses multi-task learning to integrate classification and segmentation tasks, achieving an end-to-end weakly supervised semantic segmentation and annotation model, reducing the number of hyperparameters and simplifying the training process.
  • the segmentation and classification tasks share the feature extraction network, and the segmentation and classification tasks are learned together, thereby reducing the category confusion and target area under-segmentation problems that occur in the segmentation network.
  • the results of the segmentation network can be fed back to the classification network.
  • the classification network can help the segmentation network reduce the problem of category confusion.
  • a two-branch classification network is used to obtain a more robust feature representation by applying noise perturbations that have no major impact on the content and structure of pathological images, thereby improving the quality of the class activation maps.
  • the present invention only adds noise perturbations that do not change the image structure, and then calculates the consistency loss between the perturbed and unperturbed inputs, making the model more suitable for weakly supervised semantic segmentation tasks on pathological images.
  • the invention may be a system, method and/or computer program product.
  • a computer program product may include a computer-readable storage medium having computer-readable program instructions thereon for causing a processor to implement various aspects of the invention.
  • Computer-readable storage media may be tangible devices that can retain and store instructions for use by an instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), compact disc read-only memory (CD-ROM), digital versatile disc (DVD), memory stick, floppy disk, a mechanical encoding device such as a punched card with instructions stored thereon, and any suitable combination of the above.
  • Computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber optic cables), or electrical signals transmitted through electrical wires.
  • Computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to various computing/processing devices, or to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
  • Computer program instructions for performing operations of the present invention may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state setting data, or instructions in one or more programming languages.
  • the computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • in some embodiments, an electronic circuit, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), can execute the computer-readable program instructions to implement various aspects of the invention.
  • These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create an apparatus that implements the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, programmable data processing apparatus, and/or other equipment to work in a specific manner, so that the computer-readable medium storing the instructions constitutes an article of manufacture that includes instructions implementing aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • Computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other equipment, causing a series of operating steps to be performed on the computer, other programmable data processing apparatus, or other equipment to produce a computer-implemented process, so that the instructions executed on the computer, other programmable data processing apparatus, or other equipment implement the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • each block in the flowcharts or block diagrams may represent a module, segment, or portion of instructions that contains one or more executable instructions for implementing the specified logical function(s).
  • The functions noted in the blocks may occur out of the order noted in the figures. For example, two consecutive blocks may actually be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by a combination of special-purpose hardware and computer instructions. It is well known to those skilled in the art that implementation through hardware, implementation through software, and implementation through a combination of software and hardware are all equivalent.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses an end-to-end weakly supervised semantic segmentation annotation method for pathological images. The method includes: constructing an end-to-end weakly supervised semantic segmentation annotation model for pathological images, including a feature extraction network, a first classification branch, a second classification branch, and a segmentation branch, where the feature extraction network extracts a feature map from the original input image and the feature map serves as the input signal of the other branches; the first classification branch is used to obtain the category prediction of the input image; the second classification branch is used to obtain the category prediction of the input image after noise is applied; the segmentation branch is used to perform pixel-level semantic segmentation prediction of the input image, with the fusion result of the class activation maps of the two classification branches used as the pseudo label for segmentation prediction; and training the semantic segmentation annotation model with the goal of optimizing a set overall loss function. The present invention improves the accuracy of weakly supervised semantic segmentation annotation of pathological images and reduces the training steps of weakly supervised semantic segmentation annotation.

Description

An end-to-end weakly supervised semantic segmentation annotation method for pathological images
Technical Field
The present invention relates to the technical field of medical image processing, and more specifically to an end-to-end weakly supervised semantic segmentation annotation method for pathological images.
Background Art
Cancer is a worldwide public health problem known for its high incidence and mortality. Early diagnosis and therapeutic intervention can control the number of cancer patients and effectively reduce cancer mortality. Pathological diagnosis is the gold standard for clinical diagnosis and one of the medical means of evaluating a patient's cancer stage and prognosis. Pathological diagnosis is highly repetitive work, time-consuming, labor-intensive, and strongly affected by personal subjectivity. These problems have prompted researchers to attempt to automate the process, that is, to use automatic detection methods to segment abnormal regions and special histological structures, such as tumor, inflammation, and necrosis regions, from pathological images, which can provide more objective and consistent information and efficiently assist doctors in diagnosis or in histopathology-related research.
In recent years, deep learning has been widely applied to the quantitative analysis of natural and medical images. Besides the model's ability to extract features, the performance of deep learning methods also depends on the guidance of a large number of data labels. Obtaining labels requires considerable manual cost, especially for histopathological images whose labels must be delineated pixel by pixel by experienced experts. Hospitals produce large numbers of medical images every day, and it is unrealistic to ask medical experts to annotate them one by one. How to weaken a model's dependence on data labels and use weak labels or a small number of labels to find target patterns in massive data is one of the important research directions for the future development of deep learning.
In the field of weakly supervised semantic segmentation of pathological images, common weak labels include bounding boxes, scribbles, points, and category labels. The category label is the label type containing the least information, because it only indicates the category of the input image and lacks the location information of the target object. Since category labels require the least labor and are the easiest to obtain, this application focuses only on using image-level category labels to train a weakly supervised semantic segmentation annotation model for pathological images.
In the prior art, most image-level weakly supervised semantic segmentation annotation models for pathological images require two or more steps of processing during training. First, a classification network is trained with category labels and a class activation map is obtained from the classification network; the class activation map is then used as a pseudo segmentation label of the input image to train a segmentation network. The quality of the class activation map therefore determines the performance of the weakly supervised semantic segmentation annotation model. However, although a class activation map can identify the region of a specific category, the result is not precise enough: the edges are very rough and differ considerably from the true target mask, and category confusion and under-segmentation easily occur. To improve the quality of class activation maps, some models erase the most highly activated regions of the class activation map to force the classification model to find more target regions, but this approach requires manually setting the erasing threshold and the number of erasures. Other algorithms adopt consistency constraints to narrow the performance gap between full and weak supervision, for example splitting the input image into four sub-images and computing the consistency loss between the class activation maps of the input image and the sub-images. However, these methods are only applicable to natural images, because changing the structure and content of the input image can completely change the category of a pathological image or produce unrealistic images, ultimately degrading model performance.
In summary, the prior art mainly has the following defects:
(1) Model training is complex. At present, mainstream weakly supervised semantic segmentation annotation models based on image-level category labels require two or more training steps: a classification network is trained first, and then the class activation maps generated by the classification network are used as pseudo segmentation labels to train a segmentation network; some algorithms additionally use post-processing methods to optimize the results. This multi-step training is not only complex to operate, but the results of the segmentation network also cannot be fed back to the classification network, so category confusion easily occurs.
(2) Many parameters must be set manually. For example, erasing highly activated regions requires manually setting the erasing threshold and the number of erasures, and the conditional random field commonly used to optimize results also requires manually setting multiple hyperparameters.
(3) Applicability only to natural images. At present, most work on weakly supervised semantic segmentation focuses on natural images. The target object in a natural image often occupies most of the image area, so changing part of the image content or structure has little effect on the original category of the image. In contrast, the regions of interest in pathological images, such as inflammation, necrosis, and tumors, are highly heterogeneous, and the categories of pathological images are sensitive to content changes and contextual information.
Summary of the Invention
The purpose of the present invention is to overcome the above defects of the prior art and to provide an end-to-end weakly supervised semantic segmentation annotation method for pathological images, which uses an end-to-end weakly supervised learning network based on multi-task learning to automatically segment different tissues of pathological images, so as to solve the problems of complex training, category confusion, and under-segmentation of target regions that often occur in weakly supervised pathological segmentation.
The technical solution of the present invention is to provide an end-to-end weakly supervised semantic segmentation annotation method for pathological images. The method includes the following steps:
Construct a weakly supervised semantic segmentation annotation model for pathological images, including a feature extraction network, a first classification branch, a second classification branch, and a segmentation branch, wherein the feature extraction network is used to extract a feature map from the original input image, and the feature map serves as the input signal of the first classification branch, the second classification branch, and the segmentation branch; the first classification branch is used to obtain the category prediction of the input image; the second classification branch is used to obtain the category prediction of the input image after noise is applied; and the segmentation branch is used to perform pixel-level semantic segmentation prediction of the input image, with the fusion result of the class activation maps of the first classification branch and the second classification branch used as the pseudo label for segmentation prediction;
Train the semantic segmentation annotation model with the goal of optimizing a set overall loss function;
For a target pathological image, use the trained semantic segmentation annotation model to perform segmentation prediction.
Compared with the prior art, the advantage of the present invention is that it provides an end-to-end weakly supervised semantic segmentation annotation model for pathological images and trains a multi-task convolutional neural network using only image-level category labels. Through multi-task learning that combines the classification and segmentation tasks, the model achieves end-to-end learning of weakly supervised semantic segmentation, reduces the number of hyperparameters that must be set manually, and simplifies the training steps. Since the classification and segmentation tasks are learned jointly, the results of the segmentation network can be fed back to the classification network, and the classification network can help the segmentation network reduce the problem of category confusion. In addition, the present invention designs a consistency constraint better suited to pathological images, adding perturbation noise without changing the original content and structure of the pathological image, thereby improving the accuracy of weakly supervised semantic segmentation.
Other features and advantages of the present invention will become clear from the following detailed description of exemplary embodiments of the present invention with reference to the accompanying drawings.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the present invention and, together with their description, serve to explain the principles of the present invention.
Figure 1 is a flow chart of a weakly supervised semantic segmentation annotation method for pathological images according to one embodiment of the present invention;
Figure 2 is a comparison of two-stage weakly supervised semantic segmentation and the end-to-end model of the present invention;
Figure 3 is a schematic process diagram of a weakly supervised semantic segmentation annotation method for pathological images according to one embodiment of the present invention;
Detailed Description of the Embodiments
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that, unless otherwise specified, the relative arrangement of components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the present invention.
The following description of at least one exemplary embodiment is merely illustrative and in no way serves as any limitation on the present invention or its application or use.
Techniques, methods, and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods, and devices should be regarded as part of the specification.
In all examples shown and discussed here, any specific value should be interpreted as merely illustrative rather than limiting. Therefore, other examples of the exemplary embodiments may have different values.
It should be noted that similar reference numerals and letters denote similar items in the following figures; therefore, once an item is defined in one figure, it need not be discussed further in subsequent figures.
As shown in Figures 1 and 3, the provided weakly supervised semantic segmentation annotation method for pathological images includes the following steps:
Step 1: Collect a pathological image data set and annotate it.
For example, a data set of digital pathology images stained with hematoxylin-eosin is collected and annotated with image-level category labels.
Step 2: Preprocess the pathological image data set to construct a training set, a validation set, and a test set.
In this step, the pathological images are preprocessed, including reading the original files of the digital pathology data set and partitioning the pathological images with a sliding-window sampling method. Specifically, step 2 includes:
Step 21: Obtain original digital pathology images from an open-source data set, sample on the 40x magnified digital pathology images using a sliding window with a width and height of 512 pixels and a stride of 256 pixels, and have a pathology expert assign image-level category labels to the samples. Divide the samples into a training set, a validation set, and a test set at a ratio of 7:1:2. The validation set and the test set are additionally annotated with pixel-level segmentation labels by pathology experts for model validation and performance testing in the laboratory.
Step 22: pathologists select, from the samples, digital pathology images with relatively ideal staining as staining target images I_t; the remaining images are the images to be normalized, I_s.
Step 23: to reduce color differences between data, one embodiment uses the Vahadane stain normalization method, which decomposes a pathological image in an unsupervised manner into sparse, non-negative stain density maps, thereby modeling the physical phenomena that define tissue structure.
For example, according to the Beer-Lambert law, let I ∈ R^{m×n} be the RGB (red-green-blue) intensities of the image, I_0 ∈ R^{m×n} the illumination intensity of the sample, W ∈ R^{m×r} the stain matrix, and H ∈ R^{r×n} the stain density map, where m = 3 corresponds to the three RGB channels, r = 2 corresponds to the two stains, and n is the number of pixels. I can be expressed as:
I = I_0 exp(−WH)  (1)
Let V be the relative optical density matrix:
V = log(I_0 / I)  (2)
Combining formula (1), formula (2) can be expressed as:
V = WH  (3)
Therefore, given the input image I and setting the sample illumination intensity I_0 to 255 (2^8 − 1), the relative optical density matrix V can be obtained, and the corresponding stain matrix W and stain density map H can then be computed by dictionary learning, with the following model:
min_{W,H} (1/2)‖V − WH‖_F^2 + λ Σ_{j=1}^{r} ‖H(j,:)‖_1,  subject to W ≥ 0, H ≥ 0, ‖W(:,j)‖_2 = 1  (4)
Step 24: with the model established in Step 23, for any staining target image I_t and image to be normalized I_s, the corresponding stain matrices W_t and W_s and stain density maps H_t and H_s can be obtained. For the image to be normalized I_s, only its stain matrix is changed while its stain density map is retained. The normalization of I_s can be expressed as:
I_s^{norm} = I_0 exp(−W_t H_s^{norm})  (5)
where I_s^{norm} denotes the color-normalized result of I_s and H_s^{norm} denotes the normalized result of H_s.
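A minimal NumPy sketch of the factorization and recombination of Steps 23-24. The plain multiplicative-update NMF below stands in for the sparse dictionary learning of Eq. (4) (the L1 sparsity term is dropped for brevity), and the function names are illustrative assumptions.

```python
import numpy as np

def nmf(V, r=2, iters=500, seed=0, eps=1e-10):
    """Multiplicative-update NMF: V (m x n) ~ W (m x r) @ H (r x n).

    A simplified stand-in for the dictionary-learning model of Eq. (4);
    the L1 regularizer on H and the column-norm constraint are omitted.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update density map
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update stain matrix
    return W, H

def stain_normalize(I_src, W_t, i0=255.0):
    """Vahadane-style recombination (Eq. 5): keep the source density map
    H_s and swap in the target stain matrix W_t. I_src is a 3 x n matrix
    of RGB intensities, per the notation of Eq. (1)."""
    V = np.log(i0 / np.clip(I_src, 1.0, i0))   # relative optical density, Eq. (2)
    _, H_s = nmf(V)                            # stain density map of the source
    return i0 * np.exp(-W_t @ H_s)             # normalized intensities
```

Since W and H are kept non-negative and i0 = 255, the recombined intensities always fall in (0, 255].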
Step 25: augment the training samples with color perturbation, blurring, noise and rotation to improve the accuracy and generalization ability of subsequent model training.
Step 3: construct the weakly supervised semantic segmentation annotation model for pathological images and train it with the preset overall loss function as the objective.
In this Step 3, a weakly supervised semantic segmentation annotation model for pathological images based on multi-task learning is established and trained with the preprocessed pathological images.
Fig. 3 shows an example of the model, applicable to samples with category labels. The model consists mainly of three branches: classification branch 1, classification branch 2 and the segmentation branch. The three branches share the structure and parameters of the feature extraction network, but each also contains its own convolutional layers.
As for the classification branches, classification branch 1 and classification branch 2 have the same structure, but the input image of classification branch 2 contains noise, which improves the overall robustness of the model, suppresses overfitting and mines more useful information from the data. In one embodiment, the class activation maps generated by classification branch 1 and classification branch 2 are fused using the Hadamard product and used as the pseudo label of the segmentation network. As for the segmentation branch, it may use a decoder with skip connections to obtain the segmentation prediction.
The overall loss function of the model comprises four parts: the consistency loss between the class activation maps, the classification loss of classification branch 1, the classification loss of classification branch 2 and the segmentation loss of the segmentation branch. Their weighted sum serves as the overall loss value of the model, used for backpropagation and parameter updates.
Specifically, Step 3 comprises:
Step 31: use a residual network as the feature extraction network for multi-task weakly supervised semantic segmentation. The feature extraction network applies three pooling operations to the input image, producing a feature map whose height and width are 1/8 of those of the input image. The classification and segmentation tasks share the parameters of the feature extraction network.
Step 32: in the constructed semantic segmentation annotation model, the input image of classification branch 1 undergoes no processing, while the input image of classification branch 2 undergoes processing that does not change the original structure, such as Gaussian blurring and color perturbation.
Step 33: the classifier of each classification branch consists of three fully connected layers, and the output of the last layer is passed through a sigmoid function to obtain the final classification result; the sigmoid function is expressed as:
S(x) = 1 / (1 + e^{−x})  (6)
Step 34: compute the multi-class cross-entropy loss L_cls between the predictions of classification branch 1 and classification branch 2 and the ground-truth labels:
L_cls = −(1/n) Σ_{i=1}^{n} Σ_{c=1}^{m} y_ic log(p_ic)  (7)
where n is the total number of samples and m the total number of categories; y_ic indicates whether the category of sample i is c, taking the value 1 if so and 0 otherwise; p_ic is the probability predicted by the network that sample i belongs to category c.
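Formula (7) can be written directly in NumPy. The function name and the assumed shapes, (n, m) one-hot labels and (n, m) predicted probabilities, are illustrative.

```python
import numpy as np

def multiclass_cross_entropy(y_onehot, p, eps=1e-12):
    """L_cls = -(1/n) * sum_i sum_c y_ic * log(p_ic), per Eq. (7)."""
    p = np.clip(p, eps, 1.0)  # avoid log(0)
    return float(-np.mean(np.sum(y_onehot * np.log(p), axis=1)))
```

A perfect prediction gives a loss of 0, and a uniform two-class prediction gives log 2 ≈ 0.693.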
Step 35: compute the consistency loss between the class activation maps of classification branch 1 and classification branch 2, for example using the mean squared error to measure the similarity of the two maps:
L_c = (1/n) Σ_{i=1}^{n} (C_{1i} − C_{2i})^2  (8)
where n is the total number of samples, and C_{1i} and C_{2i} are the class activation maps generated for sample i in classification branch 1 and classification branch 2, respectively.
Step 36: extract the class activation maps of classification branch 1 and classification branch 2, fuse them using the Hadamard product, and use the fused result as the pseudo label of the segmentation branch.
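The consistency loss of formula (8) and the Hadamard fusion described above can both be sketched in a few lines. The CAMs are assumed to be arrays of matching shape, e.g. (n, H, W); the function names are illustrative.

```python
import numpy as np

def cam_consistency_loss(cam1, cam2):
    """Mean squared error between the two branches' CAMs, per Eq. (8)."""
    return float(np.mean((cam1 - cam2) ** 2))

def fuse_cams(cam1, cam2):
    """Hadamard (element-wise) product of the two CAMs; the fused map
    serves as the pseudo segmentation label for the segmentation branch."""
    return cam1 * cam2
```

Identical CAMs give a consistency loss of 0, and the product fusion keeps only regions activated in both branches.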
Step 37: the decoder of the segmentation branch uses layer-by-layer skip connections followed by deconvolution; the output segmentation prediction has the same height and width as the input image, and its number of channels equals the number of categories. Compute the pixel-level cross-entropy loss between the prediction of the segmentation branch and the corresponding label, in the same manner as formula (7).
Step 38: the overall loss of the model is set to comprise four parts: the classification loss of classification branch 1, the classification loss of classification branch 2, the class-activation-map consistency loss, and the segmentation loss of the segmentation branch.
For example, the four are summed with weights of 0.5, 0.5, 1.0 and 1.0, respectively, to obtain the overall loss value used for backpropagation and parameter updates.
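The weighted combination can be expressed as a one-line helper; the default weights mirror the 0.5/0.5/1.0/1.0 example above, and the function name is an illustrative assumption.

```python
def total_loss(l_cls1, l_cls2, l_consistency, l_seg,
               weights=(0.5, 0.5, 1.0, 1.0)):
    """Overall loss: weighted sum of the two classification losses,
    the CAM consistency loss and the segmentation loss."""
    w1, w2, w3, w4 = weights
    return w1 * l_cls1 + w2 * l_cls2 + w3 * l_consistency + w4 * l_seg
```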
Step 39: train the established weakly supervised semantic segmentation annotation model with the training set until convergence.
For example, the model parameters are saved every 1000 iterations during training; after training, the saved models are validated on the validation set, and the model with the highest intersection-over-union (IoU) is selected as the final training result.
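Checkpoint selection by intersection-over-union can be sketched as follows. The empty-mask convention is an assumption, since the text does not define IoU when the union is empty.

```python
import numpy as np

def iou(pred, target):
    """Intersection over union between two binary masks, as used to
    pick the best saved checkpoint on the validation set."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: count as perfect agreement (a convention)
    return float(np.logical_and(pred, target).sum() / union)
```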
Step 4: perform segmentation prediction on the target image using the trained semantic segmentation annotation model.
After training, model parameters such as weights and biases that satisfy the overall-loss criterion are obtained. The performance of the resulting weakly supervised semantic segmentation annotation model can further be verified on the test set. In practical applications, the trained model is used to segment and identify target images, yielding the tumor, inflamed or necrotic regions in pathological tissue images, in order to assist physicians in diagnosis or to support pathology research.
The invention trains a pixel-level semantic segmentation annotation model using only image-level category labels of pathological images, improving the accuracy of weakly supervised semantic segmentation annotation of pathological images, shortening its training procedure, and significantly reducing the manual annotation workload in clinical quantitative pathology analysis.
The model training involved in the invention can be performed offline on a server or in the cloud, and embedding the trained model in an electronic device enables real-time classification and identification of target images. The electronic device may be a terminal device or a server; terminal devices include mobile phones, tablet computers, personal digital assistants (PDAs), point-of-sale (POS) terminals, in-vehicle computers, smart wearables and any other terminal device. The server includes, but is not limited to, an application server or a web server, and may be a standalone server, a cluster server, a cloud server, or the like.
It should be understood that those skilled in the art may make appropriate changes or modifications to the above embodiments without departing from the spirit and scope of the invention. For example, Vahadane stain normalization of pathological images may be replaced by other stain normalization methods such as Macenko or Reinhard. Likewise, loss functions other than mean squared error or cross-entropy may be used, and the weights of the terms in the overall loss function may be set according to the actual application scenario.
In summary, relative to the prior art, the technical effects of the invention are mainly reflected in the following aspects:
1) For weakly supervised semantic segmentation of digital pathology images, multi-task learning integrates the classification and segmentation tasks, realizing an end-to-end weakly supervised semantic segmentation annotation model that reduces the number of hyperparameters and simplifies the training process.
2) In the multi-task learning framework, the segmentation and classification tasks share the feature extraction network and are learned jointly, reducing the category confusion and target-region under-segmentation that arise in segmentation networks. Moreover, with joint multi-task training, the segmentation network's results can be fed back to the classification network, and the classification network helps the segmentation network reduce category confusion.
3) To build a consistency constraint suited to pathological images, a dual-branch classification network is used; by applying noise perturbations that do not substantially affect the content and structure of the pathological image, a more robust feature representation is obtained, thereby improving the quality of the class activation maps.
4) Given that structural changes strongly affect the category of a pathological image, the invention adds only noise perturbations that do not change the image structure and then computes the consistency loss between perturbed and unperturbed inputs, making the model better suited to the weakly supervised semantic segmentation of pathological images.
The invention may be a system, a method and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement aspects of the invention.
The computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction-executing device. It may be, for example but not limited to, an electric, magnetic, optical, electromagnetic or semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) include: a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile disc (DVD), memory stick, floppy disk, a mechanically encoded device such as a punch card or raised structures in a groove on which instructions are stored, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
The computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to respective computing/processing devices, or to an external computer or external storage device via a network, for example the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium within the respective computing/processing device.
Computer program instructions for carrying out operations of the invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented languages such as Smalltalk, C++ and Python, and conventional procedural languages such as the "C" language or similar languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a standalone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, it may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as programmable logic circuitry, a field-programmable gate array (FPGA) or a programmable logic array (PLA), may be personalized by utilizing state information of the computer-readable program instructions, and the electronic circuitry may execute the instructions to implement aspects of the invention.
Aspects of the invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks therein, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer or other programmable data-processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data-processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams. These instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data-processing apparatus and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions that implement aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, other programmable data-processing apparatus or other device to cause a series of operational steps to be performed thereon so as to produce a computer-implemented process, such that the instructions executed on the computer, other programmable apparatus or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality and operation of possible implementations of systems, methods and computer program products according to multiple embodiments of the invention. In this regard, each block in a flowchart or block diagram may represent a module, program segment or portion of instructions, which comprises one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures; for example, two successive blocks may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of such blocks, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions. It is well known to those skilled in the art that implementation in hardware, implementation in software, and implementation in a combination of software and hardware are all equivalent.
Embodiments of the invention have been described above; the foregoing description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application or technical improvements over technologies in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (10)

  1. A weakly supervised semantic segmentation annotation method for pathological images, comprising the following steps:
    constructing a weakly supervised semantic segmentation annotation model for pathological images, comprising a feature extraction network, a first classification branch, a second classification branch and a segmentation branch, wherein the feature extraction network extracts a feature map from the original input image, and the feature map serves as the input signal of the first classification branch, the second classification branch and the segmentation branch; the first classification branch obtains the category prediction of the input image; the second classification branch obtains the category prediction of the input image after noise is applied; the segmentation branch performs pixel-level semantic segmentation prediction of the input image, using the fusion of the class activation maps of the first classification branch and the second classification branch as the pseudo label for the segmentation prediction;
    training the semantic segmentation annotation model with the objective of optimizing a preset overall loss function;
    performing segmentation prediction on a target pathological image using the trained semantic segmentation annotation model.
  2. The method according to claim 1, wherein the overall loss function is a weighted sum of the consistency loss between the class activation maps of the first classification branch and the second classification branch, the classification loss of the first classification branch, the classification loss of the second classification branch, and the segmentation loss of the segmentation branch.
  3. The method according to claim 2, wherein the consistency loss L_c between the class activation maps of the first classification branch and the second classification branch is expressed as:
    L_c = (1/n) Σ_{i=1}^{n} (C_{1i} − C_{2i})^2
    where n is the total number of samples, C_{1i} is the class activation map generated for sample i in the first classification branch, and C_{2i} is the class activation map generated for sample i in the second classification branch.
  4. The method according to claim 2, wherein the classification loss of the first classification branch, the classification loss of the second classification branch and the segmentation loss of the segmentation branch all use the cross-entropy loss L_cls, expressed as:
    L_cls = −(1/n) Σ_{i=1}^{n} Σ_{c=1}^{m} y_ic log(p_ic)
    where n is the total number of samples, m is the total number of categories, y_ic indicates whether the category of sample i is c, taking the value 1 if the category of sample i is c and 0 otherwise, and p_ic is the predicted probability that sample i belongs to category c.
  5. The method according to claim 1, wherein the segmentation branch comprises multiple convolutional layers and a decoder, the decoder uses layer-by-layer skip connections followed by deconvolution, the output segmentation prediction has the same height and width as the input image, and its number of channels equals the number of predicted categories.
  6. The method according to claim 1, wherein the input image of the second classification branch undergoes noise perturbation comprising Gaussian blurring and color perturbation.
  7. The method according to claim 1, wherein the first classification branch and the second classification branch have the same structure, their corresponding classifiers each comprise multiple fully connected layers, the output of the last layer is passed through a sigmoid function to obtain the final classification result, and the class activation maps generated by the first classification branch and the second classification branch are fused using the Hadamard product.
  8. The method according to claim 1, wherein the sample dataset for training the semantic segmentation annotation model is constructed by the following steps:
    collecting a digital pathology image dataset and annotating it with image-level category labels to obtain sample images;
    dividing the sample images into a training set, a validation set and a test set at a predetermined ratio;
    applying stain normalization to the sample images;
    applying data augmentation to the sample images, including color perturbation, blurring, noise and rotation.
  9. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 8.
  10. A computer device comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 8.
PCT/CN2022/137682 2022-09-22 2022-12-08 End-to-end weakly supervised semantic segmentation annotation method for pathological images WO2024060416A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211157945.5 2022-09-22
CN202211157945.5A CN115482221A (zh) 2022-09-22 2022-09-22 End-to-end weakly supervised semantic segmentation annotation method for pathological images

Publications (1)

Publication Number Publication Date
WO2024060416A1 true WO2024060416A1 (zh) 2024-03-28

Family

ID=84394043

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/137682 WO2024060416A1 (zh) 2022-09-22 2022-12-08 End-to-end weakly supervised semantic segmentation annotation method for pathological images

Country Status (2)

Country Link
CN (1) CN115482221A (zh)
WO (1) WO2024060416A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116152575B (zh) * 2023-04-18 2023-07-21 之江实验室 (Zhejiang Lab) Weakly supervised object localization method, apparatus and medium based on class activation sampling guidance
CN116704248A (zh) * 2023-06-07 2023-09-05 南京大学 (Nanjing University) Serum sample image classification method based on multi-semantic imbalanced learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111488921A (zh) * 2020-03-30 2020-08-04 中国科学院深圳先进技术研究院 (Shenzhen Institute of Advanced Technology, CAS) Intelligent analysis system and method for panoramic digital pathology images
US20210150281A1 (en) * 2019-11-14 2021-05-20 Nec Laboratories America, Inc. Domain adaptation for semantic segmentation via exploiting weak labels
CN114418946A (zh) * 2021-12-16 2022-04-29 中国科学院深圳先进技术研究院 (Shenzhen Institute of Advanced Technology, CAS) Medical image segmentation method, system, terminal and storage medium
CN114820655A (zh) * 2022-04-26 2022-07-29 中国地质大学(武汉) (China University of Geosciences, Wuhan) Weakly supervised building segmentation method using reliable regions as attention-mechanism supervision

Also Published As

Publication number Publication date
CN115482221A (zh) 2022-12-16

Similar Documents

Publication Publication Date Title
CN111476284B (zh) 图像识别模型训练及图像识别方法、装置、电子设备
US10303979B2 (en) System and method for classifying and segmenting microscopy images with deep multiple instance learning
CN111488921B (zh) Intelligent analysis system and method for panoramic digital pathology images
WO2024060416A1 (zh) End-to-end weakly supervised semantic segmentation annotation method for pathological images
Sahasrabudhe et al. Self-supervised nuclei segmentation in histopathological images using attention
Salman et al. Automated prostate cancer grading and diagnosis system using deep learning-based Yolo object detection algorithm
Cai et al. Saliency-guided level set model for automatic object segmentation
WO2023131301A1 (zh) Digestive system pathological image recognition method, system and computer storage medium
Zanjani et al. Cancer detection in histopathology whole-slide images using conditional random fields on deep embedded spaces
Lv et al. Nuclei R-CNN: improve mask R-CNN for nuclei segmentation
CN114511710A (zh) 一种基于卷积神经网络的图像目标检测方法
Tian et al. Object localization via evaluation multi-task learning
CN117015796A (zh) Method for processing tissue images and system for processing tissue images
Xu et al. Histopathological tissue segmentation of lung cancer with bilinear cnn and soft attention
Wang et al. Feature extraction and segmentation of pavement distress using an improved hybrid task cascade network
Dabass et al. A hybrid U-Net model with attention and advanced convolutional learning modules for simultaneous gland segmentation and cancer grade prediction in colorectal histopathological images
Khattar et al. Computer assisted diagnosis of skin cancer: a survey and future recommendations
Maurya et al. A global context and pyramidal scale guided convolutional neural network for pavement crack detection
Al-Huda et al. Asymmetric dual-decoder-U-Net for pavement crack semantic segmentation
Fang et al. BAF-Net: Bidirectional attention fusion network via CNN and transformers for the pepper leaf segmentation
Zhang et al. MPMR: multi-scale feature and probability map for melanoma recognition
Liu et al. Att-MoE: attention-based mixture of experts for nuclear and cytoplasmic segmentation
Tang et al. Salient object detection via two-stage absorbing Markov chain based on background and foreground
Leo et al. Improving colon carcinoma grading by advanced cnn models
Zhang et al. A novel CapsNet neural network based on MobileNetV2 structure for robot image classification

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22959416

Country of ref document: EP

Kind code of ref document: A1