WO2022147940A1 - A multi-source data-oriented breast tumor image classification and prediction method and apparatus - Google Patents

A multi-source data-oriented breast tumor image classification and prediction method and apparatus

Info

Publication number
WO2022147940A1
WO2022147940A1 PCT/CN2021/094088 CN2021094088W WO2022147940A1 WO 2022147940 A1 WO2022147940 A1 WO 2022147940A1 CN 2021094088 W CN2021094088 W CN 2021094088W WO 2022147940 A1 WO2022147940 A1 WO 2022147940A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
source data
classification
prediction
breast tumor
Prior art date
Application number
PCT/CN2021/094088
Other languages
English (en)
French (fr)
Inventor
潘志方
茹劲涛
陈高翔
林晔智
Original Assignee
温州医科大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 温州医科大学 filed Critical 温州医科大学
Publication of WO2022147940A1 publication Critical patent/WO2022147940A1/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/24323Tree-organised classifiers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30068Mammography; Breast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Definitions

  • the invention relates to the technical field of medical image processing, in particular to a multi-source data-oriented breast tumor image classification and prediction method and device.
  • a v3_DCNN architecture, which first uses the inception_v3 model to pre-select tumor regions and then uses the semantic segmentation model DCNN to accurately segment tumors in breast pathology images.
  • an MNPNet model, which encodes context information, detail information, and semantic information through a multi-layer nested pyramid structure to finally achieve segmentation.
  • a model combining the fully convolutional network FCN and the bidirectional long short-term memory network Bi-LSTM has been used by researchers to classify breast cancer in pathological images; another model that likewise combines two networks is the DBN-NN classification model.
  • the DBN-NN classification model connects an unsupervised deep belief network to a supervised neural network trained with the Levenberg-Marquardt algorithm to achieve breast cancer classification.
  • the technical problem to be solved by the embodiments of the present invention is to provide a multi-source data-oriented breast tumor image classification and prediction method and device that can reduce the heterogeneity between datasets from different sources and realize breast tumor image classification and prediction on multi-source data.
  • an embodiment of the present invention provides a multi-source data-oriented breast tumor image classification and prediction method, and the method includes the following steps:
  • the encoder-decoder structure is composed of an encoder that extracts image feature vectors layer by layer from image sample features and a decoder that reconstructs the image by receiving information from each layer of the encoder and finally outputs the ROI segmentation result;
  • S3: input the obtained ROI segmentation result into the encoder part of the teacher-student network segmentation model to obtain a deep-learning image feature vector, perform feature extraction on the obtained ROI segmentation result to obtain an image geometric texture feature vector, further combine these with the non-image data vector obtained by preprocessing non-image data, and use the pre-trained cascade random forest classification model to perform classification prediction and obtain the corresponding classification result; wherein the classification result is malignant or benign.
  • the method further includes:
  • feature extraction is performed on the ROI segmentation result based on the Gabor direction field, Gabor response amplitude, expected direction, and original gray values, and the mean, standard deviation, energy, entropy, and contrast are extracted based on the curvelet transform, so as to obtain the image geometric texture feature vector.
  • the non-image data is quantized and regularized to obtain a non-image data vector.
  • the teacher-student network segmentation model with the encoder-decoder structure is trained in a semi-supervised manner based on the particle swarm optimization algorithm.
  • the embodiment of the present invention also provides a multi-source data-oriented breast tumor image classification and prediction device, including:
  • a multi-source data acquisition unit for acquiring multi-source data of the breast tumor image to be tested, the multi-source data including labeled image data and unlabeled image data;
  • the semi-supervised training unit is used to perform semi-supervised training on the teacher-student network segmentation model of the preset encoder-decoder structure using the acquired multi-source data of the breast tumor image to be tested to obtain the ROI segmentation result;
  • the encoder-decoder structure is composed of an encoder that extracts image feature vectors layer by layer from image sample features and a decoder that reconstructs the image by receiving information from each layer of the encoder and finally outputs the ROI segmentation result;
  • a first feature fusion prediction unit, used to input the obtained ROI segmentation result into the encoder part of the teacher-student network segmentation model to obtain a deep-learning image feature vector, to perform feature extraction on the obtained ROI segmentation result to obtain an image geometric texture feature vector, to further combine these with the non-image data vector obtained by preprocessing non-image data, and to use the pre-trained cascade random forest classification model to perform classification prediction and obtain the corresponding classification result;
  • the classification result is malignant or benign.
  • the second feature fusion prediction unit is used for combining the image geometric texture feature vector and the image feature vector, using a preset trained ki-67 index regression prediction model to perform prediction, and obtain a ki-67 index prediction result.
  • the decoder is composed of a plurality of sub-decoders, and is in one-to-one correspondence with multi-source data.
  • the present invention addresses the problems that breast tumor image data differ to a certain extent because they come from varied and heterogeneous sources with different imaging protocols and scanners, and that the expertise required for manual annotation leaves a large number of images unlabeled, so that little directly usable data is available.
  • a semi-supervised teacher-student network framework combined with multi-source data transfer and sharing is proposed for the first time, making full use of labeled data and unlabeled data, so as to solve the problems of insufficient generalization of a single-data-source model and of poor model performance caused by simply mixing multi-source data, thereby mitigating the heterogeneity between datasets from different sources and enabling breast tumor image classification and prediction on multi-source data;
  • the present invention innovatively proposes to fuse three sets of vectors, namely the automatically extracted image feature vector, the image geometric texture feature vector, and the quantized and regularized non-image data, and to input them into the cascade random forest classification model, so as to realize classification tasks such as benign versus malignant, as well as a non-invasive prediction task that substitutes for the Ki-67 indicator through the image feature vector;
  • the present invention studies acceleration strategies and optimization algorithms to reduce the time spent in network training and to speed up the search for the minimum of the loss function, thereby obtaining the optimal model parameters faster.
  • FIG. 1 is a flowchart of a multi-source data-oriented breast tumor image classification and prediction method provided by an embodiment of the present invention
  • FIG. 2 is a schematic structural diagram of a cascade random forest classification model in a multi-source data-oriented breast tumor image classification and prediction method provided by an embodiment of the present invention
  • FIG. 3 is an application scenario diagram of a multi-source data-oriented breast tumor image classification and prediction method provided by an embodiment of the present invention
  • FIG. 4 is a schematic structural diagram of a multi-source data-oriented breast tumor image classification and prediction apparatus according to an embodiment of the present invention.
  • a multi-source data-oriented breast tumor image classification and prediction method is provided, and the method includes the following steps:
  • Step S1 acquiring multi-source data of the breast tumor image to be tested, the multi-source data including labeled image data and unlabeled image data;
  • the specific process is that after the multi-source data of the breast tumor image to be tested is grouped, multiple sets of labeled image data and unlabeled image data are obtained, and further preprocessing, such as image enhancement, is performed.
  • Step S2 using the acquired multi-source data of the breast tumor image to be tested, perform semi-supervised training on the teacher-student network segmentation model of the preset codec structure, and obtain the ROI segmentation result;
  • the encoder-decoder structure is composed of an encoder that extracts image feature vectors layer by layer from image sample features and a decoder that reconstructs the image by receiving information from each layer of the encoder and finally outputs the ROI segmentation result;
  • the specific process is to select a general segmentation model to build the teacher-student network and to use the encoder-decoder structure to perform operations such as convolution, pooling, upsampling, and deconvolution; the encoder extracts image feature vectors layer by layer from the image sample features,
  • while the decoder reconstructs the image by receiving information from each layer of the encoder and finally outputs the ROI (region of interest) segmentation result.
  • the decoder is composed of multiple sub-decoders that correspond one-to-one to the multi-source data; after multiple iterative updates, the output of the teacher network is taken as the ROI segmentation result.
  • the teacher network model is expressed as f(x; θ′),
  • and the student network model as f(x + η; θ), where η is noise and θ, θ′ are the model parameters.
  • the decoder module derives m + n sub-decoders, and each group of data is trained separately on its own branch of the student network;
  • the loss function of each branch is set as a weighted loss, where λ is used to adjust the weight of the loss.
  • the concrete computation uses the common cross-entropy loss function.
  • the second step is to train the sub-decoders of the student network by minimizing the loss function and to update the corresponding teacher-network model parameters according to an iterative update formula.
  • the third step is to transfer the multi-source knowledge of each branch to the total decoder for sharing; the transfer loss function is computed from the pixel values of the probability map output by the total decoder and the pixel values of the binary maps output by each branch sub-decoder, over the total number of pixels.
  • the fourth step is to minimize the transfer loss to complete the knowledge transfer from the sub-branches and synthesize an overall decoder, with a tuning parameter weighting the Dice loss between the ground-truth labels and the total decoder predictions; the combined loss function is then computed to improve the performance of semi-supervised segmentation with the teacher-student network.
  • the NB (Neutrosophic Boosting) strategy draws on the idea of the neutrosophic set (NS), which is used to determine the number of reinforcement training passes.
  • NSS is defined as the similarity between the actual performance and the ideal state, and a similarity score is calculated for the i-th batch.
  • the number of reinforcement training passes depends on the previous NSS value. The process is as follows: 1) the training samples are divided into batches; 2) after one round of training, the batches with poor performance are identified; 3) the poorly performing batches receive multiple passes of reinforcement training according to their NSS values; 4) return to step 2 until the iterations are complete.
  • the network is trained by minimizing the loss function to determine the optimal hyperparameters.
  • the particle swarm optimization algorithm is a population-based optimization method inspired by the foraging of bird flocks; it simulates the birds in a flock as massless particles with two attributes, velocity and position. Each particle searches the solution space independently and records its best position as the current individual extremum pBest. Each individual extremum is shared with the other particles in the swarm to determine the current global optimum gBest.
  • each particle adjusts its velocity and position according to the current global optimum, where n is the total number of particles, rand is a random number in (0, 1), and c1, c2 are the learning factors; by updating in this way, the swarm continuously approaches the optimal solution.
  • the particle swarm optimization algorithm is introduced into the network training for breast tumor classification and segmentation to explore faster convergence.
  • Step S3 input the obtained ROI segmentation result into the encoder part in the teacher-student network segmentation model, obtain the image feature vector of deep learning, and perform feature extraction on the obtained ROI segmentation result to obtain the image geometric texture feature vector, and further combine the non-image data vector obtained by preprocessing of non-image data, and use the pre-trained cascade random forest classification model to perform classification prediction to obtain the corresponding classification result; wherein, the classification result is malignant or benign .
  • the specific process is: first, input the obtained ROI segmentation result into the encoder part of the teacher-student network segmentation model to obtain the image feature vector of deep learning;
  • since the Gabor filter can achieve optimal localization in both the spatial and frequency domains, it describes well the local structural information corresponding to spatial scale, spatial location, and orientation selectivity. Therefore, features based on the Gabor direction field, Gabor response amplitude, expected direction, and original gray values can be extracted from the obtained ROI segmentation result and synthesized.
  • an amplitude-weighted spiculation direction divergence measure is synthesized over the region, where A is the magnitude of the Gabor amplitude response, θ is the Gabor direction, and φ is the expected direction; weighting by A reduces the influence of unstructured pixels in local areas.
  • texture features such as the mean, standard deviation, energy, entropy, and contrast can also be extracted from the ROI segmentation result based on methods such as the curvelet transform: after the curvelet transform, one coarse-scale subband and several fine-scale subbands are obtained, and the above texture features are extracted from these subbands.
  • the non-image data including age, immunohistochemistry and other information is processed by quantization, regularization, etc., to form a set of feature vectors, which are used as non-image data vectors.
  • image feature vector, image geometric texture feature vector and non-image data vector are fused by nonlinear dimension reduction using t-SNE method to appropriately reduce the computational cost.
  • the cascade forest has N + 1 layers in total, where N is an integer multiple of 3, and each layer consists of k random forests and k completely random forests.
  • after the image feature vector is input into the first layer of random forests, multiple class probability vectors are obtained; these class probability vectors are concatenated with the image feature vector and used as the input of the next layer of random forests, which outputs further class probability vectors; these are concatenated with the image geometric texture feature vector and input into the third layer of random forests, and so on, looping until the final class decision is obtained.
  • the cascade random forest classification model is trained on historical examples of the above three kinds of feature vectors, and its convergence is accelerated based on the particle swarm optimization algorithm (see step S2).
  • the non-invasive prediction task of replacing the Ki-67 index can also be implemented by the image feature vector. Therefore, the method further includes:
  • the pre-trained ki-67 index regression prediction model is used for prediction, and the ki-67 index prediction result is obtained.
  • the model parameters of the previous stage are retained as the initialization parameters
  • the ROI is input to the encoder structure in the teacher network
  • the branch 1 combines the obtained feature vector with the manually extracted image geometric texture features to realize the prediction task of replacing the Ki-67 indicator
  • the second branch additionally fuses non-image data including age, menopausal status, immunohistochemistry and other information to achieve the classification task of benign and malignant tumors and specific lesions.
  • a multi-source data-oriented breast tumor image classification and prediction device including:
  • a multi-source data acquisition unit 110 configured to acquire multi-source data of the breast tumor image to be tested, the multi-source data including labeled image data and unlabeled image data;
  • the semi-supervised training unit 120 is used to perform semi-supervised training on the teacher-student network segmentation model of the preset encoder-decoder structure using the acquired multi-source data of the breast tumor image to be tested to obtain the ROI segmentation result;
  • the encoder-decoder structure consists of an encoder that extracts image feature vectors layer by layer from image sample features and a decoder that reconstructs the image by receiving information from each layer of the encoder and finally outputs the ROI segmentation result;
  • the first feature fusion prediction unit 130 is used to input the obtained ROI segmentation result into the encoder part of the teacher-student network segmentation model to obtain a deep-learning image feature vector, to perform feature extraction on the obtained ROI segmentation result to obtain an image geometric texture feature vector, to further combine these with the non-image data vector obtained by preprocessing non-image data, and to use the pre-trained cascade random forest classification model to perform classification prediction and obtain the corresponding classification result;
  • the classification result is malignant or benign.
  • the second feature fusion prediction unit is used for combining the image geometric texture feature vector and the image feature vector, using a preset trained ki-67 index regression prediction model to perform prediction, and obtain a ki-67 index prediction result.
  • the decoder is composed of a plurality of sub-decoders, and is in one-to-one correspondence with multi-source data.
  • the present invention addresses the problems that breast tumor image data differ to a certain extent because they come from varied and heterogeneous sources with different imaging protocols and scanners, and that the expertise required for manual annotation leaves a large number of images unlabeled, so that little directly usable data is available.
  • a semi-supervised teacher-student network framework combined with multi-source data transfer and sharing is proposed for the first time, making full use of labeled data and unlabeled data, so as to solve the problems of insufficient generalization of a single-data-source model and of poor model performance caused by simply mixing multi-source data, thereby mitigating the heterogeneity between datasets from different sources and enabling breast tumor image classification and prediction on multi-source data;
  • the present invention innovatively proposes to fuse three sets of vectors, namely the automatically extracted image feature vector, the image geometric texture feature vector, and the quantized and regularized non-image data, and to input them into the cascade random forest classification model, so as to realize classification tasks such as benign versus malignant, as well as a non-invasive prediction task that substitutes for the Ki-67 indicator through the image feature vector;
  • the present invention studies acceleration strategies and optimization algorithms to reduce the time spent in network training and to speed up the search for the minimum of the loss function, thereby obtaining the optimal model parameters faster.
  • the units included are divided only according to functional logic, but the division is not limited to the above as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for the convenience of distinguishing them from each other and are not intended to limit the protection scope of the present invention.

Abstract

The present invention provides a multi-source data-oriented breast tumor image classification and prediction method. Multi-source data of a breast tumor image to be tested are acquired, including labeled image data and unlabeled image data; the multi-source data are used to perform semi-supervised training of a preset teacher-student network segmentation model with an encoder-decoder structure, obtaining an ROI segmentation result; feature extraction is performed on the ROI segmentation result to obtain an image geometric texture feature vector, and the ROI segmentation result is input into the encoder part of the teacher-student network segmentation model to obtain a deep-learning image feature vector; combined with a non-image data vector obtained by preprocessing non-image data, a pre-trained cascade random forest classification model is used for classification prediction, and the corresponding classification result, malignant or benign, is obtained. Implementing the present invention can mitigate the heterogeneity between datasets from different sources and realize breast tumor image classification and prediction on multi-source data.

Description

A multi-source data-oriented breast tumor image classification and prediction method and apparatus
Technical Field
The present invention relates to the technical field of medical image processing, and in particular to a multi-source data-oriented breast tumor image classification and prediction method and apparatus.
Background Art
At present, using computer image technology to perform classification and prediction tasks on breast tumor images (or other medical images) is of great research significance.
In recent years, artificial intelligence has been widely applied to medical data, including breast tumor images. For example, Marwa et al. proposed a fuzzy-contour segmentation method that first determines the region of interest (ROI), forms a fuzzy contour, and then refines it into the final segmentation with the Chan-Vese model. As another example, researchers have proposed a kernel-based fuzzy C-means clustering method, KFCM, which can segment masses in mammography images. In addition, a study describes a benign/malignant classification method for breast ultrasound images, which uses a biclustering mining algorithm to screen BI-RADS features and combines classifiers from different feature spaces through AdaBoost ensemble learning. However, the above methods are all traditional machine learning methods and require rather tedious operations such as manual feature extraction and selection. These steps are omitted in the deep learning domain.
To address the problems of traditional machine learning methods, deep learning methods, generally based on convolutional neural networks, have been proposed. For example, researchers proposed a v3_DCNN architecture that first uses the inception_v3 model to pre-select tumor regions and then uses the semantic segmentation model DCNN to precisely segment tumors in breast pathology images. As another example, Wang et al. proposed an MNPNet model that encodes context, detail, and semantic information through a multi-layer nested pyramid structure to finally achieve segmentation.
Meanwhile, for classification, a model combining a fully convolutional network (FCN) and a bidirectional long short-term memory network (Bi-LSTM) has been used by researchers to classify breast cancer in pathology images; another model that likewise combines two networks is the DBN-NN classification model, which connects an unsupervised deep belief network to a supervised neural network trained with the Levenberg-Marquardt algorithm to achieve breast cancer classification.
Although the application of artificial intelligence has brought better accuracy to the classification, segmentation, and prediction of medical images, as a data-driven approach it relies heavily on the quality and quantity of labeled datasets, and collecting and annotating high-quality data from a single source is a considerable challenge in the medical imaging field. It is therefore necessary to explore deep learning methods based on limited labels and multi-source data, and to propose a multi-source data-oriented deep learning method for breast tumor image classification and prediction.
Technical Problem
The technical problem to be solved by the embodiments of the present invention is to provide a multi-source data-oriented breast tumor image classification and prediction method and apparatus that can mitigate the heterogeneity between datasets from different sources and realize breast tumor image classification and prediction on multi-source data.
Technical Solution
To solve the above technical problem, an embodiment of the present invention provides a multi-source data-oriented breast tumor image classification and prediction method, the method comprising the following steps:
S1: acquiring multi-source data of a breast tumor image to be tested, the multi-source data including labeled image data and unlabeled image data;
S2: using the acquired multi-source data of the breast tumor image to be tested, performing semi-supervised training of a preset teacher-student network segmentation model with an encoder-decoder structure to obtain an ROI segmentation result; wherein the encoder-decoder structure is composed of an encoder that extracts image feature vectors layer by layer from image sample features and a decoder that reconstructs the image by receiving information from each layer of the encoder and finally outputs the ROI segmentation result;
S3: inputting the obtained ROI segmentation result into the encoder part of the teacher-student network segmentation model to obtain a deep-learning image feature vector, performing feature extraction on the obtained ROI segmentation result to obtain an image geometric texture feature vector, further combining these with a non-image data vector obtained by preprocessing non-image data, and using a pre-trained cascade random forest classification model to perform classification prediction to obtain the corresponding classification result; wherein the classification result is malignant or benign.
The method further comprises:
combining the image geometric texture feature vector with the image feature vector and using a pre-trained Ki-67 index regression prediction model to perform prediction, obtaining a Ki-67 index prediction result.
Feature extraction is performed on the ROI segmentation result based on the Gabor direction field, the Gabor response amplitude, the expected direction, and the original gray values, and the mean, standard deviation, energy, entropy, and contrast are extracted based on the curvelet transform, so as to obtain the image geometric texture feature vector.
The non-image data are quantized and regularized to obtain the non-image data vector.
The teacher-student network segmentation model with the encoder-decoder structure is trained in a semi-supervised manner based on the particle swarm optimization algorithm.
An embodiment of the present invention also provides a multi-source data-oriented breast tumor image classification and prediction apparatus, comprising:
a multi-source data acquisition unit for acquiring multi-source data of a breast tumor image to be tested, the multi-source data including labeled image data and unlabeled image data;
a semi-supervised training unit for using the acquired multi-source data of the breast tumor image to be tested to perform semi-supervised training of a preset teacher-student network segmentation model with an encoder-decoder structure to obtain an ROI segmentation result; wherein the encoder-decoder structure is composed of an encoder that extracts image feature vectors layer by layer from image sample features and a decoder that reconstructs the image by receiving information from each layer of the encoder and finally outputs the ROI segmentation result;
a first feature fusion prediction unit for inputting the obtained ROI segmentation result into the encoder part of the teacher-student network segmentation model to obtain a deep-learning image feature vector, performing feature extraction on the obtained ROI segmentation result to obtain an image geometric texture feature vector, further combining these with a non-image data vector obtained by preprocessing non-image data, and using a pre-trained cascade random forest classification model to perform classification prediction to obtain the corresponding classification result; wherein the classification result is malignant or benign.
The apparatus further comprises:
a second feature fusion prediction unit for combining the image geometric texture feature vector with the image feature vector and using a pre-trained Ki-67 index regression prediction model to perform prediction, obtaining a Ki-67 index prediction result.
The decoder is composed of multiple sub-decoders that correspond one-to-one to the multi-source data.
Beneficial Effects
Implementing the embodiments of the present invention has the following beneficial effects:
1. The present invention addresses the problems that breast tumor image data differ to a certain extent because they come from varied and heterogeneous sources with different imaging protocols and scanners, and that the expertise required for manual annotation leaves a large number of images unlabeled, so that little directly usable data is available. It proposes, for the first time, a semi-supervised teacher-student network framework combined with multi-source data transfer and sharing, which makes full use of labeled and unlabeled data, thereby solving the problems of insufficient generalization of a single-data-source model and of poor performance caused by simply mixing multi-source data, mitigating the heterogeneity between datasets from different sources, and realizing breast tumor image classification and prediction on multi-source data;
2. The present invention innovatively proposes to fuse three sets of vectors, namely the automatically extracted image feature vector, the image geometric texture feature vector, and the quantized and regularized non-image data, and to feed them into a cascade random forest classification model, so as to realize classification tasks such as benign versus malignant, and to realize, through the image feature vector, a non-invasive prediction task that substitutes for the Ki-67 indicator;
3. The present invention studies acceleration strategies and optimization algorithms to reduce the time spent in network training and to speed up the search for the minimum of the loss function, thereby obtaining the optimal model parameters faster.
Description of the Drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and other drawings derived from them by a person of ordinary skill in the art without creative effort still fall within the scope of the present invention.
FIG. 1 is a flowchart of a multi-source data-oriented breast tumor image classification and prediction method provided by an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of the cascade random forest classification model in a multi-source data-oriented breast tumor image classification and prediction method provided by an embodiment of the present invention;
FIG. 3 is an application scenario diagram of a multi-source data-oriented breast tumor image classification and prediction method provided by an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a multi-source data-oriented breast tumor image classification and prediction apparatus provided by an embodiment of the present invention.
Best Mode for Carrying Out the Invention
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings.
As shown in FIG. 1, an embodiment of the present invention provides a multi-source data-oriented breast tumor image classification and prediction method, the method comprising the following steps:
Step S1: acquiring multi-source data of a breast tumor image to be tested, the multi-source data including labeled image data and unlabeled image data.
The specific process is that, after the multi-source data of the breast tumor image to be tested are grouped, multiple groups of labeled image data and unlabeled image data are obtained and further preprocessed, for example by image enhancement.
Step S2: using the acquired multi-source data of the breast tumor image to be tested, performing semi-supervised training of a preset teacher-student network segmentation model with an encoder-decoder structure to obtain an ROI segmentation result; wherein the encoder-decoder structure is composed of an encoder that extracts image feature vectors layer by layer from image sample features and a decoder that reconstructs the image by receiving information from each layer of the encoder and finally outputs the ROI segmentation result.
The specific process is to select a general segmentation model to build the teacher-student network and to use the encoder-decoder structure to perform operations such as convolution, pooling, upsampling, and deconvolution; the encoder extracts image feature vectors layer by layer from the image sample features, while the decoder reconstructs the image by receiving information from each layer of the encoder and finally outputs the ROI (region of interest) segmentation result. Meanwhile, the decoder is composed of multiple sub-decoders that correspond one-to-one to the multi-source data; after multiple iterative updates, the output of the teacher network is taken as the ROI segmentation result.
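The following is a minimal, hypothetical PyTorch sketch of the kind of encoder-decoder segmentation backbone described above, with one shared encoder and several sub-decoders (one per data source). The layer widths, the UNet-style skip connections, and the class names are illustrative assumptions rather than the patented architecture.

```python
# Minimal sketch (assumed layout): shared encoder, one sub-decoder per data source.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class Encoder(nn.Module):
    """Extracts image feature vectors layer by layer; the per-layer features feed the decoder."""
    def __init__(self, channels=(1, 16, 32, 64)):
        super().__init__()
        self.blocks = nn.ModuleList(conv_block(a, b) for a, b in zip(channels[:-1], channels[1:]))
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        feats = []
        for i, blk in enumerate(self.blocks):
            x = blk(x)
            feats.append(x)
            if i < len(self.blocks) - 1:      # no pooling after the deepest block
                x = self.pool(x)
        return feats

class SubDecoder(nn.Module):
    """One decoder branch; receives encoder features and outputs an ROI probability map."""
    def __init__(self, channels=(64, 32, 16)):
        super().__init__()
        self.ups = nn.ModuleList(
            nn.ConvTranspose2d(a, b, 2, stride=2) for a, b in zip(channels[:-1], channels[1:]))
        self.blocks = nn.ModuleList(conv_block(2 * b, b) for b in channels[1:])
        self.head = nn.Conv2d(channels[-1], 1, 1)

    def forward(self, feats):
        x = feats[-1]
        for up, blk, skip in zip(self.ups, self.blocks, feats[-2::-1]):
            x = blk(torch.cat([up(x), skip], dim=1))   # upsample and fuse the skip feature
        return torch.sigmoid(self.head(x))

class SegNet(nn.Module):
    """Teacher or student network: shared encoder plus one sub-decoder per source."""
    def __init__(self, num_sources):
        super().__init__()
        self.encoder = Encoder()
        self.sub_decoders = nn.ModuleList(SubDecoder() for _ in range(num_sources))

    def forward(self, x, source_id):
        return self.sub_decoders[source_id](self.encoder(x))
```

A teacher and a student would each be an instance of `SegNet`, with the teacher's output taken as the ROI segmentation result after the iterative updates described next.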
In one embodiment, the multi-source data D are divided into two groups, labeled data S = {S1, S2, ..., Sm} and unlabeled data U = {U1, U2, ..., Un}, and the goal is to train the teacher-student network using the data D = S ∪ U. The teacher network model is then expressed as f(x; θ′) and the student network model as f(x + η; θ), where η is noise and θ, θ′ are the model parameters.
In the first step, considering the differences among medical images caused by different scanners and imaging protocols, and in order to make fuller use of the multi-source data, the decoder module is expanded into m + n sub-decoders and each group of data is trained separately; the loss function of each branch of the student network is set as a weighted loss, with λ used to adjust the weight of the loss. The concrete computation uses the common cross-entropy loss function.
In the second step, the sub-decoders of the student network are trained by minimizing the loss function, and the model parameters of the corresponding teacher network are updated according to an iterative update formula.
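A hedged sketch of one training step for a single source branch follows, reusing the `SegNet` interface from the previous sketch. The patent's exact branch loss and teacher update formula are not reproduced in this text, so a binary cross-entropy supervised term, a λ-weighted consistency term on unlabeled data, and an exponential-moving-average (mean-teacher style) update are assumed stand-ins.

```python
# Sketch of one training step for one source branch; EMA is an ASSUMED stand-in
# for the unspecified iterative teacher-update formula.
import torch
import torch.nn.functional as F

def train_branch_step(student, teacher, optimizer, x_lab, y_lab, x_unlab,
                      source_id, lam=0.5, ema_decay=0.99, noise_std=0.05):
    student.train()
    # Supervised cross-entropy on the labeled images of this source.
    pred_lab = student(x_lab, source_id)
    loss = F.binary_cross_entropy(pred_lab, y_lab)

    # Consistency between the noisy student and the teacher on unlabeled images.
    with torch.no_grad():
        target = teacher(x_unlab, source_id)
    pred_unlab = student(x_unlab + noise_std * torch.randn_like(x_unlab), source_id)
    loss = loss + lam * F.mse_loss(pred_unlab, target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Update the teacher parameters from the student (assumed EMA update).
    with torch.no_grad():
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(ema_decay).add_(p_s, alpha=1.0 - ema_decay)
    return loss.item()
```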
In the third step, the multi-source knowledge of each branch is transferred to the total decoder for sharing. The transfer loss function is computed from the pixel values of the probability map output by the total decoder and the pixel values of the binary maps output by each branch sub-decoder, over the total number of pixels.
In the fourth step, the transfer loss is minimized to complete the knowledge transfer from the sub-branches and to synthesize one overall decoder; a tuning parameter weights the Dice loss between the ground-truth labels and the prediction of the total decoder. The combined loss function of these terms is then computed to improve the performance of semi-supervised segmentation with the teacher-student network.
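Below is a hedged sketch of the kinds of losses named in the third and fourth steps. The concrete transfer-loss formula is not reproduced in this text, so a per-pixel binary cross-entropy between the total decoder's probability map and each branch's binarized output is assumed; `alpha` plays the role of the tuning parameter weighting the Dice term.

```python
# Hedged sketch of the transfer, Dice, and combined losses (assumed concrete forms).
import torch
import torch.nn.functional as F

def dice_loss(prob, target, eps=1e-6):
    """Dice loss between a predicted probability map and the ground-truth mask."""
    inter = (prob * target).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

def transfer_loss(total_prob, branch_probs, threshold=0.5):
    """Average per-pixel BCE between the total decoder output and each branch's binary map."""
    losses = []
    for bp in branch_probs:
        binary = (bp.detach() > threshold).float()   # binary map output by a sub-decoder
        losses.append(F.binary_cross_entropy(total_prob, binary))
    return torch.stack(losses).mean()

def combined_loss(total_prob, branch_probs, ground_truth, alpha=0.5):
    """Transfer term plus an alpha-weighted Dice term against the ground-truth labels."""
    return transfer_loss(total_prob, branch_probs) + alpha * dice_loss(total_prob, ground_truth)
```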
It should be noted that, because an overly complex network with too many parameters slows down training, some strategies need to be applied to accelerate the training process. The NB (Neutrosophic Boosting) strategy draws on the idea of the neutrosophic set (NS) and uses NS to determine the number of reinforcement training passes.
The training accuracy of each batch is recorded, and the mean of these accuracies within one iteration is computed. NSS is defined as the similarity between the actual performance and the ideal state, and a similarity score is calculated for the i-th batch. The number of reinforcement training passes depends on the previous NSS value, and the procedure is as follows: (1) the training samples are divided into batches; (2) after one round of training, the batches with poor performance are identified; (3) the poorly performing batches receive multiple passes of reinforcement training according to their NSS values; (4) return to step (2) until the iterations are complete.
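The exact NSS formula is not reproduced in the text above; the following sketch assumes a simple score (batch accuracy relative to an ideal value of 1.0) and schedules extra passes for batches that fall below the iteration mean, purely as an illustration of the procedure.

```python
# Illustrative sketch of NSS-guided reinforcement training; the scoring rule is ASSUMED.
import math

def nss_scores(batch_accuracies, ideal=1.0):
    return [acc / ideal for acc in batch_accuracies]

def reinforcement_schedule(batch_accuracies, max_extra_passes=3):
    """Return the number of extra training passes per batch: lower NSS -> more reinforcement."""
    scores = nss_scores(batch_accuracies)
    mean_score = sum(scores) / len(scores)
    extra = []
    for s in scores:
        if s >= mean_score:
            extra.append(0)                                        # batch already performs well
        else:
            extra.append(min(max_extra_passes, math.ceil((mean_score - s) * 10)))
    return extra

# Example: after one round of training
accs = [0.93, 0.78, 0.90, 0.64]
print(reinforcement_schedule(accs))   # [0, 1, 0, 2] extra passes for the weaker batches
```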
The network is trained by minimizing the loss function to determine the optimal hyperparameters. The particle swarm optimization algorithm is a population-based optimization method inspired by the foraging of bird flocks: massless particles with two attributes, velocity and position, are used to simulate the birds in a flock. Each particle searches the solution space independently and records its best position as the current individual extremum pBest; each individual extremum is shared with the other particles in the swarm to determine the current global optimum gBest. Each particle then adjusts its velocity v_i = v_i + c1·rand·(pBest_i − x_i) + c2·rand·(gBest − x_i) and position x_i = x_i + v_i according to the current global optimum, where i = 1, ..., n, n is the total number of particles, rand is a random number in (0, 1), and c1, c2 are the learning factors; by updating in this way, the swarm continuously approaches the optimal solution.
Therefore, the particle swarm optimization algorithm is introduced into the network training for breast tumor classification and segmentation to explore a faster convergence speed.
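A minimal sketch of the canonical particle swarm update written above follows; the objective is a placeholder standing in for the network's validation loss over a hyperparameter vector, and the bounds and constants are illustrative assumptions.

```python
# Minimal PSO sketch following v_i = v_i + c1*rand*(pBest_i - x_i) + c2*rand*(gBest - x_i),
# x_i = x_i + v_i. The objective below is a placeholder for the real validation loss.
import random

def pso_minimize(objective, dim, n_particles=20, iters=50, c1=2.0, c2=2.0, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pbest_val = [objective(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = list(pbest[g]), pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] += (c1 * random.random() * (pbest[i][d] - xs[i][d])
                             + c2 * random.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            val = objective(xs[i])
            if val < pbest_val[i]:                 # update the individual extremum pBest
                pbest[i], pbest_val[i] = list(xs[i]), val
                if val < gbest_val:                # share it to update the global optimum gBest
                    gbest, gbest_val = list(xs[i]), val
    return gbest, gbest_val

# Example: minimize a toy quadratic "loss" over two hyperparameters
best, val = pso_minimize(lambda x: sum(t * t for t in x), dim=2)
print(best, val)
```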
Step S3: inputting the obtained ROI segmentation result into the encoder part of the teacher-student network segmentation model to obtain a deep-learning image feature vector, performing feature extraction on the obtained ROI segmentation result to obtain an image geometric texture feature vector, further combining these with a non-image data vector obtained by preprocessing non-image data, and using a pre-trained cascade random forest classification model to perform classification prediction to obtain the corresponding classification result; wherein the classification result is malignant or benign.
The specific process is as follows. First, the obtained ROI segmentation result is input into the encoder part of the teacher-student network segmentation model to obtain the deep-learning image feature vector.
Second, since the Gabor filter can achieve optimal localization in both the spatial and frequency domains, it describes well the local structural information corresponding to spatial scale, spatial location, and orientation selectivity. Therefore, multiple features based on the Gabor direction field, the Gabor response amplitude, the expected direction, and the original gray values can be extracted from the obtained ROI segmentation result and synthesized into an amplitude-weighted spiculation direction divergence measure over the region, where A is the magnitude of the Gabor amplitude response, θ is the Gabor direction, and φ is the expected direction; weighting by A reduces the influence of unstructured pixels in local regions. At the same time, texture features such as the mean, standard deviation, energy, entropy, and contrast can be extracted from the ROI segmentation result based on methods such as the curvelet transform; specifically, after the curvelet transform, the ROI segmentation result yields one coarse-scale subband and several fine-scale subbands, and the above texture features are extracted from these subbands.
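The sketch below illustrates a small Gabor filter bank and the texture statistics named above (mean, standard deviation, energy, entropy, contrast) computed on the response magnitudes of an ROI patch, using OpenCV and NumPy. The patent's amplitude-weighted spiculation-divergence measure and its curvelet subbands are not reproduced here; computing the statistics on Gabor amplitude responses and raw gray values is an assumed stand-in, and a curvelet implementation would require a dedicated library.

```python
# Hedged sketch: Gabor filter bank plus simple texture statistics on an ROI patch.
import cv2
import numpy as np

def gabor_responses(roi, n_orient=8, ksize=21, sigma=4.0, lambd=10.0, gamma=0.5):
    roi = roi.astype(np.float32)
    responses = []
    for k in range(n_orient):
        theta = k * np.pi / n_orient
        kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, psi=0)
        responses.append(cv2.filter2D(roi, cv2.CV_32F, kern))
    return responses                                 # one response map per orientation

def texture_stats(img, bins=64):
    img = img.astype(np.float32)
    hist, _ = np.histogram(img, bins=bins)
    p = hist / max(hist.sum(), 1)
    mean = float(img.mean())
    std = float(img.std())
    energy = float(np.sum(p ** 2))
    entropy = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))
    contrast = float(img.max() - img.min())          # simple range-based contrast
    return [mean, std, energy, entropy, contrast]

def geometric_texture_vector(roi):
    feats = []
    for resp in gabor_responses(roi):
        feats.extend(texture_stats(np.abs(resp)))    # stats of the Gabor amplitude response
    feats.extend(texture_stats(roi))                 # stats of the original gray values
    return np.array(feats, dtype=np.float32)
```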
Next, the non-image data containing information such as age and immunohistochemistry are quantized and regularized to form a set of feature vectors, which are used as the non-image data vector.
Finally, the image feature vector, the image geometric texture feature vector, and the non-image data vector are fused by nonlinear dimensionality reduction using the t-SNE method to appropriately reduce the computational cost.
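A hedged sketch of this fusion step is given below, combining standardization of the tabular non-image data with concatenation of the three feature groups and t-SNE reduction via scikit-learn. The column names (age, menopausal status, Ki-67 immunohistochemistry percentage) and the output dimension are illustrative assumptions.

```python
# Hedged sketch: quantize/standardize non-image data, concatenate the three feature
# groups, and reduce them nonlinearly with t-SNE. Column names are illustrative.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.manifold import TSNE

def preprocess_non_image(age, menopausal, ki67_ihc_percent):
    """Quantize categorical fields and regularize numeric ones into one vector per case."""
    raw = np.column_stack([
        np.asarray(age, dtype=float),
        np.asarray(menopausal, dtype=float),          # assumed already 0/1 encoded
        np.asarray(ki67_ihc_percent, dtype=float),
    ])
    return StandardScaler().fit_transform(raw)

def fuse_features(deep_feats, texture_feats, non_image_feats, out_dim=2, random_state=0):
    """Concatenate the three groups and apply t-SNE to reduce the fused representation."""
    fused = np.concatenate([deep_feats, texture_feats, non_image_feats], axis=1)
    return TSNE(n_components=out_dim, random_state=random_state,
                init="pca", perplexity=30.0).fit_transform(fused)
```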
Prediction is then performed with the pre-trained cascade random forest classification model shown in FIG. 2. The cascade forest has N + 1 layers in total, where N is an integer multiple of 3, and each layer consists of k random forests and k completely random forests. After the image feature vector is input into the first layer of random forests, multiple class probability vectors are obtained; these class probability vectors are concatenated with the image feature vector and used as the input of the next layer of random forests, which outputs further class probability vectors; these are concatenated with the image geometric texture feature vector and input into the third layer of random forests, and so on, looping until the final class decision is obtained.
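The following is a hedged sketch of such a cascade, using scikit-learn's RandomForestClassifier for the random forests and ExtraTreesClassifier as a stand-in for the completely random forests. The order in which feature groups are injected follows the text (image features into the first layer, image features again at the second, geometric texture features at the third, and so on), while the layer count, k, and the final averaging rule are assumptions.

```python
# Hedged sketch of the cascade forest: k random forests + k completely random forests
# per layer (ExtraTreesClassifier used as the stand-in); class-probability vectors are
# concatenated with the next feature group, cycling image -> texture -> non-image.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier

class CascadeForest:
    def __init__(self, n_layers=7, k=2, n_trees=100, random_state=0):
        # n_layers = N + 1 with N a multiple of 3 (e.g. N = 6).
        self.n_layers, self.k, self.n_trees, self.random_state = n_layers, k, n_trees, random_state
        self.layers = []

    def _new_layer(self):
        rs = self.random_state
        return ([RandomForestClassifier(self.n_trees, random_state=rs + i) for i in range(self.k)]
                + [ExtraTreesClassifier(self.n_trees, random_state=rs + 100 + i) for i in range(self.k)])

    def _layer_input(self, probas, feature_groups, layer_idx):
        if probas is None:                                        # first layer: image features only
            return feature_groups[0]
        base = feature_groups[(layer_idx - 1) % len(feature_groups)]
        return np.hstack([probas, base])

    def fit(self, feature_groups, y):
        """feature_groups: [image_feats, texture_feats, non_image_feats] as 2-D arrays."""
        probas = None
        for li in range(self.n_layers):
            forests = self._new_layer()
            x = self._layer_input(probas, feature_groups, li)
            probas = np.hstack([f.fit(x, y).predict_proba(x) for f in forests])
            self.layers.append(forests)
        return self

    def predict(self, feature_groups):
        probas = None
        for li, forests in enumerate(self.layers):
            x = self._layer_input(probas, feature_groups, li)
            probas = np.hstack([f.predict_proba(x) for f in forests])
        # Final decision: average the class-probability vectors of the last layer's forests.
        n_forests = len(self.layers[-1])
        n_classes = probas.shape[1] // n_forests
        avg = probas.reshape(probas.shape[0], n_forests, n_classes).mean(axis=1)
        return avg.argmax(axis=1)
```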
It should be noted that the cascade random forest classification model is trained on historical examples of the above three kinds of feature vectors, and its convergence is accelerated based on the particle swarm optimization algorithm (see step S2).
In the embodiments of the present invention, the image feature vector can also be used to realize a non-invasive prediction task that substitutes for the Ki-67 index. Therefore, the method further comprises:
combining the image geometric texture feature vector with the image feature vector and using a pre-trained Ki-67 index regression prediction model to perform prediction, obtaining a Ki-67 index prediction result.
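The text does not specify the form of the Ki-67 regression model, so the short sketch below uses a random forest regressor over the concatenated deep image features and geometric texture features purely as a placeholder.

```python
# Hedged sketch of the Ki-67 regression step; the regressor choice is an ASSUMPTION.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_ki67_regressor(deep_feats, texture_feats, ki67_values, n_trees=200, random_state=0):
    x = np.hstack([deep_feats, texture_feats])       # fuse deep and geometric texture features
    model = RandomForestRegressor(n_estimators=n_trees, random_state=random_state)
    model.fit(x, ki67_values)
    return model

def predict_ki67(model, deep_feats, texture_feats):
    return model.predict(np.hstack([deep_feats, texture_feats]))
```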
As shown in FIG. 3, the application scenario of a multi-source data-oriented breast tumor image classification and prediction method in an embodiment of the present invention is further explained:
In FIG. 3, in the first stage, after the multi-source data are grouped, multiple groups of labeled image data and unlabeled image data are obtained; after preprocessing, they are input into the teacher-student network with the encoder-decoder structure for semi-supervised training with multi-source knowledge sharing.
In the second stage, the model parameters of the previous stage are retained as initialization parameters and the ROI is input into the encoder structure of the teacher network. Branch one fuses the obtained feature vector with the manually extracted image geometric texture features to realize the prediction task that substitutes for the Ki-67 index; branch two additionally fuses non-image data containing information such as age, menopausal status, and immunohistochemistry to realize the classification of benign versus malignant tumors and of specific lesions.
As shown in FIG. 4, an embodiment of the present invention provides a multi-source data-oriented breast tumor image classification and prediction apparatus, comprising:
a multi-source data acquisition unit 110 for acquiring multi-source data of a breast tumor image to be tested, the multi-source data including labeled image data and unlabeled image data;
a semi-supervised training unit 120 for using the acquired multi-source data of the breast tumor image to be tested to perform semi-supervised training of a preset teacher-student network segmentation model with an encoder-decoder structure to obtain an ROI segmentation result; wherein the encoder-decoder structure is composed of an encoder that extracts image feature vectors layer by layer from image sample features and a decoder that reconstructs the image by receiving information from each layer of the encoder and finally outputs the ROI segmentation result;
a first feature fusion prediction unit 130 for inputting the obtained ROI segmentation result into the encoder part of the teacher-student network segmentation model to obtain a deep-learning image feature vector, performing feature extraction on the obtained ROI segmentation result to obtain an image geometric texture feature vector, further combining these with a non-image data vector obtained by preprocessing non-image data, and using a pre-trained cascade random forest classification model to perform classification prediction to obtain the corresponding classification result; wherein the classification result is malignant or benign.
The apparatus further comprises:
a second feature fusion prediction unit for combining the image geometric texture feature vector with the image feature vector and using a pre-trained Ki-67 index regression prediction model to perform prediction, obtaining a Ki-67 index prediction result.
The decoder is composed of multiple sub-decoders that correspond one-to-one to the multi-source data.
Implementing the embodiments of the present invention has the following beneficial effects:
1. The present invention addresses the problems that breast tumor image data differ to a certain extent because they come from varied and heterogeneous sources with different imaging protocols and scanners, and that the expertise required for manual annotation leaves a large number of images unlabeled, so that little directly usable data is available. It proposes, for the first time, a semi-supervised teacher-student network framework combined with multi-source data transfer and sharing, which makes full use of labeled and unlabeled data, thereby solving the problems of insufficient generalization of a single-data-source model and of poor performance caused by simply mixing multi-source data, mitigating the heterogeneity between datasets from different sources, and realizing breast tumor image classification and prediction on multi-source data;
2. The present invention innovatively proposes to fuse three sets of vectors, namely the automatically extracted image feature vector, the image geometric texture feature vector, and the quantized and regularized non-image data, and to feed them into a cascade random forest classification model, so as to realize classification tasks such as benign versus malignant, and to realize, through the image feature vector, a non-invasive prediction task that substitutes for the Ki-67 indicator;
3. The present invention studies acceleration strategies and optimization algorithms to reduce the time spent in network training and to speed up the search for the minimum of the loss function, thereby obtaining the optimal model parameters faster.
It is worth noting that, in the above apparatus embodiment, the units included are divided only according to functional logic, but the division is not limited to the above as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for the convenience of distinguishing them from each other and are not intended to limit the protection scope of the present invention.
A person of ordinary skill in the art can understand that all or part of the steps in the methods of the above embodiments can be completed by a program instructing the relevant hardware, and the program can be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disk.
What is disclosed above is only a preferred embodiment of the present invention and certainly cannot be used to limit the scope of rights of the present invention; therefore, equivalent changes made according to the claims of the present invention still fall within the scope covered by the present invention.

Claims (8)

  1. A multi-source data-oriented breast tumor image classification and prediction method, characterized in that the method comprises the following steps:
    S1: acquiring multi-source data of a breast tumor image to be tested, the multi-source data including labeled image data and unlabeled image data;
    S2: using the acquired multi-source data of the breast tumor image to be tested, performing semi-supervised training of a preset teacher-student network segmentation model with an encoder-decoder structure to obtain an ROI segmentation result; wherein the encoder-decoder structure is composed of an encoder that extracts image feature vectors layer by layer from image sample features and a decoder that reconstructs the image by receiving information from each layer of the encoder and finally outputs the ROI segmentation result;
    S3: inputting the obtained ROI segmentation result into the encoder part of the teacher-student network segmentation model to obtain a deep-learning image feature vector, performing feature extraction on the obtained ROI segmentation result to obtain an image geometric texture feature vector, further combining these with a non-image data vector obtained by preprocessing non-image data, and using a pre-trained cascade random forest classification model to perform classification prediction to obtain the corresponding classification result; wherein the classification result is malignant or benign.
  2. The multi-source data-oriented breast tumor image classification and prediction method according to claim 1, characterized in that the method further comprises:
    combining the image geometric texture feature vector with the image feature vector and using a pre-trained Ki-67 index regression prediction model to perform prediction, obtaining a Ki-67 index prediction result.
  3. The multi-source data-oriented breast tumor image classification and prediction method according to claim 1, characterized in that feature extraction is performed on the ROI segmentation result based on the Gabor direction field, the Gabor response amplitude, the expected direction, and the original gray values, and the mean, standard deviation, energy, entropy, and contrast are extracted based on the curvelet transform, so as to obtain the image geometric texture feature vector.
  4. The multi-source data-oriented breast tumor image classification and prediction method according to claim 1, characterized in that the non-image data are quantized and regularized to obtain the non-image data vector.
  5. The multi-source data-oriented breast tumor image classification and prediction method according to claim 1, characterized in that the teacher-student network segmentation model with the encoder-decoder structure is trained in a semi-supervised manner based on the particle swarm optimization algorithm.
  6. A multi-source data-oriented breast tumor image classification and prediction apparatus, characterized by comprising:
    a multi-source data acquisition unit for acquiring multi-source data of a breast tumor image to be tested, the multi-source data including labeled image data and unlabeled image data;
    a semi-supervised training unit for using the acquired multi-source data of the breast tumor image to be tested to perform semi-supervised training of a preset teacher-student network segmentation model with an encoder-decoder structure to obtain an ROI segmentation result; wherein the encoder-decoder structure is composed of an encoder that extracts image feature vectors layer by layer from image sample features and a decoder that reconstructs the image by receiving information from each layer of the encoder and finally outputs the ROI segmentation result;
    a first feature fusion prediction unit for inputting the obtained ROI segmentation result into the encoder part of the teacher-student network segmentation model to obtain a deep-learning image feature vector, performing feature extraction on the obtained ROI segmentation result to obtain an image geometric texture feature vector, further combining these with a non-image data vector obtained by preprocessing non-image data, and using a pre-trained cascade random forest classification model to perform classification prediction to obtain the corresponding classification result; wherein the classification result is malignant or benign.
  7. The multi-source data-oriented breast tumor image classification and prediction apparatus according to claim 6, characterized by further comprising:
    a second feature fusion prediction unit for combining the image geometric texture feature vector with the image feature vector and using a pre-trained Ki-67 index regression prediction model to perform prediction, obtaining a Ki-67 index prediction result.
  8. The multi-source data-oriented breast tumor image classification and prediction apparatus according to claim 6, characterized in that the decoder is composed of multiple sub-decoders that correspond one-to-one to the multi-source data.
PCT/CN2021/094088 2021-01-08 2021-05-17 一种面向多源数据的乳腺肿瘤图像分类预测方法及装置 WO2022147940A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110024038.2A CN112734723B (zh) 2021-01-08 2021-01-08 一种面向多源数据的乳腺肿瘤图像分类预测方法及装置
CN202110024038.2 2021-01-08

Publications (1)

Publication Number Publication Date
WO2022147940A1 true WO2022147940A1 (zh) 2022-07-14

Family

ID=75591315

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/094088 WO2022147940A1 (zh) 2021-01-08 2021-05-17 一种面向多源数据的乳腺肿瘤图像分类预测方法及装置

Country Status (2)

Country Link
CN (1) CN112734723B (zh)
WO (1) WO2022147940A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117708706A (zh) * 2024-02-06 2024-03-15 山东未来网络研究院(紫金山实验室工业互联网创新应用基地) 一种端到端特征增强与选择的乳腺肿瘤分类方法及系统

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112734723B (zh) * 2021-01-08 2023-06-30 温州医科大学 一种面向多源数据的乳腺肿瘤图像分类预测方法及装置
CN114581382B (zh) * 2022-02-21 2023-02-21 北京医准智能科技有限公司 一种针对乳腺病灶的训练方法、装置及计算机可读介质
CN115879008B (zh) * 2023-03-02 2023-05-26 中国空气动力研究与发展中心计算空气动力研究所 一种数据融合模型训练方法、装置、设备及存储介质
CN117672463B (zh) * 2024-02-02 2024-04-05 吉林大学 用于放射治疗的数据处理系统及方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108304889A (zh) * 2018-03-05 2018-07-20 南方医科大学 一种基于深度学习的全数字乳腺成像图像放射组学方法
US20180214105A1 (en) * 2017-01-31 2018-08-02 Siemens Healthcare Gmbh System and method breast cancer detection with x-ray imaging
CN111695644A (zh) * 2020-08-10 2020-09-22 华侨大学 基于光密度变换的肿瘤超声图像分类方法、装置及介质
CN112734723A (zh) * 2021-01-08 2021-04-30 温州医科大学 一种面向多源数据的乳腺肿瘤图像分类预测方法及装置

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010063010A2 (en) * 2008-11-26 2010-06-03 Guardian Technologies International Inc. System and method for texture visualization and image analysis to differentiate between malignant and benign lesions
CN108764241A (zh) * 2018-04-20 2018-11-06 平安科技(深圳)有限公司 分割股骨近端的方法、装置、计算机设备和存储介质
CN109009110A (zh) * 2018-06-26 2018-12-18 东北大学 基于mri影像的腋窝淋巴结转移预测系统
CN109615614B (zh) * 2018-11-26 2020-08-18 北京工业大学 基于多特征融合的眼底图像中血管的提取方法与电子设备
CN110428426A (zh) * 2019-07-02 2019-11-08 温州医科大学 一种基于改进随机森林算法的mri图像自动分割方法
CN110533683B (zh) * 2019-08-30 2022-04-29 东南大学 一种融合传统特征与深度特征的影像组学分析方法
CN110766670A (zh) * 2019-10-18 2020-02-07 厦门粉红思黛医学科技有限公司 一种基于深度卷积神经网络的乳腺钼靶图像肿瘤定位算法
CN111563897B (zh) * 2020-04-13 2024-01-05 北京理工大学 基于弱监督学习的乳腺核磁影像肿瘤分割的方法及装置
CN112150478B (zh) * 2020-08-31 2021-06-22 温州医科大学 一种构建半监督图像分割框架的方法及系统

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180214105A1 (en) * 2017-01-31 2018-08-02 Siemens Healthcare Gmbh System and method breast cancer detection with x-ray imaging
CN108304889A (zh) * 2018-03-05 2018-07-20 南方医科大学 一种基于深度学习的全数字乳腺成像图像放射组学方法
CN111695644A (zh) * 2020-08-10 2020-09-22 华侨大学 基于光密度变换的肿瘤超声图像分类方法、装置及介质
CN112734723A (zh) * 2021-01-08 2021-04-30 温州医科大学 一种面向多源数据的乳腺肿瘤图像分类预测方法及装置

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117708706A (zh) * 2024-02-06 2024-03-15 山东未来网络研究院(紫金山实验室工业互联网创新应用基地) 一种端到端特征增强与选择的乳腺肿瘤分类方法及系统

Also Published As

Publication number Publication date
CN112734723B (zh) 2023-06-30
CN112734723A (zh) 2021-04-30

Similar Documents

Publication Publication Date Title
WO2022147940A1 (zh) 一种面向多源数据的乳腺肿瘤图像分类预测方法及装置
CN111191660B (zh) 一种基于多通道协同胶囊网络的结肠癌病理学图像分类方法
Wang et al. Automatic recognition of mild cognitive impairment and alzheimers disease using ensemble based 3d densely connected convolutional networks
Chen et al. Self-supervised noisy label learning for source-free unsupervised domain adaptation
Xing et al. Dynamic image for 3d mri image alzheimer’s disease classification
Jin et al. Deep learning-based framework for expansion, recognition and classification of underwater acoustic signal
CN111242288B (zh) 一种用于病变图像分割的多尺度并行深度神经网络模型构建方法
Cao et al. Training vision transformers with only 2040 images
Tsai et al. Deep learning of topological phase transitions from entanglement aspects
Wan et al. Generative adversarial multi-task learning for face sketch synthesis and recognition
Gehlot et al. Ednfc-net: Convolutional neural network with nested feature concatenation for nuclei-instance segmentation
CN114176607B (zh) 一种基于视觉Transformer的脑电信号分类方法
Wang et al. Aircraft image recognition network based on hybrid attention mechanism
KR20210095671A (ko) 이미지 처리 방법 및 관련 장치
Menaka et al. Chromenet: A CNN architecture with comparison of optimizers for classification of human chromosome images
CN114240955A (zh) 一种半监督的跨领域自适应的图像分割方法
Zhang et al. A small target detection method based on deep learning with considerate feature and effectively expanded sample size
Moataz et al. Skin cancer diseases classification using deep convolutional neural network with transfer learning model
Hu et al. A novel framework of CNN integrated with AdaBoost for remote sensing scene classification
CN115047423A (zh) 基于对比学习无监督预训练-微调式的雷达目标识别方法
Li et al. Classification of Alzheimer’s disease in MRI images using knowledge distillation framework: an investigation
El Alaoui et al. Deep stacked ensemble for breast cancer diagnosis
CN116912253B (zh) 基于多尺度混合神经网络的肺癌病理图像分类方法
CN116580225A (zh) 一种基于空间信息驱动的直肠癌ct图像分类方法
Serpa et al. Milestones and new frontiers in deep learning

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21916990

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21916990

Country of ref document: EP

Kind code of ref document: A1