CN111832625B - Full-scan image analysis method and system based on weak supervised learning - Google Patents


Info

Publication number
CN111832625B
CN111832625B
Authority
CN
China
Prior art keywords
image
full
scan
scan image
image block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010560283.0A
Other languages
Chinese (zh)
Other versions
CN111832625A (en
Inventor
邹霜梅
王书浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Thorough Future Technology Co ltd
Cancer Hospital and Institute of CAMS and PUMC
Original Assignee
Touche Image Beijing Technology Co ltd
Cancer Hospital and Institute of CAMS and PUMC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Touche Image Beijing Technology Co ltd, Cancer Hospital and Institute of CAMS and PUMC filed Critical Touche Image Beijing Technology Co ltd
Priority to CN202010560283.0A priority Critical patent/CN111832625B/en
Publication of CN111832625A publication Critical patent/CN111832625A/en
Application granted granted Critical
Publication of CN111832625B publication Critical patent/CN111832625B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/217: Validation; Performance evaluation; Active pattern learning techniques
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267: Segmentation by performing operations on regions, e.g. growing, shrinking or watersheds

Abstract

The invention provides a full-scan image analysis method and system based on weakly supervised learning. The method comprises the following steps: equally dividing a full-scan image into a plurality of image blocks smaller than the full-scan image while obtaining full-scan image-level labels; enhancing the full-scan image-level labels to image block-level labels by using an attention-based sampling method and an image block criterion, thereby acquiring an N × N-fold increase in supervised information; and directly assigning the image block-level labels to each pixel point of the corresponding region of the full-scan image, then training a pixel-level prediction model in a supervised manner using an image segmentation model. The system comprises modules corresponding to the method steps.

Description

Full-scan image analysis method and system based on weak supervised learning
Technical Field
The invention provides a full-scan image analysis method and system based on weak supervised learning, and belongs to the technical field of image prediction.
Background
Pathology is the process of disease diagnosis by analyzing samples of a patient's tissues, cells, or body fluids, known as the "gold standard" of medicine, and is the most important basis for the diagnosis of all neoplastic diseases. The diagnosis level of the pathology department is an important reference index of the overall diagnosis and treatment level of the hospital. With the continuous development of remote diagnosis, digital pathology scanners are beginning to enter pathology departments, and more pathological sections are digitized and stored as full-scan images. With the continuous development of artificial intelligence pathology, doctors can obtain auxiliary diagnosis results of machines through full-scan images.
Unlike radiological images such as CT and X-ray, pathological images are usually 500 MB to 2 GB in size, with pixel resolutions on the order of 200,000 × 100,000, which places high demands on the labeling of training data.
Disadvantages of the prior art
1. Although image-level labels reduce annotation difficulty somewhat, they still require a large amount of annotation time;
2. The method of Gabriele Campanella et al. requires a significant amount of computing resources and is a "brute force" solution to this problem. Each iteration must evaluate every image block in the entire full-scan image, so a single iteration period is long, and the only way to shorten the computation time is to add computing resources;
3. Existing methods solve only image-level classification, output a checkerboard-like result, and cannot give an accurate pixel-level prediction.
Disclosure of Invention
The invention provides a full-scan image analysis method and system based on weakly supervised learning, which solve the problems of long iteration periods, excessive computing resource requirements, and low pixel-level prediction accuracy. The technical scheme is as follows:
a method of full scan image analysis based on weakly supervised learning, the method comprising:
equally dividing a full-scan image into a plurality of image blocks smaller than the full-scan image while obtaining full-scan image-level labels; enhancing the full-scan image-level labels to image block-level labels by using an attention-based sampling method and an image block criterion, thereby acquiring N × N-fold supervised information;
and directly assigning the image block-level label to each pixel point of the corresponding region of the full-scan image, and training a pixel-level prediction model by using an image segmentation model in a supervised mode.
Further, the full-scan image is equally divided into a plurality of image blocks with the size smaller than that of the full-scan image, and each image block is labeled to obtain a full-scan image-level label; and enhancing the full-scan image-level label to an image block-level label by using a sampling method based on an attention mechanism and an image block criterion to acquire N × N times of supervised information, including:
equally dividing a full-scan image into a plurality of image blocks with the size smaller than that of the full-scan image, simultaneously obtaining full-scan image-level labels, and sampling the image blocks of the full-scan image by using an attention mechanism;
training two deep learning models, one under each image block selection criterion; respectively feeding the image blocks of the full-scan image into the deep learning models to obtain a prediction result for each image block, selecting representative image blocks through the image block selection criteria, and excluding the image blocks on which the two deep learning models give different prediction results;
training a new image classifier by using the representative image blocks, and predicting the image blocks in all the full-scan images of the training set by using the trained image classifier to obtain a hot spot area prediction result of each full-scan image;
equally dividing the hot spot area into N × N equally sized hot spot area image blocks, and labeling the hot spot area image blocks to obtain image block-level labels; and the image block-level data corresponding to the image block-level labels are the N × N-fold supervised information.
Further, the equally dividing the full-scan image into a plurality of image blocks with sizes smaller than the full-scan image, labeling each image block, and sampling the full-scan image by using an attention mechanism includes:
dividing each full-scan image into N × N equally sized image blocks, and labeling each image block; where N = M/m, M represents the side length of the full-scan image, and m represents the side length of an image block;
establishing an attention matrix for each full-scan image, wherein the attention matrix stores attention weights of all image blocks corresponding to the full-scan image; and the initial value of the attention matrix is N/A, wherein N/A represents None;
in each iteration, for each full-scan image, sampling image blocks at coordinates (i, j) whose weight satisfies M(i, j) ≠ N/A, weighted by M(i, j), with probability p, for p × n image blocks in total (n being the per-iteration sampling number); then sampling (1 - p) × n image blocks with equal probability from the image blocks whose weight satisfies M(i, j) = N/A;
in the model training process, at each iteration the model's prediction result M′(i, j) overwrites the original matrix element, i.e., M(i, j) = M′(i, j).
Further, the image block selection criterion adopts a Max-Max criterion or a Max-Min criterion;
the Max-Max criterion is as follows: for the full-scan images with hot spot areas and non-hot spot areas, adopting the image blocks with the maximum prediction probability as representative image blocks;
the Max-Min criterion is that the image block with the maximum prediction probability is adopted as a representative image block for the full-scan image with the hot spot area, and the image block with the minimum prediction probability is adopted as a representative image block for the full-scan image without the hot spot area.
Further, the process of determining the size of the image block comprises:
step 1, acquiring a data matrix reflecting pixel color categories in the full-scan image by using the full-scan image;
A = [a_1  a_2  …  a_n]
wherein A represents the data matrix of pixel color classes in the full-scan image; a_1, a_2, …, a_n represent the pixel color classes contained in the data matrix, each pixel color class corresponding to a pathological tissue region;
step 2, obtaining the color intensity corresponding to each pathological tissue part in the full-scan image, and determining the number of pathological tissue areas in the full-scan image according to the color intensity and the data matrix;
Num = Σ_{i=1}^{n} b_i
wherein Num represents the number of pathological tissue areas; a_i (i = 1, 2, …, n) represents the pixel color classes contained in the data matrix; b_i represents the number of color intensity levels corresponding to each pixel color class;
step 3, calculating the average brightness value of the full-scan image by using the brightness component of each pixel in the full-scan image; then, calculating the average brightness value of each pathological tissue part by using the brightness component of each pixel of the image area of each pathological tissue part;
and 4, acquiring the size of the image block according to the number of the pathological tissue areas and the following formula:
[Formula image BDA0002545846700000032: the image block size as a function of the number of pathological tissue areas Num and the luminance values L_0 and L_i]
wherein L_0 represents the average luminance value of the full-scan image, and L_i represents the average luminance value of each pathological tissue portion.
A system for full scan image analysis based on weakly supervised learning, the system comprising:
the supervised-information enhancement module is used for equally dividing the full-scan image into a plurality of image blocks smaller than the full-scan image, labeling each image block, and obtaining full-scan image-level labels; and enhancing the full-scan image-level labels to image block-level labels by using an attention-based sampling method and an image block criterion, thereby acquiring N × N-fold supervised information;
and the supervised model training module is used for directly assigning the image block-level labels to each pixel point of the corresponding region of the full-scan image and training a pixel-level prediction model in a supervised mode by utilizing an image segmentation model.
Further, the supervised-information enhancement module includes:
the image block sampling module is used for equally dividing the full-scan image into a plurality of image blocks with the size smaller than that of the full-scan image, simultaneously obtaining full-scan image-level labels and performing image block sampling on the full-scan image by using an attention mechanism;
the prediction result acquisition module is used for training by utilizing an image block selection criterion to generate a deep learning model; respectively sending image blocks in the full-scan image into the deep learning models to obtain the prediction result of each image block, selecting a representative image block through the image block selection criterion, and simultaneously excluding the image blocks with different prediction results of the two deep learning models;
the classifier training module is used for training a new image classifier by using the representative image blocks, predicting the image blocks in all full-scan images of a training set by using the trained image classifier, and obtaining a hot spot area prediction result of each full-scan image;
the supervision information acquisition module is used for equally dividing the hot spot area into N × N equally sized hot spot area image blocks, labeling the hot spot area image blocks, and obtaining image block-level labels; and the image block-level data corresponding to the image block-level labels are the N × N-fold supervised information.
Further, the image block sampling module comprises:
the segmentation module is used for dividing each full-scan image into N × N equally sized image blocks and labeling each image block; where N = M/m, M represents the side length of the full-scan image, and m represents the side length of an image block;
the matrix establishing module is used for establishing an attention matrix for each full-scan image, and the attention matrix stores attention weights of all image blocks corresponding to the full-scan image; and the initial value of the attention matrix is N/A, wherein N/A represents None;
the sampling module is used for, in each iteration and for each full-scan image, sampling image blocks at coordinates (i, j) whose weight satisfies M(i, j) ≠ N/A, weighted by M(i, j), with probability p, for p × n image blocks in total (n being the per-iteration sampling number); and then sampling (1 - p) × n image blocks with equal probability from the image blocks whose weight satisfies M(i, j) = N/A;
and the covering module is used for, in the model training process, overwriting the original matrix elements with the model's prediction results M′(i, j) at each iteration, i.e., M(i, j) = M′(i, j).
Further, the image block selection criterion adopts a Max-Max criterion and a Max-Min criterion;
the Max-Max criterion is as follows: for the full-scan images with hot spot areas and non-hot spot areas, adopting the image blocks with the maximum prediction probability as representative image blocks;
the Max-Min criterion is that the image block with the maximum prediction probability is adopted as a representative image block for the full-scan image with the hot spot area, and the image block with the minimum prediction probability is adopted as a representative image block for the full-scan image without the hot spot area.
Further, the image block sampling module further comprises:
and the image block size obtaining module is used for obtaining the size of the image block when the full-scanning image is equally divided into a plurality of image blocks.
The invention has the beneficial effects that:
the method and the system for analyzing the full-scan image based on the weak supervised learning can complete the establishment of a pixel-level image segmentation model based on the label of the full-scan image level, and are suitable for various types of full-scan images and models (including image classification and image segmentation models). Meanwhile, the problem of overlong training period of the weak supervised learning is solved by a sampling method based on an attention mechanism. Model iteration is not needed to be carried out on all image blocks in the whole full-scan image in each iteration, and only a sampled hot spot image block iteration model is needed, so that the training time is greatly shortened, and the requirements on computing resources are reduced. In addition, the accuracy of the multi-instance learning model is improved by combining two criteria (Max-Max and Max-Min).
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of a sampling method based on an attention mechanism according to the present invention;
FIG. 3 is a system block diagram of the system of the present invention;
FIG. 4 is a schematic diagram of the system of the present invention;
fig. 5 is a schematic diagram of two of the image block criteria of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
The embodiment of the invention provides a full-scan image analysis method and system based on weak supervised learning, which are used for solving the problems of long iteration period, overlarge calculation resource amount and low pixel-level prediction accuracy of an image.
The embodiment of the invention provides a full-scan image analysis method based on weak supervised learning, which comprises the following steps of:
s1, equally dividing the full-scan image into a plurality of image blocks with the size smaller than that of the full-scan image, and labeling each image block to obtain full-scan image-level labels; enhancing the full-scanning image-level label to an image block-level label by using a sampling method based on an attention mechanism and an image block criterion to acquire N multiplied by N times of supervised information;
and S2, directly assigning the image block-level labels to each pixel point of the corresponding region of the full-scan image, and training a pixel-level prediction model in a supervised manner using an image segmentation model (such as DeepLab or U-Net).
The working principle of the technical scheme is as follows: the method comprises two steps: supervised information enhancement and supervised model training.
In the supervised-information enhancement step, the full-scan image is equally divided into smaller image blocks and each block is labeled automatically, which converts the weakly supervised problem into a supervised one.
The effectiveness of the method depends on the quality of the image block labels after supervised-information enhancement, and a combined multi-instance learning (cMIL) method is proposed to improve label accuracy. During cMIL training, representative image blocks must be found in each image, whose prediction results can be regarded as the classification label of the whole full-scan image. In practice, each full-scan image is divided into N × N equally sized blocks (M and m denote the side lengths of the full-scan image and of an image block, respectively, and N = M/m is the scaling factor). The image blocks are sampled using an attention mechanism, and representative image blocks are selected via the image block criteria. The representative image blocks are then labeled to obtain image block-level labels. Finally, a new classifier is trained on the selected labeled image blocks, and the trained classifier predicts the image blocks of all full-scan images in the training set, yielding a coarse hot spot region prediction for each full-scan image. At this point, the full-scan image-level labels have been enhanced to the image block level, and N × N-fold supervised information has been obtained.
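As a minimal illustration of the tiling step described above (not code from the patent; the function name and the NumPy array representation are assumptions), a square full-scan image of side M can be split into N × N blocks of side m, with N = M // m:

```python
import numpy as np

def tile_wsi(wsi, m):
    """Split a square full-scan image (side M) into N x N blocks of side m,
    where N = M // m. Hypothetical helper; the patent assumes M divisible by m."""
    M = wsi.shape[0]
    N = M // m
    blocks = {(i, j): wsi[i * m:(i + 1) * m, j * m:(j + 1) * m]
              for i in range(N) for j in range(N)}
    return blocks, N

# A toy 8 x 8 RGB "slide" split into 2 x 2 blocks of side 4.
blocks, N = tile_wsi(np.zeros((8, 8, 3)), 4)
```

Real whole-slide images are far too large to hold in memory at once, so production code would read tiles through a slide-reading library rather than slicing a dense array; the indexing logic, however, is the same.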
Finally, the image block-level labels are directly assigned to each pixel point, so that existing image segmentation models such as DeepLab and U-Net (any image segmentation model is applicable) can be used to train a pixel-level prediction model in a supervised manner.
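The direct assignment of block-level labels to pixels can be sketched as follows (a hypothetical helper, assuming the block labels are held in an N × N integer matrix):

```python
import numpy as np

def block_labels_to_mask(block_labels, m):
    """Expand an N x N matrix of image block-level labels into an
    (N*m) x (N*m) pixel-level mask: every pixel in a block's region
    receives that block's label, yielding dense training targets for a
    segmentation network such as DeepLab or U-Net."""
    return np.kron(block_labels, np.ones((m, m), dtype=block_labels.dtype))

# A 2 x 2 label matrix expanded with block side m = 2 gives a 4 x 4 mask.
mask = block_labels_to_mask(np.array([[0, 1], [1, 0]]), 2)
```

`np.kron` simply repeats each label over an m × m patch, which is exactly the "directly assign the block label to each pixel of the corresponding region" operation.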
In this embodiment, a full-scan image (WSI) refers to a multi-level visual image obtained by scanning a slide with a fully automatic microscope or optical magnification system to collect high-resolution digital images, followed by high-precision multi-view seamless stitching and processing on a computer. Weakly supervised learning here means that the invention mainly addresses the "inexact supervision" setting, in which the granularity of the labels in the data is too coarse for model training in a conventional supervised manner. Multiple instance learning (MIL) is a weakly supervised learning method that defines a "bag" as a collection of instances: instead of receiving individually labeled instances, the learner receives a set of labeled bags, each containing multiple instances. In the simple case of multi-instance binary classification, a bag is labeled negative only if all instances in it are negative; conversely, if at least one instance in the bag is positive, the bag is labeled positive.
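The binary bag-labeling rule just described reduces to a one-liner (an illustrative sketch, not the patent's code):

```python
def bag_label(instance_labels):
    """Binary multi-instance rule: a bag is positive (1) if at least one
    instance is positive, and negative (0) only when every instance is
    negative."""
    return int(any(instance_labels))

# A bag with one positive instance is positive; an all-negative bag is not.
assert bag_label([0, 0, 1]) == 1
assert bag_label([0, 0, 0]) == 0
```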
The effect of the above technical scheme is as follows: the method can complete the establishment of a pixel-level image segmentation model based on the label of the full-scan image level, and is suitable for various types of full-scan images and models (including image classification and image segmentation models). Meanwhile, the problem of overlong training period of the weak supervised learning is solved by a sampling method based on an attention mechanism. Model iteration is not needed to be carried out on all image blocks in the whole full-scan image in each iteration, and only a sampled hot spot image block iteration model is needed, so that the training time is greatly shortened, and the requirements on computing resources are reduced. In addition, the accuracy of the multi-instance learning model is improved by the image block criterion method.
According to one embodiment of the invention, the full-scan image is equally divided into a plurality of image blocks with the size smaller than that of the full-scan image, and each image block is labeled to obtain a full-scan image-level label; and enhancing the full-scan image-level label to an image block-level label by using a sampling method based on an attention mechanism and an image block criterion to acquire N × N times of supervised information, including:
s101, equally dividing a full-scan image into a plurality of image blocks with the size smaller than that of the full-scan image, labeling each image block, and sampling the image blocks of the full-scan image by using an attention mechanism;
s102, training by using image block selection criteria to generate a deep learning model; respectively sending image blocks in the full-scan image into the deep learning models to obtain the prediction result of each image block, selecting a representative image block through the image block selection criterion, and simultaneously excluding the image blocks with different prediction results of the two deep learning models;
s103, training a new image classifier by using the representative image blocks, and predicting the image blocks in all full-scan images of a training set by using the trained image classifier to obtain a hot spot area prediction result of each full-scan image;
S104, equally dividing the hot spot area into N × N equally sized hot spot area image blocks, and labeling the hot spot area image blocks to obtain image block-level labels; and the image block-level data corresponding to the image block-level labels are the N × N-fold supervised information.
The working principle of the technical scheme is as follows: firstly, a full-scan image is equally divided into a plurality of image blocks smaller than the full-scan image, each image block is labeled, and the image blocks of the full-scan image are sampled using an attention mechanism; next, two deep learning models are trained, one under each image block selection criterion; the image blocks of the full-scan image are respectively fed into the deep learning models to obtain a prediction result for each image block, representative image blocks are selected through the image block selection criteria, and image blocks on which the two deep learning models disagree are excluded; then, a new image classifier is trained on the representative image blocks, and the trained classifier predicts the image blocks of all full-scan images in the training set, yielding a hot spot area prediction for each full-scan image; finally, the hot spot area is equally divided into N × N equally sized hot spot area image blocks, which are labeled to obtain image block-level labels; the image block-level data corresponding to these labels constitute the N × N-fold supervised information.
The effect of the above technical scheme is as follows: the complete full-scan image analysis system based on the weak supervised learning can complete the learning from the full-scan image level label to the pixel level prediction; the method can complete the establishment of the pixel-level image segmentation model based on the full-scan image-level label, is suitable for various types of full-scan images and models (including image classification and image segmentation models), and has strong universality. Meanwhile, the problem of overlong training period of the weak supervised learning is solved by a sampling method based on an attention mechanism. Model iteration is not needed to be carried out on all image blocks in the whole full-scan image in each iteration, and only a sampled hot spot image block iteration model is needed, so that the training time is greatly shortened, and the requirements on computing resources are reduced. In addition, the accuracy of the multi-instance learning model is effectively improved by the image block criterion method.
In an embodiment of the present invention, as shown in fig. 2, the dividing a full-scan image into a plurality of image blocks with a size smaller than that of the full-scan image at equal intervals, labeling each image block, and performing image block sampling on the full-scan image by using an attention mechanism includes:
S1011, dividing each full-scan image into N × N equally sized image blocks, and labeling each image block; where N = M/m, M represents the side length of the full-scan image, and m represents the side length of an image block;
s1012, establishing an attention matrix for each full-scan image, wherein the attention matrix stores attention weights of all image blocks corresponding to the full-scan image; and the initial value of the attention matrix is N/A, wherein N/A represents None;
S1013, in each iteration, for each full-scan image, sampling image blocks at coordinates (i, j) whose weight satisfies M(i, j) ≠ N/A, weighted by M(i, j), with probability p, for p × n image blocks in total (n being the per-iteration sampling number); then sampling (1 - p) × n image blocks with equal probability from the image blocks whose weight satisfies M(i, j) = N/A;
S1014, in the model training process, at each iteration the model's prediction result M′(i, j) overwrites the original matrix element, i.e., M(i, j) = M′(i, j).
The working principle of the technical scheme is as follows: an attention matrix is established for each full-scan image, storing the attention weights of all image blocks of that image. Each weight holds the classification model's latest prediction probability for the corresponding image block; if the block has never been predicted, the weight defaults to N/A. Define the sampling number as n (n << N × N). In each iteration, image blocks are sampled according to their attention weights with a certain probability p (between 0.0 and 1.0); that is, on average p × n blocks whose attention weight is not N/A are sampled. To ensure that blocks which have never been predicted also get the chance to be learned, blocks with weight N/A are sampled at random with probability 1 - p, i.e., on average (1 - p) × n blocks with attention weight N/A. In total, n image blocks are sampled from a single full-scan image in each iteration for training the classification model.
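Under the stated scheme (weights are the latest prediction probabilities, N/A modeled as `None`), one iteration of the attention-based sampling could look like this hypothetical sketch; the function name, signature, and dictionary representation are assumptions, not the patent's code:

```python
import random

def sample_blocks(attention, n, p, rng=random):
    """One iteration of attention-based sampling for a single full-scan image.

    `attention` maps block coordinates (i, j) to the model's latest
    prediction probability, or None for never-predicted blocks (the N/A
    entries). Roughly p*n blocks are drawn from the already-predicted set,
    weighted by their attention values (assumed positive), and the remaining
    (1-p)*n uniformly from the N/A set."""
    predicted = [c for c, w in attention.items() if w is not None]
    fresh = [c for c, w in attention.items() if w is None]
    k = min(int(round(p * n)), len(predicted))
    chosen = []
    if k > 0:
        weights = [attention[c] for c in predicted]
        chosen += rng.choices(predicted, weights=weights, k=k)  # weighted, with replacement
    chosen += rng.sample(fresh, min(n - k, len(fresh)))  # uniform, without replacement
    return chosen
```

After the model predicts on the sampled blocks, each M(i, j) would be overwritten with the new probability M′(i, j), as step S1014 describes.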
The effect of the above technical scheme is as follows: by the sampling method based on the attention mechanism, the problem that the weak supervised learning training period is too long is solved. Model iteration is not needed to be carried out on all image blocks in the whole full-scan image in each iteration, and only a sampled hot spot image block iteration model is needed, so that the training time is greatly shortened, and the requirements on computing resources are reduced.
In an embodiment of the present invention, as shown in fig. 5, the image block selection criterion is a Max-Max criterion or a Max-Min criterion;
the Max-Max criterion is as follows: for the full-scan images with hot spot areas and non-hot spot areas, adopting the image blocks with the maximum prediction probability as representative image blocks;
the Max-Min criterion is that the image block with the maximum prediction probability is adopted as a representative image block for the full-scan image with the hot spot area, and the image block with the minimum prediction probability is adopted as a representative image block for the full-scan image without the hot spot area.
The working principle of the technical scheme is as follows: if a full-scan image contains a hot spot region, it can be inferred that at least one of its image blocks contains a hot spot region; conversely, if a full-scan image has no hot spot region, none of its image blocks do. Here, a hot spot region means a region related to the predicted label. cMIL uses two different image block selection criteria (Max-Max and Max-Min): Max-Max uses the image block with the largest prediction probability as the representative block for full-scan images both with and without hot spot regions; Max-Min uses the block with the largest prediction probability for images with a hot spot region and the block with the smallest prediction probability for images without one.
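A hedged sketch of the two criteria (the function name and signature are assumptions, not the patent's code):

```python
def select_representative(probs, has_hotspot, criterion="max-min"):
    """Pick the representative block index for one full-scan image from
    per-block hotspot probabilities `probs`.

    Max-Max: the highest-probability block, for both positive and negative slides.
    Max-Min: the highest-probability block for slides with a hot spot region,
             the lowest-probability block for slides without one."""
    if criterion == "max-max" or has_hotspot:
        return max(range(len(probs)), key=probs.__getitem__)
    return min(range(len(probs)), key=probs.__getitem__)
```

Under Max-Min, the representative block of a negative slide is the one the model is most confident is negative, which tends to yield cleaner negative training examples; combining both criteria and discarding disagreements is what improves label accuracy.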
The effect of the above technical scheme is as follows: by combining the two image block criteria, the accuracy of the prediction result is effectively improved.
In an embodiment of the present invention, the process of determining the size of the image block includes:
step 1, acquiring a data matrix reflecting pixel color categories in the full-scan image by using the full-scan image;
A = [a1 a2 … an]
wherein A represents the data matrix of pixel color classes in the full-scan image; a1, a2, …, an represent the pixel color classes contained in the data matrix, each pixel color class corresponding to a pathological tissue region;
step 2, obtaining the color intensity corresponding to each pathological tissue part in the full-scan image, and determining the number of pathological tissue areas in the full-scan image according to the color intensity and the data matrix;
(formula for Num, rendered only as image BDA0002545846700000081 in the original document)
wherein Num represents the number of pathological tissue areas; ai (i = 1, 2, …, n) represents the pixel color classes contained in the data matrix; bi represents the number of color intensity levels corresponding to each pixel color class;
step 3, calculating the average brightness value of the full-scan image by using the brightness component of each pixel in the full-scan image; then, calculating the average brightness value of each pathological tissue part by using the brightness component of each pixel of the image area of each pathological tissue part;
and 4, acquiring the size of the image block according to the number of the pathological tissue areas and the following formula:
(formula for the image block size, rendered only as image BDA0002545846700000091 in the original document)
wherein L0 represents the average luminance value of the full-scan image; Li represents the average luminance value of each pathological tissue portion.
The working principle of the technical scheme is as follows: the image block size used when segmenting the full-scan image is calculated from data indexes such as the pixel color classes in the full-scan image and the color intensity corresponding to each pathological tissue portion. The number of color intensity levels refers to, within each pixel color class, the number of levels from strong to weak of the staining intensity of that class.
The effect of the above technical scheme is as follows: the size of the cut image block is obtained according to the specific image condition of each dyeing area in the full-scan image, and the processing precision of the full-scan image can be effectively improved. Meanwhile, the image blocks with the same size are obtained by integrating the actual image conditions of each dyeing area, and the accuracy and precision of the prediction of the subsequent image blocks can be effectively improved by obtaining the number of the image blocks.
A full scan image analysis system based on weak supervised learning, as shown in fig. 3 and 4, the system comprising:
the supervision information enhancement module is used for equally dividing the full-scan image into a plurality of image blocks each smaller than the full-scan image, labeling each image block, and obtaining full-scan image-level labels; and for enhancing the full-scan image-level labels to image block-level labels by using an attention-based sampling method and an image block criterion, so as to acquire N × N times the supervision information;
and the supervised model training module is used for directly assigning the image block-level labels to each pixel point of the corresponding region of the full-scan image and training a pixel-level prediction model in a supervised mode by utilizing an image segmentation model (such as DeepLab or U-Net).
The working principle of the technical scheme is as follows: the system equally divides a full-scan image into a plurality of image blocks with the size smaller than that of the full-scan image by using a supervision information enhancement module, and labels each image block to obtain full-scan image-level labels; enhancing the full-scanning image-level label to an image block-level label by using a sampling method based on an attention mechanism and an image block criterion to acquire N multiplied by N times of supervised information; and directly assigning the image block-level labels to each pixel point of the corresponding region of the full-scan image through a supervised model training module, and training a pixel-level prediction model in a supervised mode by utilizing an image segmentation model (such as deep Lab and U-Net).
The effect of the above technical scheme is as follows: the method can complete the establishment of a pixel-level image segmentation model based on the label of the full-scan image level, and is suitable for various types of full-scan images and models (including image classification and image segmentation models). Meanwhile, the problem of overlong training period of the weak supervised learning is solved by a sampling method based on an attention mechanism. Model iteration is not needed to be carried out on all image blocks in the whole full-scan image in each iteration, and only a sampled hot spot image block iteration model is needed, so that the training time is greatly shortened, and the requirements on computing resources are reduced. In addition, the accuracy of the multi-instance learning model is improved by the image block criterion method.
In an embodiment of the present invention, the supervision information enhancement module includes:
the image block sampling module is used for equally dividing a full-scan image into a plurality of image blocks with the sizes smaller than that of the full-scan image, labeling each image block, and performing image block sampling on the full-scan image by using an attention mechanism;
the prediction result acquisition module is used for training with the image block selection criteria to generate two deep learning models; the image blocks in the full-scan image are fed into the deep learning models respectively to obtain the prediction result of each image block, representative image blocks are selected through the image block selection criteria, and image blocks on which the two deep learning models give different prediction results are excluded;
the classifier training module is used for training a new image classifier by using the representative image blocks, predicting the image blocks in all full-scan images of a training set by using the trained image classifier, and obtaining a hot spot area prediction result of each full-scan image;
the supervision information acquisition module is used for equally dividing the hot spot area into N × N hot-spot-area image blocks of equal size, labeling the hot-spot-area image blocks, and obtaining image block-level labels; the image block-level data corresponding to the image block-level labels constitute the N × N-fold supervision information.
The working principle of the technical scheme is as follows: the image block sampling module equally divides a full-scan image into a plurality of image blocks each smaller than the full-scan image, labels each image block, and samples the image blocks of the full-scan image using an attention mechanism. The prediction result acquisition module trains with the image block selection criteria to generate two deep learning models, feeds the image blocks of the full-scan image into the deep learning models respectively to obtain the prediction result of each image block, selects representative image blocks through the image block selection criteria, and excludes image blocks on which the two deep learning models give different prediction results. The classifier training module trains a new image classifier with the representative image blocks and predicts the image blocks in all full-scan images of the training set with the trained classifier, obtaining a hot spot area prediction result for each full-scan image. The supervision information acquisition module equally divides the hot spot area into N × N hot-spot-area image blocks of equal size and labels them to obtain image block-level labels; the image block-level data corresponding to these labels constitute the N × N-fold supervision information.
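One plausible reading of this module pipeline is sketched below with toy stand-ins. The function name `consistent_representatives`, the 0.5 decision threshold, and modeling the two networks as plain callables are all assumptions made for illustration.

```python
def consistent_representatives(slides, model_mm, model_mn):
    """cMIL-style selection sketch: each model scores every patch, its
    criterion (Max-Max vs Max-Min) picks a representative, and any
    representative on which the two models' predicted labels disagree
    is excluded; the rest become patch-level training data for a new
    classifier."""
    kept = []
    for patches, has_hotspot in slides:
        p_mm = [model_mm(x) for x in patches]
        p_mn = [model_mn(x) for x in patches]
        i_mm = max(range(len(p_mm)), key=p_mm.__getitem__)  # Max-Max pick
        pick = max if has_hotspot else min                  # Max-Min pick
        i_mn = pick(range(len(p_mn)), key=p_mn.__getitem__)
        for i in {i_mm, i_mn}:
            lbl_mm = p_mm[i] >= 0.5  # assumed 0.5 threshold
            lbl_mn = p_mn[i] >= 0.5
            if lbl_mm == lbl_mn:     # keep only consistent predictions
                kept.append((patches[i], int(lbl_mm)))
    return kept
```

The kept (patch, label) pairs would then train the new image classifier, whose slide-wide predictions delimit the hot spot areas used to mint the N × N patch-level labels.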
In one embodiment of the present invention, the image block sampling module comprises:
the segmentation module is used for dividing each full-scan image into N × N image blocks of equal size and labeling each image block; wherein N = M/m, M represents the side length of the full-scan image, and m represents the side length of an image block;
the matrix establishing module is used for establishing an attention matrix for each full-scan image, and the attention matrix stores attention weights of all image blocks corresponding to the full-scan image; and the initial value of the attention matrix is N/A, wherein N/A represents None;
the sampling module is used for, in each iteration and for each full-scan image, sampling the image block at coordinates (i, j) of the full-scan image according to the weight M(i, j) with probability p, sampling p × n image blocks in total among the blocks whose weight satisfies M(i, j) ≠ N/A; then, (1 − p) × n image blocks are sampled uniformly at random among the image blocks with weight M(i, j) = N/A;
and the covering module is used for covering the original matrix elements with the prediction result M '(i, j) of each iteration updating model in the model training process, namely M (i, j) ═ M' (i, j).
The working principle of the technical scheme is as follows: dividing each full-scan image into N multiplied by N image blocks with equal size through a dividing module, and labeling each image block; wherein, N is M/M, and M represents the side length size of the full scanning image; m represents the side length size of the image block; establishing an attention matrix for each full-scan image by using a matrix establishing module, wherein the attention matrix stores attention weights of all image blocks corresponding to the full-scan image; and the initial value of the attention matrix is N/a. Sampling image blocks of coordinates of a full-scan image (i, j) by using a sampling module according to the weight M (i, j) and the probability p of each full-scan image in each iteration, and sampling p multiplied by N image blocks in total, wherein the weight M (i, j) of the full-scan image meets the condition that M (i, j) is not equal to N/A; then, (1-p) × N image blocks are sampled with the same probability p among the image blocks having the weight M (i, j) ═ N/a; finally, in the process of training the model, the covering module is used for covering the original matrix elements with the prediction result M '(i, j) of each iteration updating model, namely M (i, j) ═ M' (i, j).
The effect of the above technical scheme is as follows: by the sampling method based on the attention mechanism, the problem that the weak supervised learning training period is too long is solved. Model iteration is not needed to be carried out on all image blocks in the whole full-scan image in each iteration, and only a sampled hot spot image block iteration model is needed, so that the training time is greatly shortened, and the requirements on computing resources are reduced.
In an embodiment of the present invention, as shown in fig. 5, the image block selection criterion adopts a Max-Max criterion and a Max-Min criterion;
the Max-Max criterion is as follows: for full-scan images both with and without hot spot areas, the image block with the maximum prediction probability is adopted as the representative image block;
the Max-Min criterion is as follows: for a full-scan image with a hot spot area, the image block with the maximum prediction probability is adopted as the representative image block; for a full-scan image without a hot spot area, the image block with the minimum prediction probability is adopted as the representative image block.
The working principle of the technical scheme is as follows: if a full-scan image contains hot spot regions, it can be inferred that at least one image block contains hot spot regions. On the contrary, if a full-scan image has no hot spot area, all image blocks have no hot spot. Wherein, the hot spot area refers to an area related to the predicted label. cMIL uses two different image block selection criteria (i.e., Max-Max and Max-Min), Max-Max uses the image block with the largest prediction probability as the representative image block for the full-scan images with and without hot spot regions, and Max-Min uses the image block with the largest and the smallest prediction probability as the representative image block for the full-scan images with and without hot spot regions.
The effect of the above technical scheme is as follows: by combining the two image block criteria, the accuracy of the prediction result is effectively improved.
In one embodiment of the present invention, the image block sampling module further comprises:
and the image block size obtaining module is used for obtaining the size of the image block when the full-scanning image is equally divided into a plurality of image blocks.
The image block size obtaining module includes:
the matrix forming module is used for acquiring a data matrix reflecting the pixel color category in the full-scan image by using the full-scan image;
the quantity acquisition module is used for acquiring the color intensity corresponding to each pathological tissue part in the full-scan image and determining the quantity of pathological tissue areas in the full-scan image according to the color intensity and the data matrix;
the brightness average value acquisition module is used for calculating the average brightness value of the full-scan image by using the brightness component of each pixel in the full-scan image; then, calculating the average brightness value of each pathological tissue part by using the brightness component of each pixel of the image area of each pathological tissue part;
and the size acquisition module is used for acquiring the size of the image block according to the number of the pathological tissue areas and the following formula.
The working principle of the technical scheme is as follows: the execution process of the image block size obtaining module comprises the following steps:
step 1, acquiring a data matrix reflecting the pixel color category in the full-scan image by using the full-scan image through a matrix forming module;
A = [a1 a2 … an]
wherein A represents the data matrix of pixel color classes in the full-scan image; a1, a2, …, an represent the pixel color classes contained in the data matrix, each pixel color class corresponding to a pathological tissue region;
step 2, acquiring the color intensity corresponding to each pathological tissue part in the full-scan image by adopting a quantity acquisition module, and determining the quantity of pathological tissue areas in the full-scan image according to the color intensity and the data matrix;
(formula for Num, rendered only as image BDA0002545846700000121 in the original document)
wherein Num represents the number of pathological tissue areas; ai (i = 1, 2, …, n) represents the pixel color classes contained in the data matrix; bi represents the number of color intensity levels corresponding to each pixel color class;
step 3, calculating the average brightness value of the full-scan image by using the brightness component of each pixel in the full-scan image through a brightness average value acquisition module; then, calculating the average brightness value of each pathological tissue part by using the brightness component of each pixel of the image area of each pathological tissue part;
and 4, acquiring the size of the image block by using a size acquisition module according to the number of the pathological tissue areas and the following formula:
(formula for the image block size, rendered only as image BDA0002545846700000131 in the original document)
wherein L0 represents the average luminance value of the full-scan image; Li represents the average luminance value of each pathological tissue portion.
The effect of the above technical scheme is as follows: the size of the cut image block is obtained according to the specific image condition of each dyeing area in the full-scan image, and the processing precision of the full-scan image can be effectively improved. Meanwhile, the image blocks with the same size are obtained by integrating the actual image conditions of all the dyeing areas, and the number of the obtained image blocks can effectively improve the strength of the supervision information enhancement and the accuracy and precision of the subsequent image block prediction.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (6)

1. A full-scan image analysis method based on weak supervised learning is characterized by comprising the following steps:
equally dividing a full-scan image into a plurality of image blocks each smaller than the full-scan image, and simultaneously obtaining full-scan image-level labels; enhancing the full-scan image-level labels to image block-level labels by using an attention-based sampling method and an image block criterion to acquire N × N times the supervision information;
directly assigning the image block-level label to each pixel point of the corresponding region of the full-scan image, and training a pixel-level prediction model in a supervised mode by using an image segmentation model;
the method comprises the steps of dividing a full-scan image into a plurality of image blocks with the size smaller than that of the full-scan image at equal intervals, and simultaneously obtaining full-scan image-level labels; and enhancing the full-scan image-level label to an image block-level label by using a sampling method based on an attention mechanism and an image block criterion to acquire N × N times of supervised information, including:
equally dividing a full-scan image into a plurality of image blocks with the size smaller than that of the full-scan image, simultaneously obtaining full-scan image-level labels, and sampling the image blocks of the full-scan image by using an attention mechanism;
training by utilizing an image block selection criterion to generate two deep learning models; respectively sending image blocks in the full-scan image into the deep learning models to obtain the prediction result of each image block, selecting a representative image block through the image block selection criterion, and simultaneously excluding the image blocks with different prediction results of the two deep learning models;
training a new image classifier by using the representative image blocks, and predicting the image blocks in all the full-scan images of the training set by using the trained image classifier to obtain a hot spot area prediction result of each full-scan image;
equally dividing the hot spot area into N multiplied by N hot spot area image blocks which are equal in size and smaller than the hot spot area, and labeling the hot spot area image blocks to obtain image block-level labels; the image block level data corresponding to the image block level label is the supervised information multiplied by N times;
equally dividing a full-scan image into a plurality of image blocks with the size smaller than that of the full-scan image, simultaneously obtaining full-scan image-level labels, and sampling the image blocks of the full-scan image by using an attention mechanism, the method comprises the following steps:
dividing each full-scan image into N × N image blocks of equal size, and labeling each image block; wherein N = M/m, M represents the side length of the full-scan image, and m represents the side length of an image block;
establishing an attention matrix for each full-scan image, wherein the attention matrix stores attention weights of all image blocks corresponding to the full-scan image; and the initial value of the attention matrix is N/A, wherein N/A represents None;
in each iteration, for each full-scan image, sampling the image block with coordinates (i, j) in the full-scan image according to the weight M(i, j) with probability p, sampling p × n image blocks among the blocks whose weight satisfies M(i, j) ≠ N/A; then sampling (1 − p) × n image blocks uniformly at random among the image blocks with weight M(i, j) = N/A; wherein n represents the number of image blocks sampled from a single full-scan image in each iteration;
in the model training process, the prediction result M '(i, j) of the model is updated by each iteration to cover the original matrix elements, i.e., M (i, j) ═ M' (i, j).
2. The method of claim 1, wherein the image block selection criterion is a Max-Max criterion or a Max-Min criterion;
the Max-Max criterion is as follows: for the full-scan images with hot spot areas and non-hot spot areas, adopting the image blocks with the maximum prediction probability as representative image blocks;
the Max-Min criterion is that the image block with the maximum prediction probability is adopted as a representative image block for the full-scan image with the hot spot area, and the image block with the minimum prediction probability is adopted as a representative image block for the full-scan image without the hot spot area.
3. The method according to claim 1, wherein the image block size determination process comprises:
step 1, acquiring a data matrix reflecting pixel color categories in the full-scan image by using the full-scan image;
A = [a1 a2 … ak]
wherein A represents the data matrix of pixel color classes in the full-scan image; a1, a2, …, ak represent the pixel color classes contained in the data matrix, each pixel color class corresponding to a pathological tissue region;
step 2, obtaining the color intensity corresponding to each pathological tissue part in the full-scan image, and determining the number of pathological tissue areas in the full-scan image according to the color intensity and the data matrix;
(formula for Num, rendered only as image FDA0002928694490000021 in the original document)
wherein Num represents the number of pathological tissue areas; ai (i = 1, 2, …, k) represents the pixel color classes contained in the data matrix; bi represents the color intensity corresponding to each pixel color class;
step 3, calculating the average brightness value of the full-scan image by using the brightness component of each pixel in the full-scan image; then, calculating the average brightness value of each pathological tissue part by using the brightness component of each pixel of the image area of each pathological tissue part;
and 4, acquiring the size of the image block according to the number of the pathological tissue areas and the following formula:
(formula for the image block size, rendered only as image FDA0002928694490000022 in the original document)
wherein L0 represents the average luminance value of the full-scan image; Li represents the average luminance value of each pathological tissue portion.
4. A system for full scan image analysis based on weakly supervised learning, the system comprising:
the supervision information enhancement module is used for equally dividing the full-scan image into a plurality of image blocks each smaller than the full-scan image, labeling each image block and obtaining full-scan image-level labels; and enhancing the full-scan image-level labels to image block-level labels by using an attention-based sampling method and an image block criterion to acquire N × N times the supervision information;
the supervised model training module is used for directly assigning the image block-level labels to each pixel point of the corresponding region of the full-scan image and training a pixel-level prediction model in a supervised mode by utilizing an image segmentation model;
wherein the supervisory information enhancement module comprises:
the image block sampling module is used for equally dividing the full-scan image into a plurality of image blocks with the size smaller than that of the full-scan image, simultaneously obtaining full-scan image-level labels and performing image block sampling on the full-scan image by using an attention mechanism;
the prediction result acquisition module is used for generating two deep learning models by utilizing the training of the image block selection criterion; respectively sending image blocks in the full-scan image into the deep learning models to obtain the prediction result of each image block, selecting a representative image block through the image block selection criterion, and simultaneously excluding the image blocks with different prediction results of the two deep learning models;
the classifier training module is used for training a new image classifier by using the representative image blocks, predicting the image blocks in all full-scan images of a training set by using the trained image classifier, and obtaining a hot spot area prediction result of each full-scan image;
the supervision information acquisition module is used for equally dividing the hot spot area into N multiplied by N hot spot area image blocks which are equal in size and smaller than the hot spot area, labeling the hot spot area image blocks and obtaining image block-level labels; the image block level data corresponding to the image block level label is the supervised information multiplied by N times;
wherein the image block sampling module comprises:
the segmentation module is used for dividing each full-scan image into N × N image blocks of equal size and simultaneously obtaining full-scan image-level labels; wherein N = M/m, M represents the side length of the full-scan image, and m represents the side length of an image block;
the matrix establishing module is used for establishing an attention matrix for each full-scan image, and the attention matrix stores attention weights of all image blocks corresponding to the full-scan image; and the initial value of the attention matrix is N/A, wherein N/A represents None;
the sampling module is used for, in each iteration and for each full-scan image, sampling the image block with coordinates (i, j) in the full-scan image according to the weight M(i, j) with probability p, sampling p × n image blocks among the blocks whose weight satisfies M(i, j) ≠ N/A; then sampling (1 − p) × n image blocks uniformly at random among the image blocks with weight M(i, j) = N/A; wherein n represents the number of image blocks sampled from a single full-scan image in each iteration;
and the covering module is used for covering the original matrix elements with the prediction result M '(i, j) of each iteration updating model in the model training process, namely M (i, j) ═ M' (i, j).
5. The system of claim 4, wherein the image block selection criteria employs a Max-Max criterion and a Max-Min criterion;
the Max-Max criterion is as follows: for the full-scan images with hot spot areas and non-hot spot areas, adopting the image blocks with the maximum prediction probability as representative image blocks;
the Max-Min criterion is that the image block with the maximum prediction probability is adopted as a representative image block for the full-scan image with the hot spot area, and the image block with the minimum prediction probability is adopted as a representative image block for the full-scan image without the hot spot area.
6. The system of claim 4, wherein the tile sampling module further comprises:
and the image block size obtaining module is used for obtaining the size of the image block when the full-scanning image is equally divided into a plurality of image blocks.
CN202010560283.0A 2020-06-18 2020-06-18 Full-scan image analysis method and system based on weak supervised learning Active CN111832625B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010560283.0A CN111832625B (en) 2020-06-18 2020-06-18 Full-scan image analysis method and system based on weak supervised learning


Publications (2)

Publication Number Publication Date
CN111832625A CN111832625A (en) 2020-10-27
CN111832625B (en) 2021-04-27

Family

ID=72897801

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010560283.0A Active CN111832625B (en) 2020-06-18 2020-06-18 Full-scan image analysis method and system based on weak supervised learning

Country Status (1)

Country Link
CN (1) CN111832625B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107665491A (en) * 2017-10-10 2018-02-06 清华大学 The recognition methods of pathological image and system
CN108542390A (en) * 2018-03-07 2018-09-18 清华大学 Vascular plaque ingredient recognition methods based on more contrast nuclear magnetic resonance images
CN110265142A (en) * 2019-06-11 2019-09-20 透彻影像(北京)科技有限公司 A kind of assistant diagnosis system and method for lesion region restored map

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103336969B (en) * 2013-05-31 2016-08-24 中国科学院自动化研究所 A kind of image, semantic analytic method based on Weakly supervised study
CN108876796A (en) * 2018-06-08 2018-11-23 长安大学 A kind of lane segmentation system and method based on full convolutional neural networks and condition random field
CN109508671B (en) * 2018-11-13 2023-06-06 深圳龙岗智能视听研究院 Video abnormal event detection system and method based on weak supervision learning
CN110349148A (en) * 2019-07-11 2019-10-18 电子科技大学 A kind of image object detection method based on Weakly supervised study

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107665491A (en) * 2017-10-10 2018-02-06 清华大学 The recognition methods of pathological image and system
CN108542390A (en) * 2018-03-07 2018-09-18 清华大学 Vascular plaque ingredient recognition methods based on more contrast nuclear magnetic resonance images
CN110265142A (en) * 2019-06-11 2019-09-20 透彻影像(北京)科技有限公司 A kind of assistant diagnosis system and method for lesion region restored map

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CAMEL: A Weakly Supervised Learning Framework for Histopathology Image Segmentation; Gang Xu et al.; arXiv; 2019-08-28; abstract, sections 1-5, figures 1-3 *
The Application of Two-level Attention Models in Deep Convolutional Neural Network for Fine-grained Image Classification; Tianjun Xiao et al.; IEEE; 2015-12-31; entire document *


Similar Documents

Publication Publication Date Title
CN112163634B (en) Sample screening method and device for instance segmentation model, computer equipment and medium
Tellez et al. Whole-slide mitosis detection in H&E breast histology using PHH3 as a reference to train distilled stain-invariant convolutional networks
CN108288506A (en) A kind of cancer pathology aided diagnosis method based on artificial intelligence technology
Han et al. Automated pathogenesis-based diagnosis of lumbar neural foraminal stenosis via deep multiscale multitask learning
CN113808738B (en) Disease identification system based on self-identification image
CN102096917A (en) Automatic elimination method for redundant capsule endoscope image data
Tang et al. CNN-based qualitative detection of bone mineral density via diagnostic CT slices for osteoporosis screening
CN109685765A (en) X-ray pneumonia outcome prediction device based on convolutional neural networks
US11464466B2 (en) Methods and systems for periodontal disease screening
CN103914852A (en) CUDA-based DICOM medical image dynamic nonlinear window modulation method
Tekin et al. An enhanced tooth segmentation and numbering according to FDI notation in bitewing radiographs
CN111784704A (en) Sequential method for automatic MRI coxitis segmentation, classification, and quantitative grading
WO2023155488A1 (en) Fundus image quality evaluation method and device based on multi-source multi-scale feature fusion
CN112614573A (en) Deep learning model training method and device based on pathological image labeling tool
CN115719334A (en) Medical image evaluation method, device, equipment and medium based on artificial intelligence
CN110456050B (en) Portable intelligent digital parasite in vitro diagnostic instrument
CN113643297A (en) Computer-aided age analysis method based on neural network
CN111832625B (en) Full-scan image analysis method and system based on weak supervised learning
CN114140437A (en) Fundus hard exudate segmentation method based on deep learning
CN115880266B (en) Intestinal polyp detection system and method based on deep learning
Lin et al. How much can AI see in early pregnancy: A multi‐center study of fetus head characterization in week 10–14 in ultrasound using deep learning
Bermudez et al. A first glance to the quality assessment of dental photostimulable phosphor plates with deep learning
CN115719333A (en) Image quality control evaluation method, device, equipment and medium based on neural network
CN113011514B (en) Intracranial hemorrhage sub-type classification algorithm applied to CT image based on bilinear pooling
Hsieh et al. A mask R-CNN based automatic assessment system for nail psoriasis severity

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230216

Address after: 17 Panjiayuan Nanli, Chaoyang District, Beijing

Patentee after: CANCER HOSPITAL, CHINESE ACADEMY OF MEDICAL SCIENCES

Patentee after: Beijing Thorough Future Technology Co.,Ltd.

Address before: 17 Panjiayuan Nanli, Chaoyang District, Beijing

Patentee before: CANCER HOSPITAL, CHINESE ACADEMY OF MEDICAL SCIENCES

Patentee before: TOUCHE IMAGE (BEIJING) TECHNOLOGY Co.,Ltd.