CN110322396B - Pathological section color normalization method and system - Google Patents


Info

Publication number: CN110322396B (application CN201910533229.4A)
Authority: CN (China)
Prior art keywords: image, network, color, generation, style
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN110322396A (en)
Inventors: 刘秀丽, 余江盛, 余静雅, 陈西豪, 程胜华, 曾绍群
Current and original assignee: Huaiguang Intelligent Technology Wuhan Co ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Huaiguang Intelligent Technology Wuhan Co ltd
Priority to CN201910533229.4A
Publication of CN110322396A; application granted; publication of CN110322396B

Classifications

    • G06T3/04

Abstract

The invention discloses a pathological section color normalization method and system. To reduce the difference between the image generated from a non-target-style picture and the target-style picture, the generated image and the target-style picture are judged by a second discrimination network, and inter-domain adversarial training of learning and discrimination is performed, further narrowing the difference between generated and target-style pictures and optimizing the performance of the generation network. The invention color-normalizes pathological section data of different color styles, solving the technical problems that a depth model trained under a single color style rarely performs comparably on data of another color style, and that a depth model trained on pathological sections of different color styles as data is difficult to converge.

Description

Pathological section color normalization method and system
Technical Field
The invention belongs to the field of medical cytopathology image analysis, and particularly relates to a method and a system for color normalization of cytopathology slices from different sources.
Background
In recent years artificial intelligence has developed rapidly, and combining it with medicine can relieve the shortage of physician resources. In the field of medical cytopathology, a large amount of pathological section data has accumulated, providing a big-data background for the analysis of medical cytopathology images. Because the analytical capability of deep learning algorithms on large data samples is generally higher than that of traditional algorithms, deep learning is widely applied to the analysis of medical cytopathology images at scale.
Analyzing medical cytopathology images with deep learning requires training a depth model for classification, recognition, or segmentation on a large amount of labeled data. In reality, however, pathological sections differ greatly in color style owing to differences in imaging instruments, instrument parameters, staining methods, and so on (color style differences include differences in image attributes such as hue, saturation, and brightness). These differences cause problems for the model, for example: a depth model trained under a single color style rarely performs comparably on data of another color style, and using pathological sections of different color styles as training data makes the depth model difficult to converge.
The color style differences of medical cytopathology images require the depth model to have good generalization ability and to adapt to data of different color styles. Existing methods improve generalization by expanding the training data through data augmentation, adding noise to the data, and the like, but the applicable range of models trained this way remains limited, and good performance on data of arbitrary color style cannot be guaranteed. Other methods match the distributions of different color-style data by analyzing color and spatial information; such normalization only reduces the differences between color styles to a certain extent and cannot truly achieve consistency of color style, because real color style differences are often complex and pathological images of different color styles are difficult to characterize accurately in distribution.
In summary, a depth model with good generalization ability is more stable in practical application, while medical cytopathology images from different sources differ in color style. Although depth models can analyze medical cytopathology images, analyzing data with color style differences remains difficult, and the generalization ability of the model still needs to be improved, by color normalization and similar means, to meet the requirements on data of different color styles.
Disclosure of Invention
Addressing the above defects and urgent technical needs of the prior art, the invention provides a pathological section color normalization method and system. It aims to color-normalize pathological section data of different color styles, thereby solving the technical problems that a depth model trained under a single color style rarely performs comparably on data of another color style, and that a depth model trained on pathological sections of different color styles as data is difficult to converge.
A pathological section staining normalization method takes the color style of pathological section image A as the target color style and normalizes a pathological section image B of another color style to the target style through an adversarial generation model, which is constructed as follows:
1) Sample image preprocessing step:
convert the pathological section sample images A and B into a gray-scale map and a red-blue coded map, used as input images C_A and C_B for the generation network G;
2) Intra-domain adversarial generation training step:
use sample image C_A to train the generation network G so that it generates an image A' close to image A, while the discrimination network D1 distinguishes the real A from the generated A'; adversarial learning of generation and discrimination continues in this way to construct the generation network G;
3) Inter-domain adversarial generation learning step:
use sample image C_B to continue training from the generation network G obtained above, generating an image B' close to image A, while the discrimination network D2 distinguishes the real A from the generated B'; adversarial learning of generation and discrimination continues in this way to optimize the generation network G.
Further, the loss function adopted by the step 2) intra-domain adversarial generation training is:

$$G^{*} = \arg\min_{G}\max_{D_1}\ \lambda_{GAN1} L_{GAN1}(G, D_1) + \lambda_{L1} L_{L1}(G)$$

where

$$L_{GAN1}(G, D_1) = \mathbb{E}_{A}\left[\log D_1(A)\right] + \mathbb{E}_{C_A}\left[\log\left(1 - D_1(G(C_A))\right)\right]$$

$$L_{L1}(G) = \mathbb{E}_{A, C_A}\left[\left\lVert A - G(C_A)\right\rVert_{1}\right]$$

Here G^* is the optimal generator of the adversarial training, and λ_GAN1, λ_L1 are hyper-parameters used to balance the importance of the different loss terms; E_A[·] is the expectation of the bracketed expression under the distribution of A, E_{C_A}[·] under the distribution of C_A, and E_{A,C_A}[·] under the joint distribution of A and C_A; G is the generator, D1 is the intra-domain discriminator, A is an original color image of the target color style, and C_A is the gray-scale and red-blue coded maps of A.
Further, the loss function adopted by the step 3) inter-domain adversarial generation learning is:
$$L_{GAN2}(G, D_2) = \mathbb{E}_{A}\left[\log D_2(A)\right] + \mathbb{E}_{C_B}\left[\log\left(1 - D_2(G(C_B))\right)\right]$$

where E_A[·] is the expectation of the bracketed expression under the distribution of A, D2 is the inter-domain discriminator, E_{C_B}[·] is the expectation under the distribution of C_B, and C_B is the gray-scale and red-blue coded maps of a pathological image to be color-normalized.
Further, in the step 1) sample image preprocessing, the pathological section images A and B are each red-blue coded, the coding yielding a binary image.
Further, the method also comprises a step 4) task-supervised learning step:
a task network T for executing a specified task is obtained in advance by training with image A as the training sample; image C_A is input into the adversarial generation network G obtained in step 3), which outputs an image A'; image A' is input into the task network T, the difference between the output of T and the task label corresponding to image A is compared, and the difference is fed back as a loss to further optimize the adversarial generation network G.
Further, the loss function is expressed as:

$$L_{Task}(G) = \mathbb{E}_{A, C_A, Y_A}\left[\ell\left(T(G(C_A)),\ Y_A\right)\right]$$

where E_{A,C_A,Y_A}[·] is the expectation of the bracketed expression under the joint distribution of A, C_A, and Y_A, and ℓ is the task loss; G is the generator and T is the task network; A is an original color image of the target color style, C_A the gray-scale and red-blue coded maps of A, Y_A the task label of A, and C_B the gray-scale and red-blue coded maps of a pathological image to be color-normalized.
An adversarial generator training system for pathological section staining normalization takes the color style of pathological section image A as the target color style and normalizes a pathological section image B of another color style to the target style through an adversarial generation model, the system comprising:
a sample image preprocessing module for converting the pathological section sample images A and B into a gray-scale map and a red-blue coded map, used as input images C_A and C_B for the adversarial generation network G;
an intra-domain adversarial generation training module for using sample image C_A to train the generation network G so that it generates an image A' close to image A, while the discrimination network D1 distinguishes the real A from the generated A', the adversarial learning of generation and discrimination constructing the generation network G;
an inter-domain adversarial generation learning module for using sample image C_B to continue training from the generation network G, generating an image B' close to image A, while the discrimination network D2 distinguishes the real A from the generated B', the continued adversarial learning of generation and discrimination optimizing the generation network G.
Overall, the beneficial effects of the invention are as follows:
the invention provides a pathological section color normalization method based on deep learning. In order to reduce the difference between the generated picture of the non-target style picture and the target style picture, the generated picture of the non-target style picture and the target style picture are identified through another identification network, and inter-domain learning and identification confrontation training is performed, so that the difference between the generated picture and the target style picture is further reduced, and the performance of the generation network is optimized.
Furthermore, in the learning and discrimination countermeasure training in the domain, the G is generated by generating a color image presenting a target coloring style according to the input image of the target color style to deceive the discriminator in the domain, and the discriminator in the domain is used for distinguishing the image generated by the generator from the real image, so that the generator and the discriminator in the domain can form the countermeasure training. And the generated image is obtained according to the gray-scale map of the target dyeing style image, so that the generated image and the real image are consistent in color style or image content at the moment, and the average absolute error is added as a loss function to assist in training of the generator.
Furthermore, in the inter-domain learning and identification confrontation training, the target of the generation G is a color image which presents a target dyeing style and is generated according to an input image of a color style expected to be normalized to deceive the inter-domain identifier, and the target of the inter-domain identifier is to distinguish an image generated by the generator from a real image, so that the generator and the inter-domain identifier can form confrontation training.
Further, the invention converts the color picture into a gray scale image and a red-blue code image, and inputs the gray scale image and the red-blue code image into a generation network, and the gray scale image eliminates the color style difference (hue and tone) between pathological sections with different color styles to a certain extent. The staining reagent stains the cell plasma to red or blue according to the difference of acid and alkali, and doctors need to utilize the information when interpreting, so that the information of sample pictures needs to be kept when the task network is trained and tested. Pathological pictures with different color styles are subjected to color normalization through a generation network, and although the color styles of the pathological pictures are consistent, the pathological pictures may be subjected to color cross of cells with red and blue. In order to avoid the situation, the data before normalization is subjected to red-blue coding, and a red-blue coding picture is input into a generation network, so that the picture after color normalization is ensured not to have the cell red-blue color cross condition. By the method, the color style difference among pathological sections with different color styles can be eliminated to the maximum extent while the red and blue color information of the cells is kept.
Furthermore, the invention combines the generated network and the task network for training, thereby not only improving the generating effect of the generated network, but also ensuring the effect of the task network. Because the generation network cannot completely reconstruct the target color style, in order to enable the style picture generated by the generation network to have better performance in the task network, a task loss is added to the generation network. And the generating network and the task network with better performance are obtained by adjusting the generating network or the task network or the joint adjusting mode of the generating network and the task network.
The method is a universal method for improving the generalization ability of the cytopathology section model, is suitable for cervical cytopathology sections, and is also effective for improving the model generalization ability of other types of cytopathology sections by combining the characteristics of data and adjusting appropriate parameters.
Drawings
FIG. 1 is a diagram of the deep-learning-based pathological section color normalization network of the present invention;
FIG. 2 shows the gray-scale map and red-blue coded map generated from a color picture according to the present invention, where FIG. 2(a) is the gray-scale map and FIG. 2(b) is the red-blue coded map;
FIG. 3 shows the training structure of each stage of the deep-learning-based pathological section color normalization network, where FIG. 3(a) is the structure supervised by the L1 loss and the intra-domain discrimination network loss, FIG. 3(b) is the structure supervised by the inter-domain discrimination network loss, and FIG. 3(c) is the structure supervised by the L1 loss, the inter-domain discrimination network loss, and the task network loss;
FIG. 4 shows a simulation example of the method, where FIG. 4(a) is a pathological image presenting the target color style, FIG. 4(b) is a pathological image to be color-normalized, and FIG. 4(c) shows the normalization results of supervised training with different combinations of losses.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Assume there are two clearly distinct color styles A and B; style A has a large number of task labels (lesion-type labels) and style B has none (having only a small number of task labels is also treated as having none).
Taking A as the target color style, pathological section images of color style B are normalized to the target style through an adversarial generation model constructed as follows:
1) Sample image preprocessing step
1.1) Gray-scale map and red-blue coded map preparation
This step converts the pathological section sample images A and B into gray-scale maps, used as input images C_A and C_B for the adversarial generation network G. The gray-scale map (Fig. 2a) eliminates, to some extent, the color style differences (hue, tone) between pathological sections of different color styles. Although A and B have different color styles, they serve the same task (e.g., both are used for cervical canal cytopathology type identification), so after the color style difference is removed, A and B should present the same kind of actual content: morphological information such as cell texture and cell contour carries the same actual pathological meaning in both. Therefore, the image color normalization transformation must fully retain detailed (fine-grained) information such as cell texture and contour. Here, the invention achieves this by inputting the gray-scale map into the generation network G.
Further, the staining reagent stains the cytoplasm red or blue depending on acidity or alkalinity, and doctors need this information when interpreting slides, so it must be preserved in the samples used to train and test the Task network. When pathological pictures of different color styles are color-normalized through a GAN network, their color styles become consistent, yet red and blue cells may swap colors. To avoid this, the data before normalization is red-blue coded, and the red-blue coded map (Fig. 2b) is input into the generation network G to guarantee that the color-normalized picture contains no red-blue cross-coloring of cells. As a specific coding: a natural image is represented by three channels [R, G, B] in RGB color space; a pixel whose R-channel value is the largest of the three channels is coded as 1, and a pixel whose R-channel value is not the largest is coded as 0. In this way, a red-blue binary coded map is obtained.
The gray-scale map and red-blue coded map of the sample image are stacked, the stacked values are normalized to [-1, 1], and the result is fed into the G network as an input sample. The gray-scale and red-blue coded maps obtained above erase the color style differences (hue, saturation, brightness, etc.) while retaining the important, general hue information (red or blue) and the cell morphology information.
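The preprocessing above can be sketched as follows — a minimal NumPy illustration, not the patent's code; the luminance weights and the exact [-1, 1] mapping are assumptions:

```python
import numpy as np

def preprocess(rgb):
    """Turn an H x W x 3 uint8 RGB section image into the 2-channel
    input of step 1): gray-scale map + red-blue binary coded map,
    stacked and normalized to [-1, 1]."""
    rgb = rgb.astype(np.float32)
    # Gray-scale map: a linear combination of the R, G, B channels.
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # Red-blue coding: 1 where the R channel is strictly the largest
    # of the three channels, 0 otherwise.
    red_blue = ((rgb[..., 0] > rgb[..., 1]) &
                (rgb[..., 0] > rgb[..., 2])).astype(np.float32)
    # Map gray [0, 255] and code {0, 1} into [-1, 1] and stack.
    gray = gray / 127.5 - 1.0
    red_blue = red_blue * 2.0 - 1.0
    return np.stack([gray, red_blue], axis=-1)
```

The two stacked channels play the role of C_A (or C_B) and are what gets fed into the generation network G.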
In a preferred mode, the gray-scale map undergoes data enhancement before it is stacked with the red-blue coded map. Although the gray-scale map eliminates color style differences (hue, tone) between pathological sections of different color styles to some extent, the gray value is computed linearly from the natural RGB three-channel values, so information such as brightness and contrast in the color style is still largely retained. Therefore gamma transformation and HSV color-space perturbation are added as more complex (nonlinear) computations to erase remaining color style differences such as brightness and contrast between A and B.
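The preferred enhancement can be sketched as gamma transformation of the gray-scale map plus a small random HSV-space disturbance; the parameter ranges below are illustrative assumptions, not values from the patent:

```python
import colorsys
import random
import numpy as np

def gamma_augment(gray, gamma_range=(0.7, 1.4)):
    """Nonlinear gamma transformation of a gray-scale map in [0, 1],
    perturbing brightness/contrast beyond what the linear gray
    conversion preserves."""
    gamma = random.uniform(*gamma_range)
    return np.clip(gray, 0.0, 1.0) ** gamma

def hsv_perturb(rgb, dh=0.02, ds=0.1, dv=0.1):
    """Random disturbance in HSV space for an H x W x 3 float image
    in [0, 1] (per-pixel via colorsys; slow, for illustration only)."""
    h_shift = random.uniform(-dh, dh)
    s_scale = 1.0 + random.uniform(-ds, ds)
    v_scale = 1.0 + random.uniform(-dv, dv)
    out = np.empty_like(rgb)
    for i in range(rgb.shape[0]):
        for j in range(rgb.shape[1]):
            h, s, v = colorsys.rgb_to_hsv(*rgb[i, j])
            h = (h + h_shift) % 1.0
            s = min(max(s * s_scale, 0.0), 1.0)
            v = min(max(v * v_scale, 0.0), 1.0)
            out[i, j] = colorsys.hsv_to_rgb(h, s, v)
    return out
```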
Step 2) Intra-domain adversarial generation training step (Fig. 3a)
Color style A is used as the target style of the reconstruction. After style-A data passes through step 1) it is denoted C_A, and after style-B data passes through step 1) it is denoted C_B. A is chosen as the target style of the color style reconstruction for two reasons: (1) the style-A data carries a large number of lesion labels, and the Task network trained on A has excellent test results on A; (2) since C_A is converted from A, an L1 loss with A as the target can be adopted, which supervises the style-reconstructing G network strongly and makes the network easier to converge.
The generation network G adopts an adjusted U-net structure (narrower and shallower than the conventional U-net), and the discrimination network D adopts a convolutional neural network (CNN) with five convolution layers (some of the convolution layers are followed by BN and Leaky-ReLU).
Using sample image C_A, the generation network G is trained so that it generates an image A' close to image A, while the discrimination network D1 distinguishes the real A from the generated A'; adversarial learning of generation and discrimination thus continues, constructing the generation network G.
The generation network G and discrimination network D1 obtain initial parameters by random initialization. The loss function adopted by the intra-domain adversarial generation training is

$$G^{*} = \arg\min_{G}\max_{D_1}\ \lambda_{GAN1} L_{GAN1}(G, D_1) + \lambda_{L1} L_{L1}(G)$$

where

$$L_{GAN1}(G, D_1) = \mathbb{E}_{A}\left[\log D_1(A)\right] + \mathbb{E}_{C_A}\left[\log\left(1 - D_1(G(C_A))\right)\right]$$

$$L_{L1}(G) = \mathbb{E}_{A, C_A}\left[\left\lVert A - G(C_A)\right\rVert_{1}\right]$$

Here G^* is the optimal generator of the adversarial training, and λ_GAN1, λ_L1 are hyper-parameters used to balance the importance of the different loss terms; E_A[·] is the expectation of the bracketed expression under the distribution of A, E_{C_A}[·] under the distribution of C_A, and E_{A,C_A}[·] under the joint distribution of A and C_A; G is the generator, D1 is the intra-domain discriminator, A is an original color image of the target color style, and C_A is the gray-scale and red-blue coded maps of A. In these losses, L_GAN1 asks the generator G to produce, from an input image of the target color style, a color image presenting the target style that deceives the intra-domain discriminator, while the intra-domain discriminator distinguishes the generator's images from real ones, so the generator and intra-domain discriminator form an adversarial pair; L_L1 asks that the generated image match the real image in both color style and image content. A generator G trained under these two losses therefore loses no content during image generation and can simultaneously produce a color style similar to or even consistent with the target color style, achieving the purpose of normalization.
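The intra-domain losses L_GAN1 and L_L1 can be sketched numerically on a mini-batch as follows — NumPy stand-ins for the expectations; the λ defaults and the non-saturating generator form are assumptions, not values fixed by the patent:

```python
import numpy as np

def d1_loss(d_real, d_fake, eps=1e-8):
    """Discriminator side of L_GAN1: D1 maximizes
    E[log D1(A)] + E[log(1 - D1(G(C_A)))], i.e. minimizes the negation.
    d_real, d_fake are D1 probabilities in (0, 1)."""
    return -(np.mean(np.log(d_real + eps)) +
             np.mean(np.log(1.0 - d_fake + eps)))

def l1_loss(real_a, fake_a):
    """L_L1: mean absolute error between A and G(C_A)."""
    return np.mean(np.abs(real_a - fake_a))

def g_objective(d_fake, real_a, fake_a,
                lambda_gan1=1.0, lambda_l1=100.0, eps=1e-8):
    """Generator side: fool D1, plus the lambda_l1-weighted L1 term."""
    g_adv = -np.mean(np.log(d_fake + eps))
    return lambda_gan1 * g_adv + lambda_l1 * l1_loss(real_a, fake_a)
```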
Step 3) Inter-domain adversarial generation learning step (Fig. 3c)
The color style normalization processes achievable with the adversarial generation network G are A → C_A → A' and B → C_B → B'. In practice, however, C_A and C_B still differ in distribution, so the A' and B' generated by G cannot be guaranteed to have absolutely consistent color styles. An adversarial training link over A' and B' is therefore added to make A' and B' more consistent.
The specific implementation is as follows: the generation network G trained in step 2) is used to realize A → C_A → A' and B → C_B → B'; A is a real image and B' a newly generated image, and they are input into a freshly randomly initialized discriminator D2 for adversarial training. The loss function adopted for the inter-domain adversarial generation learning is:

$$L_{GAN2}(G, D_2) = \mathbb{E}_{A}\left[\log D_2(A)\right] + \mathbb{E}_{C_B}\left[\log\left(1 - D_2(G(C_B))\right)\right]$$

where E_A[·] is the expectation of the bracketed expression under the distribution of A, D2 is the inter-domain discriminator, E_{C_B}[·] is the expectation under the distribution of C_B, and C_B is the gray-scale and red-blue coded maps of a pathological image to be color-normalized. In this loss, L_GAN2 asks the generator G to produce, from an input image whose color style is to be normalized, a color image presenting the target staining style that deceives the inter-domain discriminator, while the inter-domain discriminator distinguishes the generator's images from real ones, so the generator and inter-domain discriminator form an adversarial pair. Through this adversarial generation training, the generator G can finally produce, from an input image whose color style is to be normalized, a color style similar to or even consistent with the target color style, achieving normalization.
In the specific implementation, to ensure that the G network trained in step 2) is not degraded by the newly initialized discriminator D2, the L_L1 loss of step 2) is retained, and the complete loss function at this stage is:

$$G^{*} = \arg\min_{G}\max_{D_2}\ \lambda_{GAN2} L_{GAN2}(G, D_2) + \lambda_{L1} L_{L1}(G)$$

Through the newly designed inter-domain adversarial training of this step, the consistency of A' and B' can be further improved while preserving the quality of the images (A' and B') generated in step 2), and color normalization is basically complete.
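The stage-3 generator objective — fooling the inter-domain discriminator D2 while retaining the step-2 L1 term — can be sketched as below (λ defaults are illustrative assumptions):

```python
import numpy as np

def stage3_g_loss(d2_fake_b, real_a, fake_a,
                  lambda_gan2=1.0, lambda_l1=100.0, eps=1e-8):
    """Generator side of the stage-3 objective:
    lambda_gan2 * L_GAN2 (fool D2 on G(C_B)) +
    lambda_l1   * L_L1   (retained reconstruction of A from C_A).
    d2_fake_b: D2 probabilities for B-derived outputs, in (0, 1);
    real_a, fake_a: A and G(C_A)."""
    g_adv = -np.mean(np.log(d2_fake_b + eps))
    l1 = np.mean(np.abs(real_a - fake_a))
    return lambda_gan2 * g_adv + lambda_l1 * l1
```

Retaining the L1 term is what keeps the freshly initialized D2 from undoing the G trained in step 2).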
Given the large number of task labels for A, a task network with excellent generalization performance on image A (e.g., 95% average accuracy over positive and negative classes) is trained, called the Task network. Because a depth model trained under a single color style rarely performs comparably on data of another color style, the Task network tests poorly on B (e.g., about 65% average accuracy over positive and negative classes). Steps 1)-3) above can normalize pictures of color style B to the target style A; beyond that, it is desirable, without task labels on B, to make the Task network's test results on B as close as possible to, or even the same as, its results on A. Therefore, on the basis of steps 1)-3), step 4) further trains the generation network G jointly with the Task network.
Step 4) Task-supervised learning step
Through the adversarial generation learning of steps 2) and 3), a generation network G taking style A as the generation style is trained, and pictures of styles A and B are converted by G into A' and B', which have good consistency of color and style. However, although the L_L1 and L_GAN losses are both essentially supervised by pictures of style A, G inevitably cannot fully reconstruct color style A. This can be verified by the KL divergence (the relative entropy of A and A') and by the Task network test (the accuracy of style-A' pictures in the Task network differs greatly from that of style-B' pictures).
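The KL-divergence check mentioned above can be sketched as a comparison of intensity histograms of A and A'; the bin count and smoothing constant are illustrative assumptions:

```python
import numpy as np

def kl_divergence(p_samples, q_samples, bins=32, eps=1e-8):
    """Relative entropy D_KL(P || Q) between the intensity histograms
    of two pixel-sample arrays, a rough measure of how far the
    generated A' is from the real A."""
    lo = float(min(p_samples.min(), q_samples.min()))
    hi = float(max(p_samples.max(), q_samples.max()))
    p, _ = np.histogram(p_samples, bins=bins, range=(lo, hi))
    q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi))
    p = p / p.sum() + eps   # normalize counts, smooth away zero bins
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))
```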
The specific implementation mode is as follows: training by taking the image A as a training sample in advance to obtain a task network T for executing a specified task; image C A Inputting the confrontation generation network G obtained in the step 3), and outputting an image A' by the confrontation generation network G; and inputting the image A' into a task network T, comparing the difference between the output result of the task network T and the task label corresponding to the image A, and further optimizing and confronting the difference to generate a network G as loss feedback.
If only the G network is optimized, the newly added loss function is:

$$L_{Task}(G) = \mathbb{E}_{A, C_A, Y_A}\left[\ell\left(T(G(C_A)),\ Y_A\right)\right]$$

where E_{A,C_A,Y_A}[·] is the expectation of the bracketed expression under the joint distribution of A, C_A, and Y_A, and ℓ is the task loss; G is the generator and T is the task network; A is an original color image of the target color style, C_A the gray-scale and red-blue coded maps of A, Y_A the task label of A, and C_B the gray-scale and red-blue coded maps of a pathological image to be color-normalized.
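One concrete choice for the task loss L_Task is cross-entropy between the Task network's output T(G(C_A)) and the labels Y_A over a batch; cross-entropy is an assumed choice here, since the patent does not fix the form of the task loss:

```python
import numpy as np

def task_loss(logits, labels):
    """Cross-entropy between N x K task-network scores and N integer
    class labels, a batch stand-in for the L_Task expectation."""
    z = logits - logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-np.mean(log_probs[np.arange(len(labels)), labels]))
```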
In the specific implementation, the L_L1 loss of step 2) and L_GAN2(G, D2) are retained, and the complete loss function at this stage is:

$$G^{*} = \arg\min_{G}\max_{D_2}\ \lambda_{GAN2} L_{GAN2}(G, D_2) + \lambda_{L1} L_{L1}(G) + \lambda_{Task} L_{Task}(G)$$
if the generation network G and the Task network are optimized simultaneously, the optimization of the generation network G and the Task network can lead the generation network G to rebuild the color style, tend to be friendly to the Task network, and lead the Task network to adapt to the new style A' rebuilt by the generation network G, and the generation network G and the Task network adapt to each other. Then, the newly added loss function is expressed as:
Figure BDA0002100402790000114
wherein G is a generation network, and T is a task network; a is the original color image of the target color style, C A A gray scale pattern and a red-blue code pattern, C B A gray scale map of a pathological image expected to be subjected to color normalization and a red-blue code map.
In the specific implementation process, the parameters of the generation network G and of the task network T are updated simultaneously. The gradient of G comes from L_GAN2, L_L1 and L_Task, traded off by the loss coefficients λ_GAN2, λ_L1 and λ_Task; the gradient of the task network comes from L_Task alone. The complete loss function at this stage is as follows:

(G*, T*) = arg min_{G,T} max_{D2} λ_GAN2 L_GAN2(G, D2) + λ_L1 L_L1(G) + λ_Task L_Task(G, T)
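A toy numeric sketch of this simultaneous update (purely illustrative: the linear stand-ins for G and T, the shapes, the learning rate, and the squared task loss are all assumptions, not the patent's networks). Gradients of the task loss flow through T back into G, and both parameter sets are updated in the same step:

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear stand-ins: A' = Wg @ c plays the generator, y = Wt @ A' the task network.
Wg = 0.1 * rng.standard_normal((4, 6))   # "generation network" parameters
Wt = 0.1 * rng.standard_normal((2, 4))   # "task network" parameters
c  = rng.standard_normal(6)              # toy input code C_A
t  = np.array([1.0, -1.0])               # toy task target Y_A
lr = 0.05

def task_loss():
    # L = 0.5 * ||T(G(c)) - t||^2
    return float(0.5 * np.sum((Wt @ (Wg @ c) - t) ** 2))

init_loss = task_loss()
for _ in range(500):
    a_prime = Wg @ c                     # "generated image"
    err = Wt @ a_prime - t               # dL/dy for the squared loss above
    # Both parameter sets receive gradient from the same task loss:
    grad_Wt = np.outer(err, a_prime)     # gradient w.r.t. task-network weights
    grad_Wg = np.outer(Wt.T @ err, c)    # gradient flows through T into G
    Wt -= lr * grad_Wt
    Wg -= lr * grad_Wg
final_loss = task_loss()
print(init_loss, final_loss)  # the joint update drives the task loss down
```

The same pattern holds for the deep networks in the method: one backward pass through T and G produces both gradient sets, which are applied in a single optimizer step.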
simulation example:
The color style shown in Fig. 4(a) is the target color style, and Fig. 4(b) shows the color style that needs to be normalized. In Fig. 4(c), the third image from the left is the result of normalizing Fig. 4(b) with a generation network G trained under the intra-domain discrimination loss and the L1 loss; the second image from the left is the result with a generation network G trained under the intra-domain discrimination loss, the inter-domain discrimination loss and the L1 loss; and the first image from the left is the result with a generation network G trained under the intra-domain discrimination loss, the inter-domain discrimination loss, the L1 loss and the task loss. It can be seen that the color styles generated under the different loss supervisions are highly consistent with the target color style, and that the detail information of the pathological image is not lost during generation: the content remains fully consistent with the original image, Fig. 4(b).
By training the confrontation generation network, the invention obtains a network that converts a cytopathology image of any color style into a cytopathology image of the target style, thereby realizing color normalization of pathological sections. The process exploits the data characteristics of cervical-cell pathological sections: the color picture is converted into a gray-scale map and a red-blue code map, which are input to the generation network. To reduce the style difference among pictures generated from inputs of different styles, the pictures generated from other styles are discriminated against target-style pictures, further optimizing the generation network. Meanwhile, to better adapt to the task network, joint training of the generation network and the task network is provided.
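The gray-scale plus red-blue code conversion can be sketched as follows. This is an assumption-laden illustration: the exact coding rule is not spelled out in this passage, so the sketch simply marks pixels whose red channel dominates the blue channel as the binary code map.

```python
import numpy as np

def to_gray_and_red_blue_code(rgb):
    """Hypothetical preprocessing sketch: return a gray-scale map and a binary
    'red-blue code' map (1 where red dominates blue) for an H x W x 3 image."""
    rgb = rgb.astype(np.float64)
    # ITU-R BT.601 luma weights for the gray-scale map
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    code = (rgb[..., 0] > rgb[..., 2]).astype(np.uint8)
    return gray, code

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = [200, 10, 50]    # red-dominant pixel -> code 1
img[1, 1] = [30, 10, 180]    # blue-dominant pixel -> code 0
gray, code = to_gray_and_red_blue_code(img)
print(code.tolist())         # → [[1, 0], [0, 0]]
```

Stacked as channels, the two maps would form the network input C_A or C_B.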
With this method, training the deep-learning-based pathological section color normalization network yields a generation network with a color normalization effect. At the same time, the generated pictures retain the input information required by the task network, so the generated images suit the task network; this improves the generation quality of the generation network while guaranteeing the performance of the task network.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (7)

1. A pathological section color normalization method, characterized in that the color style of a pathological section image A is taken as the target color style, and a pathological section image B of another color style is normalized to the target color style through a confrontation generation model, the confrontation generation model being constructed as follows:
1) Sample image preprocessing:
converting the pathological section sample images A and B into gray-scale maps and red-blue code maps, which serve as the input images C_A and C_B of the confrontation generation network G;
2) Intra-domain confrontation generation training step:
using the sample image C_A to train the generation network G so that it generates an image A' close to the image A, while the discrimination network D1 distinguishes the real A from the generated A'; the adversarial learning of generation and discrimination proceeds continuously, constructing the confrontation generation network G;
3) Inter-domain confrontation generation learning step:
using the sample image C_B, continuing the training from the confrontation generation network G of step 2) so that it generates an image B' close to the image A, while the discrimination network D2 distinguishes A from B'; the adversarial learning of generation and discrimination proceeds continuously, optimizing the confrontation generation network G.
2. The pathological section color normalization method according to claim 1, wherein the loss function adopted in the intra-domain confrontation generation training of step 2) is:

G* = arg min_G max_{D1} λ_GAN1 L_GAN1(G, D1) + λ_L1 L_L1(G)

where

L_GAN1(G, D1) = E_A[ log D1(A) ] + E_{C_A}[ log(1 − D1(G(C_A))) ]

L_L1(G) = E_{A, C_A}[ ‖A − G(C_A)‖_1 ]

Here G* is the optimal generator of the adversarial training; λ_GAN1 and λ_L1 are hyper-parameters used to balance the importance of the different loss functions; E_A[·] is the expectation of the bracketed expression over the distribution of A; E_{C_A}[·] is the expectation over the distribution of C_A; E_{A,C_A}[·] is the expectation over the joint distribution of (A, C_A); G is the generator, D1 is the intra-domain discriminator, A is an original color image of the target color style, and C_A is the gray-scale map and red-blue code map of A input to G.
3. The pathological section color normalization method according to claim 1 or 2, wherein the loss function adopted in the step 3) inter-domain confrontation generation learning is:

L_GAN2(G, D2) = E_A[ log D2(A) ] + E_{C_B}[ log(1 − D2(G(C_B))) ]

where E_A[·] is the expectation of the bracketed expression over the distribution of A; D2 is the inter-domain discriminator; E_{C_B}[·] is the expectation over the distribution of C_B; and C_B is the gray-scale map and red-blue code map of a pathological image to be color-normalized.
4. The pathological section color normalization method according to claim 1, wherein in the step 1) sample image preprocessing, the pathological section images A and B are each red-blue coded to obtain coded binary maps.
5. The pathological section color normalization method according to claim 1, further comprising a step 4), a task supervision learning step:
training a task network T for executing a specified task in advance, with the image A as the training sample; inputting the image C_A into the confrontation generation network G obtained in step 3), which outputs an image A'; inputting the image A' into the task network T, comparing the output of the task network T with the task label corresponding to the image A, and feeding the difference back as a loss to further optimize the confrontation generation network G.
6. The pathological section color normalization method according to claim 5, wherein the loss function for optimizing the confrontation generation network G is expressed as:

L_Task(G) = E_{A, C_A, Y_A}[ ℓ( T(G(C_A)), Y_A ) ]

where E_{A, C_A, Y_A}[·] is the expectation of the bracketed expression over the joint distribution of (A, C_A, Y_A), and ℓ is the task loss; G is the generator and T is the task network; A is an original color image of the target color style; C_A is the gray-scale map and red-blue code map of A; Y_A is the task label of A; and C_B is the gray-scale map and red-blue code map of a pathological image to be color-normalized.
7. A confrontation generator training system for pathological section color normalization, which takes the color style of a pathological section image A as the target color style and normalizes a pathological section image B of another color style to the target color style through a confrontation generation model, the training system comprising:
a sample image preprocessing module, for converting the pathological section sample images A and B into gray-scale maps and red-blue code maps, which serve as the input images C_A and C_B of the confrontation generation network G;
an intra-domain confrontation generation training module, for using the sample image C_A to train the generation network G so that it generates an image A' close to the image A, while the discrimination network D1 distinguishes the real A from the generated A'; the adversarial learning of generation and discrimination proceeds continuously, constructing the confrontation generation network;
an inter-domain confrontation generation learning module, for using the sample image C_B and continuing the training from the generation network G so that it generates an image B' close to the image A, while the discrimination network D2 distinguishes A from B'; the adversarial learning of generation and discrimination proceeds continuously, optimizing the confrontation generation network G.
CN201910533229.4A 2019-06-19 2019-06-19 Pathological section color normalization method and system Active CN110322396B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910533229.4A CN110322396B (en) 2019-06-19 2019-06-19 Pathological section color normalization method and system


Publications (2)

Publication Number Publication Date
CN110322396A CN110322396A (en) 2019-10-11
CN110322396B true CN110322396B (en) 2022-12-23

Family

ID=68119893


Country Status (1)

Country Link
CN (1) CN110322396B (en)





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant