CN110322396A - Pathological section color normalization method and system - Google Patents

Pathological section color normalization method and system

Info

Publication number
CN110322396A
CN110322396A (application CN201910533229.4A)
Authority
CN
China
Prior art keywords
image
color
network
style
adversarial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910533229.4A
Other languages
Chinese (zh)
Other versions
CN110322396B (en)
Inventor
刘秀丽
余江盛
余静雅
陈西豪
程胜华
曾绍群
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaiguang Intelligent Technology (wuhan) Co Ltd
Original Assignee
Huaiguang Intelligent Technology (wuhan) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaiguang Intelligent Technology (Wuhan) Co Ltd
Priority to CN201910533229.4A priority Critical patent/CN110322396B/en
Publication of CN110322396A publication Critical patent/CN110322396A/en
Application granted granted Critical
Publication of CN110322396B publication Critical patent/CN110322396B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T3/04

Abstract

The invention discloses a pathological section color normalization method and system. A generator produces images in the target style, and a discrimination network distinguishes the generated images from true target-style images, so that learning and discrimination are trained adversarially within the domain. To reduce the difference between images generated from non-target-style inputs and true target-style images, a second discrimination network distinguishes the generated images of non-target-style inputs from target-style images, so that adversarial learning and discrimination are also performed between domains, further reducing the difference between the two and improving the performance of the generation network. The invention color-normalizes pathological section data of different color styles, solving two technical problems: a deep model trained under a single color style is difficult to apply with the same or similar performance to data of another color style, and a deep model trained on pathological sections of mixed color styles is difficult to converge.

Description

Pathological section color normalization method and system
Technical field
The invention belongs to the field of medical cell pathology image analysis, and more particularly relates to a method and system for normalizing the color of cell pathology sections from different sources.
Background technique
In recent years, artificial intelligence technology has developed rapidly, and combining artificial intelligence with medicine can alleviate the shortage of medical resources. In the field of medical cell pathology, the large accumulation of pathological section data provides a big-data background for the analysis of medical cell pathology images. In processing big-data samples, since the analytical capability of deep learning algorithms generally exceeds that of traditional analysis algorithms, deep learning is widely applied in big-data medical cell pathology image analysis.
Analyzing medical cell pathology images with deep learning requires training a deep model with classification, recognition, or segmentation capability by learning from a large amount of labeled data. In practice, however, differences in imaging equipment, instrument parameters, and section staining methods cause large differences in the color style of pathological sections (color style differences include differences in image attributes such as hue, tone, saturation, and brightness). Color style differences cause problems for the model, for example: a deep model trained under a single color style is difficult to apply with the same or similar performance to data of another color style, and a deep model trained on pathological sections of mixed color styles is difficult to converge.
The color style differences of medical cell pathology images require the deep model to have good generalization ability and adapt to data of different color styles. Existing methods improve model generalization by data augmentation, expanding the training data, adding noise to the data, and so on, but the applicability of models trained in these ways is always limited, and it cannot be guaranteed that the model will perform well on data of an arbitrary color style. Other methods match the distributions between data of different color styles by analyzing color and spatial information; this kind of normalization can only reduce the differences between color styles to a certain extent and cannot truly achieve color-style consistency, because color style differences in practice are often complex, and pathological images of different color styles are difficult to characterize accurately in distribution.
In conclusion the depth model with good generalization ability is more stable in practice, and the doctor of separate sources The Color Style for learning Cellular Pathology Image has differences.Although depth model can analyze medical cell pathological image, in face It is difficult that there are still analyses in the data that color style has differences.There is still a need for the methods of normalize to improve model by color Generalization ability, to meet on different colours style data the needs of.
Summary of the invention
In view of the defects of the prior art and the urgent technical need, the invention proposes a pathological section color normalization method and system, aiming to color-normalize pathological section data of different color styles, thereby solving the technical problems that a deep model trained under a single color style is difficult to apply with the same or similar performance to data of another color style, and that a deep model trained on pathological sections of mixed color styles is difficult to converge.
A pathological section color normalization method: the color style of pathological section image A is taken as the target color style, and a pathological section image B of another color style is normalized to the target color style through an adversarial generation model constructed as follows:
1) Sample image preprocessing step:
Convert pathological section sample images A and B into grayscale images and red-blue coded maps, used as the input images C_A and C_B of the generation network G;
2) Within-domain adversarial generation training step:
Use sample images C_A to train the generation network G so that it generates images A' similar to image A, while discrimination network D1 distinguishes A from A'; through this continual adversarial cycle of generation and discrimination, the adversarial generation network G is constructed;
3) Between-domain adversarial generation learning step:
Use sample images C_B and, starting from the network trained in step 2), continue training the generation network G to generate images B' similar in style to image A, while discrimination network D2 distinguishes A from B'; through this continual adversarial cycle of generation and discrimination, the adversarial generation network G is optimized.
Further, the loss function used for within-domain adversarial training in step 2) is:
G* = arg min_G max_{D1} λ_GAN1 · L_GAN1(G, D1) + λ_L1 · L_L1(G)
where
L_GAN1(G, D1) = E_A[log D1(A)] + E_{C_A}[log(1 − D1(G(C_A)))]
L_L1(G) = E_{A,C_A}[‖A − G(C_A)‖_1]
Here G* is the optimal generator obtained by adversarial training; λ_GAN1 and λ_L1 are hyperparameters weighing the relative importance of the different loss terms; E_A[·] is the expectation of the bracketed expression under the distribution of A, E_{C_A}[·] the expectation under the distribution of C_A, and E_{A,C_A}[·] the expectation under the joint distribution of A and C_A; G is the generator, D1 the within-domain discriminator, A the original color image of the target color style, and C_A the grayscale image and red-blue coded map of A input to G.
Further, the loss function used for between-domain adversarial learning in step 3) is:
L_GAN2(G, D2) = E_A[log D2(A)] + E_{C_B}[log(1 − D2(G(C_B)))]
where E_A[·] is the expectation of the bracketed expression under the distribution of A, D2 is the between-domain discriminator, E_{C_B}[·] is the expectation under the distribution of C_B, and C_B is the grayscale image and red-blue coded map of the pathological image whose color is to be normalized.
Further, in the sample image preprocessing of step 1), red-blue encoding is also performed on pathological section images A and B respectively, yielding binary coded maps.
Further, a step 4), a task supervision learning step, is included:
A task network T that executes the designated task is trained in advance using image A as training samples; image C_A is input to the adversarial generation network G obtained in step 3), which outputs image A'; image A' is input to the task network T, and the difference between the output of T and the task label corresponding to image A is compared; this difference is fed back as a loss to further optimize the adversarial generation network G.
Further, this loss function is expressed as:
L_Task(G) = E_{A,C_A,Y_A}[ℓ(T(G(C_A)), Y_A)]
where ℓ denotes the task loss (for example, cross-entropy); E_{A,C_A,Y_A}[·] is the expectation of the bracketed expression under the joint distribution of A, C_A, and Y_A; G is the generator and T is the task network; A is the original color image of the target color style, C_A the grayscale image and red-blue coded map of A, Y_A the task label of A, and C_B the grayscale image and red-blue coded map of the pathological image whose color is to be normalized.
An adversarial generator training system for pathological section color normalization: the color style of pathological section image A is taken as the target color style, and a pathological section image B of another color style is normalized to the target color style through an adversarial generation model. The adversarial generator training system comprises:
A sample image preprocessing module, for converting pathological section sample images A and B into grayscale images and red-blue coded maps, used as the input images C_A and C_B of the adversarial generation network G;
A within-domain adversarial generation training module, for using sample images C_A to train the generation network G so that it generates images A' similar to image A, while discrimination network D1 distinguishes A from A'; through this continual adversarial cycle of generation and discrimination, the generation network G is constructed;
A between-domain adversarial generation learning module, for using sample images C_B and, starting from the network trained above, continuing to train the generation network G to generate images B' similar in style to image A, while discrimination network D2 distinguishes A from B'; through this continual adversarial cycle of generation and discrimination, the generation network G is optimized.
In general, the beneficial effects of the present invention are:
The invention proposes a deep-learning-based pathological section color normalization method: a generator produces images in the target style, and a discrimination network distinguishes the generated images from true target-style images, performing within-domain adversarial training of learning and discrimination. To reduce the difference between images generated from non-target-style inputs and target-style images, a second discrimination network distinguishes the generated images of non-target-style inputs from target-style images, performing between-domain adversarial training of learning and discrimination, further reducing the difference between the two and improving the performance of the generation network.
Further, in the within-domain adversarial training of learning and discrimination of the invention, the goal of generator G is to generate, from the input image of the target color style, a color image presenting the target color style that deceives the within-domain discriminator, while the goal of the within-domain discriminator is to distinguish the images produced by the generator from true images; in this way the generator and the within-domain discriminator form adversarial training. Since the generated image is obtained from the grayscale image of a target-color-style image, the generated image and the true image should at this point be consistent in both color style and content; therefore a mean absolute error is added as a loss function to assist the training of the generator.
Further, in the between-domain adversarial training of learning and discrimination of the invention, the goal of generator G is to generate, from the input image of the color style to be normalized, a color image presenting the target color style that deceives the between-domain discriminator, while the goal of the between-domain discriminator is to distinguish the images produced by the generator from true images; in this way the generator and the between-domain discriminator form adversarial training.
Further, the invention converts the color image into a grayscale image and a red-blue coded map as the input of the generation network. The grayscale image eliminates, to a certain extent, the color style differences (hue, tone) between pathological sections of different color styles. Staining reagents stain the cytoplasm red or blue according to acid-base differences, and physicians rely on this information when reading slides, so the task network must retain this information from the sample images during training and testing. Although pathological images of different color styles have identical color styles after normalization by the generation network, red-blue color confusion of cells may occur. To avoid this, the invention performs red-blue encoding on the data before normalization; by inputting the red-blue coded map to the generation network, it is guaranteed that the images after color normalization do not exhibit red-blue confusion of cells. In this way, the cell red-blue staining information is retained while the color style differences between pathological sections of different color styles are eliminated to the greatest extent.
Further, the invention trains the generation network and the task network jointly, which improves the generation effect of the generation network while also guaranteeing the effect of the task network. Because the generation network cannot completely reconstruct the target color style, and in order that the style images produced by the generation network perform well in the task network, a task loss is added to the generation network. Better-performing generation and task networks are obtained by adjusting the generation network, the task network, or tuning the two jointly.
The method of the invention is a general method for improving the generalization ability of cell pathology classification models; it applies not only to cervical cell pathology sections but, by combining the data characteristics of other kinds of cell pathology sections and adjusting suitable parameters, is equally effective for improving the generalization ability of models for them.
Detailed description of the invention
Fig. 1 is the structure of the deep-learning-based pathological section color normalization network proposed by the invention;
Fig. 2 shows the grayscale image and red-blue coded map generated from a color image in the invention, where Fig. 2(a) is the grayscale image and Fig. 2(b) is the red-blue coded map;
Fig. 3 shows the training structure of each stage of the deep-learning-based pathological section color normalization network proposed by the invention, where Fig. 3(a) is the structure in which the L1 loss and the within-domain discrimination network loss supervise training, Fig. 3(b) is the structure in which the between-domain discrimination network loss supervises training, and Fig. 3(c) is the structure in which the L1 loss, the between-domain discrimination network loss, and the task network loss supervise training;
Fig. 4 is a simulation example of the proposed method, where Fig. 4(a) is a pathological image presenting the target color style, Fig. 4(b) is a pathological image awaiting color normalization, and Fig. 4(c) shows the normalization results of training supervised with different loss combinations.
Specific embodiment
To make the objectives, technical solutions, and advantages of the present invention clearer, the invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are merely illustrative of the invention and are not intended to limit it. In addition, the technical features involved in the various embodiments described below can be combined with each other as long as they do not conflict.
Assume there are two color styles A and B that differ greatly, where style A has a large number of Task labels (lesion-type labels) and style B has none (a small number of Task labels is also regarded as none).
Taking A as the target color style, the pathological section images B of the other color style are normalized to the target color style through an adversarial generation model constructed as follows:
1) Sample image preprocessing step — 1.1) Producing the grayscale image and red-blue coded map
This step converts pathological section sample images A and B into grayscale images, which serve as the input images C_A and C_B of the adversarial generation network G. The grayscale image (Fig. 2a) eliminates, to a certain extent, the color style differences (hue, tone) between pathological sections of different color styles. Although A and B have different color styles, they belong to the same task (for example, recognizing cervical cell lesion types), so after removing the color style differences, the actual content characterized by A and B should be the same: morphological information such as cell texture and cell contour in the A and B data carries the same practical pathological meaning. Therefore, when performing the color normalization transform, fine-grained details such as cell texture and contour should be fully preserved. Here, the invention achieves this by feeding the grayscale image to the generation network G.
As an optimization, consider that in task discrimination the staining reagent stains the cytoplasm red or blue according to acid-base differences; physicians rely on this information when reading slides, so the Task network must retain this information from the sample images during training and testing. Although pathological images of different color styles have identical color styles after normalization by the GAN network, red-blue color confusion of cells may occur. To avoid this, red-blue encoding is performed on the data before normalization; by inputting the red-blue coded map (Fig. 2b) to G, it is guaranteed that the images after color normalization do not exhibit red-blue confusion of cells. A specific implementation: a natural image in RGB color space is characterized by the three channels [R, G, B]; a pixel of the pathological section whose R-channel value is the maximum over the channels is coded 1, and a pixel whose R value is not the maximum is coded 0. This yields a red-blue binary coded map.
The grayscale image and the red-blue coded map of a sample image are stacked, the stacked values are normalized to [-1, 1], and the result is fed into network G as an input sample; the invention defines this state as the intermediate stage C. The grayscale image and red-blue coded map obtained by the above process erase the color style differences (hue, tone, saturation, brightness, etc.) while retaining the important, general hue information (red or blue) and the cell morphology information.
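The preprocessing just described (grayscale conversion, red-blue binary encoding, stacking, and normalization to [-1, 1]) can be sketched as follows. This is a minimal NumPy illustration under our own assumptions (luminance weights, function names); it is not the patent's actual implementation.

```python
import numpy as np

def to_gray(rgb):
    # Linear luminance-weighted grayscale in [0, 1]; erases hue/tone
    # while keeping cell texture and contour information.
    return (rgb @ np.array([0.299, 0.587, 0.114])) / 255.0

def red_blue_code(rgb):
    # 1 where the R channel is the maximum of [R, G, B] (red-stained),
    # 0 otherwise (blue-stained), as described for the binary coded map.
    return (rgb[..., 0] >= rgb.max(axis=-1)).astype(np.float32)

def to_stage_c(rgb):
    # Stack grayscale and red-blue map, then rescale [0, 1] -> [-1, 1]
    # to obtain the intermediate-stage-C input sample for network G.
    stacked = np.stack([to_gray(rgb), red_blue_code(rgb)], axis=-1)
    return stacked * 2.0 - 1.0
```

For a red-stained pixel such as [200, 10, 10] the coded map is 1; for a blue-stained pixel such as [10, 10, 200] it is 0.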
In a preferred mode, data augmentation is also applied to the grayscale image before it is stacked with the red-blue coded map. Although the grayscale image eliminates the color style differences (hue, tone) between pathological sections of different color styles to a certain extent, the gray value is computed linearly from the natural RGB three-channel values and thus still largely retains the brightness, contrast, and similar information of the color style. Therefore a Gamma transform and an HSV color-space perturbation are added, performing a more complex (nonlinear) computation that erases the remaining color style differences such as brightness and contrast between A and B.
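The Gamma-transform part of this augmentation can be sketched as below (the HSV perturbation is omitted for brevity; the seed and gamma range are illustrative assumptions, not values from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, illustrative only

def gamma_jitter(gray, lo=0.7, hi=1.4):
    # Random gamma transform on a [0, 1] grayscale image: a nonlinear
    # perturbation that erases residual brightness/contrast style cues
    # which the linear RGB-to-gray conversion leaves behind.
    g = rng.uniform(lo, hi)
    return np.clip(gray, 0.0, 1.0) ** g
```

Because 0^g = 0 and 1^g = 1, the transform keeps the value range while bending mid-tones, which is exactly the contrast-style perturbation wanted here.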
2) Within-domain adversarial generation training step (Fig. 3a)
Color style A is taken as the target style to reconstruct. Define the style A data after step 1), i.e. at stage C, as C_A, and the style B data at stage C as C_B. A is chosen as the target color style for reconstruction for the following reasons: (1) the style A data has a large number of lesion labels, and the Task network is trained on A and has excellent test results on A; (2) since C_A is obtained from A by transformation, taking A as the target allows an L1 loss to be used, which strongly supervises the style-reconstructing network G and makes the network converge more easily.
The generation network G adopts an adjusted U-net structure (narrower and shallower than the conventional U-net); the discrimination network D adopts a convolutional neural network (CNN) with five convolutional layers (some convolutional layers followed by BN and Leaky-ReLU).
Sample images C_A are used to train the generation network G so that it generates images A' similar to image A, while discrimination network D1 distinguishes A from A'; through this continual adversarial cycle of generation and discrimination, the generation network G is constructed.
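As an illustration of the downsampling arithmetic in such a five-convolution discriminator, the spatial sizes after each layer can be computed with the standard convolution output formula (kernel size, stride, and padding here are purely hypothetical; the patent does not specify them):

```python
def conv_out(size, kernel=4, stride=2, pad=1):
    # Standard convolution output-size formula.
    return (size + 2 * pad - kernel) // stride + 1

def feature_sizes(size, layers=5):
    # Spatial size after each of the five convolutional layers,
    # assuming every layer halves the resolution.
    sizes = []
    for _ in range(layers):
        size = conv_out(size)
        sizes.append(size)
    return sizes
```

With a 256-pixel input and these assumed hyperparameters, the feature maps shrink 256 → 128 → 64 → 32 → 16 → 8.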
The generation network G and discrimination network D1 obtain their initial parameters by random initialization. The loss function used for within-domain adversarial training is:
G* = arg min_G max_{D1} λ_GAN1 · L_GAN1(G, D1) + λ_L1 · L_L1(G)
where
L_GAN1(G, D1) = E_A[log D1(A)] + E_{C_A}[log(1 − D1(G(C_A)))]
L_L1(G) = E_{A,C_A}[‖A − G(C_A)‖_1]
Here G* is the optimal generator obtained by adversarial training; λ_GAN1 and λ_L1 are hyperparameters weighing the relative importance of the different loss terms; E_A[·] is the expectation of the bracketed expression under the distribution of A, E_{C_A}[·] the expectation under the distribution of C_A, and E_{A,C_A}[·] the expectation under the joint distribution of A and C_A; G is the generator, D1 the within-domain discriminator, A the original color image of the target color style, and C_A the grayscale image and red-blue coded map of A input to G. In this loss function, L_GAN1 drives the generator G to generate, from the input image of the target color style, a color image presenting the target color style that deceives the within-domain discriminator, while the within-domain discriminator aims to distinguish the generated images from true images, so the generator and the within-domain discriminator form adversarial training; L_L1 drives the generated image and the true image to be fully consistent in both color style and content. Hence the generator G trained with these two losses not only incurs no content loss during image generation but also produces a color style close to or even consistent with the target color style, achieving the purpose of normalization.
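Numerically, the two objectives above amount to standard binary cross-entropy adversarial terms plus an L1 reconstruction term. The sketch below assumes sigmoid-output discriminators and the common non-saturating form for the generator's adversarial term; the names and λ defaults are ours, not the patent's.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    # Binary cross-entropy over discriminator probabilities.
    p = np.clip(pred, eps, 1.0 - eps)
    return float(-(target * np.log(p) + (1.0 - target) * np.log(1.0 - p)).mean())

def d1_loss(d_real, d_fake):
    # Within-domain discriminator: score true A as 1 and generated A' as 0.
    return 0.5 * (bce(d_real, np.ones_like(d_real)) +
                  bce(d_fake, np.zeros_like(d_fake)))

def g_loss(d_fake, gen_img, target_img, lam_gan1=1.0, lam_l1=100.0):
    # Generator: fool D1 (target 1 on generated images), plus the L1 term
    # tying G(C_A) back to the target-style original A.
    adv = bce(d_fake, np.ones_like(d_fake))
    l1 = float(np.abs(target_img - gen_img).mean())
    return lam_gan1 * adv + lam_l1 * l1
```

A perfect reconstruction with a fully fooled discriminator drives the generator loss toward zero, while a maximally uncertain discriminator output (0.5) contributes log 2 per term.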
3) Between-domain adversarial generation learning step (Fig. 3c)
The adversarial generation network G can already realize the color style normalization processes A → C_A → A' and B → C_B → B'. However, since C_A and C_B still differ in distribution, the A' and B' produced by the generation network G cannot be guaranteed to have absolutely consistent color styles; an adversarial training link between A' and B' is therefore added to make A' and B' more consistent.
The concrete implementation is: using the generation network G trained in step 2), realize A → C_A → A' and B → C_B → B'; then A is taken as the true image and B', as the generated image in the new sense, is input again to a randomly initialized discriminator D2, and adversarial training is carried out. The loss function used for between-domain adversarial learning is:
L_GAN2(G, D2) = E_A[log D2(A)] + E_{C_B}[log(1 − D2(G(C_B)))]
where E_A[·] is the expectation of the bracketed expression under the distribution of A, D2 is the between-domain discriminator, E_{C_B}[·] is the expectation under the distribution of C_B, and C_B is the grayscale image and red-blue coded map of the pathological image whose color is to be normalized. In this loss function, L_GAN2 drives the generator G to generate, from the input image of the color style to be normalized, a color image presenting the target color style that deceives the between-domain discriminator, while the between-domain discriminator aims to distinguish the generated images from true images, so the generator and the between-domain discriminator form adversarial training. Through this adversarial training, the generator G can finally generate, from input images of the color style to be normalized, a color style close to or even consistent with the target color style, achieving the purpose of normalization.
In the specific implementation, to guarantee that the G network trained in step 2) is not guided toward generating meaningless images by the reinitialized discriminator D2, the L_L1 loss of step 2) is retained; the complete loss function of this stage is expressed as:
G* = arg min_G max_{D2} λ_GAN2 · L_GAN2(G, D2) + λ_L1 · L_L1(G)
Through the newly designed between-domain adversarial training of this step, the consistency of A' and B' is further improved while the generation quality of the images (A' and B') from step 2) is preserved; at this point the color normalization is basically complete.
Considering the large number of Task labels based on A, a task network with excellent generalization performance on image A can be trained (for example, an average accuracy on positive and negative samples of 95%+), referred to as the Task network. Since a deep model trained under a single color style is difficult to apply with the same or similar performance to data of another color style, the Task network tests poorly on B (for example, an average accuracy on positive and negative samples of 65%+). Steps 1)–3) of the invention can normalize images of color style B onto the target style A; in addition, it is desired that, with B having no Task labels, the Task network's test results on B be as close as possible to, or even identical with, those on A. Therefore, on the basis of steps 1)–3), a step 4) is further proposed, in which the generation network G is further trained jointly with the Task network.
4) Task supervision learning step
Through the adversarial learning training of steps 2) and 3) above, a generation network G that generates style A from any style has been obtained, and the images of style A and style B are converted by the generation network G into A' and B', which have good color style consistency. However, although the two losses L_L1 and L_GAN essentially take the style A images as supervision, it is unavoidable that G cannot completely reconstruct color style A; this can be verified by the KL divergence (the relative entropy between A and A') and by Task network testing (the accuracy of the Task network on style A' images differs considerably from that on style B' images).
The specific implementation is: a task network T that executes the designated task is trained in advance using image A as training samples; image C_A is input to the adversarial generation network G obtained in step 3), which outputs image A'; image A' is input to the task network T, and the difference between the output of T and the task label corresponding to image A is compared; this difference is fed back as a loss to further optimize the adversarial generation network G.
If only the G network is optimized, the newly added loss function is:
L_Task(G) = E_{A,C_A,Y_A}[ℓ(T(G(C_A)), Y_A)]
where ℓ denotes the task loss (for example, cross-entropy); E_{A,C_A,Y_A}[·] is the expectation of the bracketed expression under the joint distribution of A, C_A, and Y_A; G is the generator and T is the task network; A is the original color image of the target color style, C_A the grayscale image and red-blue coded map of A, Y_A the task label of A, and C_B the grayscale image and red-blue coded map of the pathological image whose color is to be normalized.
In the specific implementation, the L_L1 loss of step 2) and L_GAN2(G, D2) are retained; the complete loss function of this stage is expressed as:
G* = arg min_G max_{D2} λ_GAN2 · L_GAN2(G, D2) + λ_L1 · L_L1(G) + λ_Task · L_Task(G)
If the generation network G and the Task network are optimized simultaneously, then while the generation network G reconstructs the color style, it also tends to move in a Task-network-friendly direction, letting the Task network simultaneously adapt to the new style A' reconstructed by G, so that the generation network G and the Task network adapt to each other. In that case, the newly added loss function is expressed as:
L_Task(G, T) = E_{A,C_A,Y_A}[ℓ(T(G(C_A)), Y_A)]
where G is the generation network and T is the task network; A is the original color image of the target color style, C_A the grayscale image and red-blue coded map of A, and C_B the grayscale image and red-blue coded map of the pathological image whose color is to be normalized.
In the specific implementation, the parameters of the generation network G and of the Task network are updated simultaneously. The gradient of the generation network G derives from L_GAN2, L_L1, and L_Task, weighed by the loss coefficients λ_GAN2, λ_L1, and λ_Task, while the gradient of the Task network derives only from L_Task. The complete loss function of this stage is:
(G*, T*) = arg min_{G,T} max_{D2} λ_GAN2 · L_GAN2(G, D2) + λ_L1 · L_L1(G) + λ_Task · L_Task(G, T)
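The stage-4 weighting just described, where G is updated from all three terms but the Task network only from the task term, can be sketched as a simple scalar combination (the λ defaults are illustrative assumptions, not values from the patent):

```python
def stage4_objectives(l_gan2, l_l1, l_task,
                      lam_gan2=1.0, lam_l1=100.0, lam_task=1.0):
    # Generator G minimizes the full weighted objective; the Task network T
    # receives gradient only from the task term, as described above.
    g_obj = lam_gan2 * l_gan2 + lam_l1 * l_l1 + lam_task * l_task
    t_obj = lam_task * l_task
    return g_obj, t_obj
```

In an autograd framework this split would be realized by detaching the non-task terms from the Task network's parameters, or simply by routing each objective to its own optimizer.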
Simulation example:
The color style shown in Fig. 4(a) is the target color style; Fig. 4(b) shows the color style to be normalized. The first image from the left in Fig. 4(c) is the result of normalizing the images of Fig. 4(b) with a generation network G trained under the supervision of the within-domain discrimination loss and the L1 loss; the second image from the left in Fig. 4(c) is the result using the within-domain discrimination loss, the between-domain discrimination loss, and the L1 loss as supervision; the third image from the left in Fig. 4(c) is the result using the within-domain discrimination loss, the between-domain discrimination loss, the L1 loss, and the task loss as supervision. It can be seen that the color styles generated under the different loss supervisions have high consistency with the target color style, and that during image generation the detailed information of the pathology images is not lost, remaining fully consistent in content with the original images, i.e. Fig. 4(b).
By training an adversarial generation network, the present invention obtains a generation network that can convert cell pathology images of any color style into cell pathology images of the target style, thereby realizing color normalization of pathological sections. Drawing on the data characteristics of cervical cell pathological sections, the process converts the color image into a grayscale image and a red-blue coded map as the input of the generation network; in this way, the red/blue staining information of the cells is retained while the color style differences between pathological sections of different color styles are eliminated as far as possible. To reduce the style differences between images generated from different input styles, the outputs produced by the generation network for other styles are discriminated against target-style images, further optimizing the generation network. Meanwhile, to better adapt to the task network, joint training of the generation network and the task network is proposed.
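The preprocessing described above can be sketched as follows. The exact red-blue encoding is not spelled out in the text, so the normalized R−B difference used here is an assumption; the grayscale weights are the standard ITU-R BT.601 luma coefficients:

```python
import numpy as np

def preprocess(rgb):
    """Convert an RGB pathology image (H, W, 3, uint8) into the two
    generator inputs described above: a grayscale image and a red-blue
    coded map.  The red-blue map is sketched here as the normalized R-B
    difference, which keeps the red/blue staining information while
    discarding the overall color style; the true encoding is not
    specified in the source text."""
    rgb = rgb.astype(np.float32) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    gray = 0.299 * r + 0.587 * g + 0.114 * b   # BT.601 luma weights
    red_blue = (r - b + 1.0) / 2.0             # map R-B from [-1, 1] into [0, 1]
    return np.stack([gray, red_blue], axis=-1) # (H, W, 2) generator input
```

A pure-red pixel maps to 1.0 in the red-blue channel and a pure-blue pixel to 0.0, so eosin-leaning and hematoxylin-leaning regions remain distinguishable after the color style itself is discarded.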
After training the deep-learning-based pathological section color normalization network by the above method, a generation network with a color normalization effect is obtained, and at the same time the generated images effectively retain the input information required by the task network. Adapting to the task network both improves the generation quality of the generation network and guarantees the performance of the task network.
As will be readily appreciated by those skilled in the art, the foregoing is merely a description of preferred embodiments of the present invention and is not intended to limit the present invention; any modifications, equivalent substitutions, and improvements made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (7)

1. A pathological section color normalization method, characterized in that the color style of a pathological section image A is taken as the target color style, and a pathological section image B of another color style is normalized to the target color style by an adversarial generation model, the adversarial generation model being constructed as follows:
1) sample image preprocessing step:
converting pathological section sample images A and B into grayscale images and red-blue coded maps, which serve as the input images CA and CB of the adversarial generation network G;
2) in-domain adversarial generation training step:
training the generation network G with sample image CA, so that the generation network G generates an image A' similar to image A, while a discrimination network D1 distinguishes between the real A and the generated A'; through this continual adversarial learning between generation and discrimination, the adversarial generation network G is constructed;
3) cross-domain adversarial generation learning step:
using sample image CB, continuing training from the adversarial generation network G of step 2) as a starting point to generate an image B' similar to image A, while a discrimination network D2 distinguishes between the real A and the generated B'; through this continual adversarial learning between generation and discrimination, the adversarial generation network G is optimized.
2. The pathological section color normalization method according to claim 1, characterized in that the loss function used in the in-domain adversarial generation training of step 2) is:
where G* is the optimal generator obtained through adversarial training, and λGAN1 and λL1 are hyperparameters that weigh the relative importance of the different loss functions; EA[·] denotes the expectation of the bracketed expression under the distribution of A, ECA[·] the expectation under the distribution of CA, and EA,CA[·] the expectation under the joint distribution of A and CA; G is the generator, D1 is the in-domain discriminator, A is the original color image of the target color style, and CA is the grayscale image and red-blue coded map of A.
3. The pathological section color normalization method according to claim 1 or 2, characterized in that the loss function used in the cross-domain adversarial generation learning of step 3) is:
where EA[·] is the expectation of the bracketed expression under the distribution of A, D2 is the cross-domain discriminator, ECB[·] is the expectation under the distribution of CB, and CB is the grayscale image and red-blue coded map of the pathological image whose color is to be normalized.
4. The pathological section color normalization method according to claim 1, characterized in that in the sample image preprocessing step 1), pathological section images A and B are also respectively encoded, and red-blue binary maps are obtained from the encoding.
5. The pathological section color normalization method according to claim 1, characterized in that it further comprises a step 4), a task-supervised learning step:
obtaining in advance a task network T trained to perform a specified task with image A as the training sample; inputting image CA into the adversarial generation network G obtained in step 3), the adversarial generation network G outputting image A'; inputting image A' into the task network T, comparing the difference between the output of the task network T and the task label corresponding to image A, and feeding this difference back as a loss to further optimize the adversarial generation network G.
6. The pathological section color normalization method according to claim 5, characterized in that the loss function is expressed as:
where EA,CA,YA[·] is the expectation of the bracketed expression under the joint distribution of A, CA, and YA; G is the generator and T is the task network; A is the original color image of the target color style, CA is the grayscale image and red-blue coded map of A, YA is the task label of A, and CB is the grayscale image and red-blue coded map of the pathological image whose color is to be normalized.
7. An adversarial generator training system for pathological section color normalization, which takes the color style of a pathological section image A as the target color style and normalizes a pathological section image B of another color style to the target color style through an adversarial generation model, the adversarial generator training system comprising:
a sample image preprocessing module, for converting pathological section sample images A and B into grayscale images and red-blue coded maps, which serve as the input images CA and CB of the adversarial generation network G;
an in-domain adversarial generation training module, for training the generation network G with sample image CA, so that the generation network G generates an image A' similar to image A, while a discrimination network D1 distinguishes between the real A and the generated A'; through this continual adversarial learning between generation and discrimination, the adversarial generation network is constructed;
a cross-domain adversarial generation learning module, for continuing training of the generation network G with sample image CB as a starting point, generating an image B' similar to image A, while a discrimination network D2 distinguishes between the real A and the generated B'; through this continual adversarial learning between generation and discrimination, the adversarial generation network G is optimized.
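The two adversarial stages recited in the claims can be sketched as one reusable training iteration, assuming PyTorch, a sigmoid-output discriminator, and toy architectures (all hypothetical; the claims fix neither the network structures nor the exact GAN loss form):

```python
import torch
import torch.nn.functional as F

def adversarial_stage(G, D, opt_G, opt_D, C, A, lam_l1=100.0, use_l1=True):
    """One generate/discriminate iteration.

    Step 2) in-domain:  call with D = D1 and C = C_A, use_l1=True
                        (A is the paired ground truth for C_A).
    Step 3) cross-domain: call with D = D2 and C = C_B, use_l1=False
                        (B' has no paired target-style ground truth).
    D is assumed to output probabilities in (0, 1)."""
    fake = G(C)

    # --- discriminator update: distinguish the real A from the generated image ---
    real_pred = D(A)
    fake_pred = D(fake.detach())           # detach so only D is updated here
    d_loss = F.binary_cross_entropy(real_pred, torch.ones_like(real_pred)) + \
             F.binary_cross_entropy(fake_pred, torch.zeros_like(fake_pred))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # --- generator update: fool the discriminator (+ optional paired L1 term) ---
    gen_pred = D(fake)
    g_loss = F.binary_cross_entropy(gen_pred, torch.ones_like(gen_pred))
    if use_l1:
        g_loss = g_loss + lam_l1 * F.l1_loss(fake, A)
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()
```

Running the loop first with (D1, CA) and then continuing from the same G with (D2, CB) mirrors the construction-then-optimization order of steps 2) and 3).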
CN201910533229.4A 2019-06-19 2019-06-19 Pathological section color normalization method and system Active CN110322396B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910533229.4A CN110322396B (en) 2019-06-19 2019-06-19 Pathological section color normalization method and system


Publications (2)

Publication Number Publication Date
CN110322396A true CN110322396A (en) 2019-10-11
CN110322396B CN110322396B (en) 2022-12-23

Family

ID=68119893

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910533229.4A Active CN110322396B (en) 2019-06-19 2019-06-19 Pathological section color normalization method and system

Country Status (1)

Country Link
CN (1) CN110322396B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111028923A (en) * 2019-10-18 2020-04-17 平安科技(深圳)有限公司 Digital pathological image dyeing normalization method, electronic device and storage medium
CN111062862A (en) * 2019-12-19 2020-04-24 北京澎思科技有限公司 Color-based data enhancement method and system, computer device and storage medium
CN111161359A (en) * 2019-12-12 2020-05-15 东软集团股份有限公司 Image processing method and device
CN111325661A (en) * 2020-02-21 2020-06-23 京工数演(福州)科技有限公司 Seasonal style conversion model and method for MSGAN image
CN111353987A (en) * 2020-03-02 2020-06-30 中国科学技术大学 Cell nucleus segmentation method and device
CN111444844A (en) * 2020-03-26 2020-07-24 苏州腾辉达网络科技有限公司 Liquid-based cell artificial intelligence detection method based on variational self-encoder
CN111754478A (en) * 2020-06-22 2020-10-09 怀光智能科技(武汉)有限公司 Unsupervised domain adaptation system and unsupervised domain adaptation method based on generation countermeasure network
CN111985464A (en) * 2020-08-13 2020-11-24 山东大学 Multi-scale learning character recognition method and system for court judgment documents
CN112750067A (en) * 2019-10-29 2021-05-04 爱思开海力士有限公司 Image processing system and training method thereof
CN114170224A (en) * 2021-01-20 2022-03-11 赛维森(广州)医疗科技服务有限公司 System and method for cellular pathology classification using generative staining normalization
CN114627010A (en) * 2022-03-04 2022-06-14 透彻影像(北京)科技有限公司 Dyeing space migration method based on dyeing density map
CN115239943A (en) * 2022-09-23 2022-10-25 杭州医策科技有限公司 Training method of image correction model and color correction method of slice image

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160307305A1 (en) * 2013-10-23 2016-10-20 Rutgers, The State University Of New Jersey Color standardization for digitized histological images
US20170053398A1 (en) * 2015-08-19 2017-02-23 Colorado Seminary, Owner and Operator of University of Denver Methods and Systems for Human Tissue Analysis using Shearlet Transforms
US20170091937A1 (en) * 2014-06-10 2017-03-30 Ventana Medical Systems, Inc. Methods and systems for assessing risk of breast cancer recurrence
US20180165809A1 (en) * 2016-12-02 2018-06-14 Panagiotis Stanitsas Computer vision for cancerous tissue recognition
CN109670510A (en) * 2018-12-21 2019-04-23 万达信息股份有限公司 A kind of gastroscopic biopsy pathological data screening system and method based on deep learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
NING XU et al.: "Research on pathological slice image segmentation technology", Chinese Journal of Medical Physics *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111028923B (en) * 2019-10-18 2024-01-30 平安科技(深圳)有限公司 Digital pathological image staining normalization method, electronic device and storage medium
CN111028923A (en) * 2019-10-18 2020-04-17 平安科技(深圳)有限公司 Digital pathological image dyeing normalization method, electronic device and storage medium
CN112750067B (en) * 2019-10-29 2024-05-07 爱思开海力士有限公司 Image processing system and training method thereof
CN112750067A (en) * 2019-10-29 2021-05-04 爱思开海力士有限公司 Image processing system and training method thereof
CN111161359A (en) * 2019-12-12 2020-05-15 东软集团股份有限公司 Image processing method and device
CN111161359B (en) * 2019-12-12 2024-04-16 东软集团股份有限公司 Image processing method and device
CN111062862A (en) * 2019-12-19 2020-04-24 北京澎思科技有限公司 Color-based data enhancement method and system, computer device and storage medium
CN111325661A (en) * 2020-02-21 2020-06-23 京工数演(福州)科技有限公司 Seasonal style conversion model and method for MSGAN image
CN111325661B (en) * 2020-02-21 2024-04-09 京工慧创(福州)科技有限公司 Seasonal style conversion model and method for image named MSGAN
CN111353987A (en) * 2020-03-02 2020-06-30 中国科学技术大学 Cell nucleus segmentation method and device
CN111444844A (en) * 2020-03-26 2020-07-24 苏州腾辉达网络科技有限公司 Liquid-based cell artificial intelligence detection method based on variational self-encoder
CN111754478A (en) * 2020-06-22 2020-10-09 怀光智能科技(武汉)有限公司 Unsupervised domain adaptation system and unsupervised domain adaptation method based on generation countermeasure network
CN111985464B (en) * 2020-08-13 2023-08-22 山东大学 Court judgment document-oriented multi-scale learning text recognition method and system
CN111985464A (en) * 2020-08-13 2020-11-24 山东大学 Multi-scale learning character recognition method and system for court judgment documents
CN114170224A (en) * 2021-01-20 2022-03-11 赛维森(广州)医疗科技服务有限公司 System and method for cellular pathology classification using generative staining normalization
CN114627010A (en) * 2022-03-04 2022-06-14 透彻影像(北京)科技有限公司 Dyeing space migration method based on dyeing density map
CN115239943A (en) * 2022-09-23 2022-10-25 杭州医策科技有限公司 Training method of image correction model and color correction method of slice image

Also Published As

Publication number Publication date
CN110322396B (en) 2022-12-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant