CN117893450B - Digital pathological image enhancement method, device and equipment - Google Patents

Digital pathological image enhancement method, device and equipment

Info

Publication number
CN117893450B
CN117893450B · Application CN202410301347.3A
Authority
CN
China
Prior art keywords: image, images, feature, determining, composite
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410301347.3A
Other languages
Chinese (zh)
Other versions
CN117893450A (en)
Inventor
彭博
张立志
李艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Petroleum University
Original Assignee
Southwest Petroleum University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Petroleum University filed Critical Southwest Petroleum University
Priority to CN202410301347.3A priority Critical patent/CN117893450B/en
Publication of CN117893450A publication Critical patent/CN117893450A/en
Application granted granted Critical
Publication of CN117893450B publication Critical patent/CN117893450B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method, a device and equipment for enhancing digital pathological images, belonging to the technical field of image enhancement. The method obtains composite images of breast cancer digital pathological images based on a cycle-consistent generative adversarial network (CycleGAN). The composite images are then characterized and filtered by three cascaded criteria: the image energy value, the feature entropy value and the class feature space distance. The energy value represents the information richness of a composite image and is used to obtain the first composite images; the feature entropy value measures the uncertainty of the first composite images, from which the second composite images are obtained; and the class feature space distance measures the similarity between the second composite images and the real images, yielding high-quality pseudo images that are taken as the final high-quality composite images, so that the quality of the generated images is effectively improved.

Description

Digital pathological image enhancement method, device and equipment
Technical Field
The invention belongs to the technical field of image enhancement, and particularly relates to a method, a device and equipment for enhancing digital pathological images.
Background
In the field of medical imaging, accurate segmentation of pathological images helps pathologists make diagnosis and prognosis decisions, while segmentation algorithms based on deep learning require large amounts of labeled data, so data enhancement becomes an effective approach. In the prior art, Generative Adversarial Networks (GANs) are used to extract the features of the original dataset and adversarially generate new target-domain images; however, owing to differences in the GAN training process, current GAN-based image data enhancement techniques suffer from large fluctuations in the quality of the generated images. How to effectively use a GAN to obtain high-quality generated images is the key to solving this series of problems.
Therefore, how to effectively use the GAN network to obtain high quality generated images is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to solve the technical problem that a GAN network cannot be effectively used to obtain high-quality generated images in the prior art.
To achieve the above object, in one aspect, the present invention provides a method for enhancing a digital pathological image, the method comprising the steps of:
Acquiring a target digital pathological image, wherein the target digital pathological image is a hematoxylin-eosin stained breast cancer pathological section image;
Preprocessing the target digital pathological image, extracting cell nucleus characteristic information, and obtaining an input image according to the cell nucleus characteristic information;
Jointly inputting the input image and the preprocessed target digital pathology image into a preset cycle-consistent generative adversarial network to obtain composite images of the breast cancer digital pathology image;
Determining an energy value of the corresponding composite image according to gradients between pixels in each composite image;
Sorting all the composite images in descending order of energy value, and determining the composite images whose energy values rank within the top first preset percentage as first composite images;
performing classification prediction on the first composite images based on a pre-trained preset classification model, and determining the feature entropy value of each first composite image according to the prediction result;
sorting all the first composite images in ascending order of feature entropy value, and determining the first composite images whose feature entropy values rank within the top second preset percentage as second composite images;
Inputting the preprocessed target digital pathology image and the second composite images into the pre-trained preset classification model, and extracting the real-image high-dimensional features and the second-composite-image high-dimensional features respectively;
Calculating the feature importance of the real-image high-dimensional features through a random forest algorithm, and determining the feature dimensions whose importance ranks within the top third preset percentage, or exceeds a preset threshold, when sorted from high to low; extracting these feature dimensions from the real-image high-dimensional features and the second-composite-image high-dimensional features respectively, as the real-image total features and the second-composite-image total features; averaging the real-image total features to obtain a feature average centroid; and determining the feature space distance of each second composite image according to the cosine distance between its total features and the feature average centroid;
and sorting the second composite images in ascending order of feature space distance, and determining the second composite images whose feature space distances rank within the top fourth preset percentage as high-quality composite images.
Optionally, the extracting the nuclear feature information after preprocessing the target digital pathology image, and obtaining an input image according to the nuclear feature information includes:
Dividing the target digital pathology image into a plurality of tiles of a specified size, selecting a plurality of tiles with rich tissue structure according to tissue information richness and staining quality, and annotating these tiles to obtain the preprocessed target digital pathology image;
determining the nucleus mask data contained in the preprocessed target digital pathology image;
determining nucleus feature information according to the nucleus mask data, wherein the nucleus feature information comprises nucleus size, nucleus contour, and the nucleus-count probability density functions of cancer and paracancerous tiles;
And determining cancer and paracancerous input images according to the nucleus feature information, and taking the cancer and paracancerous input images as the input images.
Optionally, the number of tiles ranges from [80, 100].
Optionally, the step of jointly inputting the input image and the preprocessed target digital pathology image into the preset cycle-consistent generative adversarial network to obtain composite images of the breast cancer digital pathology image comprises:
Determining an input image dataset, wherein the input image dataset comprises a plurality of input images;
Matching the input image dataset and the real image dataset in quantity so that they satisfy a one-to-one correspondence, wherein the real image dataset comprises the tiles, each tile being a real image;
Jointly inputting the input image dataset and the real image dataset into the preset cycle-consistent generative adversarial network;
generating, based on the cycle-consistent generative adversarial network, a composite image from each input image in the input image dataset in accordance with the style of the real images.
Optionally, the feature entropy value of each first composite image is calculated as:
$H(P) = -P\log_2 P - (1-P)\log_2 (1-P)$
wherein H(P) is the feature entropy value and P is the malignancy probability.
Optionally, the feature space distance of each second composite image is calculated as:
$D\left(f_I^{L}, f_c^{L}\right) = 1 - \cos\theta = 1 - \dfrac{f_I^{L} \cdot f_c^{L}}{\left\|f_I^{L}\right\|_2 \left\|f_c^{L}\right\|_2}$
wherein L is the total number of feature dimensions (feature layers), I denotes the composite image, the feature average centroid c is calculated from the average features of all real images of the same class, θ is the angle between the vector $f_I^{L}$ and the vector $f_c^{L}$, $f_I^{L}$ is the feature vector of the composite image over the L feature dimensions, $f_c^{L}$ is the average feature vector of the real images over the L feature dimensions, and the distance between $f_I^{L}$ and $f_c^{L}$ is the cosine distance in feature space. $\left\|f_I^{L}\right\|_2$ is the L2 norm of the vector $f_I^{L}$, i.e. its length; likewise for $\left\|f_c^{L}\right\|_2$.
Optionally, the pre-trained preset classification model is a ResNet18 model.
In yet another aspect, an apparatus for digital pathology image enhancement, comprises:
the first acquisition module is used for acquiring a target digital pathological image, wherein the target digital pathological image is a hematoxylin-eosin stained breast cancer pathological section image;
The extraction module is used for extracting the nuclear characteristic information after preprocessing the target digital pathological image and obtaining an input image according to the nuclear characteristic information;
The second acquisition module is used for jointly inputting the input image and the preprocessed target digital pathology image into a preset cycle-consistent generative adversarial network to obtain composite images of the breast cancer digital pathology image;
The determining module is used for determining the energy value of the corresponding composite image according to the gradient among pixels in each composite image;
Sorting all the composite images in descending order of energy value, and determining the composite images whose energy values rank within the top first preset percentage as first composite images;
performing classification prediction on the first composite images based on a pre-trained preset classification model, and determining the feature entropy value of each first composite image according to the prediction result;
sorting all the first composite images in ascending order of feature entropy value, and determining the first composite images whose feature entropy values rank within the top second preset percentage as second composite images;
Inputting the preprocessed target digital pathology image and the second composite images into the pre-trained preset classification model, and extracting the real-image high-dimensional features and the second-composite-image high-dimensional features respectively;
Calculating the feature importance of the real-image high-dimensional features through a random forest algorithm, and determining the feature dimensions whose importance ranks within the top third preset percentage, or exceeds a preset threshold, when sorted from high to low; extracting these feature dimensions from the real-image high-dimensional features and the second-composite-image high-dimensional features respectively, as the real-image total features and the second-composite-image total features; averaging the real-image total features to obtain a feature average centroid; and determining the feature space distance of each second composite image according to the cosine distance between its total features and the feature average centroid;
and sorting the second composite images in ascending order of feature space distance, and determining the second composite images whose feature space distances rank within the top fourth preset percentage as high-quality composite images.
In yet another aspect, an apparatus for digital pathology image enhancement includes a processor and a memory, the processor coupled to the memory:
the processor is used for calling and executing the program stored in the memory;
the memory is used for storing the program, and the program is at least used for executing the method for enhancing the digital pathological image.
It can be appreciated that the invention provides a method for enhancing digital pathological images: first, a breast cancer digital pathological image, specifically a hematoxylin-eosin (H&E) stained pathological section image, is acquired; the digital pathological image is then preprocessed and nucleus feature information is extracted, which guides the generation of the input images; the input images, together with the real images, are then jointly input into the cycle-consistent generative adversarial network CycleGAN to obtain synthetic data of the breast cancer digital pathological image; finally, the composite images are evaluated by filtering sequentially on image energy value, feature entropy value and class feature space distance, thereby obtaining high-quality images.
Drawings
In order to more clearly illustrate the embodiments of the present description or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments described in the present description, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for digital pathology image enhancement according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a digital pathology image enhancement device according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a device for enhancing digital pathological images according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
As described in the background section, in the prior art, Generative Adversarial Networks (GANs) are generally used to extract features of the original dataset and adversarially generate new target-domain images; however, owing to differences in the GAN training process, current GAN-based image data enhancement techniques generally suffer from problems such as large fluctuations in the quality of the generated images. How to effectively use a GAN to obtain high-quality generated images is the key to solving this series of problems.
Therefore, how to effectively use the GAN network to obtain high quality generated images is a technical problem to be solved by those skilled in the art.
Based on the above, the embodiment of the invention provides a method, a device and equipment for enhancing a digital pathological image, so as to acquire a high-quality image.
Fig. 1 is a schematic flow chart of a method for enhancing a digital pathological image according to an embodiment of the present invention, and although the present description provides the following steps of the method according to the embodiment or the accompanying drawings, more or fewer steps or module units may be included in the method according to the present invention or after being partially combined based on conventional or non-creative labor, and in the steps or structures where there is no logically necessary causal relationship, the execution sequence of the steps or the module structure of the apparatus is not limited to the execution sequence or the module structure shown in the embodiment or the accompanying drawings of the present invention. The described methods or module structures may be implemented in a sequential or parallel manner (e.g., in a parallel processor or multithreaded environment, or even in a distributed processing, server cluster implementation environment) in accordance with the method or module structures shown in the embodiments or figures when the actual device, server, or end product is in use.
The method for enhancing the digital pathological image provided in the embodiment of the present disclosure may be applied to a terminal device such as a client and a server, as shown in fig. 1, and specifically includes the following steps:
step S101, obtaining a target digital pathological image, wherein the target digital pathological image is a hematoxylin-eosin stained breast cancer pathological section image.
In this embodiment, a breast cancer digital pathological section image is taken as an example to describe the digital pathological image enhancement method as a whole. After the breast cancer digital pathological section is obtained, it is stained with hematoxylin-eosin (H&E), and the stained pathological section image is used as the target digital pathological image.
In particular, the main difference between hematoxylin-eosin (H&E) stained pathological section images and other modalities (computed tomography (CT) images, magnetic resonance (MR) images) lies in how they are acquired and what they are used for. A tissue sample is first taken from the patient, typically by biopsy or surgical excision; the tissue sample is then fixed, dehydrated and embedded in paraffin wax, and subsequently cut into extremely thin sections with a microtome. The sections are mounted on glass slides, deparaffinized in an oven, and finally immersed in hematoxylin-eosin dye so that the nuclei are stained blue and the cytoplasm pink. Such images mainly reflect the microstructure and cell morphology of tissue and provide detailed information on cell nuclei, cytoplasm and the like.
Step S102, extracting cell nucleus characteristic information after preprocessing the target digital pathology image, and obtaining an input image according to the cell nucleus characteristic information.
In this embodiment, in order to improve the stability and robustness of the segmentation model, a background perturbation method may be introduced for the input images. The usual preprocessing step divides the pathology image into many small tiles, and these tiles often differ greatly in background because they come from different tissue regions. On this basis, the present application introduces a background perturbation method by adding random background filling to the input image, i.e. filling the originally blank background with pixel blocks whose values lie in the range [0, 255]. In this way the distribution of nuclei is not disturbed, while the segmentation model exhibits stronger robustness to sharp edges such as irregularly shaped and intact-morphology nuclei.
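As an illustration of the background perturbation described above, the following Python sketch fills blank background regions of a tile with random pixel values in [0, 255]. The near-white threshold used to detect blank background and the function name are assumptions for illustration; the embodiment only specifies that blank background is filled with values in [0, 255].

```python
import numpy as np

def perturb_background(tile: np.ndarray, bg_threshold: int = 220) -> np.ndarray:
    """Fill near-white (blank) background pixels of an H&E tile with random noise.

    The brightness threshold used to detect blank background is an assumption;
    the embodiment only states that blank regions are filled with values in [0, 255].
    """
    perturbed = tile.copy()
    # A pixel is treated as background if all RGB channels are close to white.
    background = np.all(tile >= bg_threshold, axis=-1)
    noise = np.random.randint(0, 256, size=tile.shape, dtype=np.uint8)
    perturbed[background] = noise[background]
    return perturbed
```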
In some embodiments, the method may specifically include the following steps:
Dividing the target digital pathology image into a plurality of tiles of a specified size, selecting a plurality of tiles with rich tissue structure according to tissue information richness and staining quality, and annotating these tiles to obtain the preprocessed target digital pathology image;
determining the nucleus mask data contained in the preprocessed target digital pathology image;
determining nucleus feature information according to the nucleus mask data, wherein the nucleus feature information comprises nucleus size, nucleus contour, and the nucleus-count probability density functions of cancer and paracancerous tiles;
And determining cancer and paracancerous input images according to the nucleus feature information, and taking the cancer and paracancerous input images as the input images.
Specifically, the target digital pathological image may be divided into a plurality of small blocks (i.e. tiles) of a specified size; 80-100 tiles containing rich tissue structure may be selected by a person (a doctor) or by a machine according to tissue information richness and staining quality, and the preprocessed target digital pathological image is obtained after manual or machine annotation. The annotated content comprises the delineated contour of each cell nucleus. That is, the nucleus mask data are obtained by manual annotation: for each cell nucleus, its contour is delineated with annotation software (for example, the Labelme image annotation software), giving a number of coordinate points per nucleus, i.e. n coordinate points correspond to one piece of nucleus mask data. From these coordinate points, the contour and shape of each cell nucleus can be cut out, and the probability density of the nucleus count per (cancer or paracancerous) tile can be calculated from the number of nuclei in each (cancer or paracancerous) tile, thereby obtaining the cell nucleus feature information.
After the nucleus feature information is obtained, the count ranges of cancer cells and paracancerous (i.e. benign) cells in a tile can be determined respectively according to the nucleus-count probability density function; nucleus masks are then randomly drawn from the cancer-cell and paracancerous-cell nucleus mask libraries respectively and recombined to generate a cancer-cell input image and a paracancerous-cell input image. It is worth noting that the construction and recombination of the nucleus mask libraries in the present application proceed as follows: starting from an image with a blank background, individual nuclei are cut out according to the nucleus mask data described above to form the nucleus mask libraries (i.e. a cancer-cell nucleus mask library and a paracancerous-cell nucleus mask library are formed respectively); the number of nucleus masks to fill into an input image is determined from the tile's nucleus-count probability density function at the 95% confidence level; and nucleus masks are randomly drawn from the cancer-cell and paracancerous-cell nucleus mask libraries respectively and filled in a non-overlapping manner to form the input image.
Non-overlapping filling is implemented as follows: when the previous nucleus mask n1 is filled, all of its point coordinates c1 are recorded; when the next mask n2 is filled, it is required that none of the point coordinates of n2 lie within c1.
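A minimal Python sketch of the non-overlapping filling procedure, assuming the nucleus masks are stored as small binary arrays and the nucleus count has already been drawn from the nucleus-count probability density function; the placement order and retry limit are illustrative assumptions.

```python
import random
import numpy as np

def fill_input_image(canvas_size, nucleus_masks, n_nuclei, max_tries=200):
    """Place randomly chosen nucleus masks onto a blank canvas without overlap.

    `nucleus_masks` is a list of small binary (0/1) arrays cut out from the
    annotated tiles; `n_nuclei` is drawn beforehand from the nucleus-count
    probability density function. Placement details are assumptions.
    """
    canvas = np.zeros(canvas_size, dtype=np.uint8)
    placed = 0
    while placed < n_nuclei:
        mask = random.choice(nucleus_masks)
        h, w = mask.shape
        for _ in range(max_tries):
            y = random.randint(0, canvas_size[0] - h)
            x = random.randint(0, canvas_size[1] - w)
            region = canvas[y:y + h, x:x + w]
            # Non-overlapping condition: no already-filled pixel under the new mask.
            if not np.any(region & mask):
                canvas[y:y + h, x:x + w] |= mask
                placed += 1
                break
        else:
            break  # could not place another nucleus without overlap
    return canvas
```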
Step S103, jointly inputting the input image and the preprocessed target digital pathology image into a preset cycle-consistent generative adversarial network to obtain composite images of the breast cancer digital pathology image.
Because paired medical image data are very difficult to acquire, and CycleGAN can perform style conversion from source-domain images to target-domain images without matched training data, the generated image not only carries the new style of the target domain but also retains the structural information of the source-domain image, and thus yields realistic and reliable medical images; therefore, the present application adopts CycleGAN.
It should be noted that when the preset cycle-consistent generative adversarial network CycleGAN is trained, the sample input images are obtained in the same way as the input images in this embodiment, except that the target digital pathological image is a sample target digital pathological image. CycleGAN generates a similar composite image from the nucleus feature information in each input image, combined with the style of the real image (the target digital pathology image). In the present application, the style of a real image refers to its visual appearance and characteristics, including color, texture, shape and structure; these reflect the specific visual characteristics and morphological appearance of the real image, and a composite image of the specified style can be generated from them.
That is, the CycleGAN network performs the countermeasure training using a pair of images consisting of a real image plus an input image in the actual training, so that in the final generation task, the model brings the style of the input image as close as possible to the real image, where the style can be understood as the visual appearance and characteristics of the real image.
Specifically, when training the preset cycle-consistent generative adversarial network, the real images (target digital pathology images) used in the present application come from breast cancer pathology image data of the Institute of Clinical Pathology, Huaxi Medical Center of Sichuan University; the dataset consists of 60 hematoxylin-eosin (H&E) stained whole-slide images (WSI) from 43 breast cancer patients. The idea of performing data enhancement with a GAN-based generation method derives from the two-person zero-sum game in game theory: through continuous play between the Generator (G) and the Discriminator (D) in the network, the generator learns the distribution of the original data and generates new data similar to it.
In some embodiments, jointly inputting the input image and the preprocessed target digital pathology image into the preset cycle-consistent generative adversarial network to obtain composite images of the breast cancer digital pathology image specifically comprises:
Determining an input image dataset, wherein the input image dataset comprises a plurality of input images;
Matching the input image dataset and the real image dataset in quantity so that they satisfy a one-to-one correspondence, wherein the real image dataset comprises the tiles, each tile being a real image;
jointly inputting the input image dataset and the real image dataset into the cycle-consistent generative adversarial network CycleGAN;
Generating, based on the cycle-consistent generative adversarial network CycleGAN, a composite image from each input image in the input image dataset in accordance with the style of the real images.
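A simple Python sketch of the quantity-matching step, assuming the input images and real tiles are stored as PNG files in two directories; the directory layout and the random down-sampling used to equalize the two sets are assumptions, and the matched sets would then be handed to a CycleGAN training and generation pipeline.

```python
import random
from pathlib import Path

def build_paired_datasets(input_dir: str, real_dir: str):
    """Match the input-image set and the real-tile set one-to-one by count.

    The patent only requires that the two sets satisfy a one-to-one
    correspondence before being fed jointly into CycleGAN; the file layout
    and the random sub-sampling rule here are assumptions.
    """
    input_paths = sorted(Path(input_dir).glob("*.png"))
    real_paths = sorted(Path(real_dir).glob("*.png"))
    n = min(len(input_paths), len(real_paths))
    return random.sample(input_paths, n), random.sample(real_paths, n)
```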
Step S104, determining the energy value of the corresponding composite image according to the gradient between pixels in each composite image.
The energy value is the result of a gradient energy function computed over an image of given size n×m. It represents the energy, also referred to as the gray value, within a pixel region: the higher the gray value of the image, the brighter it appears visually. To compute the energy value of an image, the absolute values of the image gradients in the x and y directions are computed separately and then added together. The image is filtered with the Sobel operator, which consists of two 3×3 convolution kernels, one horizontal and one vertical; two-dimensional convolution is performed on the original image in the x and y directions, and a weighted difference of the gray values of the four neighboring pixels above, below, left and right of each pixel yields approximate horizontal and vertical brightness differences, which suppresses possible noise in the image and allows edge information to be extracted more accurately. The edge energies of the red, green and blue channels of the RGB image are then added to obtain the energy value of the whole image.
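The gradient-energy computation described above can be sketched in Python with OpenCV as follows; normalization and the exact weighting are assumptions, the essential point being the per-channel sum of absolute Sobel responses in the x and y directions.

```python
import cv2
import numpy as np

def image_energy(image_bgr: np.ndarray) -> float:
    """Gradient-energy value of a colour image using 3x3 Sobel operators.

    Per-channel |dI/dx| + |dI/dy| is accumulated over all three colour
    channels, following the description above; any normalisation is an
    assumption not specified in the embodiment.
    """
    energy = 0.0
    for channel in cv2.split(image_bgr):
        gx = cv2.Sobel(channel, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(channel, cv2.CV_64F, 0, 1, ksize=3)
        energy += float(np.sum(np.abs(gx) + np.abs(gy)))
    return energy
```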
Step S105, sorting all the composite images in descending order of energy value, and determining the composite images whose energy values rank within the top first preset percentage as the first composite images.
Wherein the first preset percentage may be 50%.
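The same sort-and-keep-top-percentage rule recurs at the energy, entropy and feature-space-distance stages; a small illustrative helper is sketched below (the parameter names are assumptions), with `largest=True` for the energy stage and `largest=False` for the two later stages, which keep the smallest values.

```python
def keep_top_fraction(items, scores, fraction=0.5, largest=True):
    """Keep the `fraction` of items with the best scores.

    `largest=True` keeps the highest scores (energy stage); the entropy and
    feature-space-distance stages keep the smallest scores instead.
    """
    order = sorted(range(len(items)), key=lambda i: scores[i], reverse=largest)
    k = max(1, int(len(items) * fraction))
    return [items[i] for i in order[:k]]
```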
Step S106, performing classification prediction on the first composite images based on a pre-trained preset classification model, and determining the feature entropy value of each first composite image according to the prediction result. In the present application, the prediction result may be a predicted probability value.
The pre-trained preset classification model may be a ResNet18 model, a deep residual network with 18 convolutional layers comprising basic convolutional layers, pooling layers and a fully connected layer. Residual connections are introduced: cross-layer shortcut connections and residual learning effectively alleviate the gradient vanishing problem in deep network training, enabling deeper models and better feature learning. The deep network structure has excellent feature learning capability for extracting deep features from the data, and the information entropy can be calculated from the prediction probability. The main ResNet18 model parameters are as follows: lr, the learning rate, is generally fixed at 0.01 or 0.001; batch_size, the batch size, is generally fixed to a multiple of 2 such as 8, 16 or 32, and increasing it speeds up model convergence while decreasing it slows convergence down; epochs, the number of iterations, enhances the model's fitting capacity when increased and weakens it when decreased, and the model can also be relaxed by gradually decreasing this value.
In the embodiment of the application, resNet models are used for training on the real breast cancer pathological images, so that the models learn deep features of the real breast cancer pathological images, then a trained feature extractor is used for carrying out deep feature extraction and classification prediction on the composite images, and the result of classification prediction is used as the calculation basis of the image feature entropy value.
In the embodiment of the invention, the feature entropy value is calculated by substituting the classification prediction result into the information entropy formula. Since the logarithm base is usually set to 2 and the number of events is also 2, the information entropy usually lies between 0 and 1: the smaller the entropy value, the more certain the benign/malignant classification of the composite image, the smaller the uncertainty of the composite image being classified as benign or malignant, and the higher the image quality.
In this embodiment, the feature entropy value of each first composite image is calculated as:
$H(P) = -P\log_2 P - (1-P)\log_2 (1-P)$
wherein H(P) is the feature entropy value and P is the malignancy probability.
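A direct Python rendering of the binary information-entropy formula above; the clamping constant `eps` is an assumption added only to avoid taking the logarithm of zero.

```python
import math

def feature_entropy(p_malignant: float, eps: float = 1e-12) -> float:
    """Binary information entropy H(P) of the classifier's malignancy probability.

    Lower entropy means the classifier is more certain about the composite
    image's class, which the method treats as an indicator of higher quality.
    """
    p = min(max(p_malignant, eps), 1.0 - eps)  # avoid log(0)
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)
```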
Step S107, sorting all the first composite images in ascending order of feature entropy value, and determining the first composite images whose feature entropy values rank within the top second preset percentage as second composite images.
Wherein the second preset percentage may be 50%.
Step S108, inputting the preprocessed target digital pathology image and the second composite images into the pre-trained preset classification model, and extracting the real-image high-dimensional features and the second-composite-image high-dimensional features respectively.
That is, the tile and the second composite image are respectively input into a preset classification model, and the real image high-dimensional feature and the second composite image high-dimensional feature are respectively obtained.
The real-image high-dimensional feature may be the 512-dimensional feature vector extracted by the ResNet18 network.
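A sketch of how the 512-dimensional features could be obtained with a torchvision ResNet18 whose classification head is replaced by an identity mapping; in the method itself the weights would come from the classifier fine-tuned on the real breast-cancer tiles, so the ImageNet initialization shown here (and the recent-torchvision `weights` argument) is only a placeholder assumption.

```python
import torch
import torchvision.models as models

def build_feature_extractor():
    """ResNet18 backbone that outputs the 512-dimensional pooled feature vector.

    The weights would in practice come from the classifier trained on the real
    breast-cancer tiles; ImageNet initialisation here is only a placeholder.
    """
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()  # drop the classification head, keep 512-d features
    backbone.eval()
    return backbone

# Usage: features = build_feature_extractor()(batch)  # batch: (N, 3, 224, 224) -> (N, 512)
```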
Step S109, calculating the feature importance of the real-image high-dimensional features through a random forest algorithm, and determining the feature dimensions whose importance ranks within the top third preset percentage, or exceeds a preset threshold, when sorted from high to low; extracting these feature dimensions from the real-image high-dimensional features and the second-composite-image high-dimensional features respectively, as the real-image total features and the second-composite-image total features; averaging the real-image total features to obtain a feature average centroid; and determining the feature space distance of each second composite image according to the cosine distance between its total features and the feature average centroid. Here the third preset percentage may be 20% and the preset threshold may be 0.02.
A random forest is an ensemble learning method consisting of multiple decision trees. It trains many decision trees by randomly selecting data samples and features, and votes or averages over the predictions of these trees to obtain the final prediction. In the present application, when constructing the random forest, the 512-dimensional high-dimensional features extracted by the classification model for each sample are used as training data x and the classification model's prediction results are used as labels y; a grid search is used to find the optimal hyperparameter combination of the model. During training, the random forest tracks the contribution of each feature to the reduction in impurity (such as the Gini index or entropy). When a feature is used to split a node, the average reduction in impurity attributable to that feature can be used as a measure of its importance, giving a feature importance value; features whose importance value is greater than a preset threshold, or ranked within the top third preset percentage, are selected as the total features. Here the preset threshold may be 0.02, because importance values above this threshold differ significantly from the importance values of the other features. That is, for the 512-dimensional feature vectors, the importance value of each dimension is determined; dimensions with importance value < 0.02 are removed, leaving only the 19 dimensions with importance value > 0.02 as the total features. This is the feature set after elimination, i.e. the low-importance features are removed and the remaining features are combined into a 19-dimensional total feature. Once these 19 dimensions are determined, the corresponding 19-dimensional real high-dimensional features and second-composite-image high-dimensional features are extracted as the real-image total features and the second-composite-image total features.
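A minimal scikit-learn sketch of the impurity-based feature selection described above; the number of trees and the random seed are assumptions, while the 0.02 threshold follows the embodiment.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def select_important_dims(real_features, real_labels, threshold=0.02):
    """Select high-importance feature dimensions with a random forest.

    `real_features` is the (n_samples, 512) matrix of real-image features and
    `real_labels` the classifier's predicted classes; dimensions whose
    impurity-based importance exceeds `threshold` (0.02 in the embodiment) are kept.
    """
    forest = RandomForestClassifier(n_estimators=200, random_state=0)
    forest.fit(real_features, real_labels)
    keep = np.where(forest.feature_importances_ > threshold)[0]
    return keep  # e.g. the 19 retained dimensions described above

# Usage: real_total = real_features[:, keep]; synth_total = synth_features[:, keep]
```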
The feature space distance of each second composite image is calculated as:
$D\left(f_I^{L}, f_c^{L}\right) = 1 - \cos\theta = 1 - \dfrac{f_I^{L} \cdot f_c^{L}}{\left\|f_I^{L}\right\|_2 \left\|f_c^{L}\right\|_2}$
wherein L is the total number of feature dimensions (feature layers), I denotes the composite image, the feature average centroid c is calculated from the average features of all real images of the same class, θ is the angle between the vector $f_I^{L}$ and the vector $f_c^{L}$, $f_I^{L}$ is the feature vector of the composite image over the L feature dimensions, $f_c^{L}$ is the average feature vector of the real images over the L feature dimensions, and the distance between $f_I^{L}$ and $f_c^{L}$ is the cosine distance in feature space. $\left\|f_I^{L}\right\|_2$ is the L2 norm of the vector $f_I^{L}$, i.e. its length; likewise for $\left\|f_c^{L}\right\|_2$. In the present application, the feature space distance is this cosine distance.
The importance of the features is calculated by the random forest algorithm, and the features of high importance are then selected as the total features of the pathological image. The multidimensional feature average centroid of the real images, i.e. the feature average centroid c, is obtained by averaging the multidimensional feature vectors of all real images of the same class, and the cosine distance between each composite image's multidimensional features and the feature average centroid c is calculated in turn. The smaller the cosine distance between a composite image's features and the feature average centroid c, the higher the similarity between the composite image and the real images, i.e. the higher the image quality.
The feature space distance is the distance from a composite image's total features to the class feature average centroid (the class feature average centroids being the feature average centroid of the cancer tiles and the feature average centroid of the paracancerous tiles). This distance can be the cosine distance, a measure of the similarity between two vectors obtained from the cosine of the angle between them. The more similar the directions of the two vectors (the closer the angle is to 0 degrees), the closer the cosine is to 1 and the closer the cosine distance is to 0; the more dissimilar the directions (the closer the angle is to 180 degrees), the closer the cosine is to -1 and the closer the cosine distance is to 2.
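A short Python sketch of the class feature space distance: the centroid is the mean of the selected real-image features of the same class, and the distance is one minus the cosine of the angle between the composite image's selected features and that centroid.

```python
import numpy as np

def feature_space_distance(synth_feature: np.ndarray, real_features: np.ndarray) -> float:
    """Cosine distance between a composite image's selected features and the
    class feature centroid (mean of the corresponding real-image features)."""
    centroid = real_features.mean(axis=0)
    cos_theta = np.dot(synth_feature, centroid) / (
        np.linalg.norm(synth_feature) * np.linalg.norm(centroid)
    )
    return 1.0 - float(cos_theta)
```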
Step S110, sorting the second composite images in ascending order of feature space distance, and determining the second composite images whose feature space distances rank within the top fourth preset percentage as high-quality composite images. The fourth preset percentage may be 50%.
Image characterization is the process of converting image data into an effective representation usable for computer vision tasks. It converts the pixel information in an image into a more meaningful and more tractable form so that a machine can understand and process the image. The present application uses the nucleus feature information to obtain composite images through the cycle-consistent generative adversarial network CycleGAN, and at the same time forms an enhanced-data evaluation method by cascading three different image characterizations, so as to evaluate in turn the image quality of the composite images generated by CycleGAN and screen out the high-quality composite images.
It can be understood that in the technical solution provided by the embodiment of the invention, the composite images are obtained through the cycle-consistent generative adversarial network CycleGAN, and their evaluation is completed by filtering sequentially on image energy value, feature entropy value and class feature space distance, thereby obtaining high-quality pseudo images and achieving the purpose of improving the accuracy of the segmentation model.
In order to explain the technical effects of the technical scheme provided by the embodiment of the invention, the invention further provides a verification embodiment:
expert evaluation method:
The method is as follows: clinical pathology experts were invited to make a subjective, qualitative evaluation of image screening quality. That is, the experts were provided with 200 randomly arranged synthetic images, half of which had not been screened by the digital pathology image enhancement method of the present application and half of which had, and these images were divided into 20 groups. Each group of 10 images consists of two subgroups of 5 images, one drawn from the set of images not screened by the digital pathology image enhancement method of the present application and the other from the set of images screened by it. The 20 combined image groups were submitted separately to the pathology experts, who made a subjective assessment of image quality, i.e. for each group the expert selected the subgroup of better quality. Finally, after the evaluation, the experts' judgments were tallied to complete the qualitative analysis of the digital pathology image enhancement method.
Results: analysis and display are carried out on the judging results of two pathologists, and the two pathologists can distinguish the subgroup screened by the method without the digital pathological image enhancement according to the application and the subgroup screened by the method based on the digital pathological image enhancement according to the application with high consistency. The data statistical analysis shows that in 20 combined images, the quality of the subgroup is better after 15 times of selection of the pathology expert based on the digital pathology image enhancement method of the application,
3 Times of selection are in the tie, and only 2 times of selection are better in quality of subgroup screening by the method which is not enhanced by the digital pathological image. Therefore, the method provided by the application is effective for screening out high-quality synthetic pathological images.
Cell nucleus segmentation quantization index evaluation method:
The full real sample set Frs, the partial real sample set Prs, the partial real samples plus pseudo-pathology images not screened by the digital pathology image enhancement method of the present application (Prs+CycleGAN), and the partial real samples plus pseudo-pathology images screened by the digital pathology image enhancement method of the present application (Prs+CycleGAN+MRHC) were input into several representative segmentation networks. The effectiveness of the digital pathology image enhancement method of the present application is further verified by analyzing the quantitative cell nucleus segmentation metrics of these models.
Internal data (Huaxi breast cancer pathology image data (WCHSCU)) test:
1) Attention-UNet network: the segmentation performance of the augmented sample set 2 (Prs+CycleGAN+MRHC) obtained with the digital pathology image enhancement method of the present application improves by 5.8%, 5.7%, 3.8%, 5.6% and 0.8% on MPA, DSC, MIoU, Precision and Recall respectively over the augmented sample set 1 (Prs+CycleGAN) not screened by the method; compared with the equivalent full real sample set Frs, it comes extremely close on the evaluation metrics and even performs better on the MPA metric;
2) UNet3+ network: the segmentation performance of the augmented sample set 2 (Prs+CycleGAN+MRHC) obtained with the digital pathology image enhancement method of the present application improves by 3.8%, 3.6%, 2.4%, 3.0% and 1.3% on MPA, DSC, MIoU, Precision and Recall respectively over the augmented sample set 1 (Prs+CycleGAN) not screened by the method, and comes extremely close to the equivalent full real sample set Frs on the evaluation metrics;
3) MedT network: the segmentation performance of the augmented sample set 2 (Prs+CycleGAN+MRHC) obtained with the digital pathology image enhancement method of the present application improves by 3.7%, 3.8%, 2.1%, 2.5% and 0.6% on MPA, DSC, MIoU, Precision and Recall respectively over the augmented sample set 1 (Prs+CycleGAN) not screened by the method; it comes extremely close to the equivalent full real sample set Frs on the evaluation metrics and even performs better on the MPA metric.
External public data test (Kumar-BC from The Cancer Genome Atlas and UCSB from the Bioinformatics Center of the University of California, Santa Barbara):
In the present application, comparative analysis models were constructed from the 4 comparison datasets and the 3 segmentation networks. The experimental results show that the trend of improvement on the Kumar-BC and UCSB external datasets is consistent with the trend on the Huaxi independent test set, i.e. the models trained on the augmented sample set 2 (Prs+CycleGAN+MRHC) obtained with the digital pathology image enhancement method of the present application outperform the models trained on the augmented sample set 1 (Prs+CycleGAN) not screened by the method, and approach the models trained on the full real sample set Frs. It is worth noting that the segmentation models built with the digital pathology image enhancement method of the present application perform as well on the Kumar-BC and UCSB external datasets as on the Huaxi independent test set, and on the UCSB dataset even exceed the Huaxi independent test set on all 5 quantitative metrics (MPA, DSC, MIoU, Precision and Recall).
Therefore, the method for enhancing the digital pathological image has better effect.
Based on a general inventive concept, the invention also provides a device for enhancing digital pathological images, which is used for realizing the method embodiment. Fig. 2 is a schematic structural diagram of a device for enhancing digital pathological images according to an embodiment of the present invention. As shown in fig. 2, the apparatus provided by the embodiment of the present invention may include the following structures:
a first acquisition module 21, configured to acquire a target digital pathological image, where the target digital pathological image is a hematoxylin-eosin stained breast cancer pathological section image;
The extracting module 22 is configured to pre-process the target digital pathology image, extract nuclear feature information, and obtain an input image according to the nuclear feature information;
A second obtaining module 23, configured to jointly input the input image and the preprocessed target digital pathology image into a preset cycle-consistent generative adversarial network to obtain composite images of the breast cancer digital pathology image;
A determining module 24 for determining an energy value of each composite image based on the gradient between pixels in the respective composite image;
Sorting all the composite images in descending order of energy value, and determining the composite images whose energy values rank within the top first preset percentage as first composite images;
performing classification prediction on the first composite images based on a pre-trained preset classification model, and determining the feature entropy value of each first composite image according to the prediction result;
sorting all the first composite images in ascending order of feature entropy value, and determining the first composite images whose feature entropy values rank within the top second preset percentage as second composite images;
Inputting the preprocessed target digital pathology image and the second composite images into the pre-trained preset classification model, and extracting the real-image high-dimensional features and the second-composite-image high-dimensional features respectively;
Calculating the feature importance of the real-image high-dimensional features through a random forest algorithm, and determining the feature dimensions whose importance ranks within the top third preset percentage, or exceeds a preset threshold, when sorted from high to low; extracting these feature dimensions from the real-image high-dimensional features and the second-composite-image high-dimensional features respectively, as the real-image total features and the second-composite-image total features; averaging the real-image total features to obtain a feature average centroid; and determining the feature space distance of each second composite image according to the cosine distance between its total features and the feature average centroid;
and sorting the second composite images in ascending order of feature space distance, and determining the second composite images whose feature space distances rank within the top fourth preset percentage as high-quality composite images.
The specific manner in which each module performs its operations in the apparatus of the above embodiment has been described in detail in the method embodiments and will not be elaborated here.
Based on a general inventive concept, the present invention also provides a digital pathology image enhancement apparatus for implementing the above method embodiments.
Fig. 3 is a schematic structural diagram of a device for enhancing digital pathological images according to an embodiment of the present invention. As shown in fig. 3, the apparatus for digital pathology image enhancement of the present embodiment includes a processor 31 and a memory 32, the processor 31 being connected to the memory 32. Wherein the processor 31 is configured to invoke and execute the program stored in the memory 32; the memory 32 is used to store the program for at least performing the method of digital pathology image enhancement in the above embodiments.
Specific embodiments of a device for enhancing a digital pathological image provided in the embodiments of the present application may refer to the implementation manner of the method for enhancing a digital pathological image in any of the above embodiments, which is not described herein.
It is to be understood that the same or similar parts in the above embodiments may be referred to each other, and that in some embodiments, the same or similar parts in other embodiments may be referred to.
It should be noted that in the description of the present invention, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Furthermore, in the description of the present invention, unless otherwise indicated, the meaning of "plurality" means at least two.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the invention, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the invention.

Claims (8)

1. A method of digital pathology image enhancement, the method comprising:
Acquiring a target digital pathological image, wherein the target digital pathological image is a hematoxylin-eosin stained breast cancer pathological section image;
Extracting nuclear characteristic information after preprocessing the target digital pathology image, and obtaining an input image according to the nuclear characteristic information, wherein the method comprises the following steps:
dividing the target digital pathological image into a plurality of tiles of a specified size, determining a plurality of tiles with abundant tissue structure according to tissue information richness and staining quality, and denoting these tiles as the preprocessed target digital pathological image;
determining nuclear mask data contained in the preprocessed target digital pathology image;
determining nuclear characteristic information according to the nuclear mask data; wherein the nuclear characteristic information comprises nuclear size, nuclear contour, and the nuclear-count probability density functions of the cancerous and paracancerous tiles;
determining cancerous and paracancerous input images according to the cell nucleus characteristic information, and taking the cancerous and paracancerous input images as the input images;
the input image and the preprocessed target digital pathology image are jointly input into a preset cycle generative adversarial network (CycleGAN), and a synthetic image of the breast cancer digital pathology image is obtained; wherein the synthetic image is obtained, based on the preset cycle generative adversarial network, by combining the nuclear characteristic information of the input image with the image style of the preprocessed target digital pathological image;
determining an energy value of each synthetic image according to the gradients between pixels in that synthetic image;
sorting all the synthetic images in descending order of energy value, and determining the synthetic images whose energy values rank within the top first preset percentage as first synthetic images;
performing classification prediction on the first synthetic images based on a pre-trained preset classification model, and determining a feature entropy value of each first synthetic image according to the prediction result;
sorting all the first synthetic images in ascending order of feature entropy value, and determining the first synthetic images whose feature entropy values rank within the top second preset percentage as second synthetic images;
inputting the preprocessed target digital pathology images and the second synthetic images into the pre-trained preset classification model, and extracting real-image high-dimensional features and second-synthetic-image high-dimensional features, respectively; wherein the real images are the preprocessed target digital pathological images;
calculating the feature importance of the real-image high-dimensional features through a random forest algorithm; determining the feature dimensions whose importance, ranked from high to low, falls within the top third preset percentage or exceeds a preset threshold value; extracting those feature dimensions from the real-image high-dimensional features and the second-synthetic-image high-dimensional features, respectively, to serve as real-image total features and second-synthetic-image total features; averaging the real-image total features to obtain a feature average centroid; and determining a feature space distance of each second synthetic image according to the cosine distance between its total features and the feature average centroid;
and sorting the second synthetic images in ascending order of feature space distance, and determining the second synthetic images whose feature space distances rank within the top fourth preset percentage as high-quality synthetic images.
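As an illustrative sketch only (not part of the claims): the energy-based screening recited in claim 1 could be realised as below, assuming each synthetic tile is available as a NumPy array and that the "energy value" is the mean gradient magnitude between neighbouring pixels; the function names and the example percentage are hypothetical.

```python
import numpy as np

def image_energy(tile: np.ndarray) -> float:
    """Energy of one synthetic tile: mean magnitude of the inter-pixel
    gradients (richer tissue structure gives a larger value)."""
    gray = tile.astype(np.float64)
    if gray.ndim == 3:                      # collapse RGB to a single channel
        gray = gray.mean(axis=2)
    gy, gx = np.gradient(gray)              # gradients between adjacent pixels
    return float(np.mean(np.hypot(gx, gy)))

def top_percent(images, scores, percent: float):
    """Keep the images whose score ranks within the top `percent`, descending."""
    order = np.argsort(scores)[::-1]
    keep = order[: max(1, int(len(images) * percent / 100.0))]
    return [images[i] for i in keep]

# first_synthetic = top_percent(synthetic_tiles,
#                               [image_energy(t) for t in synthetic_tiles],
#                               percent=50)   # "first preset percentage" (hypothetical)
```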
2. The method of claim 1, wherein the number of tiles is in the range [80, 100].
3. The method of claim 1, wherein the step of jointly inputting the input image and the preprocessed target digital pathology image into the preset cycle generative adversarial network to obtain a synthetic image of the breast cancer digital pathology image comprises:
Determining an input image dataset, wherein the input image dataset comprises a plurality of input images therein;
performing quantity matching between the input image dataset and the real image dataset so that a one-to-one correspondence condition is satisfied; wherein the real image dataset comprises the tiles, each tile being a real image;
jointly inputting the input image dataset and the real image dataset into the preset cycle generative adversarial network;
and generating, based on the cycle generative adversarial network, a synthetic image for each input image in the input image dataset according to the style of the real images.
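A minimal sketch of the quantity matching in claim 3, assuming both datasets are plain Python lists; the resampling strategy shown (down- or up-sampling the real tiles to the size of the input set) is only one plausible way to satisfy the one-to-one correspondence condition, and the generator call is a hypothetical placeholder for the trained CycleGAN.

```python
import random

def quantity_match(input_imgs, real_imgs, seed=0):
    """Resample the real-image set to the size of the input-image set and
    zip the two into one-to-one (input, style reference) training pairs."""
    rng = random.Random(seed)
    if len(real_imgs) >= len(input_imgs):
        real_sel = rng.sample(real_imgs, len(input_imgs))
    else:
        real_sel = real_imgs + rng.choices(real_imgs, k=len(input_imgs) - len(real_imgs))
    return list(zip(input_imgs, real_sel))

# pairs = quantity_match(input_dataset, real_dataset)
# synthetic = [cyclegan_generator(src) for src, _ in pairs]   # hypothetical generator
```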
4. The method of claim 1, wherein the method of calculating the feature entropy value of each of the first synthetic images includes:
H(p) = -p·log2(p) - (1-p)·log2(1-p)
wherein H(p) is the feature entropy value and p is the predicted malignancy probability.
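A one-function sketch of the feature entropy in claim 4, assuming the pre-trained classifier outputs a single malignancy probability p per image; the epsilon clamp is added only to guard against log(0).

```python
import numpy as np

def feature_entropy(p: float, eps: float = 1e-12) -> float:
    """Binary entropy H(p) of the predicted malignancy probability p."""
    p = min(max(p, eps), 1.0 - eps)        # avoid log2(0)
    return float(-p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p))

# feature_entropy(0.5) == 1.0 (most uncertain); feature_entropy(0.9) ≈ 0.47
```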
5. The method of claim 1, wherein the method of calculating the feature space distance of each of the second synthetic images comprises:
D(I) = 1 - cosθ = 1 - (f_I^L · c^L) / (||f_I^L||_2 · ||c^L||_2)
wherein L is the total number of feature layers; I is the synthetic image; the feature average centroid c is calculated from the average features of all real images of the same class; θ is the included angle between the vector f_I^L and the vector c^L; f_I^L is the feature vector of the synthetic image over the L feature layers; c^L is the average feature vector of the real images over the L feature layers; the distance between f_I^L and c^L is the cosine distance in the feature space; and ||f_I^L||_2 is the L2 norm, i.e. the length, of the vector f_I^L, and likewise for ||c^L||_2.
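A sketch of the class feature space distance used in claims 1 and 5, assuming the high-dimensional features have already been extracted as NumPy matrices and that class labels for the real tiles are available to the random forest; the number of trees and the retained-dimension percentage are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def select_dims_by_importance(real_feats, real_labels, keep_percent=30):
    """Rank feature dimensions by random-forest importance and keep the top ones."""
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    rf.fit(real_feats, real_labels)
    order = np.argsort(rf.feature_importances_)[::-1]
    k = max(1, int(real_feats.shape[1] * keep_percent / 100.0))
    return order[:k]

def feature_space_distance(synth_total, real_totals):
    """Cosine distance D = 1 - cos(theta) between one synthetic total-feature
    vector and the average centroid of the real total features."""
    c = real_totals.mean(axis=0)
    cos = synth_total @ c / (np.linalg.norm(synth_total) * np.linalg.norm(c))
    return float(1.0 - cos)

# dims = select_dims_by_importance(real_high_dim, real_labels)
# d = feature_space_distance(synth_high_dim[0, dims], real_high_dim[:, dims])
```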
6. The method of claim 1, wherein the pre-trained preset classification model is a ResNet model.
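Claim 6 names a ResNet model but not its depth or training data; a plausible sketch of the high-dimensional feature extraction used in claims 1 and 5, assuming an ImageNet-pretrained ResNet-18 from torchvision with its classification head removed, is:

```python
import torch
from torchvision import models, transforms

# Hypothetical feature extractor: the 512-dimensional pooled activation of a
# ResNet-18 backbone serves as the "high-dimensional feature" of a tile.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_tile):
    """Return the pooled feature vector of one tile given as a PIL image."""
    x = preprocess(pil_tile).unsqueeze(0)   # shape (1, 3, 224, 224)
    return backbone(x).squeeze(0)           # shape (512,)
```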
7. An apparatus for digital pathology image enhancement, comprising:
the first acquisition module is used for acquiring a target digital pathological image, wherein the target digital pathological image is a hematoxylin-eosin stained breast cancer pathological section image;
The extraction module is used for extracting the nuclear characteristic information after preprocessing the target digital pathological image and obtaining an input image according to the nuclear characteristic information;
the extraction module is specifically used for dividing the target digital pathological image into a plurality of tiles of a specified size, determining a plurality of tiles with abundant tissue structure according to tissue information richness and staining quality, and denoting these tiles as the preprocessed target digital pathological image;
determining nuclear mask data contained in the preprocessed target digital pathology image;
determining nuclear characteristic information according to the nuclear mask data; wherein the nuclear characteristic information comprises nuclear size, nuclear contour, and the nuclear-count probability density functions of the cancerous and paracancerous tiles;
determining cancerous and paracancerous input images according to the cell nucleus characteristic information, and taking the cancerous and paracancerous input images as the input images;
the second acquisition module is used for jointly inputting the input image and the preprocessed target digital pathology image into the preset cycle generative adversarial network to obtain a synthetic image of the breast cancer digital pathology image; wherein the synthetic image is obtained, based on the preset cycle generative adversarial network, by combining the nuclear characteristic information of the input image with the image style of the preprocessed target digital pathological image;
the determining module is used for determining an energy value of each synthetic image according to the gradients between pixels in that synthetic image;
sorting all the synthetic images in descending order of energy value, and determining the synthetic images whose energy values rank within the top first preset percentage as first synthetic images;
performing classification prediction on the first synthetic images based on a pre-trained preset classification model, and determining a feature entropy value of each first synthetic image according to the prediction result;
sorting all the first synthetic images in ascending order of feature entropy value, and determining the first synthetic images whose feature entropy values rank within the top second preset percentage as second synthetic images;
inputting the preprocessed target digital pathology images and the second synthetic images into the pre-trained preset classification model, and extracting real-image high-dimensional features and second-synthetic-image high-dimensional features, respectively; wherein the real images are the preprocessed target digital pathological images;
calculating the feature importance of the real-image high-dimensional features through a random forest algorithm; determining the feature dimensions whose importance, ranked from high to low, falls within the top third preset percentage or exceeds a preset threshold value; extracting those feature dimensions from the real-image high-dimensional features and the second-synthetic-image high-dimensional features, respectively, to serve as real-image total features and second-synthetic-image total features; averaging the real-image total features to obtain a feature average centroid; and determining a feature space distance of each second synthetic image according to the cosine distance between its total features and the feature average centroid;
and sorting the second synthetic images in ascending order of feature space distance, and determining the second synthetic images whose feature space distances rank within the top fourth preset percentage as high-quality synthetic images.
8. An apparatus for digital pathology image enhancement, comprising a processor and a memory, the processor being coupled to the memory:
the processor is used for calling and executing the program stored in the memory;
the memory is used for storing a program at least for performing the method of digital pathology image enhancement of any one of claims 1 to 6.
CN202410301347.3A 2024-03-15 2024-03-15 Digital pathological image enhancement method, device and equipment Active CN117893450B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410301347.3A CN117893450B (en) 2024-03-15 2024-03-15 Digital pathological image enhancement method, device and equipment

Publications (2)

Publication Number Publication Date
CN117893450A CN117893450A (en) 2024-04-16
CN117893450B true CN117893450B (en) 2024-05-24

Family

ID=90641592

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410301347.3A Active CN117893450B (en) 2024-03-15 2024-03-15 Digital pathological image enhancement method, device and equipment

Country Status (1)

Country Link
CN (1) CN117893450B (en)

Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101577010A (en) * 2009-06-11 2009-11-11 清华大学 Method for automatically evaluating synthetic quality of image based on image library
CN102346912A (en) * 2010-07-23 2012-02-08 索尼公司 Image processing apparatus, image processing method, and program
CN106060512A (en) * 2016-06-28 2016-10-26 华中科技大学 Method for selecting and filling reasonable mapping points in virtual viewpoint synthesis
CN109272492A (en) * 2018-08-24 2019-01-25 深思考人工智能机器人科技(北京)有限公司 A kind of processing method and system of cell pathology smear
KR101969864B1 (en) * 2017-12-15 2019-04-18 동국대학교 산학협력단 Method of synthesizing images based on mutual interlocking of object and background images
CN111242174A (en) * 2019-12-31 2020-06-05 浙江大学 Liver cancer image feature extraction and pathological classification method and device based on imaging omics
CN111860640A (en) * 2020-07-17 2020-10-30 大连海事大学 Specific sea area data set augmentation method based on GAN
CN111985536A (en) * 2020-07-17 2020-11-24 万达信息股份有限公司 Gastroscope pathological image classification method based on weak supervised learning
CN112101451A (en) * 2020-09-14 2020-12-18 北京联合大学 Breast cancer histopathology type classification method based on generation of confrontation network screening image blocks
CN112699885A (en) * 2020-12-21 2021-04-23 杭州反重力智能科技有限公司 Semantic segmentation training data augmentation method and system based on antagonism generation network GAN
WO2021226382A1 (en) * 2020-05-06 2021-11-11 The Board Of Regents Of The University Of Texas System Systems and methods for characterizing a tumor microenvironment using pathological images
CN114627424A (en) * 2022-03-25 2022-06-14 合肥工业大学 Gait recognition method and system based on visual angle transformation
EP4042377A1 (en) * 2019-10-28 2022-08-17 Google LLC Synthetic generation of clinical skin images in pathology
CN115587985A (en) * 2022-10-14 2023-01-10 复旦大学 Method for dividing cell nucleus of histopathology image and normalizing dyeing style
CN115841438A (en) * 2022-10-24 2023-03-24 中国科学院长春光学精密机械与物理研究所 Infrared image and visible light image fusion method based on improved GAN network
CN116186507A (en) * 2022-12-01 2023-05-30 广东石油化工学院 Feature subset selection method, device and storage medium
DE202023101413U1 (en) * 2023-03-21 2023-08-02 Hamid Alinejad-Rokny A system for diagnosing hypertrophic cardiomyopathy using deep learning techniques
CN117032691A (en) * 2023-07-11 2023-11-10 哈尔滨工业大学 Improved GAN network-based ovarian disease ultrasonic image generation algorithm
CN117173464A (en) * 2023-08-29 2023-12-05 武汉大学 Unbalanced medical image classification method and system based on GAN and electronic equipment
CN117437240A (en) * 2023-10-30 2024-01-23 重庆邮电大学 Oral squamous cell carcinoma medical image segmentation method based on improved U-Net network
WO2024021536A1 (en) * 2022-07-27 2024-02-01 华东理工大学 Catalytic cracking unit key index modeling method based on time sequence feature extraction

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11398013B2 (en) * 2019-10-18 2022-07-26 Retrace Labs Generative adversarial network for dental image super-resolution, image sharpening, and denoising

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
A CAD system for automatic dysplasia grading on H&E cervical whole-slide images;Oliveira, S等;《Sci Rep 》;20230309;1-12 *
A survey of synthetic data generation for machine learning;Abufadda M等;《2021 22nd international arab conference on information technology (ACIT)》;20220117;1-7 *
Selective synthetic augmentation with HistoGAN for improved histopathology image classification;Xue Y等;《Med Image Anal》;20201001;1-15 *
SMOOTH-GAN: towards sharp and smooth synthetic EHR data generation;Rashidian S等;《Artificial Intelligence in Medicine: 18th International Conference on Artificial Intelligence in Medicine, AIME 2020》;20200926;37-48 *
Super-resolution analysis of magnetic resonance images based on generative adversarial networks and its application in predicting breast cancer molecular pathology information;Liu Zuhui;《China Master's Theses Full-text Database, Medicine & Health Sciences》;20210215(No. 2);E060-196 *
Super-resolution reconstruction of ultrasound images based on generative adversarial networks;Tang Zhendi et al.;《Journal of Terahertz Science and Electronic Information Technology》;20230531;Vol. 21(No. 5);677-683 *

Similar Documents

Publication Publication Date Title
Han Automatic liver lesion segmentation using a deep convolutional neural network method
Yu et al. Liver vessels segmentation based on 3d residual U-NET
CN104933709B (en) Random walk CT lung tissue image automatic segmentation methods based on prior information
CN109360208A (en) A kind of medical image cutting method based on one way multitask convolutional neural networks
CN110310287A (en) It is neural network based to jeopardize the automatic delineation method of organ, equipment and storage medium
CN111179237B (en) Liver and liver tumor image segmentation method and device
CN105096310B (en) Divide the method and system of liver in magnetic resonance image using multi-channel feature
CN109410167A (en) A kind of analysis method and Related product of 3D galactophore image
CN109034221A (en) A kind of processing method and its device of cervical cytology characteristics of image
CN110782427B (en) Magnetic resonance brain tumor automatic segmentation method based on separable cavity convolution
CN108447063A (en) The multi-modal nuclear magnetic resonance image dividing method of Gliblastoma
CN110120048A (en) In conjunction with the three-dimensional brain tumor image partition method for improving U-Net and CMF
CN104616289A (en) Removal method and system for bone tissue in 3D CT (Three Dimensional Computed Tomography) image
CN111242953B (en) MR image segmentation method and device based on condition generation countermeasure network
CN106682127A (en) Image searching system and method
CN110047079A (en) A kind of optimum segmentation scale selection method based on objects similarity
CN112329871A (en) Pulmonary nodule detection method based on self-correction convolution and channel attention mechanism
CN113191968A (en) Method for establishing three-dimensional ultrasonic image blind denoising model and application thereof
CN116883341A (en) Liver tumor CT image automatic segmentation method based on deep learning
Kitrungrotsakul et al. Interactive deep refinement network for medical image segmentation
Tong et al. A dual tri-path CNN system for brain tumor segmentation
Xiaojie et al. Segmentation of the aortic dissection from CT images based on spatial continuity prior model
CN117893450B (en) Digital pathological image enhancement method, device and equipment
Hanbury et al. Morphological segmentation on learned boundaries
Liu et al. Automatic Lung Parenchyma Segmentation of CT Images Based on Matrix Grey Incidence.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant