CN115861604A - Cervical tissue image processing method, cervical tissue image processing apparatus, computer device, and storage medium


Info

Publication number
CN115861604A
Authority
CN
China
Prior art keywords
image
cervical tissue
magnification
tissue
cervical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310121906.8A
Other languages
Chinese (zh)
Other versions
CN115861604B (en)
Inventor
林真
汪进
陈睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Severson Guangzhou Medical Technology Service Co., Ltd.
Original Assignee
Severson Guangzhou Medical Technology Service Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Severson Guangzhou Medical Technology Service Co., Ltd.
Priority to CN202310121906.8A
Publication of CN115861604A
Application granted
Publication of CN115861604B
Legal status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

The application relates to a cervical tissue image processing method, a cervical tissue image processing apparatus, a computer device, and a storage medium, which can improve the identification accuracy of a target tissue region in a cervical tissue image. The method comprises the following steps: acquiring a high-magnification cervical tissue image and a low-magnification cervical tissue image corresponding to the cervical tissue image; performing feature extraction on the high-magnification and low-magnification cervical tissue images to obtain a first cervical tissue feature corresponding to the high-magnification image and a second cervical tissue feature corresponding to the low-magnification image, and fusing the first and second cervical tissue features into a target cervical tissue feature; inputting the target cervical tissue feature into a trained tissue image multi-class segmentation model to obtain multi-class probability distribution maps; and determining the target tissue region in the cervical tissue image based on the multi-class probability distribution maps.

Description

Cervical tissue image processing method, cervical tissue image processing apparatus, computer device, and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for processing an image of cervical tissue, a computer device, and a storage medium.
Background
With the development of computer vision technology and hardware, it has become possible to analyze pathological slides with the aid of an auxiliary diagnosis system.
In the related art, a trained model can be used to identify target regions in a cervical tissue slice image at an extremely low magnification. In practice, however, this approach misses or misidentifies many target regions, so the identification accuracy for cervical tissue slice images is low.
Disclosure of Invention
In view of the above, it is desirable to provide a cervical tissue image processing method, apparatus, computer device, and computer-readable storage medium capable of improving the identification accuracy of cervical tissue slice images.
In a first aspect, the present application provides a cervical tissue image processing method. The method comprises the following steps:
acquiring a high-magnification cervical tissue image and a low-magnification cervical tissue image corresponding to the cervical tissue image;
performing feature extraction on the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image, and performing feature fusion on the first cervical tissue feature and the second cervical tissue feature to obtain a target cervical tissue feature;
inputting the target cervical tissue feature into a trained tissue image multi-class segmentation model to obtain a multi-class probability distribution map for each of multiple classes of tissue output by the model; in the multi-class probability distribution map corresponding to each class of tissue, the pixel value of each pixel point represents the probability that the pixel point belongs to the target tissue region corresponding to that class of tissue;
determining the target tissue region in the cervical tissue image according to the multi-class probability distribution map.
In one embodiment, the acquiring the corresponding high-magnification cervical tissue image and the low-magnification cervical tissue image of the cervical tissue image includes:
carrying out foreground region identification on a tissue slice image to be identified to obtain a cervical tissue image corresponding to cervical tissue in the tissue slice image;
acquiring a low-magnification image corresponding to each image block in a plurality of image blocks corresponding to the cervical tissue image to obtain a low-magnification cervical tissue image corresponding to the cervical tissue image, acquiring a high-magnification image corresponding to each image block in the plurality of image blocks, and segmenting the high-magnification image into a plurality of high-magnification image blocks as the high-magnification cervical tissue image corresponding to the cervical tissue image; the magnification of the tissue slice image is less than the magnification of the low-magnification image.
In one embodiment, the performing foreground region identification on the tissue slice image to be identified to obtain a cervical tissue image corresponding to cervical tissue in the tissue slice image includes:
carrying out binarization processing on a tissue slice image to be identified, and acquiring a plurality of image blocks corresponding to the tissue slice image after binarization processing;
determining the number of target pixel points in each image block, and determining the image blocks in which the number of target pixel points exceeds a number threshold as cervical tissue images corresponding to cervical tissue; the target pixel points are pixel points whose pixel values meet a preset pixel value condition.
In one embodiment, the determining the target tissue region in the cervical tissue image according to the multi-class probability distribution map includes:
determining pixel value statistical characteristics corresponding to a plurality of pixel points in a multi-class probability distribution map for the multi-class probability distribution map corresponding to the tissue of each class;
inputting the pixel value statistical features of each multi-class probability distribution map into a trained classifier, and determining a plurality of abnormal regions and types corresponding to each abnormal region in the cervical tissue image by the classifier based on the input pixel value statistical features;
and determining target tissue areas corresponding to the tissues of all categories in the cervical tissue image according to the corresponding types of the abnormal areas.
In one embodiment, the inputting the pixel value statistical features of the multi-class probability distribution maps into a trained classifier, and determining, by the classifier, a plurality of abnormal regions and a type corresponding to each abnormal region in the cervical tissue image based on the input pixel value statistical features includes:
fusing each input pixel value statistical feature with the target cervical tissue feature to obtain a fused image feature;
inputting the fused image features into a trained classifier, and determining a plurality of abnormal regions in the cervical tissue image and a type corresponding to each abnormal region by the classifier based on the fused image features.
In one embodiment, the performing feature extraction on the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image includes:
inputting the high-magnification cervical tissue image and the low-magnification cervical tissue image into the same feature extraction network, and determining a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image by the feature extraction network respectively.
In one embodiment, the performing feature extraction on the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image includes:
inputting the high-magnification cervical tissue image into a high-magnification image feature extraction network to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image output by the high-magnification image feature extraction network;
and inputting the low-magnification cervical tissue image into a low-magnification image feature extraction network to obtain a second cervical tissue feature corresponding to the low-magnification cervical tissue image output by the low-magnification image feature extraction network.
In a second aspect, the present application also provides a cervical tissue image processing apparatus. The device comprises:
the high and low magnification ratio tissue image acquisition module is used for acquiring a high magnification ratio cervical tissue image and a low magnification ratio cervical tissue image corresponding to the cervical tissue image;
the characteristic extraction module is used for carrying out characteristic extraction on the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue characteristic corresponding to the high-magnification cervical tissue image and a second cervical tissue characteristic corresponding to the low-magnification cervical tissue image, and carrying out characteristic fusion on the first cervical tissue characteristic and the second cervical tissue characteristic to obtain a target cervical tissue characteristic;
the segmentation module is used for inputting the target cervical tissue feature into a trained tissue image multi-class segmentation model to obtain a multi-class probability distribution map for each of multiple classes of tissue output by the model; in the multi-class probability distribution map corresponding to each class of tissue, the pixel value of each pixel point represents the probability that the pixel point belongs to the target tissue region corresponding to that class of tissue;
a target region determination module, configured to determine the target tissue region in the cervical tissue image according to the multi-class probability distribution map.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the following steps when executing the computer program:
acquiring a high-magnification cervical tissue image and a low-magnification cervical tissue image corresponding to the cervical tissue image;
performing feature extraction on the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image, and performing feature fusion on the first cervical tissue feature and the second cervical tissue feature to obtain a target cervical tissue feature;
inputting the target cervical tissue feature into a trained tissue image multi-class segmentation model to obtain a multi-class probability distribution map for each of multiple classes of tissue output by the model; in the multi-class probability distribution map corresponding to each class of tissue, the pixel value of each pixel point represents the probability that the pixel point belongs to the target tissue region corresponding to that class of tissue;
determining the target tissue region in the cervical tissue image according to the multi-class probability distribution map.
In a fourth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a high-magnification cervical tissue image and a low-magnification cervical tissue image corresponding to the cervical tissue image;
performing feature extraction on the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image, and performing feature fusion on the first cervical tissue feature and the second cervical tissue feature to obtain a target cervical tissue feature;
inputting the target cervical tissue feature into a trained tissue image multi-class segmentation model to obtain a multi-class probability distribution map for each of multiple classes of tissue output by the model; in the multi-class probability distribution map corresponding to each class of tissue, the pixel value of each pixel point represents the probability that the pixel point belongs to the target tissue region corresponding to that class of tissue;
determining the target tissue region in the cervical tissue image according to the multi-class probability distribution map.
In a fifth aspect, the present application further provides a computer program product. The computer program product comprising a computer program which when executed by a processor performs the steps of:
acquiring a high-magnification cervical tissue image and a low-magnification cervical tissue image corresponding to the cervical tissue image;
performing feature extraction on the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image, and performing feature fusion on the first cervical tissue feature and the second cervical tissue feature to obtain a target cervical tissue feature;
inputting the target cervical tissue feature into a trained tissue image multi-class segmentation model to obtain a multi-class probability distribution map for each of multiple classes of tissue output by the model; in the multi-class probability distribution map corresponding to each class of tissue, the pixel value of each pixel point represents the probability that the pixel point belongs to the target tissue region corresponding to that class of tissue;
determining the target tissue region in the cervical tissue image according to the multi-class probability distribution map.
With the above method, apparatus, computer device, and storage medium, after a high-magnification cervical tissue image and a low-magnification cervical tissue image corresponding to a cervical tissue image are acquired, feature extraction is performed on both to obtain a first cervical tissue feature corresponding to the high-magnification image and a second cervical tissue feature corresponding to the low-magnification image, and the two features are fused into a target cervical tissue feature. The trained tissue image multi-class segmentation model then outputs a multi-class probability distribution map for each of multiple classes of tissue; in the map corresponding to each class of tissue, the pixel value of each pixel point represents the probability that the pixel point belongs to the target tissue region corresponding to that class, so that the target tissue region in the cervical tissue image can be determined from the multi-class probability distribution maps. Because the first and second cervical tissue features are acquired from the high-magnification and low-magnification cervical tissue images respectively and then fused, the resulting target cervical tissue feature reflects, comprehensively and from multiple angles, the details and semantic information of the cervical tissue image under different fields of view. Using this feature for target tissue region identification therefore increases the segmentation accuracy of the tissue image multi-class segmentation model and improves the identification accuracy of the target tissue region in the cervical tissue image.
Drawings
Fig. 1 is a schematic flow chart of a method for image processing of cervical tissue in one embodiment;
FIG. 2a is a multi-class probability distribution diagram in one embodiment;
FIG. 2b is a thermodynamic diagram (heat map) after color nesting in one embodiment;
fig. 3 is a schematic flow chart illustrating the steps for obtaining a characteristic of target cervical tissue in one embodiment;
fig. 4 is a schematic flow chart illustrating another step of obtaining a characteristic of target cervical tissue in one embodiment;
FIG. 5 is a schematic flow chart illustrating the steps for determining a target tissue region in one embodiment;
FIG. 6 is a flowchart illustrating a step of obtaining a classification result according to one embodiment;
FIG. 7 is a flowchart of another step of obtaining classification results in one embodiment;
FIG. 8 is a diagram illustrating cropping of a high-low power image according to an embodiment;
FIG. 9a is an image of a tissue section after a binarization process in an embodiment;
FIG. 9b is an exemplary diagram of a candidate region generation mesh in one embodiment;
FIG. 9c is a diagram illustrating a partitioning of an image block according to an embodiment;
fig. 10 is a block diagram showing the structure of a cervical tissue image processing apparatus according to an embodiment;
FIG. 11 is a diagram illustrating an internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, a cervical tissue image processing method is provided. The method is described here as applied to a terminal; it is understood that it can also be applied to a server, or to a system comprising a terminal and a server and implemented through interaction between the two. The terminal can be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, Internet of Things device, or portable wearable device; Internet of Things devices can be smart speakers, smart televisions, smart air conditioners, smart vehicle-mounted devices, and the like, and portable wearable devices can be smart watches, smart bracelets, head-mounted devices, and the like. The server may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In this embodiment, the method includes the steps of:
s101, acquiring a high-magnification cervical tissue image and a low-magnification cervical tissue image corresponding to the cervical tissue image.
In specific implementation, the cervical tissue can be sampled and a tissue slide corresponding to the cervical tissue can be obtained, and then the tissue slide corresponding to the cervical tissue can be subjected to image acquisition to obtain a cervical tissue image.
For the same cervical tissue image, images at different magnifications can be obtained, yielding a high-magnification cervical tissue image and a low-magnification cervical tissue image, wherein the magnification of the high-magnification cervical tissue image is greater than that of the low-magnification cervical tissue image. In other words, the high-magnification and low-magnification cervical tissue images may be obtained by image sampling of the same cervical tissue region, but the image contents of the two differ.
Specifically, the high-magnification cervical tissue image may represent various detailed features, such as the morphology of cervical tissue cells in the sampled cervical tissue (i.e., cells in the cervical tissue), the distribution of the cervical tissue cells (adhesion or dispersion, etc.), and the state of cellular structures in the cervical tissue cells (such as cell nucleus, cell membrane, or cytoplasm, etc.); the low-magnification cervical tissue image may characterize the structural features of the cervical tissue as a whole, such as the distribution of one or more cervical tissues in the sample.
In an alternative embodiment, after acquiring the tissue slide corresponding to the cervical tissue, a whole-slide image (WSI, also called a full-field slice image) or a pyramid image of the tissue slide may be acquired as the cervical tissue image; both the whole-slide image and the pyramid image may include multiple images of the same tissue region at different magnifications. Of course, in other examples, the high-magnification and low-magnification cervical tissue images can also be obtained by adjusting the magnification of the microscope and sampling images of the same tissue region at the different magnifications.
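As an illustration only (the application does not prescribe a particular library), the following Python sketch shows how paired patches at two magnifications might be read from a pyramid WSI using OpenSlide; the level indices, the patch size, and the file name are all assumptions.

```python
# Sketch: reading co-located low- and high-magnification patches from a
# pyramid WSI with OpenSlide. Level indices, magnifications, and the file
# path are illustrative assumptions, not values fixed by this application.
import openslide

slide = openslide.OpenSlide("cervix_slide.svs")  # hypothetical file

# Suppose level 0 is 20x and level 2 is 5x for this scanner (assumption).
HIGH_LEVEL, LOW_LEVEL = 0, 2
PATCH = 512  # patch edge length in pixels at each level (assumption)

def read_pair(x, y):
    """Read patches at both magnifications around the same location.

    (x, y) are level-0 coordinates of the region's top-left corner.
    """
    high = slide.read_region((x, y), HIGH_LEVEL, (PATCH, PATCH)).convert("RGB")
    # A PATCH-sized read at the lower-magnification level covers a larger
    # physical region per pixel, providing the wider field of view.
    low = slide.read_region((x, y), LOW_LEVEL, (PATCH, PATCH)).convert("RGB")
    return high, low

high_img, low_img = read_pair(0, 0)
```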
S102, extracting characteristics of the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue characteristic corresponding to the high-magnification cervical tissue image and a second cervical tissue characteristic corresponding to the low-magnification cervical tissue image, and performing characteristic fusion on the first cervical tissue characteristic and the second cervical tissue characteristic to obtain a target cervical tissue characteristic.
In this step, after the high-magnification cervical tissue image and the low-magnification cervical tissue image are obtained, feature extraction may be performed on the high-magnification cervical tissue image and the low-magnification cervical tissue image, respectively.
Specifically, feature extraction may be performed on the high-magnification cervical tissue image to obtain corresponding cervical tissue features, and in order to facilitate distinction, in this embodiment, the cervical tissue features obtained from the high-magnification cervical tissue image are referred to as first cervical tissue features; correspondingly, feature extraction may also be performed on the low-magnification cervical tissue image, and a cervical tissue feature is obtained based on the feature extraction result, and the feature may also be referred to as a second cervical tissue feature.
In one example, the first cervical tissue characteristic may be characteristic of a cellular level, e.g., the first cervical characteristic may include at least one of: color characteristics of the cervical tissue cells (e.g., color characteristics of the nucleus, cytoplasm, cell membrane, or other cellular structures in the cells), texture characteristics of the cervical tissue cells, shape characteristics of the cervical tissue cells, and spatial relationship characteristics of the plurality of cervical tissue cells. The second cervical tissue characteristic may be characteristic of a tissue grade, for example the second cervical tissue characteristic may include at least one of: the color characteristic of the whole cervical tissue, the texture characteristic of the whole cervical tissue, the shape characteristic of the whole cervical tissue and the spatial relationship characteristic of a plurality of cervical tissues in the visual field.
After the first cervical tissue feature and the second cervical tissue feature are obtained, they can be fused, and the fusion result taken as the target cervical tissue feature. For example, the first and second cervical tissue features may be spliced (concatenated), and the spliced feature used as the target cervical tissue feature; alternatively, a composite vector can be formed from the vectors of the first and second cervical tissue features and used as the target cervical tissue feature. The skilled person can select the specific fusion method according to the actual situation.
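A minimal sketch of the splicing (concatenation) option named above, in PyTorch; the feature dimensions are illustrative assumptions, not values fixed by the application.

```python
import torch

# Illustrative feature dimensions; the application does not fix these.
first_feat = torch.randn(1, 256)   # from the high-magnification image
second_feat = torch.randn(1, 256)  # from the low-magnification image

# Fusion option named in the text: splice (concatenate) the two vectors.
target_feat = torch.cat([first_feat, second_feat], dim=1)  # shape (1, 512)
```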
In this embodiment, by acquiring the corresponding first cervical tissue feature and the second cervical tissue feature from the high-magnification cervical tissue image and the low-magnification cervical tissue image, a processing mode of observing the tissue slide in a diagnosis process in practice in combination with a high-low-magnification mirror can be simulated, multi-level biological features of the cervical tissue are acquired from the cervical tissue images under different magnifications, and the biological features at different levels are fused, so that the obtained target cervical tissue features can comprehensively reflect detail and semantic information of the cervical tissue image under different visual fields at multiple angles, and loss of texture information of a single cell is avoided while cervical tissue structure information is retained.
S103, inputting the target cervical tissue feature into the trained tissue image multi-class segmentation model to obtain a multi-class probability distribution map for each of multiple classes of tissue output by the model; in the multi-class probability distribution map corresponding to each class of tissue, the pixel value of each pixel point represents the probability that the pixel point belongs to the target tissue region corresponding to that class of tissue.
As an example, the target tissue region may be an abnormal region in cervical tissue; the tissue image multi-class segmentation model may be a decoding network (Decoder).
In practical application, after the target cervical tissue feature is obtained, it may be input into the pre-trained tissue image multi-class segmentation model, which generates a multi-class probability distribution map for each of the multiple classes of tissue based on the input target cervical tissue feature; the multi-class probability distribution map may, for example, be displayed as a thermodynamic diagram (heat map).
The multi-class probability distribution map corresponding to each class of tissue includes a plurality of pixel points, and the pixel value of each pixel point can represent the probability that the pixel point belongs to the target tissue region corresponding to that class of tissue, or, equivalently, the probability that the pixel point depicts that class of tissue in the cervical tissue, that is, a cervical tissue cell of a target type (such as an abnormal type).
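The application identifies the segmentation model only as a decoding network (Decoder). Purely as a hedged illustration, a toy decoder head that turns an assumed fused feature map into per-class probability maps could look as follows; the channel sizes, spatial sizes, and class count are all assumptions.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 4  # assumed number of tissue categories, for illustration

# A toy decoder: upsample the fused feature map and emit one probability
# map per tissue class. Channel sizes are illustrative only.
decoder = nn.Sequential(
    nn.ConvTranspose2d(512, 128, kernel_size=2, stride=2),
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(128, NUM_CLASSES, kernel_size=2, stride=2),
)

fused = torch.randn(1, 512, 32, 32)       # assumed fused cervical feature map
logits = decoder(fused)                   # (1, NUM_CLASSES, 128, 128)
prob_maps = torch.softmax(logits, dim=1)  # per-pixel class probabilities
```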
In this step, since the target cervical tissue feature incorporates both the first and second cervical tissue features, the tissue image multi-class segmentation model can identify and segment the target tissue region from the cervical tissue image by combining the tissue-level and cell-level characteristics of the image at different scales, improving the segmentation accuracy for the target tissue region.
In an example, the pixel value of the pixel point may be expressed as being positively correlated with the probability, that is, the larger the pixel value is, the larger the probability that the pixel point belongs to the target tissue region or the cervical tissue cell of the target type is; of course, the pixel value of the pixel point may also be inversely related to the probability.
Fig. 2a shows an example of a multi-class probability distribution map rendered as a grayscale image, in which the pixel value of each pixel point lies in the interval [0, 255]. Each pixel value is obtained by running inference with the tissue image multi-class segmentation model and converting the confidence of that pixel point (that is, the confidence that the pixel point belongs to the target tissue region or to a cervical tissue cell of the target type): the confidence determined by the model can be converted from float (floating-point) data to uint8 data. This conversion allows the confidences of the plurality of pixel points to be stored quickly while saving storage space. In other examples, the grayscale multi-class probability distribution map may further be color-nested; fig. 2b shows an example of the resulting thermodynamic diagram (heat map), which shows the distribution of the target tissue region in the cervical tissue image more clearly.
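The float-to-uint8 conversion and color nesting described above can be sketched with NumPy and OpenCV; the scaling rule and the choice of colormap are illustrative assumptions.

```python
import numpy as np
import cv2

prob_map = np.random.rand(128, 128).astype(np.float32)  # stand-in confidences in [0, 1]

# Convert float confidences to uint8 so each pixel needs one byte of storage.
gray = (prob_map * 255).round().astype(np.uint8)

# "Nest" a color scheme over the grayscale map to obtain a thermodynamic
# (heat) map; COLORMAP_JET is an illustrative choice.
heatmap = cv2.applyColorMap(gray, cv2.COLORMAP_JET)
cv2.imwrite("heatmap.png", heatmap)
```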
In an optional embodiment, the tissue image multi-class segmentation model may be obtained by performing supervised training on the neural network model through a target cervical tissue feature corresponding to the sample cervical tissue image and a multi-class probability distribution map generated by artificially labeling the sample cervical tissue image, where an obtaining manner of the target cervical tissue feature corresponding to the sample cervical tissue image may be the same as a manner of obtaining the target cervical tissue feature in steps S101 to S102, and a specific processing manner may refer to the foregoing, which is not described herein again.
And S104, determining a target tissue area in the cervical tissue image according to the multi-class probability distribution map.
After the multi-class probability distribution map is obtained, a target tissue region in the cervical tissue image may be determined according to the multi-class probability distribution map. Specifically, for example, after obtaining the multi-class probability distribution map, the distribution situation of different classes of tissues in the cervical tissue image can be identified according to the multi-class probability distribution map, so that a tissue region with an abnormality and a corresponding abnormality type in the cervical tissue image can be determined, and the tissue region with the abnormality can be used as a target tissue region.
In other embodiments, when the type (e.g., abnormal type) of the tissue of the target tissue region is identified, the target cervical tissue feature may also be input to a preset classifier, the classifier classifies different tissue regions in the cervical tissue image, and an identification result of the cervical tissue image is output, where the identification result includes the target tissue region in the cervical tissue image and a tissue type corresponding to the target tissue region.
In this embodiment, after the high-magnification and low-magnification cervical tissue images corresponding to a cervical tissue image are acquired, feature extraction is performed on both to obtain the first cervical tissue feature corresponding to the high-magnification image and the second cervical tissue feature corresponding to the low-magnification image, which are fused into the target cervical tissue feature. The tissue image multi-class segmentation model then outputs a multi-class probability distribution map for each of the multiple classes of tissue; in the map corresponding to each class, the pixel value of each pixel point indicates the probability that the pixel point belongs to the target tissue region corresponding to that class, so that the target tissue region in the cervical tissue image can be determined from the maps. Because the first and second cervical tissue features are acquired from the high-magnification and low-magnification images respectively and then fused, the resulting target cervical tissue feature reflects, comprehensively and from multiple angles, the details and semantic information of the cervical tissue image under different fields of view; using it for target tissue region identification therefore increases the segmentation accuracy of the tissue image multi-class segmentation model and improves the identification accuracy of the target tissue region in the cervical tissue image.
Moreover, the scheme of the application can realize end-to-end, full-process automatic identification, quickly produce the classification result and the corresponding evidence (namely, the multi-class probability distribution maps) for the target tissue region in the cervical tissue image, and achieve fully automatic integrated inference without human intervention.
In one embodiment, the step S102 of performing feature extraction on the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image may include the following steps:
inputting the high-magnification cervical tissue image and the low-magnification cervical tissue image into the same feature extraction network, and respectively determining a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image by the feature extraction network.
In practical application, the same feature extraction network can be used for extracting features of the high-magnification cervical tissue image and the low-magnification cervical tissue image, and when the features of the input image are extracted, the high-magnification cervical tissue image and the low-magnification cervical tissue image can be input into the feature extraction network together or sequentially.
In an optional embodiment, when performing feature extraction, as shown in fig. 3, after acquiring the cervical tissue image, a high-magnification image and a low-magnification image of the image may be acquired, and a plurality of image blocks corresponding to the high-magnification image and the low-magnification image may be acquired, respectively, to obtain a high-magnification cervical tissue Patch image and a low-magnification cervical tissue Patch image.
Then, the high-magnification cervical tissue Patch image and the low-magnification cervical tissue Patch image can be input into the same feature extraction network (backbone network) for feature extraction, so that a first cervical tissue feature corresponding to each image block in the high-magnification cervical tissue Patch image and a second cervical tissue feature corresponding to each image block in the low-magnification cervical tissue Patch image are obtained, and then feature fusion can be performed on the plurality of first cervical tissue features and the plurality of second cervical tissue features, so that a target cervical tissue feature is obtained.
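A hedged sketch of this shared-network variant, using a torchvision ResNet-18 purely as a stand-in for the unspecified backbone network; the patch counts, image sizes, and fusion rule are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Stand-in backbone; the application does not name a specific network.
backbone = models.resnet18(weights=None)
backbone.fc = nn.Identity()  # expose the 512-d pooled feature

high_patches = torch.randn(4, 3, 224, 224)  # assumed 4 high-magnification Patch images
low_patch = torch.randn(1, 3, 224, 224)     # assumed 1 low-magnification Patch image

with torch.no_grad():
    first_feats = backbone(high_patches)  # (4, 512) first cervical tissue features
    second_feat = backbone(low_patch)     # (1, 512) second cervical tissue feature

# One fusion option: average the high-magnification patch features, then splice.
target_feat = torch.cat([first_feats.mean(dim=0, keepdim=True), second_feat], dim=1)
```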
In this embodiment, the same feature extraction network is used to perform feature extraction on the high-magnification cervical tissue image and the low-magnification cervical tissue image, so that in the process of identifying the target tissue region in the cervical tissue image, the used computing resources can be effectively saved, and the equipment load and the equipment threshold for identifying the target tissue region can be reduced.
In another embodiment, the step S102 of performing feature extraction on the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image may include the following steps:
inputting the high-magnification cervical tissue image into a high-magnification image feature extraction network to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image output by the high-magnification image feature extraction network; and inputting the low-magnification cervical tissue image into the low-magnification image feature extraction network to obtain a second cervical tissue feature corresponding to the low-magnification cervical tissue image output by the low-magnification image feature extraction network.
Specifically, the high-power image feature extraction network and the low-power image feature extraction network may be trained in advance.
In an example, the high-power image feature extraction network may be trained based on high-magnification sample cervical tissue images and associated cervical tissue cell feature labels, where the cervical tissue cell feature labels may be information reflecting the color, texture, cell structure, or spatial distribution characteristics of cervical tissue cells; the low-power image feature extraction network may be trained based on low-magnification sample cervical tissue images and associated cervical tissue feature labels, where the cervical tissue feature labels may be information reflecting the texture, structure, or spatial distribution characteristics of the cervical tissue.
After the trained high-power image feature extraction network and low-power image feature extraction network are obtained, the high-power cervical tissue image can be input into the high-power image feature extraction network to obtain a first cervical tissue feature output by the high-power image feature extraction network; and inputting the low-magnification cervical tissue image into a low-magnification image feature extraction network to obtain a second cervical tissue feature output by the network.
In an optional embodiment, when performing feature extraction, as shown in fig. 4, after acquiring the cervical tissue image, a high-magnification image and a low-magnification image of the image may be acquired, and a plurality of image blocks corresponding to the high-magnification image and the low-magnification image may be acquired, respectively, to obtain a high-magnification cervical tissue Patch image and a low-magnification cervical tissue Patch image.
Then, the high-magnification cervical tissue Patch image can be input into the high-magnification image feature extraction network backbone to obtain a first cervical tissue feature corresponding to each image block output by the high-magnification image feature extraction network, and the low-magnification cervical tissue Patch image can be input into the low-magnification image feature extraction network backbone to obtain a second cervical tissue feature corresponding to each image block output by the low-magnification image feature extraction network, so that feature fusion can be performed to obtain the target cervical tissue feature.
In this embodiment, the high-magnification image feature extraction network and the low-magnification image feature extraction network which are independent of each other are used to process the high-magnification cervical tissue image and the low-magnification cervical tissue image, so that the features of the image matched with the scale (or magnification) can be extracted in a targeted manner, and the extraction accuracy and precision of the first cervical tissue feature and the second cervical tissue feature are improved.
In one embodiment, as shown in fig. 5, the step S104 of determining the target tissue region in the cervical tissue image according to the multi-class probability distribution map may include the steps of:
s201, aiming at the multi-class probability distribution map corresponding to the tissue of each class, determining pixel value statistical characteristics corresponding to a plurality of pixel points in the multi-class probability distribution map.
The pixel value statistical characteristics can represent the distribution condition of the pixel values of a plurality of pixels in the multi-class probability distribution map.
After the multi-class probability distribution maps are obtained, for the map corresponding to each class of tissue, the pixel values of all pixel points in the map can be counted to obtain the pixel value statistical features corresponding to the plurality of pixel points. In an example, the pixel value statistical features may include at least one of: a confidence histogram, inter-interval threshold ratios, and the ratio of a thresholded region to the whole image area.
The confidence histogram may be generated from the pixel values of the plurality of pixel points; it contains several pixel value intervals and reflects how the pixel values are distributed across the multi-class probability distribution map. An inter-interval threshold ratio may be determined as the ratio of the numbers of pixel points falling into any two of the divided pixel value intervals; the ratio of a thresholded region to the whole image area may be determined as the ratio of the number of pixel points in any pixel value interval to the total number of pixel points in the image.
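A minimal NumPy sketch of the three statistics named above, computed over a uint8 probability map; the bin count and thresholds are illustrative assumptions.

```python
import numpy as np

prob_map = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # stand-in map

# Confidence histogram over assumed pixel-value intervals (bins).
hist, _ = np.histogram(prob_map, bins=16, range=(0, 256))

# Ratio between two intervals: pixels above a high threshold versus pixels
# above a low threshold (illustrative thresholds).
high_count = np.count_nonzero(prob_map >= 200)
low_count = np.count_nonzero(prob_map >= 100)
interval_ratio = high_count / max(low_count, 1)

# Ratio of the thresholded region to the whole image area.
area_ratio = high_count / prob_map.size

stat_feature = np.concatenate([hist / prob_map.size, [interval_ratio, area_ratio]])
```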
S202, inputting the pixel value statistical characteristics of each multi-class probability distribution map into a trained classifier, and determining a plurality of abnormal regions and types corresponding to the abnormal regions in the cervical tissue image by the classifier based on the input pixel value statistical characteristics.
As an example, the classifier may be obtained by training a neural network (e.g., a deep neural network), or a conventional machine learning classifier may be used, such as a Support Vector Machine (SVM), a random forest algorithm (Random Forest), a Lightweight Grid-Based Cluster algorithm (LGBC), or the AdaBoost iterative algorithm.
In this step, the obtained pixel value statistical features of the multi-class probability distribution maps may be input into the trained classifier, which performs classification inference on the cervical tissue image. Specifically, the classifier may determine the distribution of the pixel points across the different multi-class probability distribution maps based on the input pixel value statistical features, identify a plurality of abnormal regions in the cervical tissue image based on that distribution, and determine the type corresponding to each abnormal region, where the type may be a tissue type (e.g., an abnormal cell type).
And S203, determining a target tissue area in the cervical tissue image according to the type corresponding to each abnormal area.
When the type corresponding to each abnormal region is obtained, the abnormal regions whose types have been identified may be used as the target tissue regions in the cervical tissue image. In an alternative embodiment, as shown in fig. 6, if a multi-class probability distribution map is produced for each of a plurality of image blocks of the cervical tissue image, the obtained maps may first be merged. The pixel values of the pixel points in the merged multi-class probability distribution map are then counted to obtain the pixel value statistical feature of the merged map, which serves as the pixel value statistical feature of the whole cervical tissue image. That feature may then be input to the trained classifier to obtain a classification result for the whole cervical tissue image; the classification result generally adopts the category of the tissue region with the most serious variation level (i.e., the highest degree of variation) in the whole image.
In this embodiment, based on the multi-class probability distribution maps obtained from the fusion of different-scale cervical features, target tissue regions of different types of cervical tissue can be quickly identified through the pre-trained classifier, improving the efficiency and accuracy of identifying cervical tissue of a specified type.
In one embodiment, inputting the pixel value statistical features of the multi-class probability distribution maps into a trained classifier, and determining, by the classifier, a plurality of abnormal regions and the types corresponding to the abnormal regions in the cervical tissue image based on the input pixel value statistical features, may include the following steps:
fusing each input pixel value statistical characteristic with a target cervical tissue characteristic to obtain a fused image characteristic; and inputting the fused image features into a trained classifier, and determining a plurality of abnormal regions and types corresponding to the abnormal regions in the cervical tissue image by the classifier based on the fused image features.
Specifically, after the pixel value statistical features of the multiple multi-class probability distribution maps are obtained, feature fusion can be performed on the multiple pixel value statistical features and the target cervical tissue feature, and the fused image feature is obtained from the fusion result. The fused image feature can then be input into the classifier, which identifies and classifies the abnormal regions based on the fused image feature to obtain the type corresponding to each abnormal region.
In an optional embodiment, as shown in fig. 7, suppose the target cervical tissue feature and the multi-class probability distribution maps are each produced per image block of the cervical tissue image. The plurality of target cervical tissue features may first be merged to obtain the target cervical tissue feature of the full cervical tissue image. Likewise, the obtained multi-class probability distribution maps may be merged, and the pixel values of the pixel points in the merged maps counted, to obtain the pixel value statistical feature of the full cervical tissue image. The full-image target cervical tissue feature and pixel value statistical feature can then be merged into the fused image feature, which is input into the trained classifier to obtain the classification result of the full cervical tissue image.
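As a hedged illustration of this fusion-plus-classification step, the sketch below splices assumed statistical and tissue features and feeds them to a random forest, one of the classifier options the application lists; all dimensions, labels, and data are stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in training data: per-image fused feature = pixel value statistics
# spliced with the target cervical tissue feature. Dimensions are assumed.
stat_feats = rng.random((100, 18))
tissue_feats = rng.random((100, 512))
fused = np.hstack([stat_feats, tissue_feats])
labels = rng.integers(0, 4, 100)  # assumed abnormality types

clf = RandomForestClassifier(n_estimators=100).fit(fused, labels)
pred_type = clf.predict(fused[:1])  # predicted type for one fused image feature
```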
In this embodiment, when determining the abnormal region and the type corresponding to the abnormal region, the pixel value statistical feature and the target cervical tissue feature are fused and then input to the classifier, so that the distribution of the pixel values of a plurality of pixel points can be combined while using the multi-scale image feature of the cervical tissue image, and the accuracy of identifying the abnormal region and the type thereof can be improved.
In one embodiment, the step S101 of acquiring a high-magnification cervical tissue image and a low-magnification cervical tissue image corresponding to the cervical tissue image may include the following steps:
performing foreground area identification on the tissue slice image to be identified to obtain a cervical tissue image corresponding to cervical tissue in the tissue slice image; the method comprises the steps of obtaining a low-magnification image corresponding to each image block in a plurality of image blocks corresponding to a cervical tissue image to obtain a low-magnification cervical tissue image corresponding to the cervical tissue image, obtaining a high-magnification image corresponding to each image block in the plurality of image blocks, and segmenting the high-magnification image into a plurality of high-magnification image blocks to serve as the high-magnification cervical tissue image corresponding to the cervical tissue image.
Wherein the magnification of the tissue section image is smaller than the magnification of the low-magnification image.
Specifically, a cervical tissue sample of the patient may be obtained, and after the sample is prepared, a tissue slice image to be identified may be acquired. Foreground region identification may then be performed on the tissue slice image; here, foreground region identification means identifying the effective region of the image, i.e., the image region where the cervical tissue is located. The foreground region identification may be performed at an extremely low magnification, where "extremely low" means a magnification smaller than that of the low-magnification image (for example, the lowest available magnification). Taking a low-magnification image at 5x as an example, the magnification of the tissue slice image used for foreground region identification may be selected from the interval [1, 2.5]; in other words, in this example, the ratio of the magnification of the tissue slice image to that of the low-magnification image may lie in [0.25, 0.5].
Thus, after the foreground region identification, a cervical tissue image corresponding to the cervical tissue in the tissue slice image can be obtained from the identification result. In an alternative embodiment, the cervical tissue image may be composed of a plurality of image blocks; after the cervical tissue image is obtained, a low-magnification image corresponding to each of the image blocks may be acquired, and the plurality of low-magnification images used as the low-magnification cervical tissue images corresponding to the cervical tissue image, where the low-magnification images better present structural features.
Meanwhile, a high-magnification image corresponding to each image block in the plurality of image blocks can be obtained, and the high-magnification image is segmented into the plurality of high-magnification image blocks to serve as the high-magnification cervical tissue image corresponding to the cervical tissue image, wherein the high-magnification image can present more detailed features.
For example, fig. 8 shows an example of cropping high- and low-magnification images. For the low-magnification image and the high-magnification image of the same image block, the two images represent the same physical region but have different sizes. Taking a 5x image and a 20x image as an example, the physical region represented by one 5x image may equal the physical region represented by four 20x images. After the high-magnification image of the same image region is acquired, it may be divided; for example, the 20x image may be cropped into 4 images (without resizing, so as to retain details). During feature extraction, the 5 images (one 5x image and four 20x images) may be input to the feature extraction network simultaneously.
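The 4-way split described above (one 5x patch paired with four 20x tiles of the same physical region, cropped without resizing) can be sketched as follows; the patch size is an assumption.

```python
import numpy as np

PATCH = 256  # assumed patch edge length at 5x
high_img = np.zeros((2 * PATCH, 2 * PATCH, 3), np.uint8)  # 20x image of the region

# Cut the 20x image into 4 tiles without resizing, so detail is preserved.
high_patches = [
    high_img[r * PATCH:(r + 1) * PATCH, c * PATCH:(c + 1) * PATCH]
    for r in range(2)
    for c in range(2)
]
# Together with the single 5x patch, 5 images enter the feature extractor.
```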
In this embodiment, on the one hand, since the image of a tissue slide may contain a large number of irrelevant background areas, performing foreground region identification on the tissue slice image removes the background areas irrelevant to target tissue region identification, so that inference is run only on the cervical tissue image where the cervical tissue is located; this reduces invalid inference time and speeds up identification over the whole tissue slice image. On the other hand, performing foreground region identification at a lower magnification quickly locates all foreground regions in the tissue slice image. Compared with the related art, in which searching for the target tissue region at an extremely low magnification causes missed diagnoses, the present application further acquires a higher-magnification low-magnification image and high-magnification image of the cervical tissue image for the corresponding region of the cervical tissue, and performs multi-scale feature extraction on these higher-magnification images, which can significantly improve the recall rate and accuracy of target tissue region identification.
In an embodiment, performing foreground region identification on the tissue slice image to be identified to obtain a cervical tissue image corresponding to cervical tissue in the tissue slice image may include the following steps:
performing binarization processing on the tissue slice image to be identified, and acquiring a plurality of image blocks corresponding to the binarized tissue slice image; and determining the number of target pixel points in each image block, and determining each image block in which the number of target pixel points exceeds a number threshold as a cervical tissue image corresponding to cervical tissue.
Here, the target pixel points are pixel points whose pixel values satisfy a preset pixel value condition, and the preset pixel value is the pixel value, or pixel value interval, corresponding to the effective foreground area after binarization processing.
In a specific implementation, binarization processing may be performed on the tissue slice image to be identified; the binarization processing may also be referred to as mask processing, and fig. 9a shows a tissue slice image after binarization. The binarized tissue slice image may then be divided into a plurality of image blocks.
In an alternative embodiment, the plurality of image blocks may be obtained through a candidate area generation grid; fig. 9b shows an example of such a grid. Specifically, the grid may be a set of coordinates of croppable regions, generated according to the actual size of the tissue slice image, the size of the image blocks to be cropped, and the overlap (overlapping area) between adjacent image blocks. The overall size of the candidate area generation grid may be the same as the actual size of the tissue slice image, and the size of each cell in the grid is the size of an image block to be cropped. Further, the candidate area generation grid may be superimposed on the binarized tissue slice image to obtain the corresponding plurality of image blocks, for example as shown in fig. 9c. A sketch of generating such a grid follows.
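By way of illustration, the grid coordinates can be generated as below; the function name, clamping the last row/column to the slide border for full coverage, and the example sizes are assumptions not fixed by the text.

```python
def candidate_grid(slide_w, slide_h, patch, overlap):
    """Generate (x0, y0, x1, y1) crop coordinates covering the whole slide
    with the given overlap between neighbouring blocks. The slide is
    assumed to be at least one patch in each dimension."""
    stride = patch - overlap
    xs = list(range(0, slide_w - patch + 1, stride))
    ys = list(range(0, slide_h - patch + 1, stride))
    # Clamp a final column/row to the border so the whole slide is covered.
    if xs[-1] + patch < slide_w:
        xs.append(slide_w - patch)
    if ys[-1] + patch < slide_h:
        ys.append(slide_h - patch)
    return [(x, y, x + patch, y + patch) for y in ys for x in xs]

boxes = candidate_grid(2048, 1536, 512, 64)  # hypothetical sizes
```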
After the plurality of image blocks are obtained, the number of target pixel points in each image block can be determined, and each image block in which the number of target pixel points exceeds the number threshold is determined as a cervical tissue image corresponding to cervical tissue. If the number of target pixel points in an image block does not exceed the number threshold, the image block is not determined as a cervical tissue image, and processing can skip to the next image block.
For example, take the binarized tissue slice image of fig. 9a, in which the effective foreground area is binarized to white with a pixel value of 1. For each image block, if the number of target pixel points with pixel value 1 exceeds the preset number threshold, the image block is determined as a cervical tissue image. In one example, the number threshold for target pixel points may be 0, so that an image block is determined as a cervical tissue image whenever it contains at least one target pixel point; this ensures that target cervical tissue features are extracted for all effective foreground areas and avoids missed detection. A sketch of this selection follows.
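The pixel counting above can be sketched as follows. Otsu binarization via OpenCV is one common choice and is an assumption here (the text does not fix the binarization method), as is the inverted threshold so that tissue, which usually stains darker than the background, maps to pixel value 1.

```python
import cv2
import numpy as np

def select_tissue_blocks(gray_slice, boxes, count_threshold=0):
    """Binarise the slice image and keep each block whose number of target
    (foreground, value 1) pixel points exceeds count_threshold.

    gray_slice: uint8 single-channel image (required by Otsu thresholding).
    boxes: (x0, y0, x1, y1) tuples, e.g. from candidate_grid above.
    """
    _, mask = cv2.threshold(gray_slice, 0, 1,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return [(x0, y0, x1, y1) for (x0, y0, x1, y1) in boxes
            if int(mask[y0:y1, x0:x1].sum()) > count_threshold]
```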
In this embodiment, by counting the number of target pixel points in the plurality of image blocks obtained after binarization and comparing the counts against the threshold, the region where the cervical tissue is located can be identified quickly, improving the identification efficiency of the target tissue region.
It should be understood that, although the steps in the flowcharts of the above embodiments are displayed sequentially as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering constraint on their execution, and they may be executed in other orders. Moreover, at least some of the steps in those flowcharts may comprise multiple sub-steps or stages, which are not necessarily executed at the same moment but may be executed at different moments; their execution order is likewise not necessarily sequential, and they may be executed in turn or alternately with other steps, or with at least some of the sub-steps or stages of other steps.
Based on the same inventive concept, the present application further provides a cervical tissue image processing apparatus for implementing the above cervical tissue image processing method. The solution provided by the apparatus is similar to that described for the method above; therefore, for the specific limitations in the embodiments of the cervical tissue image processing apparatus below, reference may be made to the limitations on the cervical tissue image processing method above, and details are not repeated here.
In one embodiment, as shown in fig. 10, there is provided a cervical tissue image processing apparatus including:
a high-low magnification tissue image acquisition module 1001, configured to acquire a high-magnification cervical tissue image and a low-magnification cervical tissue image corresponding to the cervical tissue image;
a feature extraction module 1002, configured to perform feature extraction on the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image, and perform feature fusion on the first cervical tissue feature and the second cervical tissue feature to obtain a target cervical tissue feature;
a segmentation module 1003, configured to input the target cervical tissue features into a trained tissue image multi-class segmentation model, so as to obtain a multi-class probability distribution map corresponding to each of multiple classes of tissues output by the tissue image multi-class segmentation model; in a multi-class probability distribution map corresponding to the tissue of each class, the pixel value of each pixel point represents the probability that the pixel point belongs to a target tissue area corresponding to the tissue of the class;
a target region determination module 1004, configured to determine the target tissue region in the cervical tissue image according to the multi-class probability distribution map; a minimal sketch of this module pipeline follows.
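By way of illustration and not limitation, the pipeline of modules 1001-1004 might be realised as in the following PyTorch sketch. It uses a shared feature extraction network (one of the embodiments below), channel concatenation as one possible form of feature fusion, and per-class sigmoid maps as the multi-class probability distribution maps; all layer sizes, the tile-averaging alignment, and the class count are assumptions, not the disclosed architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CervicalTissueSegmenterSketch(nn.Module):
    def __init__(self, in_ch=3, feat_ch=32, num_classes=4):
        super().__init__()
        # Shared extraction network for both magnifications (cf. module 1002).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU())
        # Stand-in for the tissue image multi-class segmentation model.
        self.seg_head = nn.Conv2d(2 * feat_ch, num_classes, 1)

    def forward(self, low_mag, high_mag_tiles):
        f_low = self.encoder(low_mag)                          # second feature
        f_high = torch.stack([self.encoder(t)                  # first feature
                              for t in high_mag_tiles]).mean(0)
        # Align the high-magnification feature to the low-magnification grid
        # before fusion -- one simple strategy, assumed here.
        f_high = F.interpolate(f_high, size=f_low.shape[-2:])
        fused = torch.cat([f_low, f_high], dim=1)              # feature fusion
        return torch.sigmoid(self.seg_head(fused))  # multi-class probability maps

model = CervicalTissueSegmenterSketch()
low = torch.randn(1, 3, 128, 128)
tiles = [torch.randn(1, 3, 128, 128) for _ in range(4)]
probs = model(low, tiles)  # (1, 4, 128, 128); each value is a per-class probability
```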
In one embodiment, the high-low magnification tissue image acquisition module 1001 is configured to:
carrying out foreground region identification on a tissue slice image to be identified to obtain a cervical tissue image corresponding to cervical tissue in the tissue slice image;
acquiring a low-magnification image corresponding to each image block of a plurality of image blocks corresponding to the cervical tissue image to obtain a low-magnification cervical tissue image corresponding to the cervical tissue image, acquiring a high-magnification image corresponding to each image block of the plurality of image blocks, and segmenting the high-magnification image into a plurality of high-magnification image blocks as the high-magnification cervical tissue image corresponding to the cervical tissue image; the magnification of the tissue slice image is less than the magnification of the low-magnification image.
In one embodiment, the high-low magnification tissue image acquisition module 1001 is configured to:
carrying out binarization processing on a tissue slice image to be identified, and acquiring a plurality of image blocks corresponding to the tissue slice image after binarization processing;
determining the number of target pixel points in each image block, and determining the image blocks with the number of the target pixel points exceeding a number threshold as cervical tissue images corresponding to cervical tissues; and the target pixel points are pixel points with pixel values meeting the preset pixel value condition.
In one embodiment, the target area determination module 1004 is configured to:
determining pixel value statistical characteristics corresponding to a plurality of pixel points in a multi-class probability distribution map for the multi-class probability distribution map corresponding to the tissue of each class;
inputting the pixel value statistical features of each multi-class probability distribution map into a trained classifier, and determining a plurality of abnormal regions and types corresponding to each abnormal region in the cervical tissue image by the classifier based on the input pixel value statistical features;
and determining a target tissue area in the cervical tissue image according to the type corresponding to each abnormal area; a sketch of the statistics step follows.
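A brief sketch of computing pixel value statistical features from one class's probability map; the particular statistics (mean, maximum, area fraction above a cutoff) and the downstream classifier are assumptions for illustration only.

```python
import numpy as np

def pixel_value_statistics(prob_map):
    """Summarise one class's multi-class probability distribution map with
    simple pixel-value statistics (an illustrative choice of statistics)."""
    return np.array([prob_map.mean(), prob_map.max(),
                     float((prob_map > 0.5).mean())])  # fraction above 0.5

# One statistics vector per class can then be handed to any trained
# classifier (hypothetical `trained_classifier`):
#   feats = np.concatenate([pixel_value_statistics(m) for m in prob_maps])
#   region_types = trained_classifier.predict(feats.reshape(1, -1))
```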
In one embodiment, the target area determination module 1004 is configured to:
fusing each input pixel value statistical feature with the target cervical tissue feature to obtain a fused image feature;
inputting the fused image features into a trained classifier, and determining, by the classifier based on the fused image features, a plurality of abnormal regions in the cervical tissue image and the type corresponding to each abnormal region; a sketch of this fusion follows.
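A minimal sketch of the fusion in this embodiment; plain concatenation of flattened vectors is one plausible fusion and is assumed here.

```python
import numpy as np

def fuse_with_tissue_features(stat_feats, target_tissue_feats):
    """Fuse the pixel value statistical features with the (flattened)
    target cervical tissue feature by concatenation."""
    return np.concatenate([np.ravel(stat_feats),
                           np.ravel(target_tissue_feats)])
```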
In one embodiment, the feature extraction module 1002 is configured to:
inputting the high-magnification cervical tissue image and the low-magnification cervical tissue image into the same feature extraction network, and determining a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image by the feature extraction network respectively.
In one embodiment, the feature extraction module 1002 is configured to:
inputting the high-magnification cervical tissue image into a high-magnification image feature extraction network to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image output by the high-magnification image feature extraction network;
and inputting the low-magnification cervical tissue image into a low-magnification image feature extraction network to obtain a second cervical tissue feature corresponding to the low-magnification cervical tissue image output by the low-magnification image feature extraction network; a sketch of this two-network variant follows.
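For contrast with the shared-network embodiment above, the two-network variant might look like the following; both encoder architectures are placeholders, not the disclosed networks.

```python
import torch.nn as nn

# Independent extraction networks, one per magnification (placeholders).
high_mag_encoder = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
low_mag_encoder = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
# first_feature = high_mag_encoder(high_mag_image)   # hypothetical tensors
# second_feature = low_mag_encoder(low_mag_image)
```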
The various modules of the cervical tissue image processing apparatus described above may be implemented in whole or in part by software, hardware, or combinations thereof. Each of the above modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 11. The computer device includes a processor, a memory, an Input/Output interface (I/O for short), and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used to store images of cervical tissue. The input/output interface of the computer device is used for exchanging information between the processor and an external device. The communication interface of the computer device is used for connecting and communicating with an external terminal through a network. The computer program is executed by a processor to implement a cervical tissue image processing method.
Those skilled in the art will appreciate that the architecture shown in fig. 11 is merely a block diagram of part of the structure related to the present solution and is not intended to limit the computer devices to which the present solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory having a computer program stored therein and a processor that when executing the computer program performs the steps of:
acquiring a high-magnification cervical tissue image and a low-magnification cervical tissue image corresponding to the cervical tissue image;
performing feature extraction on the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image, and performing feature fusion on the first cervical tissue feature and the second cervical tissue feature to obtain a target cervical tissue feature;
inputting the target cervical tissue characteristics into a trained tissue image multi-class segmentation model to obtain a multi-class probability distribution map corresponding to each of multiple classes of tissue output by the tissue image multi-class segmentation model; in a multi-class probability distribution map corresponding to the tissue of each class, the pixel value of each pixel point represents the probability that the pixel point belongs to a target tissue area corresponding to the tissue of the class;
determining the target tissue region in the cervical tissue image according to the multi-class probability distribution map.
In one embodiment, the steps in the other embodiments described above are also implemented when the computer program is executed by a processor.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a high-magnification cervical tissue image and a low-magnification cervical tissue image corresponding to the cervical tissue image;
performing feature extraction on the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image, and performing feature fusion on the first cervical tissue feature and the second cervical tissue feature to obtain a target cervical tissue feature;
inputting the target cervical tissue characteristics into a trained tissue image multi-class segmentation model to obtain a multi-class probability distribution map corresponding to each of multiple classes of tissue output by the tissue image multi-class segmentation model; in a multi-class probability distribution map corresponding to the tissue of each class, the pixel value of each pixel point represents the probability that the pixel point belongs to a target tissue area corresponding to the tissue of the class;
determining the target tissue region in the cervical tissue image according to the multi-class probability distribution map.
In one embodiment, the computer program when executed by the processor also performs the steps in the other embodiments described above.
In one embodiment, a computer program product is provided, comprising a computer program which when executed by a processor performs the steps of:
acquiring a high-magnification cervical tissue image and a low-magnification cervical tissue image corresponding to the cervical tissue image;
performing feature extraction on the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image, and performing feature fusion on the first cervical tissue feature and the second cervical tissue feature to obtain a target cervical tissue feature;
inputting the target cervical tissue characteristics into a trained tissue image multi-class segmentation model to obtain a multi-class probability distribution graph corresponding to each tissue of multiple classes output by the tissue image multi-class segmentation model; in a multi-class probability distribution map corresponding to the tissue of each class, the pixel value of each pixel point represents the probability that the pixel point belongs to a target tissue area corresponding to the tissue of the class;
determining the target tissue region in the cervical tissue image according to the multi-class probability distribution map.
In one embodiment, the computer program when executed by the processor also implements the steps of the other embodiments described above.
It should be noted that the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) referred to in the present application are information and data authorized by the user or fully authorized by each party, and the collection, use and processing of the related data need to comply with the relevant laws and regulations and standards of the relevant countries and regions.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory can include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like, without limitation.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and all of these fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A method of image processing of cervical tissue, the method comprising:
acquiring a high-magnification cervical tissue image and a low-magnification cervical tissue image corresponding to the cervical tissue image;
performing feature extraction on the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image, and performing feature fusion on the first cervical tissue feature and the second cervical tissue feature to obtain a target cervical tissue feature;
inputting the target cervical tissue characteristics into a trained tissue image multi-class segmentation model to obtain a multi-class probability distribution graph corresponding to each tissue of multiple classes output by the tissue image multi-class segmentation model; in a multi-class probability distribution map corresponding to the tissue of each class, the pixel value of each pixel point represents the probability that the pixel point belongs to a target tissue area corresponding to the tissue of the class;
determining the target tissue region in the cervical tissue image according to the multi-class probability distribution map.
2. The method of claim 1, wherein said acquiring a high-magnification cervical tissue image and a low-magnification cervical tissue image corresponding to a cervical tissue image comprises:
carrying out foreground region identification on a tissue slice image to be identified to obtain a cervical tissue image corresponding to cervical tissue in the tissue slice image;
acquiring a low-magnification image corresponding to each image block of a plurality of image blocks corresponding to the cervical tissue image to obtain a low-magnification cervical tissue image corresponding to the cervical tissue image, acquiring a high-magnification image corresponding to each image block of the plurality of image blocks, and segmenting the high-magnification image into a plurality of high-magnification image blocks as the high-magnification cervical tissue image corresponding to the cervical tissue image; the magnification of the tissue slice image is less than the magnification of the low-magnification image.
3. The method of claim 2, wherein the performing foreground region identification on the tissue slice image to be identified to obtain a cervical tissue image corresponding to cervical tissue in the tissue slice image comprises:
carrying out binarization processing on a tissue slice image to be identified, and acquiring a plurality of image blocks corresponding to the tissue slice image after binarization processing;
determining the number of target pixel points in each image block, and determining the image blocks with the number of the target pixel points exceeding a number threshold value as cervical tissue images corresponding to cervical tissues; and the target pixel points are pixel points with pixel values meeting the preset pixel value condition.
4. The method of claim 1, wherein said determining the target tissue region in the cervical tissue image from the multi-class probability distribution map comprises:
determining pixel value statistical characteristics corresponding to a plurality of pixel points in a multi-class probability distribution map for the multi-class probability distribution map corresponding to the tissue of each class;
inputting the pixel value statistical features of each multi-class probability distribution map into a trained classifier, and determining a plurality of abnormal regions and types corresponding to each abnormal region in the cervical tissue image by the classifier based on the input pixel value statistical features;
and determining a target tissue area in the cervical tissue image according to the type corresponding to each abnormal area.
5. The method of claim 4, wherein inputting the pixel value statistical features of the respective multi-class probability distribution maps into a trained classifier, and determining, by the classifier, a plurality of abnormal regions and a type corresponding to each of the abnormal regions in the cervical tissue image based on the input respective pixel value statistical features comprises:
fusing each input pixel value statistical feature with the target cervical tissue feature to obtain a fused image feature;
inputting the fused image features into a trained classifier, and determining a plurality of abnormal regions in the cervical tissue image and a type corresponding to each abnormal region by the classifier based on the fused image features.
6. The method of any one of claims 1-5, wherein the performing feature extraction on the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image comprises:
inputting the high-magnification cervical tissue image and the low-magnification cervical tissue image into the same feature extraction network, and determining a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image by the feature extraction network respectively.
7. The method of any one of claims 1-5, wherein the performing feature extraction on the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image comprises:
inputting the high-magnification cervical tissue image into a high-magnification image feature extraction network to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image output by the high-magnification image feature extraction network;
and inputting the low-magnification cervical tissue image into a low-magnification image feature extraction network to obtain a second cervical tissue feature corresponding to the low-magnification cervical tissue image output by the low-magnification image feature extraction network.
8. An apparatus for image processing of cervical tissue, the apparatus comprising:
the high and low magnification ratio tissue image acquisition module is used for acquiring a high magnification ratio cervical tissue image and a low magnification ratio cervical tissue image corresponding to the cervical tissue image;
the characteristic extraction module is used for carrying out characteristic extraction on the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue characteristic corresponding to the high-magnification cervical tissue image and a second cervical tissue characteristic corresponding to the low-magnification cervical tissue image, and carrying out characteristic fusion on the first cervical tissue characteristic and the second cervical tissue characteristic to obtain a target cervical tissue characteristic;
the segmentation module is used for inputting the target cervical tissue characteristics into a trained tissue image multi-class segmentation model to obtain a multi-class probability distribution graph corresponding to each tissue of multiple classes output by the tissue image multi-class segmentation model; in a multi-class probability distribution map corresponding to the tissue of each class, the pixel value of each pixel point represents the probability that the pixel point belongs to a target tissue area corresponding to the tissue of the class;
a target region determination module, configured to determine the target tissue region in the cervical tissue image according to the multi-class probability distribution map.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202310121906.8A 2023-02-16 2023-02-16 Cervical tissue image processing method, cervical tissue image processing device, computer equipment and storage medium Active CN115861604B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310121906.8A CN115861604B (en) 2023-02-16 2023-02-16 Cervical tissue image processing method, cervical tissue image processing device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN115861604A true CN115861604A (en) 2023-03-28
CN115861604B CN115861604B (en) 2023-06-02

Family

ID=85658210

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310121906.8A Active CN115861604B (en) 2023-02-16 2023-02-16 Cervical tissue image processing method, cervical tissue image processing device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115861604B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109034208A (en) * 2018-07-03 2018-12-18 怀光智能科技(武汉)有限公司 A kind of cervical cell pathological section classification method of high-low resolution combination
CN110610480A (en) * 2019-08-02 2019-12-24 成都上工医信科技有限公司 MCASPP neural network eyeground image optic cup optic disc segmentation model based on Attention mechanism
CN113763386A (en) * 2021-07-13 2021-12-07 合肥工业大学 Multi-scale feature fusion based intelligent segmentation method and system for surgical instrument image
CN114550169A (en) * 2022-02-23 2022-05-27 腾讯科技(深圳)有限公司 Training method, device, equipment and medium for cell classification model
CN115457012A (en) * 2022-09-27 2022-12-09 云南大学 Pathological image segmentation method, system, storage medium, equipment and terminal


Also Published As

Publication number Publication date
CN115861604B (en) 2023-06-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant