CN115861604B - Cervical tissue image processing method, cervical tissue image processing device, computer equipment and storage medium

Info

Publication number: CN115861604B (application CN202310121906.8A)
Authority: CN (China)
Other versions: CN115861604A (original language: Chinese)
Legal status: Active (granted)
Prior art keywords: image, cervical tissue, magnification
Inventors: 林真 (Lin Zhen), 汪进 (Wang Jin), 陈睿 (Chen Rui)
Assignee: Severson Guangzhou Medical Technology Service Co., Ltd. (applicant and current assignee)
Application events: application filed by Severson Guangzhou Medical Technology Service Co., Ltd.; publication of CN115861604A; application granted; publication of CN115861604B

Landscapes

  • Image Analysis (AREA)
Abstract

The application relates to a cervical tissue image processing method, a cervical tissue image processing device, computer equipment and a storage medium, which can improve the identification accuracy of a target tissue area in a cervical tissue image. The method comprises the following steps: obtaining a high-magnification cervical tissue image and a low-magnification cervical tissue image corresponding to the cervical tissue image; extracting features of the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image, and performing feature fusion on the first cervical tissue feature and the second cervical tissue feature to obtain a target cervical tissue feature; inputting the target cervical tissue characteristics into a trained tissue image multi-category segmentation model to obtain a multi-category probability distribution map; and determining the target tissue region in the cervical tissue image according to the multi-category probability distribution map.

Description

Cervical tissue image processing method, cervical tissue image processing device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a cervical tissue image processing method, apparatus, computer device, and storage medium.
Background
With the development of computer vision technology and hardware, it is possible to analyze pathological slides with the aid of auxiliary diagnostic systems.
In the related art, a trained model can be used to identify target areas in a cervical tissue slice image by analyzing the image. In practice, however, many target areas are missed or misidentified, so the recognition accuracy for cervical tissue slice images is low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a cervical tissue image processing method, apparatus, computer device, and computer-readable storage medium capable of improving the accuracy of recognition of cervical tissue slice images.
In a first aspect, the present application provides a cervical tissue image processing method. The method comprises the following steps:
obtaining a high-magnification cervical tissue image and a low-magnification cervical tissue image corresponding to the cervical tissue image;
extracting features of the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image, and performing feature fusion on the first cervical tissue feature and the second cervical tissue feature to obtain a target cervical tissue feature;
inputting the target cervical tissue feature into a trained tissue image multi-category segmentation model to obtain multi-category probability distribution maps output by the tissue image multi-category segmentation model for each of a plurality of tissue categories; in the multi-category probability distribution map corresponding to each category of tissue, the pixel value of each pixel represents the probability that the pixel belongs to the target tissue region corresponding to that category of tissue;
and determining the target tissue region in the cervical tissue image according to the multi-category probability distribution map.
In one embodiment, the acquiring the high-magnification cervical tissue image and the low-magnification cervical tissue image corresponding to the cervical tissue image includes:
performing foreground region identification on a tissue slice image to be identified to obtain a cervical tissue image corresponding to cervical tissue in the tissue slice image;
obtaining a low-magnification image corresponding to each of a plurality of image blocks of the cervical tissue image as the low-magnification cervical tissue image corresponding to the cervical tissue image; and obtaining a high-magnification image corresponding to each of the plurality of image blocks and segmenting the high-magnification image into a plurality of high-magnification image blocks as the high-magnification cervical tissue image corresponding to the cervical tissue image; the magnification of the tissue slice image is less than the magnification of the low-magnification image.
In one embodiment, the identifying the foreground region of the tissue slice image to be identified to obtain a cervical tissue image corresponding to the cervical tissue in the tissue slice image includes:
performing binarization processing on a tissue slice image to be identified, and acquiring a plurality of image blocks corresponding to the tissue slice image after the binarization processing;
determining the number of target pixel points in each image block, and determining image blocks in which the number of target pixel points exceeds a number threshold as cervical tissue images corresponding to cervical tissue; the target pixel points are pixel points whose pixel values satisfy a preset pixel value condition.
In one embodiment, the determining the target tissue region in the cervical tissue image from the multi-class probability distribution map comprises:
determining pixel value statistical features corresponding to a plurality of pixel points in a multi-category probability distribution map corresponding to each category of tissue;
inputting the pixel value statistical features of each multi-category probability distribution map to a trained classifier, and determining a plurality of abnormal areas in the cervical tissue image and types corresponding to each abnormal area by the classifier based on each input pixel value statistical feature;
and determining a target tissue region corresponding to each type of tissue in the cervical tissue image according to the type corresponding to each abnormal region.
In one embodiment, the inputting the pixel value statistical feature of each multi-category probability distribution map into a trained classifier, and determining, by the classifier, a plurality of abnormal regions in the cervical tissue image and types corresponding to each of the abnormal regions based on each of the input pixel value statistical features includes:
fusing the input statistical characteristics of the pixel values with the target cervical tissue characteristics to obtain fused image characteristics;
inputting the fused image features into a trained classifier, and determining a plurality of abnormal areas in the cervical tissue image and types corresponding to the abnormal areas by the classifier based on the fused image features.
In one embodiment, the feature extraction of the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image includes:
Inputting the high-magnification cervical tissue image and the low-magnification cervical tissue image into the same feature extraction network, and respectively determining a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image by the feature extraction network.
In one embodiment, the feature extraction of the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image includes:
inputting the high-magnification cervical tissue image into a high-magnification image feature extraction network to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image output by the high-magnification image feature extraction network;
the method comprises the steps of,
and inputting the low-magnification cervical tissue image into a low-magnification image feature extraction network to obtain a second cervical tissue feature corresponding to the low-magnification cervical tissue image output by the low-magnification image feature extraction network.
In a second aspect, the present application also provides a cervical tissue image processing apparatus. The device comprises:
The high-low magnification tissue image acquisition module is used for acquiring a high-magnification cervical tissue image and a low-magnification cervical tissue image corresponding to the cervical tissue image;
the feature extraction module is used for carrying out feature extraction on the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image, and carrying out feature fusion on the first cervical tissue feature and the second cervical tissue feature to obtain a target cervical tissue feature;
the segmentation module is used for inputting the target cervical tissue feature into a trained tissue image multi-category segmentation model to obtain multi-category probability distribution maps output by the tissue image multi-category segmentation model for each of a plurality of tissue categories; in the multi-category probability distribution map corresponding to each category of tissue, the pixel value of each pixel represents the probability that the pixel belongs to the target tissue region corresponding to that category of tissue;
and the target area determining module is used for determining the target tissue area in the cervical tissue image according to the multi-category probability distribution map.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
obtaining a high-magnification cervical tissue image and a low-magnification cervical tissue image corresponding to the cervical tissue image;
extracting features of the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image, and performing feature fusion on the first cervical tissue feature and the second cervical tissue feature to obtain a target cervical tissue feature;
inputting the target cervical tissue feature into a trained tissue image multi-category segmentation model to obtain multi-category probability distribution maps output by the tissue image multi-category segmentation model for each of a plurality of tissue categories; in the multi-category probability distribution map corresponding to each category of tissue, the pixel value of each pixel represents the probability that the pixel belongs to the target tissue region corresponding to that category of tissue;
and determining the target tissue region in the cervical tissue image according to the multi-category probability distribution maps.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
obtaining a high-magnification cervical tissue image and a low-magnification cervical tissue image corresponding to the cervical tissue image;
extracting features of the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image, and performing feature fusion on the first cervical tissue feature and the second cervical tissue feature to obtain a target cervical tissue feature;
inputting the target cervical tissue feature into a trained tissue image multi-category segmentation model to obtain multi-category probability distribution maps output by the tissue image multi-category segmentation model for each of a plurality of tissue categories; in the multi-category probability distribution map corresponding to each category of tissue, the pixel value of each pixel represents the probability that the pixel belongs to the target tissue region corresponding to that category of tissue;
and determining the target tissue region in the cervical tissue image according to the multi-category probability distribution maps.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
obtaining a high-magnification cervical tissue image and a low-magnification cervical tissue image corresponding to the cervical tissue image;
extracting features of the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image, and performing feature fusion on the first cervical tissue feature and the second cervical tissue feature to obtain a target cervical tissue feature;
inputting the target cervical tissue feature into a trained tissue image multi-category segmentation model to obtain multi-category probability distribution maps output by the tissue image multi-category segmentation model for each of a plurality of tissue categories; in the multi-category probability distribution map corresponding to each category of tissue, the pixel value of each pixel represents the probability that the pixel belongs to the target tissue region corresponding to that category of tissue;
and determining the target tissue region in the cervical tissue image according to the multi-category probability distribution maps.
According to the cervical tissue image processing method, apparatus, computer device, and storage medium, after the high-magnification cervical tissue image and the low-magnification cervical tissue image corresponding to the cervical tissue image are acquired, feature extraction can be performed on the two images to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image, and feature fusion can be performed on the first and second cervical tissue features to obtain a target cervical tissue feature. The target cervical tissue feature can then be input into a trained tissue image multi-category segmentation model to obtain multi-category probability distribution maps corresponding to a plurality of tissue categories; in the multi-category probability distribution map corresponding to each category of tissue, the pixel value of each pixel represents the probability that the pixel belongs to the target tissue region corresponding to that category, and the target tissue region in the cervical tissue image can thus be determined from the multi-category probability distribution maps. Because the first and second cervical tissue features are obtained from the high-magnification and low-magnification cervical tissue images respectively and then fused, the resulting target cervical tissue feature can comprehensively reflect, from multiple angles, the details and semantic information of the cervical tissue image under different fields of view; therefore, when the target cervical tissue feature is used to identify the target tissue region, the segmentation accuracy of the tissue image multi-category segmentation model is increased, and the accuracy of identifying the target tissue region in the cervical tissue image is improved.
Drawings
Fig. 1 is a flow chart of a cervical tissue image processing method according to an embodiment;
FIG. 2a is a multi-class probability distribution diagram in one embodiment;
FIG. 2b is a heat map after color mapping in one embodiment;
fig. 3 is a flowchart illustrating a step of acquiring a target cervical tissue characteristic in one embodiment;
fig. 4 is a flowchart illustrating another step of acquiring a target cervical tissue characteristic in one embodiment;
FIG. 5 is a flowchart illustrating steps for determining a target tissue region in one embodiment;
FIG. 6 is a flowchart illustrating steps for obtaining classification results according to an embodiment;
FIG. 7 is a flowchart illustrating another step of obtaining a classification result according to an embodiment;
FIG. 8 is a schematic diagram of high- and low-magnification image cropping in one embodiment;
FIG. 9a is a tissue slice image after binarization processing in one embodiment;
FIG. 9b is an exemplary diagram of a candidate region generation grid in one embodiment;
FIG. 9c is a schematic diagram of segmentation of an image block in one embodiment;
fig. 10 is a block diagram showing the structure of a cervical tissue image processing apparatus according to an embodiment;
FIG. 11 is an internal block diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, a cervical tissue image processing method is provided. The embodiment is described, by way of illustration, as applied to a terminal; it will be understood that the method may also be applied to a server, or to a system including a terminal and a server and implemented through interaction between the terminal and the server. The terminal may be, but is not limited to, a personal computer, a notebook computer, a smart phone, a tablet computer, an Internet-of-Things device, or a portable wearable device; the Internet-of-Things device may be a smart speaker, smart television, smart air conditioner, smart vehicle-mounted device, or the like. The portable wearable device may be a smart watch, smart bracelet, headset, or the like. The server may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers.
In this embodiment, the method includes the steps of:
S101, obtaining a high-magnification cervical tissue image and a low-magnification cervical tissue image corresponding to the cervical tissue image.
In a specific implementation, cervical tissue can be sampled to prepare a tissue slide corresponding to the cervical tissue, and image acquisition can then be performed on the tissue slide to obtain a cervical tissue image.
For the same cervical tissue, images can be acquired at different magnifications to obtain a high-magnification cervical tissue image and a low-magnification cervical tissue image, where the magnification of the high-magnification cervical tissue image is greater than that of the low-magnification cervical tissue image. In other words, the high-magnification and low-magnification cervical tissue images may be sampled from the same cervical tissue region, but the image content contained in the two images differs.
Specifically, various detail features can be represented in the high-magnification cervical tissue image, for example, the morphology of cervical tissue cells (i.e. cells in the cervical tissue) in the sampled cervical tissue, the distribution condition (adhesion or dispersion, etc.) of the cervical tissue cells, and the state of cell structures (such as nucleus, cell membrane or cytoplasm, etc.) in the cervical tissue cells; the low magnification cervical tissue image may be representative of structural features of the cervical tissue as a whole, such as the distribution of one or more cervical tissues in the sample.
In an alternative embodiment, after a tissue slide corresponding to cervical tissue is acquired, a full-field slice image (Whole Slide Image, WSI) or pyramid image of the tissue slide may be acquired as cervical tissue images, and the full-field slice image and pyramid image may include multiple images at different magnifications for the same tissue region. Of course, in other examples, the high-magnification cervical tissue image and the low-magnification cervical tissue image may be obtained by adjusting the magnification of the microscope and image-sampling the same tissue region at different magnifications.
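As an illustrative sketch only, the following Python snippet shows one common way to read such a high-/low-magnification image pair of the same physical region from a pyramid (WSI) file using the OpenSlide library. The level assignment (level 0 as the 20x scan, level 2 as a roughly 5x view) and the tile size are assumptions for illustration, not details taken from this patent; real slides must be checked against slide.level_downsamples.

```python
# Minimal sketch, assuming a pyramid file where level 0 is the 20x scan and
# level 2 is downsampled by 4 (roughly a 5x view). Illustrative only.
import openslide

def read_high_low_pair(wsi_path, x0, y0, size_5x=512):
    slide = openslide.OpenSlide(wsi_path)
    high_level, low_level = 0, 2           # assumed level assignment
    down = int(slide.level_downsamples[low_level])  # e.g. 4
    # read_region takes level-0 coordinates; the two reads cover the same
    # physical area: one 5x tile versus a (size_5x*down)^2 region at 20x.
    low_img = slide.read_region((x0, y0), low_level, (size_5x, size_5x))
    high_img = slide.read_region((x0, y0), high_level,
                                 (size_5x * down, size_5x * down))
    return high_img.convert("RGB"), low_img.convert("RGB")
```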
S102, extracting features of the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image, and performing feature fusion on the first cervical tissue feature and the second cervical tissue feature to obtain a target cervical tissue feature.
In this step, after the high-magnification cervical tissue image and the low-magnification cervical tissue image are obtained, feature extraction may be performed on the high-magnification cervical tissue image and the low-magnification cervical tissue image, respectively.
Specifically, feature extraction can be performed on the high-magnification cervical tissue image to obtain a corresponding cervical tissue feature, and in order to facilitate differentiation, the cervical tissue feature obtained from the high-magnification cervical tissue image is referred to as a first cervical tissue feature in this embodiment; accordingly, feature extraction may also be performed on the low-magnification cervical tissue image, and a cervical tissue feature may be obtained based on the feature extraction result, which may also be referred to as a second cervical tissue feature.
In one example, the first cervical tissue characteristic may characterize a cell-level characteristic, e.g., the first cervical characteristic may include at least one of: the color characteristics of the cervical tissue cells (such as the color characteristics of the nucleus, cytoplasm, cell membrane or other cellular structures in the cell), the texture characteristics of the cervical tissue cells, the shape characteristics of the cervical tissue cells, the spatial relationship characteristics of a plurality of cervical tissue cells. The second cervical tissue characteristic may be characteristic of the tissue level, e.g., the second cervical tissue characteristic may include at least one of: the color characteristics of the whole cervical tissue, the texture characteristics of the whole cervical tissue, the shape characteristics of the whole cervical tissue and the spatial relationship characteristics of a plurality of cervical tissues in the visual field.
After the first cervical tissue characteristic and the second cervical tissue characteristic are obtained, the first cervical tissue characteristic and the second cervical tissue characteristic can be further subjected to characteristic fusion, and the result of the characteristic fusion is used as a target cervical tissue characteristic. When the features are fused, the first cervical tissue feature and the second cervical tissue feature may be spliced, and the spliced tissue feature is taken as a target cervical tissue feature; alternatively, the vector of the first cervical tissue characteristic and the vector of the second cervical tissue characteristic may be combined to form a complex vector, and the complex vector may be used as the target cervical tissue characteristic; the specific manner of feature fusion can be selected by those skilled in the art according to the actual circumstances.
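By way of a hedged illustration, the sketch below shows the two fusion options mentioned above (splicing and complex-vector combination) on pooled feature vectors; the tensor shapes and variable names are illustrative assumptions, not specified by the patent.

```python
# Minimal sketch of the two fusion options, assuming the first and second
# cervical tissue features are already pooled into vectors of equal length.
import torch

feat_high = torch.randn(1, 512)   # first cervical tissue feature (high mag)
feat_low  = torch.randn(1, 512)   # second cervical tissue feature (low mag)

# Option 1: splice (concatenate) along the channel dimension.
target_feat_cat = torch.cat([feat_high, feat_low], dim=1)   # (1, 1024)

# Option 2: combine the two vectors into one complex-valued vector.
target_feat_cplx = torch.complex(feat_high, feat_low)       # (1, 512) complex
```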
In this embodiment, by acquiring the first and second cervical tissue features from the high-magnification and low-magnification cervical tissue images respectively, the practice of combining high-power and low-power objectives when examining a tissue slide can be simulated: multi-level biological features of the cervical tissue are acquired from cervical tissue images at different magnifications and fused, so that the resulting target cervical tissue feature comprehensively reflects, from multiple angles, the details and semantic information of the cervical tissue image under different fields of view, preserving cervical tissue structure information without losing the texture information of individual cells.
S103, inputting the target cervical tissue feature into a trained tissue image multi-category segmentation model to obtain multi-category probability distribution maps output by the tissue image multi-category segmentation model for each of a plurality of tissue categories; in the multi-category probability distribution map corresponding to each category of tissue, the pixel value of each pixel represents the probability that the pixel belongs to the target tissue region corresponding to that category of tissue.
As an example, the target tissue region may be an abnormal region in cervical tissue; the tissue image multi-class segmentation model may be a decoding network (Decoder).
In practical application, after the target cervical tissue feature is obtained, it can be input into a pre-trained tissue image multi-category segmentation model, which generates multi-category probability distribution maps corresponding to the respective tissue categories based on the input target cervical tissue feature; the multi-category probability distribution maps can be displayed as heat maps.
The multi-category probability distribution map corresponding to each category of tissue comprises a plurality of pixel points, and the pixel value of each pixel point can represent the probability that the pixel belongs to the target tissue region corresponding to that category of tissue; equivalently, it can represent the probability that the pixel is a pixel of a cervical tissue cell of a target type (such as an abnormal type).
In the step, the target cervical tissue characteristics including the first cervical tissue characteristics and the second cervical tissue characteristics are input into the tissue image multi-category segmentation model, so that the target tissue areas can be identified and segmented from the cervical tissue images by combining the tissue characteristics and the cell characteristics of the cervical tissue images in different scales, and the segmentation accuracy of the target tissue areas is improved.
In an example, the pixel value of a pixel may be expressed as positively correlated with the probability, i.e., the greater the pixel value, the greater the probability that the pixel belongs to a target tissue region or cervical tissue cell of a target type; of course, the pixel value of the pixel point may also be inversely related to the probability.
Fig. 2a shows an example of a multi-category probability distribution map in the form of a gray-scale map. The pixel value of each pixel in the map lies in the interval [0, 255] and is obtained by converting the confidence of that pixel (that is, the confidence that the pixel belongs to a target tissue region or to a cervical tissue cell of a target type) inferred by the tissue image multi-category segmentation model. During the conversion, the confidence determined by the model can be converted from float (floating point) data to uint8 data; this conversion allows the confidences of a large number of pixels to be stored quickly while saving storage space. In other examples, the multi-category probability distribution map in gray-scale form may also be color-mapped; fig. 2b shows an example of a heat map obtained after color mapping, which shows the distribution of the target tissue region in the cervical tissue image more clearly.
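The following is a minimal sketch of this storage and display scheme: float confidences are converted to uint8 grayscale values in [0, 255], and a colormap then turns the grayscale map into a heat map. The use of OpenCV and the JET colormap here is an assumption for illustration; the patent does not name a library or colormap.

```python
# Minimal sketch: float confidences in [0, 1] stored as uint8 in [0, 255],
# then color-mapped into a heat map. Library and colormap are assumptions.
import numpy as np
import cv2

prob_map = np.random.rand(256, 256).astype(np.float32)  # per-pixel confidence

gray = (prob_map * 255.0).round().astype(np.uint8)      # compact uint8 storage
heat = cv2.applyColorMap(gray, cv2.COLORMAP_JET)        # color-mapped heat map
cv2.imwrite("probability_gray.png", gray)
cv2.imwrite("probability_heat.png", heat)
```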
In an optional embodiment, the tissue image multi-category segmentation model may be obtained by performing supervised training on the neural network model through a target cervical tissue feature corresponding to the sample cervical tissue image and a multi-category probability distribution map generated by manually labeling the sample cervical tissue image, where the obtaining manner of the target cervical tissue feature corresponding to the sample cervical tissue image may be the same as the obtaining manner of the target cervical tissue feature in steps S101-S102, and the specific processing manner may be referred to above, and will not be described herein.
S104, determining a target tissue area in the cervical tissue image according to the multi-category probability distribution map.
After the multi-category probability distribution map is obtained, a target tissue region in the cervical tissue image may be determined from the multi-category probability distribution map. Specifically, for example, after the multi-category probability distribution map is obtained, the distribution situation of tissues of different categories in the cervical tissue image can be identified according to the multi-category probability distribution map, so that an abnormal tissue region and a corresponding abnormal type in the cervical tissue image can be determined, and the abnormal tissue region is taken as a target tissue region.
In other embodiments, when identifying the tissue type (such as an abnormal type) of the target tissue area, the target cervical tissue feature may also be input into a preset classifier, the classifier classifies different tissue areas in the cervical tissue image, and outputs an identification result of the cervical tissue image, where the identification result includes the target tissue area in the cervical tissue image and the tissue type corresponding to the target tissue area.
In this embodiment, after the high-magnification and low-magnification cervical tissue images corresponding to the cervical tissue image are obtained, feature extraction can be performed on them to obtain the first cervical tissue feature corresponding to the high-magnification image and the second cervical tissue feature corresponding to the low-magnification image, and feature fusion can be performed on the two features to obtain the target cervical tissue feature. The trained tissue image multi-category segmentation model can then output the multi-category probability distribution maps corresponding to the respective tissue categories; in the map corresponding to each category of tissue, the pixel value of each pixel represents the probability that the pixel belongs to the target tissue region corresponding to that category, and the target tissue region in the cervical tissue image can be determined from these maps. Because the first and second cervical tissue features are obtained from the high-magnification and low-magnification cervical tissue images respectively and then fused, the resulting target cervical tissue feature comprehensively reflects, from multiple angles, the details and semantic information of the cervical tissue image under different fields of view; therefore, when the target cervical tissue feature is used to identify the target tissue region, the segmentation accuracy of the tissue image multi-category segmentation model is increased, and the accuracy of identifying the target tissue region in the cervical tissue image is improved.
In addition, the scheme of the application can realize end-to-end full-flow automatic identification, quickly acquire the classification result and corresponding evidence (namely multi-category probability distribution map) of the target tissue region in the cervical tissue image, and realize full-automatic integrated reasoning without human intervention.
In one embodiment, the step S102 of extracting features from the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image may include the following steps:
inputting the high-magnification cervical tissue image and the low-magnification cervical tissue image into the same feature extraction network, and respectively determining a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image by the feature extraction network.
In practical applications, the same feature extraction network can be used to extract features from both the high-magnification and low-magnification cervical tissue images; during feature extraction, the high-magnification and low-magnification cervical tissue images can be input into the feature extraction network together or sequentially.
In an alternative embodiment, when the feature extraction is performed, as shown in fig. 3, after the cervical tissue image is acquired, a high-magnification image and a low-magnification image of the image may be acquired, and a plurality of image blocks corresponding to the high-magnification image and the low-magnification image are acquired respectively, so as to obtain a high-magnification cervical tissue Patch image and a low-magnification cervical tissue Patch image.
Then, the high-magnification cervical tissue Patch image and the low-magnification cervical tissue Patch image can be input into the same feature extraction network (backbone network) for feature extraction, so that a first cervical tissue feature corresponding to each image block in the high-magnification cervical tissue Patch image and a second cervical tissue feature corresponding to each image block in the low-magnification cervical tissue Patch image are obtained, and feature fusion can be carried out on the first cervical tissue features and the second cervical tissue features to obtain a target cervical tissue feature.
In this embodiment, the high-magnification cervical tissue image and the low-magnification cervical tissue image are subjected to feature extraction by the same feature extraction network, so that in the process of identifying the target tissue region in the cervical tissue image, the used computing resources can be effectively saved, and the equipment load and the equipment threshold in the process of identifying the target tissue region can be reduced.
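A minimal sketch of this shared-network variant follows; a torchvision ResNet-18 is used purely as a stand-in for the unspecified feature extraction network (backbone), and the patch counts and image sizes are illustrative assumptions.

```python
# Minimal sketch: one shared backbone processes both magnifications.
# ResNet-18 is an assumed stand-in; the patent does not fix the network.
import torch
import torchvision.models as models

backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()   # keep pooled features, drop classifier

high_patches = torch.randn(4, 3, 224, 224)  # 20x Patch images of one block
low_patch    = torch.randn(1, 3, 224, 224)  # 5x Patch image of the block

with torch.no_grad():
    first_feat  = backbone(high_patches)  # (4, 512) first cervical features
    second_feat = backbone(low_patch)     # (1, 512) second cervical feature
```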
In another embodiment, the step S102 of extracting features from the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image may include the following steps:
inputting the high-magnification cervical tissue image into a high-magnification image feature extraction network to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image output by the high-magnification image feature extraction network; and inputting the low-magnification cervical tissue image into a low-magnification image feature extraction network to obtain a second cervical tissue feature corresponding to the low-magnification cervical tissue image output by the low-magnification image feature extraction network.
Specifically, the high-power image feature extraction network and the low-power image feature extraction network may be trained in advance.
In one example, the high-power image feature extraction network may be trained based on high-power sample cervical tissue images and associated cervical tissue cell feature labels, which may be information reflecting cervical tissue cell color, texture, cell structure, or spatial distribution characteristics; the low-power image feature extraction network can train based on a low-power sample cervical tissue image and associated cervical tissue feature labels, which can be information reflecting the texture, structure or spatial distribution characteristics of the cervical tissue.
After the trained high-power image feature extraction network and low-power image feature extraction network are obtained, the high-power cervical tissue image can be input into the high-power image feature extraction network to obtain a first cervical tissue feature output by the high-power image feature extraction network; and inputting the low-magnification cervical tissue image into a low-magnification image feature extraction network to obtain a second cervical tissue feature output by the network.
In an alternative embodiment, when the feature extraction is performed, as shown in fig. 4, after the cervical tissue image is acquired, a high-magnification image and a low-magnification image of the image may be acquired, and a plurality of image blocks corresponding to the high-magnification image and the low-magnification image are acquired respectively, so as to obtain a high-magnification cervical tissue Patch image and a low-magnification cervical tissue Patch image.
Then, the high-magnification cervical tissue Patch image can be input into the high-magnification image feature extraction network (Backbone) to obtain the first cervical tissue feature corresponding to each image block output by that network, and the low-magnification cervical tissue Patch image can be input into the low-magnification image feature extraction network (Backbone) to obtain the second cervical tissue feature corresponding to each image block output by that network; feature fusion can then be performed to obtain the target cervical tissue feature.
In this embodiment, the high-magnification cervical tissue image and the low-magnification cervical tissue image are processed respectively through the high-magnification image feature extraction network and the low-magnification image feature extraction network which are independent of each other, so that feature extraction can be performed on the images with matched scales (or amplification factors) in a targeted manner, and the extraction precision and accuracy of the first cervical tissue feature and the second cervical tissue feature are improved.
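For comparison with the shared-network sketch above, the following is a minimal sketch of this dual-network variant, again with ResNet-18 as an assumed stand-in for the high- and low-magnification feature extraction networks.

```python
# Minimal sketch: two independently trained backbones, one per magnification.
import torch
import torchvision.models as models

high_backbone = models.resnet18(weights=None)
high_backbone.fc = torch.nn.Identity()
low_backbone = models.resnet18(weights=None)
low_backbone.fc = torch.nn.Identity()

with torch.no_grad():
    first_feat  = high_backbone(torch.randn(4, 3, 224, 224))  # cell-level
    second_feat = low_backbone(torch.randn(1, 3, 224, 224))   # tissue-level
```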
In one embodiment, as shown in fig. 5, S104, determining the target tissue region in the cervical tissue image according to the multi-category probability distribution map may include the steps of:
s201, determining a pixel value statistical feature corresponding to a plurality of pixel points in the multi-category probability distribution map according to the multi-category probability distribution map corresponding to each category of tissue.
The pixel value statistical feature may represent a distribution of pixel values of a plurality of pixel points in the multi-class probability distribution map.
After the multi-category probability distribution maps are obtained, for the multi-category probability distribution map corresponding to each category of tissue, the pixel values of all pixel points in the map can be counted to obtain the pixel value statistical feature corresponding to the plurality of pixel points in that map. In an example, the pixel value statistics may include at least one of: a confidence histogram, inter-threshold relative class area ratios, and inter-threshold relative full-map area ratios.
The confidence histogram may be a histogram generated from the pixel values of the plurality of pixel points; it may comprise a plurality of pixel-value intervals and reflects the distribution of pixel values in the multi-category probability distribution map. The inter-threshold relative class area ratio may be the ratio between the numbers of pixels falling in any two of the pixel-value intervals after the pixel-value range is divided into intervals; the inter-threshold relative full-map area ratio may be the ratio of the number of pixels falling in any one pixel-value interval to the total number of pixels in the map.
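A minimal sketch of these three statistics, computed from one category's probability map stored as uint8, follows; the interval edges are illustrative assumptions.

```python
# Minimal sketch of the three statistics named above; interval edges assumed.
import numpy as np

prob_map = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
edges = [0, 64, 128, 192, 256]                      # pixel-value intervals

hist, _ = np.histogram(prob_map, bins=edges)        # confidence histogram

# Inter-threshold relative class area ratio: pixel counts of two intervals.
ratio_classes = hist[3] / max(hist[1], 1)

# Inter-threshold relative full-map area ratio: one interval vs. whole map.
ratio_full = hist[3] / prob_map.size
```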
S202, inputting the pixel value statistical characteristics of each multi-category probability distribution map to a trained classifier, and determining a plurality of abnormal areas and types corresponding to the abnormal areas in the cervical tissue image based on the input pixel value statistical characteristics by the classifier.
As an example, the classifier may be obtained by training a neural network (e.g., a deep neural network), or may be a conventional machine learning classifier such as a support vector machine (Support Vector Machine, SVM), a random forest, LightGBM (Light Gradient Boosting Machine), or an AdaBoost iterative algorithm.
In this step, the obtained pixel value statistical features of each multi-category probability distribution map may be input to a trained classifier, and classification reasoning is performed on the cervical tissue image by the classifier, specifically, the classifier may determine the distribution condition of a plurality of pixels in different multi-category probability distribution maps based on the input plurality of pixel value statistical features, so as to identify a plurality of abnormal regions in the cervical tissue image based on the distribution condition and determine a type corresponding to each abnormal region, where the type may be a tissue type (for example, an abnormal type of a cell).
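As a hedged illustration, the sketch below trains and applies one of the conventional classifiers listed above (a random forest); the feature dimensionality, number of classes, and data are synthetic placeholders, not values from the patent.

```python
# Minimal sketch: a random forest over pixel-value statistical features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(200, 12)          # per-region pixel-value statistics
y = np.random.randint(0, 4, 200)     # tissue/abnormality type labels

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
region_types = clf.predict(np.random.rand(5, 12))  # types of 5 new regions
```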
S203, determining a target tissue region in the cervical tissue image according to the types corresponding to the abnormal regions.
When the type corresponding to each abnormal region is obtained, the abnormal regions of the identified types may be used as the target tissue regions in the cervical tissue image. In an alternative embodiment, as shown in fig. 6, if the multi-category probability distribution maps correspond to a plurality of image blocks of the cervical tissue image, the obtained multi-category probability distribution maps may be merged, and the pixel values of the pixels in the merged maps may be counted to obtain the pixel value statistical feature of the merged multi-category probability distribution map, which serves as the pixel value statistical feature of the whole cervical tissue image. This feature can then be input into the trained classifier to obtain a classification result for the whole cervical tissue image; the classification result may generally take the category of the tissue region with the most severe lesion grade (i.e., the highest degree of lesion) in the whole image.
In this embodiment, the multi-category probability distribution maps obtained after fusing cervical features of different scales can be merged, and different types of target tissue regions of cervical tissue can be rapidly identified by the pre-trained classifier, improving the efficiency and accuracy of identifying cervical tissue of a specified type.
In one embodiment, the pixel value statistical feature of each multi-category probability distribution map is input to a trained classifier, and the classifier determines a plurality of abnormal regions and types corresponding to each abnormal region in the cervical tissue image based on each input pixel value statistical feature, and may include the following steps:
fusing the input statistical features of each pixel value with the target cervical tissue features to obtain fused image features; the fused image features are input into a trained classifier, and the classifier determines a plurality of abnormal areas and types corresponding to the abnormal areas in the cervical tissue image based on the fused image features.
Specifically, after the pixel value statistical features of the multiple multi-category probability distribution diagrams are obtained, the multiple pixel value statistical features and the target cervical tissue can be subjected to feature fusion, the fused image features are obtained based on feature fusion results, the fused image features can be input into a classifier, the classifier is used for identifying and classifying the abnormal regions based on the fused image features, and the types corresponding to the abnormal regions are obtained.
In an alternative embodiment, as shown in fig. 7, if the target cervical tissue features and the multi-category probability distribution maps each correspond to a plurality of image blocks of the cervical tissue image, the plurality of target cervical tissue features may be merged to obtain the target cervical tissue feature of the whole cervical tissue image; likewise, the obtained multi-category probability distribution maps may be merged, and the pixel values of the pixels in the merged maps counted, to obtain the pixel value statistical feature of the whole image. The whole-image target cervical tissue feature and pixel value statistical feature can then be combined as the fused image feature and input into the trained classifier to obtain a classification result for the whole cervical tissue image.
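A minimal sketch of this whole-image fusion follows. The patent does not specify how per-block features and maps are merged; mean pooling is used here purely as one illustrative choice.

```python
# Minimal sketch: merge per-block features and statistics (mean pooling is an
# assumed merge), then concatenate them as the fused classifier input.
import numpy as np

block_feats = [np.random.rand(1024) for _ in range(16)]   # per image block
block_stats = [np.random.rand(12) for _ in range(16)]     # per-block stats

slide_feat = np.mean(block_feats, axis=0)    # merged target tissue feature
slide_stat = np.mean(block_stats, axis=0)    # merged pixel-value statistics
fused = np.concatenate([slide_feat, slide_stat])  # input to the classifier
```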
In this embodiment, when determining the abnormal region and the types corresponding to the abnormal region, the statistical features of the pixel values are fused with the target cervical tissue features and then input into the classifier, so that the distribution condition of the pixel values of a plurality of pixel points can be combined while using the multi-scale image features of the cervical tissue image, and the recognition accuracy of the abnormal region and the types thereof can be improved.
In one embodiment, S101 acquires a high-magnification cervical tissue image and a low-magnification cervical tissue image corresponding to the cervical tissue image, which may include the following steps:
performing foreground region identification on the tissue slice image to be identified to obtain a cervical tissue image corresponding to cervical tissue in the tissue slice image; obtaining a low-magnification image corresponding to each image block in a plurality of image blocks corresponding to the cervical tissue image, obtaining a low-magnification cervical tissue image corresponding to the cervical tissue image, obtaining a high-magnification image corresponding to each image block in the plurality of image blocks, and segmenting the high-magnification image into a plurality of high-magnification image blocks to serve as a high-magnification cervical tissue image corresponding to the cervical tissue image.
Wherein the magnification of the tissue slice image is less than the magnification of the low magnification image.
Specifically, a cervical tissue sample of the patient may be obtained, and after the sample is prepared, a tissue slice image to be identified may be acquired. Foreground region identification may then be performed on the tissue slice image; foreground region identification identifies the effective region in the tissue slice image, i.e., the image region in which cervical tissue is located. Foreground region identification may be performed at an extremely low magnification, where extremely low refers to a magnification smaller than that of the low-magnification image (e.g., the lowest available magnification). Taking the case where the magnification of the low-magnification image is 5, the magnification of the tissue slice image used for foreground region identification may be selected in the interval [1, 2.5]; in other words, in this example the ratio of the magnification of the tissue slice image to that of the low-magnification image may lie in [0.25, 0.5].
Thus, after the foreground region is identified, a cervical tissue image corresponding to the cervical tissue in the tissue slice image can be obtained based on the identification result of the foreground region identification. In an alternative embodiment, the cervical tissue image may be formed by a plurality of image blocks, and after obtaining the cervical tissue image, for each image block in the plurality of image blocks, a low-magnification image corresponding to each image block may be obtained, and the plurality of low-magnification images are taken as low-magnification cervical tissue images corresponding to the cervical tissue image, where the low-magnification images may better represent structural features.
Meanwhile, a high-magnification image corresponding to each image block in the plurality of image blocks can be obtained, the high-magnification image is segmented into a plurality of high-magnification image blocks to serve as a high-magnification cervical tissue image corresponding to the cervical tissue image, and the high-magnification image can show more detail features.
For example, fig. 8 shows the cropping of the low-magnification and high-magnification images of the same image block. The low-magnification image and the high-magnification image represent the same physical area but differ in image size. Taking the low-magnification image as a 5x image and the high-magnification image as a 20x image, the physical area represented by one 5x image may equal the physical area represented by four 20x images; after the high-magnification image of the same image area is obtained, the 20x image may be segmented, for example cropped into 4 images (without resizing, so as to preserve detail). During feature extraction, the 5 images (one 5x image and four 20x images) may then be input into the feature extraction network simultaneously.
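The following sketch reproduces this cropping arithmetic, with NumPy arrays standing in for the images; the pixel sizes are illustrative assumptions.

```python
# Minimal sketch of the FIG. 8 cropping: one 5x tile plus the matching 20x
# image of the same physical area, cut into four crops without resizing.
import numpy as np

img_5x  = np.zeros((512, 512, 3), dtype=np.uint8)    # 1 low-mag image
img_20x = np.zeros((2048, 2048, 3), dtype=np.uint8)  # same area at 20x

h, w = img_20x.shape[0] // 2, img_20x.shape[1] // 2
crops_20x = [img_20x[r*h:(r+1)*h, c*w:(c+1)*w]       # 4 high-mag crops
             for r in range(2) for c in range(2)]
# The 5 images (1 at 5x + 4 at 20x) are then fed to feature extraction.
```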
In this embodiment, on the one hand, for the large number of irrelevant background areas that the image of a tissue slide may contain, performing foreground region identification on the tissue slice image removes the background areas irrelevant to target tissue region identification, so that inference is performed only on the cervical tissue image where the cervical tissue is located; this reduces invalid inference time and increases the recognition speed for the whole tissue slice image. On the other hand, performing foreground region identification at a low magnification allows all foreground regions in the tissue slice image to be located quickly. Moreover, compared with the missed diagnoses caused in the related art by searching for target tissue regions at an extremely low magnification, the present application further acquires, for the cervical tissue image of each cervical tissue region, a low-magnification image and a high-magnification image at higher magnifications for feature extraction; performing multi-scale feature extraction on higher-magnification images can significantly improve the recall and accuracy of target tissue region identification.
In one embodiment, the identifying the foreground region of the tissue slice image to be identified to obtain a cervical tissue image corresponding to the cervical tissue in the tissue slice image may include the following steps:
Performing binarization processing on the tissue slice image to be identified, and acquiring a plurality of image blocks corresponding to the tissue slice image subjected to the binarization processing; and determining the number of target pixel points in each image block, and determining the image blocks with the number of the target pixel points exceeding a number threshold as cervical tissue images corresponding to the cervical tissues.
The target pixel points are pixel points whose pixel values meet a preset pixel value condition, where the preset pixel value condition is the pixel value, or pixel value interval, corresponding to the valid foreground area after binarization processing.
In a specific implementation, the tissue slice image to be identified may be subjected to binarization processing, which may also be referred to as mask processing; fig. 9a shows a tissue slice image after binarization processing. The binarized tissue slice image may then be segmented into a plurality of image blocks.
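As an illustration only, such a foreground mask could be produced as follows; the patent does not fix a particular thresholding method, so Otsu's method and the assumption that tissue is darker than the slide background are this sketch's own choices:

```python
import cv2
import numpy as np

def binarize_slide(slide_rgb: np.ndarray) -> np.ndarray:
    """Produce a 0/1 foreground mask for a tissue slice image.

    Assumption: stained tissue is darker than the bright background, so
    Otsu thresholding on the grayscale image separates the two.
    """
    gray = cv2.cvtColor(slide_rgb, cv2.COLOR_RGB2GRAY)
    # THRESH_BINARY_INV maps dark tissue to 1 and bright background to 0
    _, mask = cv2.threshold(gray, 0, 1, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return mask
```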
In an alternative embodiment, the plurality of image blocks may be acquired through a candidate region generating grid. Fig. 9b shows an example of such a grid, which may specifically be a set of croppable region coordinates generated according to the actual size of the tissue slice image, the size of the image blocks to be cut, and the overlap (overlapping region) between adjacent image blocks. The overall size of the candidate region generating grid may be the same as the actual size of the tissue slice image, and the size of each cell in the grid is the size of an image block to be cut. The candidate region generating grid may then be superimposed on the binarized tissue slice image to obtain the corresponding plurality of image blocks, as shown for example in fig. 9c.
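A sketch of such a grid, under the assumptions that blocks are square and the overlap is given in pixels (neither is fixed by the patent):

```python
def candidate_grid(slide_w: int, slide_h: int,
                   block: int, overlap: int) -> list[tuple[int, int, int, int]]:
    """Generate the croppable-region coordinate set as (x, y, w, h) tuples.

    The stride between neighbouring blocks is the block size minus the
    overlap, and blocks at the right/bottom edge are clamped to the
    slide boundary so the grid covers the whole tissue slice image.
    """
    stride = block - overlap
    coords = []
    for y in range(0, slide_h, stride):
        for x in range(0, slide_w, stride):
            w = min(block, slide_w - x)
            h = min(block, slide_h - y)
            coords.append((x, y, w, h))
    return coords
```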
After the plurality of image blocks are obtained, the number of target pixel points in each image block is determined, and the image blocks in which the number of target pixel points exceeds a number threshold are determined to be cervical tissue images corresponding to cervical tissue. If the number of target pixel points does not exceed the number threshold, the image block is not determined to be a cervical tissue image; it may be skipped and the next image block processed.
For example, take the binarized tissue slice image of fig. 9a, in which the valid foreground area is binarized to white with pixel value 1. For each image block, if the number of target pixel points with pixel value 1 exceeds the preset number threshold, the image block is determined to be a cervical tissue image. In one example, the number threshold of target pixel points may be 0, so that an image block is determined to be a cervical tissue image whenever it contains more than 0 target pixel points; this ensures that target cervical tissue features are extracted for every valid foreground area and nothing is missed.
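The counting step might look like the following sketch, reusing the mask and grid coordinates from the sketches above (hypothetical helpers, not the patent's own code):

```python
import numpy as np

def select_tissue_blocks(mask: np.ndarray,
                         coords: list[tuple[int, int, int, int]],
                         count_threshold: int = 0) -> list[tuple[int, int, int, int]]:
    """Keep blocks whose number of foreground pixels exceeds the threshold.

    With the threshold of 0 mentioned in the text, any block containing
    at least one foreground pixel (value 1) is kept, so no valid
    foreground area is missed.
    """
    kept = []
    for (x, y, w, h) in coords:
        if int(mask[y:y + h, x:x + w].sum()) > count_threshold:
            kept.append((x, y, w, h))
    return kept
```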
In this embodiment, by counting and comparing the number of target pixel points in the plurality of image blocks obtained after binarization processing, the area where the cervical tissue is located can be identified quickly, improving the identification efficiency of the target tissue region.
It should be understood that, although the steps in the flowcharts of the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily executed in the order indicated. Unless explicitly stated herein, the execution order of the steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in those flowcharts may include a plurality of sub-steps or stages, which are not necessarily executed at the same moment but may be executed at different moments, and their execution order is not necessarily sequential; they may be executed in turn or alternately with at least part of the other steps, sub-steps, or stages.
Based on the same inventive concept, the embodiments of the present application also provide a cervical tissue image processing apparatus for implementing the cervical tissue image processing method described above. The implementation solution provided by the apparatus is similar to that described for the method, so for the specific limitations in one or more embodiments of the cervical tissue image processing apparatus below, reference may be made to the limitations of the cervical tissue image processing method above; details are not repeated here.
In one embodiment, as shown in fig. 10, there is provided a cervical tissue image processing apparatus comprising:
the high-low magnification tissue image acquisition module 1001 is used for acquiring a high-magnification cervical tissue image and a low-magnification cervical tissue image corresponding to the cervical tissue image;
the feature extraction module 1002 is configured to perform feature extraction on the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image, and perform feature fusion on the first cervical tissue feature and the second cervical tissue feature to obtain a target cervical tissue feature;
the segmentation module 1003 is configured to input the target cervical tissue feature into a trained tissue image multi-category segmentation model, and obtain the multi-category probability distribution maps, corresponding to each of a plurality of categories of tissue, output by the tissue image multi-category segmentation model; in the multi-category probability distribution map corresponding to each category of tissue, the pixel value of each pixel represents the probability that the pixel belongs to a target tissue region corresponding to that category of tissue;
A target region determination module 1004 is configured to determine the target tissue region in the cervical tissue image according to the multi-category probability distribution map.
In one embodiment, the high-low magnification tissue image acquisition module 1001 is configured to:
performing foreground region identification on a tissue slice image to be identified to obtain a cervical tissue image corresponding to cervical tissue in the tissue slice image;
obtaining a low-magnification image corresponding to each image block in a plurality of image blocks corresponding to the cervical tissue image, so as to obtain a low-magnification cervical tissue image corresponding to the cervical tissue image; and obtaining a high-magnification image corresponding to each image block in the plurality of image blocks, and segmenting the high-magnification image into a plurality of high-magnification image blocks, which serve as a high-magnification cervical tissue image corresponding to the cervical tissue image; the magnification of the tissue slice image is less than the magnification of the low-magnification image.
In one embodiment, the high-low magnification tissue image acquisition module 1001 is configured to:
performing binarization processing on a tissue slice image to be identified, and acquiring a plurality of image blocks corresponding to the tissue slice image after the binarization processing;
Determining the number of target pixel points in each image block, and determining the image blocks with the number of the target pixel points exceeding a number threshold as cervical tissue images corresponding to cervical tissues; the target pixel points are pixel points with pixel values meeting the preset pixel value conditions.
In one embodiment, the target area determining module 1004 is configured to:
determining pixel value statistical features corresponding to a plurality of pixel points in a multi-category probability distribution map corresponding to each category of tissue;
inputting the pixel value statistical features of each multi-category probability distribution map to a trained classifier, and determining a plurality of abnormal areas in the cervical tissue image and types corresponding to each abnormal area by the classifier based on each input pixel value statistical feature;
and determining a target tissue region in the cervical tissue image according to the type corresponding to each abnormal region.
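As a sketch of the statistics step described by this module: the particular statistics chosen here (per-category maximum, mean, and fraction of pixels above 0.5) are assumptions for illustration, since the patent leaves the exact statistical features unspecified.

```python
import numpy as np

def probability_map_statistics(prob_maps: np.ndarray) -> np.ndarray:
    """Compute pixel value statistical features from multi-category maps.

    prob_maps: array of shape (num_categories, H, W), each pixel in [0, 1].
    Returns a flat feature vector with three statistics per category.
    """
    feats = []
    for p in prob_maps:
        feats.extend([p.max(), p.mean(), float((p > 0.5).mean())])
    return np.asarray(feats, dtype=np.float32)

# The resulting vector is what would be fed to the trained classifier
# that locates abnormal regions and assigns each region a type.
```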
In one embodiment, the target area determining module 1004 is configured to:
fusing the input statistical characteristics of the pixel values with the target cervical tissue characteristics to obtain fused image characteristics;
Inputting the fused image features into a trained classifier, and determining a plurality of abnormal areas in the cervical tissue image and types corresponding to the abnormal areas by the classifier based on the fused image features.
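The fusion operator is not prescribed by the patent; concatenation is one simple possibility, sketched here under that assumption:

```python
import torch

def fuse_features(pixel_stats: torch.Tensor,
                  tissue_feats: torch.Tensor) -> torch.Tensor:
    """Fuse pixel value statistics with the target cervical tissue feature
    by flattening the tissue feature map and concatenating the two vectors.
    """
    return torch.cat([pixel_stats, tissue_feats.flatten()], dim=0)
```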
In one embodiment, the feature extraction module 1002 is configured to:
inputting the high-magnification cervical tissue image and the low-magnification cervical tissue image into the same feature extraction network, and respectively determining a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image by the feature extraction network.
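A minimal sketch of such a shared network follows; the ResNet-18 backbone, the pooling of the high-magnification features to the low-magnification spatial size, and channel concatenation as the fusion step are all assumptions, since the patent only requires a single shared feature extraction network:

```python
import torch
import torch.nn as nn
import torchvision.models as models

class SharedEncoderFusion(nn.Module):
    """One shared backbone extracts features at both magnifications,
    and the two feature maps are fused by channel concatenation."""

    def __init__(self):
        super().__init__()
        resnet = models.resnet18(weights=None)
        # Keep only the convolutional stages (drop avgpool and fc)
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])

    def forward(self, low_img: torch.Tensor, high_img: torch.Tensor) -> torch.Tensor:
        f_low = self.backbone(low_img)    # second feature: tissue level
        f_high = self.backbone(high_img)  # first feature: cell level
        # Align spatial sizes before fusing (the four 20x tiles could
        # equally be batched or stitched; one tensor is used for brevity)
        f_high = nn.functional.adaptive_avg_pool2d(f_high, f_low.shape[-2:])
        return torch.cat([f_low, f_high], dim=1)  # target cervical tissue feature
```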
In one embodiment, the feature extraction module 1002 is configured to:
inputting the high-magnification cervical tissue image into a high-magnification image feature extraction network to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image output by the high-magnification image feature extraction network;
the method comprises the steps of,
and inputting the low-magnification cervical tissue image into a low-magnification image feature extraction network to obtain a second cervical tissue feature corresponding to the low-magnification cervical tissue image output by the low-magnification image feature extraction network.
The respective modules in the above cervical tissue image processing apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 11. The computer device includes a processor, a memory, an Input/Output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is for storing cervical tissue images. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a cervical tissue image processing method.
It will be appreciated by those skilled in the art that the structure shown in fig. 11 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the computer device to which the present application applies, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
obtaining a high-magnification cervical tissue image and a low-magnification cervical tissue image corresponding to the cervical tissue image;
extracting features of the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image, and performing feature fusion on the first cervical tissue feature and the second cervical tissue feature to obtain a target cervical tissue feature;
inputting the target cervical tissue characteristics into a trained tissue image multi-category segmentation model, and obtaining the multi-category probability distribution maps, corresponding to each of a plurality of categories of tissue, output by the tissue image multi-category segmentation model; in the multi-category probability distribution map corresponding to each category of tissue, the pixel value of each pixel represents the probability that the pixel belongs to a target tissue region corresponding to that category of tissue;
And determining the target tissue region in the cervical tissue image according to the multi-category probability distribution map.
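One plausible way to carry out this last step is to assign each pixel to its highest-probability category when that probability clears a cut-off; the 0.5 threshold below is an illustrative assumption, not a value given by the patent:

```python
import numpy as np

def target_region_from_maps(prob_maps: np.ndarray,
                            threshold: float = 0.5) -> np.ndarray:
    """Turn multi-category probability maps into a labeled region mask.

    prob_maps: array of shape (num_categories, H, W). Pixels whose winning
    probability does not clear the threshold are labeled 0 (background);
    otherwise they get the 1-based index of the winning category.
    """
    best_category = prob_maps.argmax(axis=0)  # (H, W) category indices
    best_prob = prob_maps.max(axis=0)         # (H, W) winning probability
    return np.where(best_prob > threshold, best_category + 1, 0)
```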
In one embodiment, the steps of the other embodiments described above are also implemented when the processor executes a computer program.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
obtaining a high-magnification cervical tissue image and a low-magnification cervical tissue image corresponding to the cervical tissue image;
extracting features of the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image, and performing feature fusion on the first cervical tissue feature and the second cervical tissue feature to obtain a target cervical tissue feature;
inputting the target cervical tissue characteristics into a trained tissue image multi-category segmentation model, and obtaining the multi-category probability distribution maps, corresponding to each of a plurality of categories of tissue, output by the tissue image multi-category segmentation model; in the multi-category probability distribution map corresponding to each category of tissue, the pixel value of each pixel represents the probability that the pixel belongs to a target tissue region corresponding to that category of tissue;
And determining the target tissue region in the cervical tissue image according to the multi-category probability distribution map.
In one embodiment, the computer program, when executed by a processor, also implements the steps of the other embodiments described above.
In one embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, performs the steps of:
obtaining a high-magnification cervical tissue image and a low-magnification cervical tissue image corresponding to the cervical tissue image;
extracting features of the high-magnification cervical tissue image and the low-magnification cervical tissue image to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image and a second cervical tissue feature corresponding to the low-magnification cervical tissue image, and performing feature fusion on the first cervical tissue feature and the second cervical tissue feature to obtain a target cervical tissue feature;
inputting the target cervical tissue characteristics into a trained tissue image multi-category segmentation model, and obtaining the multi-category probability distribution maps, corresponding to each of a plurality of categories of tissue, output by the tissue image multi-category segmentation model; in the multi-category probability distribution map corresponding to each category of tissue, the pixel value of each pixel represents the probability that the pixel belongs to a target tissue region corresponding to that category of tissue;
And determining the target tissue region in the cervical tissue image according to the multi-category probability distribution map.
In one embodiment, the computer program, when executed by a processor, also implements the steps of the other embodiments described above.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data are required to comply with the related laws and regulations and standards of the related countries and regions.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by way of a computer program stored on a non-transitory computer-readable storage medium, which, when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric memory (Ferroelectric Random Access Memory, FRAM), phase change memory (Phase Change Memory, PCM), graphene memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM), external cache memory, and the like. By way of illustration, and not limitation, RAM can take many forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational databases may include, but are not limited to, blockchain-based distributed databases, and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum-computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The above examples represent only a few embodiments of the present application, and their description is relatively specific and detailed, but they are not to be construed as limiting the scope of the patent application. It should be noted that those of ordinary skill in the art could make various modifications and improvements without departing from the concept of the present application, and these would all fall within the scope of protection of the present application. Accordingly, the scope of protection of this patent application shall be subject to the appended claims.

Claims (10)

1. A cervical tissue image processing method, the method comprising:
performing foreground region identification on a tissue slice image to be identified to obtain a cervical tissue image corresponding to cervical tissue in the tissue slice image;
acquiring a low-magnification cervical tissue image of each image block in a plurality of image blocks corresponding to the cervical tissue image based on the full-view slice image of the cervical tissue image, acquiring a high-magnification image representing the same physical area as the low-magnification image of each image block based on the full-view slice image of the cervical tissue image, and segmenting the high-magnification image into a plurality of high-magnification image blocks serving as a high-magnification cervical tissue image representing the same physical area as the low-magnification cervical tissue image of each image block;
Inputting the high-magnification cervical tissue image and the low-magnification cervical tissue image into the same feature extraction network, respectively determining a first cervical tissue feature of a cervical tissue cell level corresponding to the high-magnification cervical tissue image and a second cervical tissue feature of a cervical tissue level corresponding to the low-magnification cervical tissue image by the same feature extraction network, and carrying out feature fusion on the first cervical tissue feature and the second cervical tissue feature to obtain a target cervical tissue feature;
inputting the target cervical tissue characteristics into a trained tissue image multi-category segmentation model, and obtaining the multi-category probability distribution maps, corresponding to each of a plurality of categories of tissue, output by the tissue image multi-category segmentation model; in the multi-category probability distribution map corresponding to each category of tissue, the pixel value of each pixel represents the probability that the pixel belongs to a target tissue region corresponding to that category of tissue; the pixel value is obtained by the tissue image multi-category segmentation model converting the confidence of each pixel point, the confidence representing the degree of confidence that the pixel point belongs to the target tissue region;
Determining pixel value statistical features corresponding to a plurality of pixel points in a multi-category probability distribution map corresponding to each category of tissue;
inputting the pixel value statistical features of each multi-category probability distribution map to a trained classifier, and determining a plurality of abnormal areas in the cervical tissue image and types corresponding to each abnormal area by the classifier based on each input pixel value statistical feature;
and determining a target tissue region in the cervical tissue image according to the type corresponding to each abnormal region.
2. The method of claim 1, wherein a magnification of the tissue slice image is less than a magnification of the low-magnification image.
3. The method according to claim 1, wherein the performing foreground region identification on the tissue slice image to be identified to obtain a cervical tissue image corresponding to cervical tissue in the tissue slice image comprises:
performing binarization processing on a tissue slice image to be identified, and acquiring a plurality of image blocks corresponding to the tissue slice image after the binarization processing;
determining the number of target pixel points in each image block, and determining the image blocks with the number of the target pixel points exceeding a number threshold as cervical tissue images corresponding to cervical tissues; the target pixel points are pixel points with pixel values meeting the preset pixel value conditions.
4. The method of claim 1, wherein the pixel values of the pixel points are positively correlated with the probability.
5. The method of claim 1, wherein said inputting the pixel value statistics of each multi-category probability distribution map to a trained classifier, determining, by the classifier, a plurality of abnormal regions in the cervical tissue image and types corresponding to each of the abnormal regions based on each of the input pixel value statistics, comprises:
fusing the input statistical characteristics of the pixel values with the target cervical tissue characteristics to obtain fused image characteristics;
inputting the fused image features into a trained classifier, and determining a plurality of abnormal areas in the cervical tissue image and types corresponding to the abnormal areas by the classifier based on the fused image features.
6. The method of any one of claims 1-5, wherein the first cervical tissue characteristic comprises at least one of: the color characteristics of the cervical tissue cells, the texture characteristics of the cervical tissue cells, the shape characteristics of the cervical tissue cells and the spatial relationship characteristics of a plurality of cervical tissue cells.
7. The method of any one of claims 1-5, wherein prior to said feature fusing the first cervical tissue feature and the second cervical tissue feature to obtain a target cervical tissue feature, the method further comprises:
inputting the high-magnification cervical tissue image into a high-magnification image feature extraction network to obtain a first cervical tissue feature corresponding to the high-magnification cervical tissue image output by the high-magnification image feature extraction network;
the method comprises the steps of,
and inputting the low-magnification cervical tissue image into a low-magnification image feature extraction network to obtain a second cervical tissue feature corresponding to the low-magnification cervical tissue image output by the low-magnification image feature extraction network.
8. A cervical tissue image processing apparatus, the apparatus comprising:
the high-low multiplying power tissue image acquisition module is used for carrying out foreground region identification on a tissue slice image to be identified to obtain a cervical tissue image corresponding to cervical tissue in the tissue slice image; acquiring a low-magnification cervical tissue image of each image block in a plurality of image blocks corresponding to the cervical tissue image based on the full-view slice image of the cervical tissue image, acquiring a high-magnification image representing the same physical area as the low-magnification image of each image block based on the full-view slice image of the cervical tissue image, and segmenting the high-magnification image into a plurality of high-magnification image blocks serving as a high-magnification cervical tissue image representing the same physical area as the low-magnification cervical tissue image of each image block;
The feature extraction module is used for inputting the high-magnification cervical tissue image and the low-magnification cervical tissue image into the same feature extraction network, respectively determining a first cervical tissue feature of a cervical tissue cell level corresponding to the high-magnification cervical tissue image and a second cervical tissue feature of a cervical tissue level corresponding to the low-magnification cervical tissue image by the same feature extraction network, and carrying out feature fusion on the first cervical tissue feature and the second cervical tissue feature to obtain a target cervical tissue feature;
the segmentation module is used for inputting the target cervical tissue characteristics into a trained tissue image multi-category segmentation model, and obtaining the multi-category probability distribution maps, corresponding to each of a plurality of categories of tissue, output by the tissue image multi-category segmentation model; in the multi-category probability distribution map corresponding to each category of tissue, the pixel value of each pixel represents the probability that the pixel belongs to a target tissue region corresponding to that category of tissue; the pixel value is obtained by the tissue image multi-category segmentation model converting the confidence of each pixel point, the confidence representing the degree of confidence that the pixel point belongs to the target tissue region;
The target area determining module is used for determining pixel value statistical characteristics corresponding to a plurality of pixel points in a multi-category probability distribution map corresponding to each category of tissue; inputting the pixel value statistical features of each multi-category probability distribution map to a trained classifier, and determining a plurality of abnormal areas in the cervical tissue image and types corresponding to each abnormal area by the classifier based on each input pixel value statistical feature; and determining a target tissue region in the cervical tissue image according to the type corresponding to each abnormal region.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
CN202310121906.8A 2023-02-16 2023-02-16 Cervical tissue image processing method, cervical tissue image processing device, computer equipment and storage medium Active CN115861604B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310121906.8A CN115861604B (en) 2023-02-16 2023-02-16 Cervical tissue image processing method, cervical tissue image processing device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115861604A CN115861604A (en) 2023-03-28
CN115861604B true CN115861604B (en) 2023-06-02

Family

ID=85658210

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310121906.8A Active CN115861604B (en) 2023-02-16 2023-02-16 Cervical tissue image processing method, cervical tissue image processing device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115861604B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109034208A (en) * 2018-07-03 2018-12-18 怀光智能科技(武汉)有限公司 A kind of cervical cell pathological section classification method of high-low resolution combination
CN110610480A (en) * 2019-08-02 2019-12-24 成都上工医信科技有限公司 MCASPP neural network eyeground image optic cup optic disc segmentation model based on Attention mechanism
CN113763386A (en) * 2021-07-13 2021-12-07 合肥工业大学 Multi-scale feature fusion based intelligent segmentation method and system for surgical instrument image
CN114550169A (en) * 2022-02-23 2022-05-27 腾讯科技(深圳)有限公司 Training method, device, equipment and medium for cell classification model
CN115457012A (en) * 2022-09-27 2022-12-09 云南大学 Pathological image segmentation method, system, storage medium, equipment and terminal

Also Published As

Publication number Publication date
CN115861604A (en) 2023-03-28

Similar Documents

Publication Publication Date Title
US11373305B2 (en) Image processing method and device, computer apparatus, and storage medium
CN111524137B (en) Cell identification counting method and device based on image identification and computer equipment
CN111612008A (en) Image segmentation method based on convolution network
CN112329702B (en) Method and device for rapid face density prediction and face detection, electronic equipment and storage medium
CN112132827A (en) Pathological image processing method and device, electronic equipment and readable storage medium
CN111476806A (en) Image processing method, image processing device, computer equipment and storage medium
CN114445670A (en) Training method, device and equipment of image processing model and storage medium
CN111192678A (en) Pathological microscopic image diagnosis and model training method, device, equipment and medium
CN112598031A (en) Vegetable disease detection method and system
CN114445356A (en) Multi-resolution-based full-field pathological section image tumor rapid positioning method
CN114693624A (en) Image detection method, device and equipment and readable storage medium
CN115908363B (en) Tumor cell statistics method, device, equipment and storage medium
CN115861604B (en) Cervical tissue image processing method, cervical tissue image processing device, computer equipment and storage medium
CN115760957B (en) Method for analyzing substances in cell nucleus by three-dimensional electron microscope
CN114037868B (en) Image recognition model generation method and device
CN115641317A (en) Pathological image-oriented dynamic knowledge backtracking multi-example learning and image classification method
CN112613521B (en) Multilevel data analysis system and method based on data conversion
CN115713769A (en) Training method and device of text detection model, computer equipment and storage medium
CN114496099A (en) Cell function annotation method, device, equipment and medium
CN107992853B (en) Human eye detection method and device, computer equipment and storage medium
CN113706449B (en) Pathological image-based cell analysis method, device, equipment and storage medium
CN114037702B (en) Method and device for screening and classifying slice-level cervical cancer
CN117095244B (en) Infrared target identification method, device, equipment and medium
CN115984583B (en) Data processing method, apparatus, computer device, storage medium, and program product
CN113128511B (en) Coke tissue identification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant