CN110490159B - Method, device, equipment and storage medium for identifying cells in microscopic image - Google Patents

Method, device, equipment and storage medium for identifying cells in microscopic image

Info

Publication number
CN110490159B
Authority
CN
China
Prior art keywords: cell, image, central point, color, cells
Prior art date
Legal status
Active
Application number
CN201910784865.4A
Other languages
Chinese (zh)
Other versions
CN110490159A (en)
Inventor
沈昊成
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201910784865.4A
Publication of CN110490159A
Application granted
Publication of CN110490159B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 7/12 Edge-based segmentation
    • G06T 7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G06V 20/695 Preprocessing, e.g. image segmentation
    • G06V 20/698 Matching; Classification
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details
    • G06T 2207/20032 Median filtering
    • G06T 2207/20092 Interactive image processing based on input by user
    • G06T 2207/20104 Interactive definition of region of interest [ROI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a method, an apparatus, a device, and a storage medium for identifying cells in microscopic images, in the technical field of image processing. The method comprises the following steps: performing edge segmentation on each cell in a color microscopic image to obtain a cell edge image; acquiring the position of the central point of each cell in the color microscopic image according to the cell edge image; acquiring the gray value at the central point of each cell according to that position; and identifying the cell type of each cell according to the gray value at its central point. Because the scheme needs no complex machine-learning algorithm for image recognition, it reduces the computing resources and processing time required for cell recognition and can significantly improve the efficiency of identifying positive and negative cells in microscopic images.

Description

Method, device, equipment and storage medium for identifying cells in microscopic image
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for identifying cells in a microscopic image.
Background
Immunohistochemistry is a method of assisting pathological diagnosis in which the cells in a cell sample population are stained by a chromogenic chemical reaction so that the positive cells in the population stand out under a microscope.
In immunohistochemistry-based pathological diagnosis, the number or proportion of positive cells in a cell sample population is an important criterion for pathological diagnosis. In the related art, positive cells in a cell sample population may be identified by a pre-trained machine learning model. For example, a microscopic image of a stained cell sample group is subjected to gaussian filtering and edge segmentation, and then input to a machine learning model trained in advance, and positive cells and negative cells in the microscopic image are identified by the machine learning model.
However, in order to achieve a more accurate recognition effect, a complex machine learning model needs to be designed, and processing the segmentation image through the complex machine learning model consumes more computing resources and computing time, thereby affecting the efficiency of cell recognition.
Disclosure of Invention
The embodiment of the application provides a method, a device, computer equipment and a storage medium for identifying cells in a microscopic image, which can improve the efficiency of identifying the cells in a segmentation image, and the technical scheme is as follows:
in one aspect, there is provided a method of identifying cells in a microscopic image, the method being performed by a computer device, the method comprising:
performing edge segmentation on each cell in the color microscopic image to obtain a cell edge image, wherein the cell edge image indicates the edge of each cell; the color microscopic image is an image obtained by collecting the cell sample group under a microscope field after the cells in the cell sample group are subjected to color marking;
acquiring the position of the central point of each cell in the color microscopic image according to the cell edge image;
acquiring the gray value of the central point of each cell according to the position of the central point of each cell in the color microscopic image;
and identifying the cell type of each cell according to the gray value of the central point of each cell, wherein the cell type is a positive cell or a negative cell.
In one aspect, there is provided an apparatus for identifying cells in a microscopic image, for use in a computer device, the apparatus comprising:
the edge segmentation module is used for performing edge segmentation on each cell in the color microscopic image to obtain a cell edge image, and the cell edge image indicates the edge of each cell; the color microscopic image is an image obtained by collecting the cell sample group under a microscope field after the cells in the cell sample group are subjected to color marking;
the central position acquisition module is used for acquiring the position of the central point of each cell in the color microscopic image according to the cell edge image;
the gray value acquisition module is used for acquiring the gray value of the central point of each cell according to the position of the central point of each cell in the color microscopic image;
and the cell identification module is used for identifying the cell type of each cell according to the gray value of the central point of each cell, wherein the cell type is a positive cell or a negative cell.
In one possible implementation manner, the central location obtaining module includes:
a connected region determining unit configured to determine a connected region of each cell based on the edge of each cell indicated by the cell edge image;
a central point acquisition unit configured to acquire a central point of the connected region of each cell as a central point of each cell;
and the position acquisition unit is used for acquiring the position of the central point of each cell in the color microscopic image according to the mapping relation between each pixel point in the cell edge image and each pixel point in the color microscopic image.
In a possible implementation manner, the central location obtaining module further includes:
an area acquisition unit configured to acquire an area of the connected region of each cell before the central point acquisition unit acquires the central point of the connected region of each cell as the central point of each cell;
the region filtering unit is used for acquiring the connected regions whose areas meet the filtering condition as the filtered connected regions of the cells, wherein the filtering condition comprises that the corresponding area lies within a designated area interval;
the central point acquiring unit is configured to acquire the central point of the filtered connected region of each cell as the central point of that cell.
In a possible implementation manner, the central location obtaining module further includes:
a merging unit, configured to merge the center points of the cells according to a merging condition before the gray value obtaining module obtains the position of the center point of each cell in the color microscope image according to the mapping relationship between each pixel point in the cell edge image and each pixel point in the color microscope image, so as to obtain the center point of each cell after merging;
the gray value obtaining module is configured to obtain, according to a mapping relationship between each pixel point in the cell edge image and each pixel point in the color microscopic image, a position of a center point of each cell in the color microscopic image after combination.
In a possible implementation manner, the merging unit is configured to,
for a first central point in the central points of the cells, determining a second central point corresponding to the first central point, wherein the first central point is any one of the central points of the cells, and the second central point is the central point of other cells closest to the first central point;
when the Euclidean distance between the first central point and the second central point is smaller than a distance threshold, combining the first central point and the second central point into a third central point, wherein the third central point is located at the midpoint of a connecting line between the first central point and the second central point.
In a possible implementation manner, the gray value obtaining module includes:
the gray processing unit is used for carrying out gray processing on the color microscopic image to obtain a gray image;
the first filtering unit is used for carrying out median filtering on the gray level image to obtain the gray level image after the median filtering;
and the gray value acquisition unit is used for acquiring the gray value of the central point of each cell in the gray image after median filtering according to the position of the central point of each cell in the color microscopic image.
In one possible implementation manner, the edge segmentation module includes:
the channel decomposition unit is used for carrying out color channel decomposition on the color microscopic image to obtain color channel images corresponding to at least two color spaces;
a channel extraction unit, configured to extract a target channel image corresponding to a target color space from the color channel image;
the second filtering unit is used for filtering the target channel image to obtain a filtered image;
and the segmentation unit is used for performing edge segmentation on the filtered image to obtain the cell edge image.
In a possible implementation manner, the second filtering unit is configured to filter the target channel image through a bilateral filtering algorithm to obtain the filtered image.
In a possible implementation manner, the segmentation unit is configured to perform edge segmentation on the filtered image through a watershed algorithm to obtain the cell edge image.
In one possible implementation, the apparatus further includes:
the color marking module is used for marking each cell in the color microscopic image by color according to the cell type of each cell to obtain a cell marking image;
and the image output module is used for outputting the cell marker image.
In one possible implementation, the apparatus further includes:
the counting module is used for counting and obtaining the number of positive cells in each cell and the number of negative cells in each cell;
a proportion calculation module for calculating the proportion of positive cells in each cell according to the number of positive cells in each cell and the number of negative cells in each cell;
and the proportion output module is used for outputting the proportion of the positive cells in each cell.
In one possible implementation, the positive cells are cells comprising the Ki-67 protein.
In one aspect, a computer device is provided, the computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, the at least one instruction, the at least one program, set of codes, or set of instructions being loaded and executed by the processor to implement the above-described method of identifying cells in a microscopic image.
In one aspect, a computer readable storage medium having stored therein at least one instruction, at least one program, code set, or set of instructions, which is loaded and executed by a processor to implement the above-described method of identifying cells in a microscopic image is provided.
In one aspect, a system for identifying cells in a microscopic image is provided, the system comprising: a microscope and an image processing apparatus;
the image processing device is used for executing the method for identifying the cells in the microscopic image.
The technical scheme provided by the application can comprise the following beneficial effects:
After the color microscopic image containing the cells is processed into a cell edge image that indicates the edge of each cell, the central point of each cell is determined from those edges, and positive and negative cells are distinguished by the gray value at each cell's central point in the color microscopic image. No complex machine-learning algorithm is needed for image recognition, which reduces the computing resources and processing time of cell recognition and can remarkably improve the efficiency of identifying positive and negative cells in microscopic images.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a system configuration diagram of a cell recognition system according to each embodiment of the present application;
FIG. 2 is a schematic flow chart of a method of identifying cells in a microscopic image provided by an exemplary embodiment of the present application;
FIG. 3 is a schematic flow chart diagram of a method for identifying cells in a microscopic image as provided by an exemplary embodiment of the present application;
FIG. 4 is a schematic view of a watershed formation according to the embodiment shown in FIG. 3;
FIG. 5 is a block diagram of a flow chart for Ki-67 positive index identification of cells provided by an exemplary embodiment of the present application;
FIG. 6 is a graph comparing the results of colorectal cell identification according to the embodiment of FIG. 5;
FIG. 7 is a graph comparing the results of the identification of mammary cells according to the embodiment of FIG. 5;
FIG. 8 is a graph comparing the results of neuroendocrine tumor cell identification according to the embodiment shown in FIG. 5;
FIG. 9 is a block diagram illustrating the structure of an apparatus for identifying cells in a microscopic image according to an exemplary embodiment;
FIG. 10 is a schematic diagram illustrating a configuration of a computer device, according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It is to be understood that reference herein to "a number" means one or more and "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The application provides a method for identifying cells in a microscopic image, which can improve the efficiency of cell identification under the condition of ensuring the accuracy of identifying positive cells and negative cells in a color microscopic image. For ease of understanding, several terms referred to in this application are explained below.
1) Immunohistochemistry (also called immunocytochemistry) refers to a technique for the qualitative, localizing, and quantitative determination of corresponding antigens in situ in tissue cells, based on specific antibodies labeled with chromogenic reagents, through the antigen-antibody reaction and a histochemical color reaction.
2) Ki-67 is a protein encoded by the human MKI67 gene and one of the most widely used immunohistochemical markers in pathology. The protein is closely related to cell proliferation: Ki-67 protein can be detected in cells during mitosis and interphase, but is absent from cells that have stopped dividing.
3) The Ki-67 positive index (also called the proliferation index) is the percentage of positively stained cells among the total number of tumor cells, and is used in pathological reports as an index of tumor cell proliferation: a higher positive index indicates that more tumor cells are proliferating and a higher degree of malignancy, as formalized below. From an image perspective, in immunohistochemically stained images negatively stained cells typically appear blue and positively stained cells appear brown.
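Expressed as a formula, where N_pos and N_neg denote the counted numbers of positively and negatively stained tumor cells (symbols introduced here for illustration only):

```latex
\text{Ki-67 positive index} = \frac{N_{\text{pos}}}{N_{\text{pos}} + N_{\text{neg}}} \times 100\%
```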
In general, a single microscope field of view of a Ki-67 section contains 100 to 3000 cells, so manual cell counting is very time-consuming and labor-intensive. In actual pathological diagnosis, to reduce error, a pathologist often needs to compute the Ki-67 positive index under several microscope fields (the total cell count is required to exceed 1000) and then take the average; such manual counting greatly increases the pathologist's workload.
Referring to fig. 1, a system configuration diagram of a cell recognition system according to various embodiments of the present application is shown. As shown in fig. 1, the system includes a microscope 120 and a terminal 140. Optionally, the system further comprises a server 160 and a database 180.
The microscope 120 may be a conventional optical microscope, and an operator of the microscope 120 may capture microscopic images in an eyepiece of the microscope 120 via an image capture assembly (e.g., a camera or other device integrated with a camera).
For example, a camera cassette may be integrated on the microscope 120, and an operator of the microscope 120 may capture a microscope image in an eyepiece of the microscope 120 through a camera mounted on the camera cassette and then import the microscope image captured by the camera to the terminal 140 or the server 160 through an image output interface integrated in the camera.
Alternatively, the microscope 120 may be an electron microscope integrated with an image capturing component and providing an external image output interface; an operator of the microscope 120 captures a microscope image in the eyepiece of the microscope 120 by operating the image capturing function of the electron microscope, and exports the microscope image to the terminal 140 through the image output interface.
The image output interface may be a wired interface, such as a Universal Serial Bus (USB) interface, a High-Definition Multimedia Interface (HDMI), or an Ethernet interface; alternatively, the image output interface may be a wireless interface, such as a Wireless Local Area Network (WLAN) interface or a Bluetooth interface.
Accordingly, depending on the type of the image output interface, the operator may export the microscope image captured by the camera in various ways, for example, importing the microscope image to the terminal 140 through a wired or short-distance wireless manner, or importing the microscope image to the terminal 140 or the server 160 through a local area network or the internet.
The terminal 140 may be installed with an application program for acquiring and presenting the processing result of the microscope image; after the terminal 140 acquires the microscope image in the eyepiece of the microscope 120, it may acquire and present, through the application program, the processing result obtained by processing the microscope image, so that a doctor can perform operations such as pathological diagnosis.
The terminal 140 may be a terminal device with certain processing capability and interface display function, for example, the terminal 140 may be a mobile phone, a tablet computer, an e-book reader, smart glasses, a laptop computer, a desktop computer, and the like.
In the system shown in fig. 1, the terminal 140 and the microscope 120 are physically separate physical devices. Alternatively, in another possible implementation, the terminal 140 and the microscope 120 may be integrated into a single physical device; for example, the microscope 120 may be an intelligent microscope having the computing and interface presentation functions of the terminal 140, or the microscope 120 may be an intelligent microscope having the computing capabilities of the terminal 140, which may output the image processing results through a wired or wireless interface.
The server 160 is a server, or a plurality of servers, or a virtualization platform, or a cloud computing service center.
The server 160 may be a server that provides a background service for the application program installed in the terminal 140 or the microscope 120; the background server may perform version management of the application program, process in the background the microscope images acquired by the application program, and return the processing results.
The database 180 may be a Redis database, or may be another type of database. The database 180 is used for storing various types of data.
Optionally, the terminal 140 and the server 160 are connected via a communication network. Optionally, the microscope 120 is connected to the server 160 via a communication network. Optionally, the communication network is a wired network or a wireless network.
Optionally, the system may further include a management device (not shown in fig. 1), which is connected to the server 160 through a communication network. Optionally, the communication network is a wired network or a wireless network.
Optionally, the wireless network or wired network described above uses standard communication techniques and/or protocols. The Network is typically the Internet, but may be any Network including, but not limited to, a Local Area Network (LAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), a mobile, wireline or wireless Network, a private Network, or any combination of virtual private networks. In some embodiments, data exchanged over a network is represented using techniques and/or formats including Hypertext Mark-up Language (HTML), Extensible Markup Language (XML), and the like. All or some of the links may also be encrypted using conventional encryption techniques such as Secure Socket Layer (SSL), Transport Layer Security (TLS), Virtual Private Network (VPN), Internet Protocol Security (IPsec). In other embodiments, custom and/or dedicated data communication techniques may also be used in place of, or in addition to, the data communication techniques described above.
Referring to fig. 2, a flow chart of a method for identifying cells in a microscopic image according to an exemplary embodiment of the present application is shown. The method of identifying cells in a microscopic image may be performed by a computer device. The computer device may be a single device, such as the terminal 140 or the server 160 in the system of FIG. 1; alternatively, the computer device may be a collection of devices, for example, the computer device may include the terminal 140 and the server 160 in the system shown in fig. 1, that is, the method may be executed interactively by the terminal 140 and the server 160 in the system shown in fig. 1. As shown in fig. 2, the method of identifying cells in a microscopic image may include the steps of:
step 210, performing edge segmentation on each cell in the color microscopic image to obtain a cell edge image, where the cell edge image indicates an edge of each cell.
Wherein the color microscopic image is an image obtained by collecting the cell sample group under a microscope field after color marking of cells in the cell sample group.
Wherein the cell sample group is a group consisting of individual cells contained in a cell specimen under an objective lens of a microscope.
The cell edge image is an image in which an edge region of a cell is highlighted, and the cell edge image can be obtained by processing an image including the cell by an edge segmentation algorithm.
And step 220, acquiring the position of the central point of each cell in the color microscopic image according to the cell edge image.
In the embodiment of the present application, the position of the center point of the cell in the color microscopic image may be a pixel position of the center point of the cell in the color microscopic image.
And step 230, acquiring a gray value of the central point of each cell according to the position of the central point of each cell in the color microscopic image.
And 240, identifying the cell type of each cell according to the gray value of the central point of each cell, wherein the cell type is a positive cell or a negative cell.
In summary, in the solution shown in the embodiment of the present application, after the color microscopic image including each cell is processed to obtain the cell edge image indicating the edge of the cell, the computer device determines the center point of the cell according to the edge of the cell indicated by the cell edge image, and distinguishes the positive cell from the negative cell according to the gray value of the center point of the cell in the color microscopic image, so that image recognition by a complex machine learning algorithm is not required, thereby reducing the calculation resources and processing time for cell recognition, and significantly improving the efficiency of recognizing the positive cell and the negative cell from the microscopic image.
In addition, according to the scheme shown in the embodiment of the application, compared with the scheme for cell identification through a machine learning model, a large number of manual labels are not needed to train the model, and the labor and time for labeling at a cell level are saved, so that the development period can be shortened, and the development and updating efficiency of products is improved.
Referring to fig. 3, a flow chart of a method for identifying cells in a microscopic image according to an exemplary embodiment of the present application is shown. The method of identifying cells in a microscopic image may be performed by a computer device. The computer device may be a single device, such as the terminal 140 or the server 160 in the system of FIG. 1; alternatively, the computer device may be a collection of devices, for example, the computer device may include the terminal 140 and the server 160 in the system shown in fig. 1. For example, the method is executed by the terminal 140 and/or the server 160 in the system shown in fig. 1, and as shown in fig. 3, the method for identifying the cells in the microscopic image may include the following steps:
step 301, acquiring a color microscopic image, wherein the color microscopic image is an image acquired from the cell sample group under a microscope field after the cells in the cell sample group are color-labeled.
In a possible implementation manner, the color microscopic image may be imported to an application program in the terminal through a wired or wireless network, the application program sends an identification request containing the color microscopic image to the server, and the server extracts the color microscopic image after receiving the identification request.
In another possible implementation manner, the color microscopic image may be imported to an application program in the terminal through a wired or wireless network, and then the color microscopic image is directly processed by the terminal through the application program.
Step 302, performing edge segmentation on each cell in the color microscopic image to obtain a cell edge image, where the cell edge image indicates an edge of each cell.
In an embodiment of the present application, the step of edge segmentation of each cell in the color microscopic image may be as follows:
s302a, carrying out color channel decomposition on the color microscopic image to obtain color channel images corresponding to at least two color spaces.
In one possible implementation, taking as an example that the positive cells are cells containing the Ki-67 protein, the terminal or server may perform a color deconvolution operation on the input RGB image (i.e., the color microscopic image described above), calculating the contribution of each stain based on the absorbance of that particular stain. The color channel decomposition is an orthogonal transformation of the image's RGB information through an optical density (OD, i.e., absorbance) matrix for each RGB channel, converting the image from the RGB color space into the H-E-DAB color space of the hematoxylin (H), eosin (E), and diaminobenzidine (DAB) stains.
S302b, a target channel image corresponding to the target color space is extracted from the color channel image.
Taking the subsequent identification scene of positive cells containing Ki-67 protein as an example, the terminal or the server can extract hematoxylin (H) channel images from the images of the H-E-DAB color space obtained by color channel decomposition as a subsequent processing image, wherein the H channel images are the target channel images.
The above steps are described by taking a scene for identifying positive cells containing Ki-67 protein as an example, and optionally, according to different identification scenes (i.e., different identified positive cells), the terminal or the server may extract channel images corresponding to other color spaces for subsequent processing.
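For illustration, a minimal sketch of sub-steps S302a and S302b using scikit-image's built-in H-E-DAB stain separation; the library call and function name below are this document's assumptions, not the patent's own implementation:

```python
import numpy as np
from skimage.color import rgb2hed  # optical-density-based stain separation

def extract_h_channel(rgb_image: np.ndarray) -> np.ndarray:
    """Decompose an RGB microscopy image into hematoxylin (H), eosin (E),
    and DAB channels via the absorbance (optical density) matrix, then
    return the H channel as the target channel image."""
    hed = rgb2hed(rgb_image)   # shape (H, W, 3): channel 0=H, 1=E, 2=DAB
    return hed[:, :, 0]        # hematoxylin channel for later processing
```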
S302c, filtering the target channel image to obtain a filtered image.
In this embodiment, the terminal or the server may filter the target channel image through a bilateral filtering algorithm to obtain the filtered image.
Bilateral filtering is a nonlinear filter that smooths and denoises an image while preserving its edges. It uses a weighted average in which the intensity of a pixel is represented by a Gaussian-weighted average of the brightness values of surrounding pixels. Unlike ordinary Gaussian low-pass filtering, which considers only the spatial influence of position on the central pixel, the bilateral weighting also considers the radiometric differences within the pixel's range domain (such as the similarity in color intensity or depth distance between a pixel in the convolution kernel and the central pixel).
Taking a scene for identifying positive cells containing Ki-67 protein as an example, in the embodiment of the present application, a terminal or a server performs bilateral filtering on an H-channel image through preset parameters for bilateral filtering (such as parameters of a diameter range of a neighborhood, a standard deviation of a color space, a standard in a coordinate space, and the like) to obtain a filtered image.
Because bilateral filtering removes noise interference from the H-channel image while accurately preserving the cell edge information in the image, it can improve the accuracy of the subsequent edge segmentation compared with the Gaussian filtering used in the related art.
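A sketch of the bilateral filtering step with OpenCV; the diameter and sigma values below are illustrative placeholders for the "preset parameters" the text mentions:

```python
import cv2
import numpy as np

def bilateral_denoise(h_channel: np.ndarray) -> np.ndarray:
    """Edge-preserving denoising of the hematoxylin channel."""
    # Rescale the float H channel to 8-bit, since bilateralFilter
    # expects uint8 or float32 input.
    h8 = cv2.normalize(h_channel, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # d: neighborhood diameter; sigmaColor: color-space standard deviation;
    # sigmaSpace: coordinate-space standard deviation.
    return cv2.bilateralFilter(h8, d=9, sigmaColor=75, sigmaSpace=75)
```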
S302d, performing edge segmentation on the filtered image to obtain the cell edge image.
In this embodiment, the terminal or the server may perform edge segmentation on the filtered image through a watershed algorithm to obtain the cell edge image.
In the embodiment of the application, when performing edge segmentation on the filtered image, a user (for example, a doctor) may select a partial region in the filtered image as a region for subsequent cell identification, and when performing edge segmentation, the partial region selected by the user may be segmented, so as to further improve the cell identification efficiency. Wherein, the edge segmentation step may include the following 4 steps:
1) cell region segmentation
Taking a scene of identifying positive cells containing the Ki-67 protein as an example, the terminal or the server may binarize the H-channel image using the Otsu thresholding method to extract a binary cell region, and remove fine-grained tissue and noise using morphological operations; the extracted cell region serves as one input of the subsequent watershed algorithm.
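A minimal sketch of this sub-step, assuming the denoised H channel is an 8-bit image in which cells are brighter than the background (invert the threshold type otherwise); the 3x3 opening kernel is an illustrative choice:

```python
import cv2
import numpy as np

def segment_cell_region(h8: np.ndarray) -> np.ndarray:
    """Binary cell-region mask via Otsu thresholding, plus a small
    morphological opening to drop fine-grained tissue and noise."""
    _, mask = cv2.threshold(h8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```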
2) Local extremum extraction
The terminal or the server may filter the filtered image again using maximum filtering, find the local extremum points in the re-filtered image (for example, with the neighborhood range set to 6), and take each local extremum point as the initial seed point of one cell; the extracted local extremum points serve as another input of the subsequent watershed algorithm.
In the embodiment of the present application, after finding all the local extreme points, the terminal or the server performs connected region analysis on all the local extreme points, and assigns different labels to each connected region to form a labeled image, which is used as an input of a subsequent watershed algorithm.
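A sketch of the local-extremum extraction with SciPy, using the neighborhood size of 6 mentioned above; masking the maxima with the binary cell region (an assumption here) keeps flat background areas from producing spurious seeds:

```python
import numpy as np
from scipy import ndimage

def extract_seed_markers(filtered: np.ndarray, cell_mask: np.ndarray) -> np.ndarray:
    """Label local maxima as per-cell seed points for the watershed."""
    # A pixel is a local maximum if it equals the maximum of its
    # 6-pixel neighborhood.
    local_max = ndimage.maximum_filter(filtered, size=6) == filtered
    local_max &= cell_mask > 0  # suppress maxima in the background
    # Connected-region analysis: give each group of maxima its own label.
    markers, _num_seeds = ndimage.label(local_max)
    return markers
```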
3) ROI area determination
In the embodiment of the present application, a user may delineate a region of interest (ROI) from a color microscope image, and when the terminal or the server performs cell edge segmentation, it may be determined that the user manually delineates a region of the ROI (e.g., a tumor cell count region) as an input of a subsequent watershed algorithm.
4) Watershed segmentation
The watershed algorithm is a mathematical-morphology segmentation method based on topological theory. Its basic idea is to regard the image as a topographic relief, in which the gray value of each pixel represents the altitude at that point; each local extremum point together with its zone of influence is called a catchment basin, and the boundaries between catchment basins form the watershed.
The concept and formation of a watershed can be illustrated by simulating an immersion process. For example, FIG. 4 shows a schematic diagram of watershed formation according to an embodiment of the present application: a small hole is pierced at each local extremum point, and the whole model is then slowly immersed in water; as the immersion deepens, the zone of influence of each local extremum point gradually expands outward, and dams are built where two catchment basins meet, and these dams form the watershed.
The watershed algorithm has the advantage of responding well to weak edges, which ensures that closed, continuous edges are obtained. In the embodiment of the application, the terminal or the server performs instance segmentation of each cell within the intersection of the user-outlined ROI and the cell region through the watershed algorithm and completely segments the edge of each cell, thereby avoiding useless segmentation of non-cell regions and regions outside the tumor-cell counting area and reducing the number of false-positive cell segmentations.
In the embodiment of the present application, a watershed algorithm is used to perform cell edge segmentation, and optionally, the watershed algorithm may be replaced by another unsupervised segmentation algorithm, for example, a level-set algorithm may be used.
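A sketch of the watershed sub-step with scikit-image, restricted to the intersection of the binary cell region and the user-drawn ROI as described above; negating the intensity surface assumes cells appear bright in the filtered H channel:

```python
import numpy as np
from skimage.segmentation import watershed

def segment_cells(filtered: np.ndarray, markers: np.ndarray,
                  cell_mask: np.ndarray, roi_mask: np.ndarray) -> np.ndarray:
    """Instance-segment cells: flood catchment basins from the seed
    markers; basin boundaries become the closed cell edges."""
    region = (cell_mask > 0) & (roi_mask > 0)  # segment only inside ROI and cells
    labels = watershed(-filtered.astype(np.float64), markers, mask=region)
    return labels  # integer label per cell instance, 0 = background
```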
Step 303, determining the connected region of each cell according to the edge of each cell indicated by the cell edge image.
The connected region of the cell is a region surrounded by edges of the corresponding cell in the cell edge image. The connected region of a cell may directly indicate the size of the corresponding cell.
Optionally, the terminal or the server may further obtain the area of the connected region of each cell, and obtain the connected region of which the corresponding area meets the filtering condition as the filtered connected region of each cell; the filtering condition includes that the corresponding area is within a specified area interval.
In the embodiment of the present application, because noise interference inevitably occurs in the image processing process (including the processes of filtering, segmenting, and the like) in the above steps, accuracy of the edge of the cell indicated by the cell edge image is affected, and in order to avoid the above noise interference from affecting the subsequent identification process, in this step, the terminal or the server may further screen the connected region of the cell to remove obvious noise interference.
For example, the terminal or the server calculates the area of each cell's connected region as segmented by the cell segmentation algorithm (such as the watershed algorithm); if the area of a connected region is smaller than a first prior threshold (e.g., 10 pixels) or larger than a second prior threshold (e.g., 3000 pixels), the connected region is determined to be a segmentation caused by noise and is excluded from the subsequent calculation. The first prior threshold and the second prior threshold are the lower and upper limits of the designated area interval, respectively.
The above-mentioned elimination of the connected region means that the connected region is not regarded as a region where the cell is located in the subsequent calculation process, or the connected region is regarded as a region where the cell is not present.
The designated area interval may be preset in the terminal or the server by a developer.
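A sketch of the area-based screening with scikit-image's region properties, using the 10- and 3000-pixel prior thresholds quoted above:

```python
import numpy as np
from skimage.measure import regionprops

def filter_by_area(labels: np.ndarray, min_area: int = 10,
                   max_area: int = 3000) -> np.ndarray:
    """Remove connected regions whose area lies outside the designated
    interval; such regions are treated as noise-induced segmentations."""
    out = labels.copy()
    for region in regionprops(labels):
        if not (min_area <= region.area <= max_area):
            out[out == region.label] = 0  # exclude from later steps
    return out
```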
And step 304, acquiring the central point of the connected region of each cell as the central point of each cell.
When the connected region of each cell is the filtered connected region of each cell, the server may obtain a central point of the filtered connected region of each cell as a central point corresponding to each cell.
Optionally, after the central point of the connected region of each cell is obtained as the central point of each cell, and before the subsequent steps are performed, the terminal or the server may further merge the central points of each cell according to the merging conditions to obtain the central point of each merged cell.
In the embodiment of the present application, when the edge of the cell is segmented by the segmentation algorithm, an over-segmentation problem may occur, for example, one cell is segmented into a plurality of cells by mistake, and in order to further reduce the influence of the over-segmentation problem in the pre-segmentation processing step on the subsequent cell identification, the terminal or the server may further merge the central points of the respective cells after determining the central points of the respective cells, so as to merge the centers of the over-segmented cells into the center of a single cell.
Optionally, when the central points of the cells are merged according to the merging conditions to obtain the central points of the merged cells, for a first central point of the central points of the cells, the terminal or the server may determine a second central point corresponding to the first central point, where the first central point is any one of the central points of the cells, and the second central point is a central point of another cell closest to the first central point; when the Euclidean distance between the first center point and the second center point is smaller than a distance threshold, the first center point and the second center point are combined into a third center point, and the third center point is located at the midpoint of a connecting line between the first center point and the second center point.
In this embodiment of the application, when the center points of the cells are combined, the terminal or the server may calculate, for the center point of each cell, a euclidean distance between the center points of the other cells closest to the center point of each cell, and if the calculated euclidean distance is smaller than a certain a priori distance threshold (for example, 16), combine the two center points, for example, calculate a mean value of pixel coordinates of the two center points to replace the two center points, so as to combine the center points of the two cells into the center point of a single cell.
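A sketch of this merging rule in plain NumPy, using the prior distance threshold of 16 pixels quoted above; the closest pair of centers is merged into its midpoint repeatedly until no pair is closer than the threshold:

```python
import numpy as np

def merge_close_centers(centers: np.ndarray, threshold: float = 16.0) -> np.ndarray:
    """Merge over-segmented cell centers into single-cell centers."""
    pts = [p for p in centers.astype(float)]
    while len(pts) > 1:
        arr = np.asarray(pts)
        # Pairwise Euclidean distances; ignore self-distances.
        d = np.linalg.norm(arr[:, None, :] - arr[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        i, j = np.unravel_index(np.argmin(d), d.shape)
        if d[i, j] >= threshold:
            break  # no pair left below the threshold
        midpoint = (arr[i] + arr[j]) / 2.0  # mean pixel coordinate
        pts = [p for k, p in enumerate(pts) if k not in (i, j)]
        pts.append(midpoint)
    return np.asarray(pts)
```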
Step 305, obtaining the position of the center point of each cell in the color microscopic image according to the mapping relationship between each pixel point in the cell edge image and each pixel point in the color microscopic image.
Optionally, if the central point of each cell is the central point of each merged cell, the terminal or the server may obtain the position of the central point of each merged cell in the color microscope image according to the mapping relationship between each pixel point in the cell edge image and each pixel point in the color microscope image.
In a possible implementation manner, if the resolution of the image is kept unchanged during the processes of filtering, edge segmentation and the like of the color microscopic image, that is, at this time, each pixel point in the cell edge image and each pixel point in the color microscopic image are in a one-to-one correspondence relationship, the terminal or the server may directly use the position of the center point of each cell in the edge segmented image after combination as the position of the center point of the corresponding cell in the color microscopic image.
In another possible implementation manner, if the resolution of the image is changed, for example, the resolution is reduced, during the processing of filtering and edge segmentation on the color microscopic image, the server may determine a mapping relationship between each pixel point in the cell edge image and each pixel point in the color microscopic image according to a proportional relationship before and after the resolution is changed, and then determine a position of a center point of each cell in the color microscopic image after combination according to the determined mapping relationship and the position of the center point of each cell in the edge segmented image after combination.
And step 306, acquiring the gray value of the central point of each cell according to the position of the central point of each cell in the color microscopic image.
Optionally, when the gray value of the central point of each cell is obtained according to the position of the central point of each cell in the color microscope image, the terminal or the server may perform gray processing on the color microscope image to obtain a gray image; then, carrying out median filtering on the gray level image to obtain a gray level image after the median filtering; and then, acquiring the gray value of the central point of each cell in the gray image after median filtering according to the position of the central point of each cell in the color microscopic image.
For example, the terminal or the server may convert the originally input RGB image (i.e., the color microscopic image) into a grayscale image; to take into account the neighborhood information around the central point of each cell, the terminal or the server further performs median filtering on the grayscale image (e.g., over a 5 x 5 region), and finally takes the gray value at the position of each cell's central point in the median-filtered grayscale image as the gray value of that cell's central point.
In the scheme, when the terminal or the server acquires the gray value of the central point of each cell, the original color microscopic image is converted into the gray image, and then the gray image is subjected to median filtering, so that noise interference can be effectively reduced, and the accuracy of subsequent cell identification is further improved.
And 307, identifying the cell type of each cell according to the gray value of the central point of each cell, wherein the cell type is a positive cell or a negative cell.
For example, if the gray value of the central point of the cell after the median filtering is greater than a prior threshold (such as 60), the terminal or the server may identify the cell as a negative cell, and conversely, the terminal or the server may identify the cell as a positive cell.
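Steps 306 and 307 together reduce to a few lines; a sketch, assuming cell centers are given as (row, col) pixel coordinates and using the prior gray threshold of 60 quoted above:

```python
import cv2
import numpy as np

def classify_cells(rgb_image: np.ndarray, centers: np.ndarray,
                   gray_threshold: int = 60) -> list:
    """Median-filter the grayscale image, then threshold the gray value
    at each cell center: dark (brown DAB stain) = positive, light = negative."""
    gray = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2GRAY)
    gray = cv2.medianBlur(gray, 5)  # 5x5 median filtering, as in the text
    return ["negative" if gray[int(r), int(c)] > gray_threshold else "positive"
            for r, c in centers]
```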
Optionally, the terminal or the server may further perform color labeling on each cell in the color microscopic image according to the cell type of each cell to obtain a cell labeling image; and outputting the cell marker image.
Optionally, the terminal or the server may further count the number of positive cells in each cell and the number of negative cells in each cell; calculating the proportion of positive cells in each cell according to the number of positive cells in each cell and the number of negative cells in each cell; and outputting the proportion of positive cells in each of the cells.
Alternatively, the cell marker image and the proportion of positive cells may be combined and output.
For example, when the steps of the above method are performed by the server, the server may represent negative cells on the color microscope image using green solid dots, and represent positive cells on the color microscope image using red solid dots, and superimpose the calculated proportion of positive cells on the color microscope image marked by color. And the server transmits the color microscopic image which is subjected to color marking and is superimposed with the proportion of the positive cells to the terminal, and the terminal displays the color microscopic image.
Alternatively, when the steps of the above method are performed by the terminal, the terminal may directly present the color microscopic image after generating the color-marked image and superimposing the proportion of positive cells.
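A sketch of the marking-and-output step with OpenCV drawing primitives; the dot radius, font, and text position are illustrative choices, and the color tuples assume the image is in RGB channel order:

```python
import cv2
import numpy as np

def mark_and_report(rgb_image: np.ndarray, centers: np.ndarray,
                    cell_types: list) -> np.ndarray:
    """Draw green dots on negative cells and red dots on positive cells,
    then superimpose the proportion of positive cells on the image."""
    out = rgb_image.copy()
    for (r, c), t in zip(centers.astype(int), cell_types):
        color = (255, 0, 0) if t == "positive" else (0, 255, 0)  # RGB order
        cv2.circle(out, (int(c), int(r)), 4, color, thickness=-1)
    n_pos = cell_types.count("positive")
    ratio = 100.0 * n_pos / max(len(cell_types), 1)
    cv2.putText(out, "Ki-67 index: %.1f%%" % ratio, (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 0, 0), 2)
    return out
```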
In summary, in the solution shown in the embodiment of the present application, after the color microscope image including each cell is processed to obtain the cell edge image indicating the edge of the cell, the center point of the cell is determined according to the edge of the cell indicated by the cell edge image, and the positive cell and the negative cell are distinguished according to the gray value of the center point of the cell in the color microscope image, so that image recognition by a complex machine learning algorithm is not required, thereby reducing the calculation resource and processing time of cell recognition, and significantly improving the efficiency of recognizing the positive cell and the negative cell from the microscope image.
In addition, according to the scheme shown in the embodiment of the application, compared with the scheme for cell identification through a machine learning model, a large number of manual labels are not needed to train the model, and the labor and time for labeling at a cell level are saved, so that the development period can be shortened, and the development and updating efficiency of products is improved.
In addition, in the scheme shown in the embodiment of the present application, before the central point of the connected region of each cell is obtained as the central point of each cell, the connected region is screened according to the area of the connected region, the connected region which does not meet the condition is excluded, noise interference generated in the previous step is reduced, and the accuracy of subsequent cell identification is improved.
In addition, in the scheme shown in the embodiment of the present application, after the central point of the connected region of each cell is obtained as the central point of each cell, the central points of each cell are also merged according to the euclidean distance between adjacent central points, so that the influence of over-segmentation in the edge segmentation step is reduced, and the accuracy of subsequent cell identification is further improved.
In addition, in the scheme shown in the embodiment of the application, before edge segmentation is performed, the target channel image is filtered through a bilateral filtering algorithm, so that the edge information of the cells in the target channel image can be effectively retained, the accuracy of subsequent edge segmentation is improved, and the accuracy of subsequent cell identification is further improved.
In addition, in the scheme shown in the embodiment of the application, in the process of acquiring the gray value of the center point of each cell, the gray value image obtained by converting the color microscopic image is subjected to median filtering, and the gray value of the center point of each cell is acquired through the gray value image after the median filtering, so that noise interference is effectively reduced, and the accuracy of subsequent cell identification is further improved.
In a possible implementation manner, the scheme shown in fig. 2 or fig. 3 may be implemented to provide services to the outside by means of a software interface, that is, a user (such as a doctor) may access the software interface providing the cell identification service through a terminal, input the color microscope image of the cell sample to the software interface, and receive the cell identification result returned by the cell identification service through the software interface.
The cell identification service may be executed on the terminal side (i.e., the terminal performs the steps shown in FIG. 2 or FIG. 3) or on the server side (i.e., the server performs the steps shown in FIG. 2 or FIG. 3 and returns the identification result to the terminal).
Taking the scenario that the scheme shown in fig. 2 or fig. 3 is applied to identify the Ki-67 positive index of the cell as an example, please refer to fig. 5, which shows a flow chart of identifying the Ki-67 positive index of the cell according to an exemplary embodiment of the present application. As shown in fig. 5, the above process of identifying Ki-67 positive indices of cells is mainly divided into four steps of preprocessing S51, cell segmentation S52, post-processing S53 and cell classification counting S54, each of which can be divided into the following sub-steps:
the preprocessing step S51 may be performed by a preprocessing module, and the preprocessing step may include the following sub-steps:
s51a, carrying out color channel decomposition on the input original microscopic image to obtain a multicolor space image, namely an H-E-DAB color space image.
Wherein the original microscopic image is a color microscopic image obtained by collecting a cell sample group which is subjected to color development by an immunohistochemical method under a microscope.
S51b, an H-channel image is extracted from the multicolor space image.
S51c, performs noise reduction processing on the H-channel image by using a bilateral filtering method.
The cell segmentation step S52 may be performed by a cell segmentation module and may include the following sub-steps:
S52a, process the noise-reduced H-channel image with a cell region segmentation method to obtain one input of the watershed segmentation algorithm.
S52b, process the noise-reduced H-channel image with a local extremum extraction method to obtain the other input of the watershed segmentation algorithm.
S52c, process the two inputs with the watershed algorithm to obtain the edge information of each cell in the H-channel image, thereby achieving instance segmentation of the cells.
The post-processing step S53 may be performed by a post-processing module and may include the following sub-steps:
S53a, select the connected regions of cells whose area is appropriate by an area-based method, and exclude the connected regions whose area is inappropriate.
S53b, calculate the central point of each selected connected region.
S53c, merge the central points of the cells by Euclidean distance to reduce the influence of over-segmentation and remove false-positive cells.
The cell classification and counting step S54 may be performed by a cell classification and counting module and may include the following sub-steps:
S54a, perform median filtering on the grayscale image of the original microscopic image, obtain the gray value of the central point of each cell, and classify the cells as negative or positive by thresholding.
S54b, count the numbers of negative and positive cells, and output the Ki-67 positive index of the original microscopic image.
Optionally, in the scheme shown in fig. 5 above, the input is a microscope field-of-view image of a Ki-67-stained slide, and the output is the positive cell markers (red) and negative cell markers (green), together with the Ki-67 positive index.
In the scheme shown in fig. 5, bilateral filtering is adopted instead of Gaussian filtering to denoise the H-channel image; bilateral filtering can effectively retain the edge information in the image while removing background noise, thereby improving the accuracy of cell segmentation.
In the scheme shown in fig. 5, a post-processing operation is added to the cell segmentation result of the watershed algorithm to screen the segmented cells. The post-processing operation consists of two steps: 1. set a cell area threshold and remove segmented regions that do not meet it; 2. set a Euclidean distance threshold between cells and merge any two cells closer than it. This post-processing filters out non-cell regions caused by noise and alleviates the over-segmentation problem of the watershed algorithm, so the Ki-67 positive index is calculated more accurately.
In addition, in the scheme shown in fig. 5, the geometric central point of each segmented cell region is calculated, and the cells are classified according to the gray values of the central point and of the surrounding pixels within a certain range, which reduces the computational complexity.
Through the scheme shown in the embodiment of the application, the following effects can be achieved:
1) Improved cell recognition efficiency. In practice, the time to process one microscopic image with the scheme shown in fig. 5 can be shortened to less than 1 second, which assists pathologists in diagnosis, greatly reduces the workload of manual cell counting, and improves their working efficiency.
2) The user (pathologist) can outline a region of interest, and the algorithm calculates the Ki-67 positive index only within that region.
3) The algorithm is an unsupervised image processing method; compared with supervised methods, it requires no time-consuming manual labeling or model training.
4) The algorithm can be deployed on an ordinary desktop or notebook computer, without depending on hardware environments such as a high-compute GPU.
5) Strong applicability: the algorithm can process Ki-67 images of different disease types, such as colorectal, breast, and neuroendocrine. For example, please refer to fig. 6 to fig. 8.
Fig. 6 shows a comparison of identification results of colorectal cells according to an example of the present application. On the left is the original color microscopic image of colorectal cells; on the right is the image with the cells color-marked and the Ki-67 positive index (69%) overlaid.
Fig. 7 shows a comparison of identification results of breast cells according to an example of the present application. On the left is the original color microscopic image of breast cells; on the right is the image with the cells color-marked and the Ki-67 positive index (42%) overlaid.
Fig. 8 shows a comparison of identification results of neuroendocrine tumor cells according to an example of the present application. On the left is the original color microscopic image of neuroendocrine tumor cells; on the right is the image with the cells color-marked and the Ki-67 positive index (1%) overlaid.
In an exemplary embodiment of the present application, there is also provided a system for identifying cells in a microscopic image, the system comprising a microscope and an image processing apparatus.
The image processing device may be used to perform all or part of the steps of the method of identifying cells in a microscopic image shown in fig. 2, fig. 3, or fig. 5 above.
In a possible implementation manner, the microscope may be an intelligent microscope that integrates computing, network communication, image acquisition, and graphic display functions into a conventional optical microscope. For example, the microscope may be the microscope 120 in the system shown in fig. 1, and the image processing device may be the terminal 140 or the server 160 in the system shown in fig. 1.
In an exemplary scheme, the image processing device may expose a software interface externally, and the microscope and the image processing device exchange data through this software interface; that is, the image processing device provides its services to the microscope in the form of a software interface.
For example, the microscope may send the color microscopic image to the image processing device through the software interface, and the image processing device receives the color microscopic image accordingly. After identifying the positive cells and negative cells in the color microscopic image, the image processing device returns a processing result to the microscope through the software interface, where the processing result includes the cell marker image and the proportion of positive cells; the cell marker image is an image obtained by color-marking each cell in the color microscopic image according to its cell type. After receiving the processing result through the software interface, the microscope can display the processing result in its eyepiece.
The process of obtaining the cell marker image and the proportion of positive cells can refer to the description in the embodiment shown in fig. 3, and is not repeated here.
For example, taking the identification of the Ki-67 positive index of cells as an example: a doctor places a cell sample under the objective of the microscope and switches the slide view of the microscope to the Ki-67 view. The microscope then collects a color microscopic image of the Ki-67 view through its built-in image acquisition assembly and transmits the image through the software interface to a server (i.e., the image processing device). The server performs cell identification by the scheme shown in the method embodiments above and, after obtaining the cell marker image and the proportion of positive cells, returns them to the microscope. The microscope can then display in its eyepiece the cell marker image with the proportion of positive cells superimposed; for examples of such images, refer to fig. 6 to fig. 8 above. Because the recognition algorithm in the present application needs no complex machine learning model, the time consumed by cell recognition is significantly shortened, and the recognition result is displayed directly in the eyepiece of the microscope; a doctor observing a cell sample through the microscope can therefore see the recognition result of positive cells within a very short delay (within 1 s) and check the cell sample and the recognition result synchronously, achieving a "what you see is what you get" effect and greatly improving the doctor's diagnostic efficiency.
Fig. 9 is a block diagram illustrating a structure of an apparatus for identifying cells in a microscopic image according to an exemplary embodiment. The means for identifying cells in the microscopic image may be executed by a computer device (such as the terminal and/or server shown in fig. 1) to perform all or part of the steps of the method shown in the corresponding embodiment of fig. 2, 3 or 5. The means for identifying cells in the microscopic image may comprise:
an edge segmentation module 901, configured to perform edge segmentation on each cell in the color microscopic image to obtain a cell edge image, where the cell edge image indicates an edge of each cell; the color microscopic image is an image obtained by collecting the cell sample group under a microscope field after the cells in the cell sample group are subjected to color marking;
a central position obtaining module 902, configured to obtain, according to the cell edge image, positions of central points of the cells in the color microscope image;
a gray value obtaining module 903, configured to obtain a gray value of the center point of each cell according to the position of the center point of each cell in the color microscope image;
a cell identification module 904, configured to identify a cell type of each cell according to the gray value of the central point of each cell, where the cell type is a positive cell or a negative cell.
In a possible implementation manner, the central location obtaining module 902 includes:
a connected region determining unit configured to determine a connected region of each cell based on the edge of each cell indicated by the cell edge image;
a central point acquisition unit configured to acquire a central point of the connected region of each cell as a central point of each cell;
and the position acquisition unit is used for acquiring the position of the central point of each cell in the color microscopic image according to the mapping relation between each pixel point in the cell edge image and each pixel point in the color microscopic image.
In a possible implementation manner, the central location obtaining module 902 further includes:
an area acquisition unit configured to acquire an area of the connected region of each cell before the central point acquisition unit acquires the central point of the connected region of each cell as the central point of each cell;
the regional filtering unit is used for acquiring the communication region with the corresponding area meeting the filtering condition as the communication region of each filtered cell; the filtering condition comprises that the corresponding area is in a designated area interval;
the central point acquiring unit is configured to acquire a central point of the filtered communication area of each cell as a central point of each cell.
In a possible implementation manner, the central location obtaining module 902 further includes:
a merging unit, configured to merge the central points of the cells according to a merging condition before the gray value obtaining module 903 obtains the position of the central point of each cell in the color microscope image according to the mapping relationship between each pixel point in the cell edge image and each pixel point in the color microscope image, so as to obtain the central point of each cell after merging;
the gray value obtaining module 903 is configured to obtain, according to a mapping relationship between each pixel point in the cell edge image and each pixel point in the color microscope image, a position of a center point of each cell in the color microscope image after combination.
In a possible implementation manner, the merging unit is configured to,
for a first central point in the central points of the cells, determining a second central point corresponding to the first central point, wherein the first central point is any one of the central points of the cells, and the second central point is the central point of other cells closest to the first central point;
when the Euclidean distance between the first central point and the second central point is smaller than a distance threshold, combining the first central point and the second central point into a third central point, wherein the third central point is located at the midpoint of a connecting line between the first central point and the second central point.
In a possible implementation manner, the gray value obtaining module 903 includes:
the gray processing unit is used for carrying out gray processing on the color microscopic image to obtain a gray image;
the first filtering unit is used for carrying out median filtering on the gray level image to obtain the gray level image after the median filtering;
and the gray value acquisition unit is used for acquiring the gray value of the central point of each cell in the gray image after median filtering according to the position of the central point of each cell in the color microscopic image.
In a possible implementation manner, the edge segmentation module 901 includes:
the channel decomposition unit is used for carrying out color channel decomposition on the color microscopic image to obtain color channel images corresponding to at least two color spaces;
a channel extraction unit, configured to extract a target channel image corresponding to a target color space from the color channel image;
the second filtering unit is used for filtering the target channel image to obtain a filtered image;
and the segmentation unit is used for performing edge segmentation on the filtered image to obtain the cell edge image.
In a possible implementation manner, the second filtering unit is configured to filter the target channel image through a bilateral filtering algorithm to obtain the filtered image.
In a possible implementation manner, the segmentation unit is configured to perform edge segmentation on the filtered image through a watershed algorithm to obtain the cell edge image.
In one possible implementation, the apparatus further includes:
a color labeling module 905, configured to perform color labeling on each cell in the color microscopic image according to the cell type of each cell, so as to obtain a cell labeling image;
an image output module 906, configured to output the cell marker image.
In one possible implementation, the apparatus further includes:
a counting module 907 for counting the number of positive cells in each cell and the number of negative cells in each cell;
a proportion calculation module 908 for calculating a proportion of positive cells in each of the cells according to the number of positive cells in each of the cells and the number of negative cells in each of the cells;
a ratio output module 909 for outputting the ratio of the positive cells in the respective cells.
In one possible implementation, the positive cells are cells comprising the Ki-67 protein.
In summary, in the solution shown in the embodiments of the present application, after the color microscopic image containing the cells is processed into a cell edge image indicating the edges of the cells, the central point of each cell is determined from those edges, and positive and negative cells are distinguished by the gray value of each central point in the color microscopic image. Image recognition by complex machine learning algorithms is therefore not required, which reduces the computational resources and processing time of cell recognition and significantly improves the efficiency of recognizing positive and negative cells in microscopic images.
In addition, compared with cell identification by a machine learning model, the scheme shown in the embodiments of the present application needs no large number of manual labels to train a model, saving the labor and time of cell-level labeling, which shortens the development cycle and improves the efficiency of product development and updating.
FIG. 10 is a block diagram illustrating a computer device according to an example embodiment. The computer device may be implemented as a terminal, such as terminal 140 in the system of fig. 1, or as a server, such as server 160 in the system of fig. 1.
The computer apparatus 1000 includes a central processing unit (CPU) 1001, a system memory 1004 including a random access memory (RAM) 1002 and a read-only memory (ROM) 1003, and a system bus 1005 connecting the system memory 1004 and the central processing unit 1001. The computer device 1000 also includes a basic input/output system (I/O system) 1006, which facilitates the transfer of information between devices within the computer, and a mass storage device 1007, which stores an operating system 1013, application programs 1014, and other program modules 1015.
The basic input/output system 1006 includes a display 1008 for displaying information and an input device 1009, such as a mouse or keyboard, through which a user inputs information. The display 1008 and the input device 1009 are both connected to the central processing unit 1001 through an input/output controller 1010 connected to the system bus 1005. The basic input/output system 1006 may also include the input/output controller 1010 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input/output controller 1010 also provides output to a display screen, a printer, or another type of output device.
The mass storage device 1007 is connected to the central processing unit 1001 through a mass storage controller (not shown) connected to the system bus 1005. The mass storage device 1007 and its associated computer-readable media provide non-volatile storage for the computer device 1000. That is, the mass storage device 1007 may include a computer readable medium (not shown) such as a hard disk or CD-ROM drive.
Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 1004 and mass storage device 1007 described above may be collectively referred to as memory.
The computer device 1000 may be connected to the internet or other network devices through a network interface unit 1011 connected to the system bus 1005.
The memory further includes one or more programs, the one or more programs are stored in the memory, and the central processing unit 1001 implements all or part of the steps of the method shown in fig. 2, 3, or 5 by executing the one or more programs.
An embodiment of the present application further provides a computer device, which includes a memory and a processor, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded by the processor and implements all or part of the steps in the method described in fig. 2, fig. 3, or fig. 5.
Embodiments of the present application also provide a computer-readable storage medium, which stores at least one instruction, at least one program, a code set, or an instruction set, where the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement all or part of the steps in the method shown in fig. 2, fig. 3, or fig. 5.
The present application also provides a computer program product for causing a computer to perform all or part of the steps of the method described above with reference to fig. 2, 3 or 5 when the computer program product runs on the computer.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware, where the program may be stored in a computer-readable storage medium; the computer-readable storage medium may be one contained in the memory of the above embodiments, or a separate computer-readable storage medium not assembled into the terminal. The computer-readable storage medium stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement all or part of the steps of the method shown in fig. 2, fig. 3, or fig. 5.
Optionally, the computer-readable storage medium may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a Solid State Drive (SSD), or an optical disc. The Random Access Memory may include a resistive Random Access Memory (ReRAM) and a Dynamic Random Access Memory (DRAM). The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (14)

1. A method of identifying cells in a microscopic image, the method being performed by a computer device, the method comprising:
performing edge segmentation on each cell in the color microscopic image to obtain a cell edge image, wherein the cell edge image indicates the edge of each cell; the color microscopic image is an image obtained by collecting the cell sample group under a microscope field after the cells in the cell sample group are subjected to color marking;
determining a connected region of each cell according to the edge of each cell indicated by the cell edge image;
acquiring the central point of the connected region of each cell as the central point of each cell;
combining the central points of the cells according to a combination condition to obtain the combined central points of the cells;
acquiring the position of the central point of each cell in the color microscopic image according to the mapping relation between each pixel point in the cell edge image and each pixel point in the color microscopic image;
acquiring the gray value of the central point of each combined cell according to the position of the central point of each combined cell in the color microscopic image;
and identifying the cell type of each cell according to the gray value of the central point of each cell after combination, wherein the cell type is a positive cell or a negative cell.
2. The method of claim 1, wherein before obtaining the center point of the connected region of each cell as the center point of each cell, further comprising:
obtaining the area of the connected region of each cell;
acquiring the connected region with the corresponding area meeting the filtering condition as the connected region of each filtered cell; the filtering condition comprises that the corresponding area is in a designated area interval;
the obtaining the central point of the connected region of each cell as the central point of each cell includes:
and acquiring the central point of the filtered connected region of each cell as the central point of each cell.
3. The method of claim 1,
the obtaining the position of the center point of each cell in the color microscopic image according to the mapping relationship between each pixel point in the cell edge image and each pixel point in the color microscopic image includes:
and acquiring the position of the center point of each cell in the color microscopic image after combination according to the mapping relation between each pixel point in the cell edge image and each pixel point in the color microscopic image.
4. The method according to claim 3, wherein said combining the central points of the cells according to the combining condition to obtain the combined central points of the cells comprises:
for a first central point in the central points of the cells, determining a second central point corresponding to the first central point, wherein the first central point is any one of the central points of the cells, and the second central point is the central point of other cells closest to the first central point;
when the Euclidean distance between the first central point and the second central point is smaller than a distance threshold, combining the first central point and the second central point into a third central point, wherein the third central point is located at the midpoint of a connecting line between the first central point and the second central point.
5. The method according to claim 1, wherein the obtaining the gray value of the central point of each cell according to the position of the central point of each cell in the color microscope image comprises:
carrying out gray level processing on the color microscopic image to obtain a gray level image;
performing median filtering on the gray level image to obtain the gray level image after the median filtering;
and acquiring the gray value of the central point of each cell in the gray image after median filtering according to the position of the central point of each cell in the color microscopic image.
6. The method of claim 1, wherein the edge segmentation of each cell in the color microscopic image to obtain a cell edge image comprises:
carrying out color channel decomposition on the color microscopic image to obtain color channel images corresponding to at least two color spaces;
extracting a target channel image corresponding to a target color space from the color channel image;
filtering the target channel image to obtain a filtered image;
and performing edge segmentation on the filtered image to obtain the cell edge image.
7. The method of claim 6, wherein the filtering the target channel image to obtain a filtered image comprises:
and filtering the target channel image through a bilateral filtering algorithm to obtain the filtered image.
8. The method of claim 1, further comprising:
according to the cell type of each cell, carrying out color marking on each cell in the color microscopic image to obtain a cell marking image;
outputting the cell marker image.
9. The method of claim 1, further comprising:
counting the number of positive cells in each cell and the number of negative cells in each cell;
calculating the proportion of positive cells in each cell according to the number of positive cells in each cell and the number of negative cells in each cell;
outputting the proportion of positive cells in the respective cells.
10. An apparatus for identifying cells in a microscopic image, the apparatus being for use in a computer device, the apparatus comprising:
the edge segmentation module is used for performing edge segmentation on each cell in the color microscopic image to obtain a cell edge image, and the cell edge image indicates the edge of each cell; the color microscopic image is an image obtained by collecting the cell sample group under a microscope field after the cells in the cell sample group are subjected to color marking;
the central position acquisition module is used for acquiring the position of the central point of each cell in the color microscopic image according to the cell edge image;
the central position acquisition module includes:
a connected region determining unit configured to determine a connected region of each cell based on the edge of each cell indicated by the cell edge image;
a central point acquisition unit configured to acquire a central point of the connected region of each cell as a central point of each cell;
a merging unit, configured to merge the central points of the cells according to a merging condition, so as to obtain a merged central point of each cell;
the position acquisition unit is used for acquiring the position of the central point of each cell in the color microscopic image according to the mapping relation between each pixel point in the cell edge image and each pixel point in the color microscopic image;
the gray value acquisition module is used for acquiring the gray value of the central point of each combined cell according to the position of the central point of each combined cell in the color microscopic image;
and the cell identification module is used for identifying the cell type of each cell according to the gray value of the central point of each cell after combination, wherein the cell type is a positive cell or a negative cell.
11. A computer device comprising a processor and a memory, said memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, said at least one instruction, said at least one program, said set of codes, or said set of instructions being loaded and executed by said processor to implement a method of identifying cells in a microscopic image according to any one of claims 1 to 9.
12. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the method of identifying cells in a microscopic image according to any one of claims 1 to 9.
13. A system for identifying cells in a microscopic image, the system comprising: a microscope and an image processing apparatus;
the image processing apparatus for performing the method of identifying cells in a microscopic image according to any one of claims 1 to 9.
14. The system according to claim 13, wherein the image processing apparatus externally provides a software interface;
the microscope is used for sending the color microscopic image to the image processing equipment through the software interface;
the image processing device is used for returning a processing result to the microscope through the software interface, and the processing result comprises a cell marker image and the proportion of positive cells; the cell marker image is an image obtained by color-marking each cell in the color microscopic image according to the cell type;
the microscope is used for displaying the processing result in an eyepiece of the microscope.
CN201910784865.4A 2019-08-23 2019-08-23 Method, device, equipment and storage medium for identifying cells in microscopic image Active CN110490159B (en)

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant