CN109975196B - Reticulocyte detection method and system - Google Patents

Reticulocyte detection method and system

Info

Publication number: CN109975196B (application CN201910154979.0A; earlier published as CN109975196A, in Chinese)
Authority: CN (China)
Inventors: 钟小品, 郭俊佳
Original and current assignee: Shenzhen University
Application filed by Shenzhen University
Legal status: Active (granted)
Prior art keywords: cell, image, pixel, area, reticulocyte

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 15/00 Investigating characteristics of particles; Investigating permeability, pore-volume or surface-area of porous materials
    • G01N 15/10 Investigating individual particles
    • G01N 2015/1006 Investigating individual particles for cytology


Abstract

The invention discloses a reticulocyte detection method and system. The method comprises the following steps: extracting the cell edge from the cell image and obtaining the pixel features within the cell edge; feeding the pixel features into a classifier to classify the pixels and obtain target pixel regions; and identifying reticulocytes through the positional relationship between the target pixel regions and the cell edge. Because the pixel features are classified by a classifier and reticulocytes are detected and identified through the positional relationship between the target pixel regions and the cell edge, the precision and recall of reticulocyte detection can be improved.

Description

Reticulocyte detection method and system
Technical Field
The invention relates to the technical field of reticulocyte detection, in particular to a reticulocyte detection method and a reticulocyte detection system.
Background
Reticulocytes are transient cells in red blood cell maturation, spanning the stage from expulsion of the nucleus to the loss of the remaining intracellular RNA, at which point a mature erythrocyte is formed. Normally, red blood cells in the bone marrow are released into the blood circulation only after they have become anucleated.
In the prior art, reticulocytes are detected mainly by manual microscopy or by instruments based on flow cytometry. Manual microscopy is inexpensive but poorly reproducible and easily influenced by subjective factors. Instrument methods based on flow cytometry overcome these defects but are easily interfered with by white blood cells, platelets and other nucleated or nucleic-acid-containing substances in the blood; in particular, detection accuracy drops when the reticulocyte proportion rises to around 20%.
Accordingly, the prior art is yet to be improved and developed.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a method and a system for detecting reticulocytes, aiming at solving the problem of low accuracy of detecting reticulocytes in the prior art.
The technical scheme adopted by the invention for solving the technical problem is as follows:
a reticulocyte detection method comprises the following steps:
extracting a cell edge according to the cell image, and obtaining pixel characteristics in the cell edge;
sending the pixel characteristics into a classifier to classify the pixels to obtain a target pixel area;
and identifying the reticulocytes through the position relation of the target pixel area and the cell edge.
The reticulocyte detection method is characterized in that the classifier is obtained by adopting the following steps:
adopting a pixel set of an RNA staining area in a typical reticulocyte as a positive sample set, and adopting a pixel set of a non-RNA staining area and an internal pixel set of the non-reticulocyte as a negative sample set, and extracting pixel characteristics;
and carrying out supervised learning and cross validation on the pixel characteristics of the marked positive and negative samples to form a classifier.
The reticulocyte detection method, wherein the step of extracting the cell edge according to the cell image specifically comprises:
decoupling the cell image to obtain a coupling background image;
decoupling the cell image according to the coupling background image and then normalizing to obtain a decoupling image;
carrying out binarization processing on the decoupling image to obtain a binary image;
and performing topology analysis on the binary image, and tracking the outermost boundary to obtain the cell edge.
The reticulocyte detection method, wherein the coupled background map is calculated with the following formula:

$$\bar{I} = \frac{1}{n}\sum_{i=1}^{n} I_i$$

wherein $\bar{I}$ is the background image, $n$ represents the number of selected cell images, $I_i$ represents the $i$-th gray-scale map, and $\Sigma$ is the summation operation.
The reticulocyte detection method, wherein the decoupled image is calculated with the following formula:

$$I'_i = \frac{(I_i - \bar{I}) - \min\!\left[I_i - \bar{I}\right]}{\max\!\left[I_i - \bar{I}\right] - \min\!\left[I_i - \bar{I}\right]}$$

wherein $I'_i$ is the decoupled image of the $i$-th gray-scale map, $\min[\cdot]$ denotes the minimum operation, and $\max[\cdot]$ denotes the maximum operation.
The reticulocyte detection method, wherein the step of performing binarization processing on the decoupled image specifically comprises:
performing binarization on the decoupled image with the Otsu algorithm, the Niblack algorithm and the Canny operator to obtain the Otsu, Niblack and Canny result maps respectively;
performing an OR operation on the Otsu, Niblack and Canny result maps, then filling holes, denoising and applying morphological processing to obtain a binary image.
The reticulocyte detection method, wherein the step of identifying the reticulocyte through the position relationship between the target pixel area and the cell edge specifically comprises:
determining whether the target pixel region is within the cell by the number of times the ray crosses the cell boundary;
when there is only one target pixel region in the cell and its area is larger than a first preset area threshold, the cell is a reticulocyte;
when the number of target pixel regions in the cell exceeds a preset number and a region's area is larger than a second preset area threshold, the cell is a reticulocyte.
The reticulocyte detection method, wherein the slope of the ray is k:
Figure BDA0001982614540000031
wherein x isc、ycRespectively the abscissa and ordinate, x, of the centroid of the target pixel regionj、yjRespectively, are the abscissa and ordinate vectors of the edge coordinates of a single cell.
The reticulocyte detection method, wherein the pixel characteristics include: color space conversion, color features, and texture features.
A reticulocyte detection system, comprising: a processor, and a memory coupled to the processor,
the memory stores a reticulocyte detection program that when executed by the processor implements the steps of:
extracting a cell edge according to the cell image, and obtaining pixel characteristics in the cell edge;
sending the pixel characteristics into a classifier to classify the pixels to obtain a target pixel area;
and identifying the reticulocytes through the position relation of the target pixel area and the cell edge.
Advantageous effects: since the pixel features are classified by a classifier and reticulocytes are detected and identified through the positional relationship between the target pixel regions and the cell edge, the precision and recall of reticulocyte detection can be improved.
Drawings
FIG. 1 is a flow chart of a preferred embodiment of the method for detecting reticulocytes of the present invention.
FIG. 2 is a first image of reticulocytes of the present invention.
FIG. 3 is a second image of reticulocytes of the present invention.
FIG. 4 is a third image of reticulocytes of the present invention.
FIG. 5 is a fourth image of reticulocytes of the present invention.
FIG. 6 is a functional block diagram of a preferred embodiment of the reticulocyte detection system of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
Referring to FIGS. 1-5, several embodiments of a method for detecting reticulocytes are provided.
As shown in fig. 1, the method for detecting reticulocytes of the present invention comprises the following steps:
and S100, extracting the cell edge according to the cell image, and obtaining the pixel characteristics in the cell edge.
In the present invention, cellular RNA is stained with a basic dye (brilliant cresyl blue, new methylene blue, etc.): the RNA remaining in reticulocytes is stained blue or blue-green, while mature erythrocytes, which retain no RNA, are not stained. After staining, a camera captures the cell image and the cell edge contour is extracted; this process mainly comprises decoupling, binarization and edge extraction.
Specifically, step S100 includes the steps of:
and step S110, decoupling the cell image to obtain a coupling background image.
Decoupling is the process of separating mutually coupled factors by mathematical means. The collected cell images suffer from lens contamination, over- or under-exposure and similar problems; considering that consecutively collected blood-sample pictures share similar imaging conditions, the decoupling process of the invention is as follows:
the coupling background map is calculated by adopting the following formula:
Figure BDA0001982614540000051
wherein the content of the first and second substances,
Figure BDA0001982614540000052
for the background image, n represents the number of selected cell images, IiThe ith gray scale map is represented, and Σ is the summation operation.
And S120, decoupling the cell image according to the coupling background image and then normalizing to obtain a decoupling image.
Compared with the original cell image, the decoupled image removes exposure unevenness and most lens contamination, enhancing the contrast between the cell foreground and the background and improving the binarization result.
The decoupled image is calculated with the following formula:

$$I'_i = \frac{(I_i - \bar{I}) - \min\!\left[I_i - \bar{I}\right]}{\max\!\left[I_i - \bar{I}\right] - \min\!\left[I_i - \bar{I}\right]}$$

wherein $I'_i$ is the decoupled image of the $i$-th gray-scale map, $\min[\cdot]$ denotes the minimum operation, and $\max[\cdot]$ denotes the maximum operation.
And S130, carrying out binarization processing on the decoupling image to obtain a binary image.
The binarization operation in cell segmentation mostly adopts a combined method: global and local binarization determine the cell area, which is further combined with the cell edge so that the edge contour becomes smoother and more accurate. The combination here builds on the good segmentation performance of the Otsu and Niblack algorithms for global and local thresholding, and on the excellent edge-detection performance of the Canny operator [8].
Step S130 specifically includes:
step S131, based on an Otsu algorithm, a Niblack algorithm and a Canny operator, carrying out binarization processing on the decoupling image to obtain operation result graphs of Otsu, Niblack and Canny respectively.
And S132, performing OR operation on the operation result graphs of Otsu, Niblack and Canny, and filling the hole, denoising and morphologically processing to obtain a binary image.
Specifically, the OR of the three (Otsu, Niblack and Canny) results is:

$$I_B = I_O \mid I_N \mid I_C$$

wherein $I_O$, $I_N$ and $I_C$ are the Otsu, Niblack and Canny results respectively, and $I_B$ is the result map of their OR operation. The Otsu, Niblack and Canny result maps have edge gaps at different positions for the same cell; after the OR operation, hole filling, denoising and morphological processing, the cells in the binary image are well closed and their edges are more accurate.
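To make the combination step concrete, here is a compact histogram-based Otsu threshold together with the OR-union of three binary maps. Only the union I_B = I_O | I_N | I_C is taken from the text; the Otsu implementation is the standard between-class-variance version, and the Niblack and Canny maps are assumed to come from existing library implementations.

```python
import numpy as np

def otsu_threshold(gray):
    """Standard Otsu: pick the 8-bit threshold maximizing between-class variance."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # class-0 probability
    mu = np.cumsum(p * np.arange(256))         # class-0 cumulative mean
    mu_t = mu[-1]                              # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b))

def combine_binary(i_otsu, i_niblack, i_canny):
    """I_B = I_O | I_N | I_C: union of the three binarization results."""
    return i_otsu | i_niblack | i_canny
```

Hole filling, denoising and morphological processing would then be applied to the combined map before contour tracing.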
And step S140, carrying out topology analysis on the binary image, and tracking the outermost boundary to obtain the cell edge.
The more accurate the binary image, the simpler the topological analysis, and the cell edge contour can be obtained by only tracing the outermost boundary.
And S150, extracting pixel characteristics in the cell edge.
The pixel feature extraction is carried out based on the binary image, which is beneficial to improving the calculation speed and reducing the interference of an extracellular staining part. The pixel features include: color space conversion, color features, and texture features.
(1) Color space conversion.
In practical applications it is difficult to find a single color space that resolves all colors well. The RGB channels are highly correlated, and luminance and chrominance are strongly mixed; in HSI, chrominance and luminance are completely separated, which gives the best results when searching for targets of various colors; LUV is perceptually uniform and performs best in multi-color-space comparisons. Therefore RGB, HSI and LUV are selected here as the comparison spaces: the color features below are extracted in each of the three spaces, their classification performance is compared alone and in combination with texture features, and the LUV space is finally chosen for color-feature extraction.
(2) Color feature extraction.
Following the local spatial similarity model, the influence of spatial factors on the central pixel is measured with a Gaussian function, as commonly used for smoothing in image filtering.
a. A local window is created.
With pixel $m$ as the center, create a window of size $d \times d$. Taking $5 \times 5$ as an example, $m$ is the window center and $n$ is a pixel at an arbitrary position within the window; their coordinates are $(x_m, y_m)$ and $(x_n, y_n)$, and their gray values are $g_m$ and $g_n$ respectively.
b. Local spatial features are calculated.
The local spatial feature mainly considers the distance factor, i.e. the influence that a pixel's distance from the target pixel has on the target pixel. In a two-dimensional image this is expressed through the pixel coordinates.
The spatial feature $sf_{mn}$ is measured with the Gaussian function widely used in image processing:

$$sf_{mn} = \exp\!\left(-\frac{(x_m - x_n)^2 + (y_m - y_n)^2}{2\sigma_s^2}\right)$$

wherein $\sigma_s$ represents the common standard deviation of the x-direction and y-direction values, and also serves to weight the pixels according to distance.
c. Local gray level features are calculated.
The local gray-level feature $gf_{mn}$ measures local intensity inhomogeneity. In a gray-scale image it is expressed as a functional relation between the gray values of the pixels:

$$gf_{mn} = \exp\!\left(-\frac{(g_m - g_n)^2}{\lambda_g\,\sigma_g^2}\right)$$

wherein $\lambda_g$ is a global scale factor and $\sigma_g^2$ is a density function that reflects local gray-level non-uniformity.
d. Pixel level features are generated.
The final pixel-level color feature $pcf_m$ is:

$$pcf_m = \frac{1}{N_m}\sum_{n \in W_m} sf_{mn}\, gf_{mn}\, g_n$$

wherein $N_m$ represents the number of pixels in the window centered on $m$, $W_m$ denotes that window, and $\Sigma$ is the summation operation.
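Assuming the window-averaged combination of the spatial and gray-level weights described above (the exact combination rule is not fully recoverable from the text, so this is a sketch under that assumption, with the window variance standing in for the local density term), the pixel-level feature could be computed as:

```python
import numpy as np

def pixel_color_feature(gray, row, col, d=5, sigma_s=1.0, lambda_g=1.0):
    """pcf for the d x d window centred at (row, col): Gaussian spatial weight
    sf times Gaussian gray-level weight gf, averaged over the window."""
    r = d // 2
    g_m = float(gray[row, col])
    win = gray[row - r:row + r + 1, col - r:col + r + 1].astype(float)
    sigma_g2 = win.var() + 1e-12      # local density term (assumed: window variance)
    total, count = 0.0, 0
    for i in range(row - r, row + r + 1):
        for j in range(col - r, col + r + 1):
            sf = np.exp(-((i - row) ** 2 + (j - col) ** 2) / (2 * sigma_s ** 2))
            gf = np.exp(-((float(gray[i, j]) - g_m) ** 2) / (lambda_g * sigma_g2))
            total += sf * gf * float(gray[i, j])
            count += 1                # count == N_m, the window size
    return total / count
```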
(3) Texture feature extraction.
Texture features are among the methods commonly used in image segmentation, and are usually combined with color features for better results. In image processing, texture appears as some repetitive pattern together with the frequency at which that pattern occurs. The invention compares one signal-processing method and two statistical methods.
a. Color space conversion.
Since the RNA-stained regions of reticulocytes collected by the invention are stained blue, the texture features are computed on the Cr channel of the YCbCr color space. Before the calculation, the Y channel is processed with nonlinear median filtering to suppress the fine texture produced by gray-level variation in the image, thereby enhancing the subband coefficients of interest.
b. A measure of the energy of the sub-band coefficients.
The Gabor filter can produce a frequency band oriented in any direction and thereby measure this energy. A steerable Gabor filter $G(x, y)$, formed as a linear combination of basis filters, can be rotated to any direction and is expressed as follows:

$$G(x, y) = \sum_{k=1}^{K} b_k(\theta)\, H_k(x, y)$$

wherein $b_k(\theta)$ is an interpolation function of an arbitrary rotation angle $\theta$ that controls the direction of the filter, and $H_k(x, y)$ is the impulse response of the $k$-th basis filter rotated to $\theta$. Edges in the image can then be detected by the basis filters in the corresponding directions.
For the Gabor filter, computing the texture feature within the window amounts to performing a convolution over the window; the invention takes the mean of the convolution results over all directions as the feature value of the window's central pixel.
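A minimal sketch of the windowed Gabor texture value: a real-valued Gabor kernel is built per orientation, correlated with the window, and the responses are averaged over directions. All filter-bank parameters (sigma, wavelength, aspect ratio, number of directions) are illustrative choices, not values from the patent.

```python
import numpy as np

def gabor_kernel(ksize, theta, sigma=2.0, lam=4.0, gamma=0.5):
    """Real part of a Gabor kernel oriented at angle theta."""
    r = ksize // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2)) \
        * np.cos(2 * np.pi * xr / lam)

def gabor_feature(gray, row, col, ksize=7, n_dirs=4):
    """Mean windowed Gabor response over n_dirs orientations, used as the
    texture value of the window's central pixel."""
    r = ksize // 2
    win = np.asarray(gray, dtype=float)[row - r:row + r + 1, col - r:col + r + 1]
    responses = [float((win * gabor_kernel(ksize, k * np.pi / n_dirs)).sum())
                 for k in range(n_dirs)]
    return sum(responses) / n_dirs
```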
c. Texture features based on statistical methods.
The gray-level co-occurrence matrix (GLCM) and the local binary pattern (LBP) both count the gray-level variation between pixels within the statistical window. The difference is that GLCM counts relationships between pixel pairs (i.e. two pixels at a time): it represents the ratio of the number of occurrences of a gray-level pair formed in a certain direction to the total number of gray-level pairs possible at the image's gray level, namely:

$$P(g_m, g_n) = \frac{t(g_m, g_n)}{T}$$

wherein $(g_m, g_n)$ are the gray levels of any two pixels in the window, $t(g_m, g_n)$ represents the number of occurrences of that gray-level pair, and $T$ denotes the total number of all possible gray-level pairs in the current image.
LBP, by contrast, counts the differences between the central pixel and its neighborhood. It thresholds at the gray value of the central pixel and then compares all pixels at a fixed distance from the center, forming a binary sequence, namely:

$$LBP_m = \sum_{n=0}^{N-1} s(g_n - g_m)\,2^n, \qquad s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases}$$

wherein $g_m$ is only the central element of the window, and $g_n$ is the actual or interpolated gray value of a pixel at the fixed distance. u-LBP (uniform LBP) keeps only binary sequences in which the number of adjacent 0-1 and 1-0 transitions does not exceed 2 and uses them as the local code of the window; it is more widely used than traditional LBP.
The invention uses the u-LBP local code directly as the texture feature of the window's central pixel. The GLCM result does not directly express a texture feature; it is more sensitive to regions with strong intensity variation (such as edge contours).
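The LBP code and the uniform-pattern test described above can be sketched in a few lines of pure Python (the 8-neighbour ring at distance 1 and the bit ordering are assumptions for illustration):

```python
def lbp_code(gray, row, col):
    """8-neighbour LBP: threshold the ring at the centre's gray value and
    read the bits as a binary code."""
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]        # clockwise neighbours
    centre = gray[row][col]
    bits = [1 if gray[row + dr][col + dc] >= centre else 0 for dr, dc in ring]
    return sum(b << i for i, b in enumerate(bits)), bits

def is_uniform(bits):
    """u-LBP keeps codes with at most two 0-1 / 1-0 transitions on the ring."""
    n = len(bits)
    return sum(bits[i] != bits[(i + 1) % n] for i in range(n)) <= 2
```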
And S200, sending the pixel characteristics into a classifier to classify the pixels to obtain a target pixel area.
The classifier is preset, and specifically, the classifier is obtained by adopting the following steps:
and S10, extracting pixel characteristics by taking the pixel set of the typical RNA staining area in the reticulocyte as a positive sample set and taking the pixel set of the non-RNA staining area and the pixel set in the non-reticulocyte as a negative sample set.
Since the invention classifies features at the pixel level, the sample size would inevitably be large and redundant if taken from whole images, making the final computation time-consuming or even infeasible. Therefore a set of typical reticulocytes and non-reticulocytes (the Cell set) is selected as the training set; these typical reticulocytes and non-reticulocytes are, of course, identified manually.
Several candidate pixel features are then extracted: the Cell set is converted into the three color spaces and color features are extracted in each of them, and in addition six texture features are extracted from the Cr channel; see step S100 for the specific procedures.
And S20, performing supervised learning and cross validation on the pixel characteristics of the marked positive and negative samples to form a classifier.
Cross validation is performed on the candidate features for feature selection. At this stage, for fast computation, the penalty coefficient of the SVM and the expansion constant of the kernel function are both set to 1, and 10-fold averages are used. The best-performing color space and texture features are selected by comparison.
Cross validation is then performed on the selected feature combination, this time to tune the penalty coefficient of the SVM and the expansion constant of the RBF kernel. To speed up tuning, the invention first applies a repeated-undersampling granular support vector machine (RU-GSVM), i.e. the majority class is downsampled while sample edge information is retained as far as possible. The parameters are then tuned with a coarse-to-fine two-layer grid search under 10-fold cross validation. Finally, using the sampled data and the tuned parameters, the SVM is trained to form the classifier (SVM Classifier).
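Assuming a standard scikit-learn workflow (the patent's RU-GSVM undersampling and coarse-to-fine two-layer grid are simplified to a single grid search here, and synthetic data stand in for the pixel-level color/texture features), the classifier training might look like:

```python
# Sketch only: synthetic features stand in for the pixel-level features.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=6, random_state=0)

grid = GridSearchCV(
    SVC(kernel="rbf"),                               # RBF-kernel SVM
    {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},    # coarse grid; refine around the best cell
    cv=10,                                           # 10-fold cross validation
)
grid.fit(X, y)
classifier = grid.best_estimator_
```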
And step S300, identifying the reticulocytes according to the position relation between the target pixel area and the cell edge.
Step S300 specifically includes:
step S310, whether the target pixel area is in the cell is determined through the number of times the ray passes through the cell boundary.
The slope of the ray is $k$:

$$k = \frac{y_c - \frac{1}{n}\sum_{j} y_j}{x_c - \frac{1}{n}\sum_{j} x_j}$$

wherein $x_c$, $y_c$ are respectively the abscissa and ordinate of the centroid of the target pixel region, and $x_j$, $y_j$ are respectively the abscissa and ordinate vectors of the edge coordinates of a single cell.
Specifically, whether the RNA-stained region is inside the cell is determined by the number of times the ray crosses the cell boundary: an odd number of crossings means the region is inside the cell, and an even number means it is outside.
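The odd/even rule is the classic ray-casting point-in-polygon test. The sketch below casts a horizontal ray for simplicity, whereas the patent derives the ray's slope k from the centroid and the cell-edge coordinates:

```python
def inside_cell(point, edge_pts):
    """Count crossings of a rightward horizontal ray from `point` with the
    closed contour `edge_pts`; odd -> inside, even -> outside."""
    x, y = point
    crossings = 0
    n = len(edge_pts)
    for i in range(n):
        (x1, y1), (x2, y2) = edge_pts[i], edge_pts[(i + 1) % n]
        if (y1 > y) != (y2 > y):                      # edge straddles the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:                           # crossing lies to the right
                crossings += 1
    return crossings % 2 == 1
```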
In step S320, when there is only one target pixel region in the cell and the area of the region is greater than the first predetermined area threshold, the cell is a reticulocyte.
Step S330, when the number of target pixel regions in the cell exceeds a predetermined number and the area of the region is greater than a second predetermined area threshold, the cell is a reticulocyte.
As shown in Figs. 2 to 5, the RNA-stained area of a reticulocyte may be either a single connected domain with a large area or several connected domains with small areas, so three thresholds are set: a first preset area threshold, a preset number, and a second preset area threshold. When there is only one target stained domain in the cell, whether the cell is a reticulocyte is judged against the first preset area threshold; when the number of target stained domains in the cell is not less than the preset number and at least one domain has an area larger than the second preset area threshold, the cell is a reticulocyte.
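The two decision cases can be condensed into a small rule. All three thresholds below are illustrative placeholders (the patent does not disclose their values), and the behaviour for region counts between one and the preset number is assumed to default to "not a reticulocyte":

```python
def is_reticulocyte(region_areas, area_thr_single=50, area_thr_multi=20, count_thr=3):
    """region_areas: areas (in pixels) of the stained connected domains
    found inside one cell."""
    if len(region_areas) == 1:                    # one large stained domain
        return region_areas[0] > area_thr_single
    if len(region_areas) >= count_thr:            # several domains, one big enough
        return max(region_areas) > area_thr_multi
    return False
```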
It is worth noting that one color feature is extracted in each of the three color spaces RGB, HSI and LUV, and six texture features, including Gabor features, the gray-level co-occurrence matrix and local contrast, are extracted from the Cr channel of YCbCr; the recognition performance of each color-texture combination is then compared, and the Gabor texture feature combined with the LUV color feature is selected. An SVM classifier then classifies the pixel-level features to detect the RNA-stained regions, and the position, number and area of these regions are used to judge whether the target cell is a reticulocyte. The method achieves a precision of 98.4% and a recall of 98.0% for reticulocytes.
The invention also provides a preferred embodiment of the reticulocyte detection system, which comprises the following steps:
as shown in fig. 6, a reticulocyte detection system according to an embodiment of the present invention includes: a processor, and a memory coupled to the processor,
the memory stores a reticulocyte detection program that when executed by the processor implements the steps of:
extracting a cell edge according to the cell image, and obtaining pixel characteristics in the cell edge;
sending the pixel characteristics into a classifier to classify the pixels to obtain a target pixel area;
the reticulocytes are identified by the positional relationship of the target pixel region to the cell edge, as described above.
When the reticulocyte detection program is executed by the processor, the following steps are realized:
adopting a pixel set of an RNA staining area in a typical reticulocyte as a positive sample set, and adopting a pixel set of a non-RNA staining area and an internal pixel set of the non-reticulocyte as a negative sample set, and extracting pixel characteristics;
supervised learning and cross validation are performed on the pixel features of the marked positive and negative samples to form a classifier, which is specifically described above.
When the reticulocyte detection program is executed by the processor, the following steps are realized:
decoupling the cell image to obtain a coupling background image;
decoupling the cell image according to the coupling background image and then normalizing to obtain a decoupling image;
carrying out binarization processing on the decoupling image to obtain a binary image;
topology analysis is performed on the binary image, and the outermost boundary is tracked to obtain the cell edge, as described above.
In this embodiment, the coupled background map is calculated with the following formula:

$$\bar{I} = \frac{1}{n}\sum_{i=1}^{n} I_i$$

wherein $\bar{I}$ is the background image, $n$ represents the number of selected cell images, and $I_i$ represents the $i$-th gray-scale map, as described above.
In this embodiment, the decoupled image is calculated with the following formula:

$$I'_i = \frac{(I_i - \bar{I}) - \min\!\left[I_i - \bar{I}\right]}{\max\!\left[I_i - \bar{I}\right] - \min\!\left[I_i - \bar{I}\right]}$$

wherein $I'_i$ is the decoupled image of the $i$-th gray-scale map, $\min[\cdot]$ denotes the minimum operation, and $\max[\cdot]$ denotes the maximum operation, as described above.
When the reticulocyte detection program is executed by the processor, the following steps are realized:
carrying out binarization processing on the decoupling image based on an Otsu algorithm, a Niblack algorithm and a Canny operator to respectively obtain operation result graphs of Otsu, Niblack and Canny;
after performing or operation on the operation result graphs of Otsu, Niblack and Canny, filling the hole, performing denoising processing and morphological processing to obtain a binary image, which is specifically described above.
When the reticulocyte detection program is executed by the processor, the following steps are realized:
determining whether the target pixel region is within the cell by the number of times the ray crosses the cell boundary;
when there is only one target pixel region in the cell and its area is larger than a first preset area threshold, the cell is a reticulocyte;
when the number of target pixel regions in the cell exceeds a preset number and a region's area is larger than a second preset area threshold, the cell is a reticulocyte, as described above.
In this embodiment, the slope of the ray is $k$:

$$k = \frac{y_c - \frac{1}{n}\sum_{j} y_j}{x_c - \frac{1}{n}\sum_{j} x_j}$$

wherein $x_c$, $y_c$ are respectively the abscissa and ordinate of the centroid of the target pixel region, and $x_j$, $y_j$ are respectively the abscissa and ordinate vectors of the edge coordinates of the individual cells, as described above.
In this embodiment, the pixel characteristics include: color space conversion, color features, and texture features, as described above.
In summary, the method for detecting reticulocytes and the system thereof provided by the present invention comprises the following steps: extracting a cell edge according to the cell image, and obtaining pixel characteristics in the cell edge; sending the pixel characteristics into a classifier to classify the pixels to obtain a target pixel area; and identifying the reticulocytes through the position relation of the target pixel area and the cell edge. Since the pixel features are classified by the classifier and the reticulocytes are detected and identified by the position relation between the target pixel region and the cell edge, the precision and recall ratio of the detected reticulocytes can be improved.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.

Claims (6)

1. A reticulocyte detection method is characterized by comprising the following steps:
extracting a cell edge according to the cell image, and obtaining pixel characteristics in the cell edge;
sending the pixel characteristics into a classifier to classify the pixels to obtain a target pixel area;
identifying reticulocytes according to the position relation between the target pixel area and the cell edge;
the classifier is obtained by adopting the following steps:
taking a set of pixels from RNA-stained regions of typical reticulocytes as a positive sample set and a set of pixels from non-RNA-stained regions together with interior pixels of non-reticulocytes as a negative sample set, and extracting pixel features;
carrying out supervised learning and cross validation on the pixel characteristics of the marked positive and negative samples to form a classifier;
the step of identifying reticulocytes according to the position relationship between the target pixel area and the cell edge specifically comprises the following steps:
determining whether the target pixel region is within the cell by the number of times the ray crosses the cell boundary;
when there is only one target pixel region in the cell and its area is larger than a first preset area threshold, the cell is a reticulocyte;
when the number of target pixel regions in the cell exceeds a preset number and the area of those regions is larger than a second preset area threshold, the cell is a reticulocyte;
the slope of the ray is k:
k = (y_c - (1/n)·Σ_j y_j) / (x_c - (1/n)·Σ_j x_j)
wherein x_c and y_c are respectively the abscissa and ordinate of the centroid of the target pixel region, x_j and y_j are respectively the abscissa and ordinate vectors of the edge coordinates of a single cell, and n represents the number of the selected cell images;
the pixel features include: color space conversion, color features, and texture features.
2. The method for detecting reticulocytes of claim 1, wherein the step of extracting cell edges from the cell image comprises:
obtaining a coupled background image from the cell images;
decoupling the cell image according to the coupled background image and then normalizing to obtain a decoupled image;
carrying out binarization processing on the decoupled image to obtain a binary image;
and performing topology analysis on the binary image, and tracking the outermost boundary to obtain the cell edge.
3. The method of claim 2, wherein the background map is calculated using the following formula:
Ī = (1/n)·Σ_{i=1}^{n} I_i
wherein Ī is the background image, n represents the number of selected cell images, I_i represents the ith gray-scale map, and Σ denotes the summation operation.
4. The method of claim 3, wherein the decoupled image is calculated using the following formula:
I'_i = (I_i / Ī - min[I_i / Ī]) / (max[I_i / Ī] - min[I_i / Ī])
wherein I'_i is the decoupled image of the ith gray-scale map, Ī is the background image of claim 3, min[·] denotes the minimization operation, and max[·] denotes the maximization operation.
5. The method for detecting reticulocytes according to claim 2, wherein the step of binarizing the decoupled image to obtain a binary image comprises:
carrying out binarization processing on the decoupled image based on the Otsu algorithm, the Niblack algorithm and the Canny operator to obtain the Otsu, Niblack and Canny result maps respectively;
and performing an OR operation on the Otsu, Niblack and Canny result maps, then filling holes, denoising and applying morphological processing to obtain a binary image.
6. A reticulocyte detection system, comprising: a processor, and a memory coupled to the processor,
the memory stores a reticulocyte detection program that when executed by the processor implements the steps of:
extracting a cell edge according to the cell image, and obtaining pixel characteristics in the cell edge;
sending the pixel characteristics into a classifier to classify the pixels to obtain a target pixel area;
identifying reticulocytes according to the position relation between the target pixel area and the cell edge;
the reticulocyte detection program when executed by the processor further implements the steps of:
taking a set of pixels from RNA-stained regions of typical reticulocytes as a positive sample set and a set of pixels from non-RNA-stained regions together with interior pixels of non-reticulocytes as a negative sample set, and extracting pixel features;
carrying out supervised learning and cross validation on the pixel characteristics of the marked positive and negative samples to form a classifier;
the reticulocyte detection program when executed by the processor further implements the steps of:
determining whether the target pixel region is within the cell by the number of times the ray crosses the cell boundary;
when there is only one target pixel region in the cell and its area is larger than a first preset area threshold, the cell is a reticulocyte;
when the number of target pixel regions in the cell exceeds a preset number and the area of those regions is larger than a second preset area threshold, the cell is a reticulocyte;
the slope of the ray is k:
k = (y_c - (1/n)·Σ_j y_j) / (x_c - (1/n)·Σ_j x_j)
wherein x_c and y_c are respectively the abscissa and ordinate of the centroid of the target pixel region, x_j and y_j are respectively the abscissa and ordinate vectors of the edge coordinates of a single cell, and n represents the number of the selected cell images;
the pixel features include: color space conversion, color features, and texture features.
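The preprocessing of claims 3 and 4 can be sketched as follows: the background image is the per-pixel mean of the n gray-scale frames, and each frame is then decoupled from it and min-max normalized. Dividing by the background is an assumption made here for illustration; the claims state only "decoupling ... then normalizing", and the `decouple` helper is a hypothetical name.

```python
import numpy as np

def decouple(frames):
    """Estimate the coupled background as the mean of n gray-scale frames
    (claim 3), divide it out of each frame, and min-max normalize the
    result to [0, 1] (claim 4)."""
    frames = np.asarray(frames, dtype=float)
    background = frames.mean(axis=0)            # claim 3: mean of n frames
    decoupled = []
    for img in frames:
        d = img / (background + 1e-9)           # remove the coupled background
        d = (d - d.min()) / (d.max() - d.min() + 1e-9)   # claim 4: normalize
        decoupled.append(d)
    return background, np.stack(decoupled)
```

For two 2x2 frames [[1, 2], [3, 4]] and [[3, 4], [5, 6]], the background is their per-pixel mean [[2, 3], [4, 5]], and each decoupled frame falls in [0, 1].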
CN201910154979.0A 2019-03-01 2019-03-01 Reticulocyte detection method and system Active CN109975196B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910154979.0A CN109975196B (en) 2019-03-01 2019-03-01 Reticulocyte detection method and system

Publications (2)

Publication Number Publication Date
CN109975196A CN109975196A (en) 2019-07-05
CN109975196B true CN109975196B (en) 2021-10-08

Family

ID=67077667


Country Status (1)

Country Link
CN (1) CN109975196B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110633676B (en) * 2019-09-18 2023-04-18 东北大学 Method for automatically identifying cerebrospinal fluid cell image information
CN112767349B (en) * 2021-01-18 2024-05-03 桂林优利特医疗电子有限公司 Reticulocyte identification method and system
CN113552126A (en) * 2021-07-23 2021-10-26 福州金域医学检验实验室有限公司 Reticulocyte detection method and system
CN114419619B (en) * 2022-03-29 2022-06-10 北京小蝇科技有限责任公司 Erythrocyte detection and classification method and device, computer storage medium and electronic equipment
CN117853850B (en) * 2024-03-07 2024-06-18 威海紫光科技园有限公司 Detection and evaluation system and method for NK cell cultivation process

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH03140864A (en) * 1989-10-27 1991-06-14 Mitsubishi Kasei Corp Counting apparatus of reticulated corpuscle
WO2001057785A1 (en) * 2000-02-01 2001-08-09 Chromavision Medical Systems, Inc. Method and apparatus for automated image analysis of biological specimens
CN102831607A (en) * 2012-08-08 2012-12-19 深圳市迈科龙生物技术有限公司 Method for segmenting cervix uteri liquid base cell image
CN108021903A (en) * 2017-12-19 2018-05-11 南京大学 The error calibrating method and device of artificial mark leucocyte based on neutral net
CN108364288A (en) * 2018-03-01 2018-08-03 北京航空航天大学 Dividing method and device for breast cancer pathological image
CN108564114A (en) * 2018-03-28 2018-09-21 电子科技大学 A kind of human excrement and urine's leucocyte automatic identifying method based on machine learning
CN108693342A (en) * 2017-12-13 2018-10-23 青岛汉朗智能医疗科技有限公司 Cervical carcinoma, the detection method of uterine cancer and system
CN108961301A (en) * 2018-07-12 2018-12-07 中国海洋大学 It is a kind of based on the unsupervised Chaetoceros image partition method classified pixel-by-pixel
CN109378052A (en) * 2018-08-31 2019-02-22 透彻影像(北京)科技有限公司 The preprocess method and system of image labeling


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Reticulocyte count and extended reticulocyte parameters by Mindray BC-6800: Reference intervals and comparison with Sysmex XE-5000; M. Buttarello et al.; International Journal of Laboratory Hematology; 2016; pp. 1-8 *
Automatic detection of vaginal bacteria based on superpixels and support vector machine; Song Youyi et al.; Chinese Journal of Biomedical Engineering; April 2015; Vol. 34, No. 2; pp. 204-211 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant