CN107527028B - Target cell identification method and device and terminal

Publication number: CN107527028B (grant); published as application CN107527028A
Application number: CN201710712810.3A
Authority: CN (China)
Prior art keywords: image, target cell, determining, candidate, identification
Legal status: Active (granted)
Original language: Chinese (zh)
Inventors: 敖堂东, 尹洪兵, 黄鹏, 林仪
Applicant and current assignee: Shenzhen Universal Intelligent Medical Instrument Co Ltd
Priority: CN201710712810.3A

Classifications

    • G06V 20/695 - Physics; Computing; Image or video recognition or understanding; scenes and scene-specific elements; type of objects; microscopic objects, e.g. biological cells or cellular parts; preprocessing, e.g. image segmentation
    • G06F 18/23 - Physics; Computing; Electric digital data processing; pattern recognition; analysing; clustering techniques


Abstract

The invention discloses a target cell identification method, device and terminal, belonging to the technical field of computer applications. The method comprises the following steps: selecting an identification image for target cell identification from an image frame; performing image segmentation on the identification image to determine the background and the cells in the identification image; and removing non-target cells from the cells of the identification image through morphological analysis to determine the target cells in the identification image. A target cell identification device and a terminal are also provided. The method, device and terminal can identify target cells conveniently and quickly.

Description

Target cell identification method and device and terminal
Technical Field
The invention relates to the technical field of computer applications, and in particular to a target cell identification method, device and terminal.
Background
With growing attention to health, the demand for detecting cells in body fluids keeps increasing. At present there are two main approaches to cell identification in body fluids: conventional detection methods and computer-aided methods.
In a conventional detection method, the extracted body fluid is observed directly under a microscope, and indicators such as cell concentration and survival rate are then assessed from the observation. For example, in semen analysis, relatively severe infertility conditions (mainly azoospermia, necrospermia and some forms of teratospermia) can be diagnosed by conventional detection, but these conditions account for only a small proportion of male infertility. In clinical practice, the semen analysis results of infertile patients frequently appear normal or only mildly abnormal. The reason is that conventional detection is highly subjective: sperm move chaotically, at high concentration and at high speed, so an accurate diagnosis is difficult to reach by relying only on visual observation and the experience of medical and laboratory staff. Conventional detection is therefore strongly affected by the detection environment and by the experience and skill of the examiner; it is time-consuming and labor-intensive, has low accuracy, easily produces erroneous results, and complicates clinical treatment and scientific research.
The computer-aided method applies computer technology and advanced image processing to the analysis of cell morphology and dynamics: the motion trajectory of the cells is captured by tracking and shooting, the motion images are analyzed, and quantitative kinetic data (such as the progressive motility and hyperactivated motility of sperm) are produced, from which cell quality is assessed. The computer-aided method overcomes the subjectivity of manual observation, can detect fine morphological features invisible to the human eye, evaluates cell motility and morphology more effectively, and offers high detection speed, simplicity, and repeatable experimental results.
However, both of the above detection methods require the patient to visit a medical institution for professional cell sampling and analysis. Detection in a medical institution yields relatively comprehensive and reliable results, but it is time-consuming, the results are not available quickly, the time cost is high, and continuous monitoring of the body is impossible.
Disclosure of Invention
The invention provides a target cell identification method, device and terminal, aiming to solve the technical problem that target cells cannot be identified conveniently and quickly in the related art.
In a first aspect, a target cell identification method is provided, including:
selecting an identification image for identifying the target cell from the image frame;
carrying out image segmentation on the identification image, and determining a background and cells in the identification image;
and removing non-target cells from the cells of the identification image through morphological analysis, and determining the target cells in the identification image.
In a second aspect, there is provided a target cell identification apparatus comprising:
the identification image selection module is used for selecting an identification image for identifying the target cell from the image frame;
the image segmentation module is used for carrying out image segmentation on the identification image and determining a background and cells in the identification image;
and the target cell determining module is used for removing non-target cells from the cells of the identification image through morphological analysis and determining the target cells in the identification image.
In a third aspect, a terminal is provided, where the terminal includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
In a fourth aspect, a computer readable storage medium is provided for storing a program, characterized in that the program, when executed, causes a terminal to perform the method according to the first aspect.
The technical scheme provided by the embodiment of the invention can obtain the following beneficial effects:
when a target cell is identified, an identification image for target cell identification is selected from the image frame; after the background and the cells in the identification image are determined by image segmentation, the target cells are determined by morphological analysis. Selecting the identification image eliminates the interference of the edge portion of the image frame with target cell identification, ensuring identification accuracy. It also reduces the image size, and therefore the amount of data to be computed and the analysis time, so that target cells are identified conveniently and quickly.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow chart illustrating a method of target cell identification according to an exemplary embodiment.
Fig. 2 illustrates a local recognition image before and after image segmentation by mean shift according to an exemplary embodiment.
Fig. 3 is a gray distribution histogram of a local recognition image after image segmentation by a mean shift method and a final binarized image according to an exemplary embodiment.
Fig. 4 is a flowchart illustrating a specific implementation of step S110 in the target cell identification method according to the corresponding embodiment of fig. 1.
Fig. 5 is a schematic diagram illustrating before and after binarization of an image frame according to an exemplary embodiment.
Fig. 6 is a flowchart illustrating a specific implementation of step S112 according to the corresponding embodiment in fig. 4.
Fig. 7 is a flowchart illustrating a specific implementation of step S1123 according to the corresponding embodiment of fig. 6.
Fig. 8 is a diagram illustrating an image frame with a plurality of regions to be identified preset therein according to an exemplary embodiment.
Fig. 9 shows another target cell identification method according to the corresponding embodiment of fig. 1.
Fig. 10 is a schematic diagram illustrating a difference image according to an exemplary embodiment.
Fig. 11 is a flowchart illustrating a specific implementation of step S320 according to the corresponding embodiment in fig. 9.
FIG. 12 is a schematic diagram of connected domains according to an exemplary embodiment.
Fig. 13 is a flowchart illustrating a specific implementation of step S322 according to the corresponding embodiment of fig. 11.
Fig. 14 is a flowchart of a specific implementation of step S3223 according to the corresponding embodiment in fig. 13.
Fig. 15 is a flowchart illustrating another specific implementation of step S322 according to the corresponding embodiment in fig. 11.
FIG. 16 is a block diagram illustrating a target cell identification device according to an example embodiment.
Fig. 17 is a block diagram of the recognition image selecting module 110 in the target cell recognition apparatus according to the corresponding embodiment of fig. 16.
Fig. 18 is a block diagram of the recognition image determination sub-module 112 according to the corresponding embodiment of fig. 17.
Fig. 19 is a block diagram of another target cell identification apparatus according to the corresponding embodiment of fig. 16.
Fig. 20 is a block diagram of the target cell motion recognition module 320 in the target cell recognition apparatus according to the corresponding embodiment of fig. 19.
Fig. 21 is a block diagram of the position determination sub-module 322 shown in accordance with the corresponding embodiment of fig. 20.
Fig. 22 is a block diagram of the position determining unit 3223 shown according to the corresponding embodiment of fig. 21.
Fig. 23 is another block diagram of the target cell movement identification module 320 according to the corresponding embodiment of fig. 20.
Fig. 24 is a block diagram illustrating a terminal according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The implementations described in the following exemplary embodiments do not represent all embodiments consistent with the present invention; rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention as set forth in the claims below.
Fig. 1 is a flowchart illustrating a target cell identification method according to an exemplary embodiment, and the target cell identification method may include the following steps, as shown in fig. 1.
In step S110, an identification image for identifying a target cell is selected from the image frames.
The image frame is an image.
The image frame may be an image captured by shooting alone or a frame image extracted from a captured video.
When an identification image for identifying the target cell is selected from the image frames, the image frames can be image frames acquired in real time; or the image frame extracted from the stored video after the video is collected and stored; other forms of image frames are also possible.
The target cell is a recognition target to be subjected to cell recognition.
The target cells include stationary target cells and moving target cells. For example, in performing a counting analysis of semen, the target cells are sperm, and all sperm will be identified from the semen; in kinetic analysis of semen, the target is motile sperm, and sperm with motility will be identified from the semen.
The recognition image is an image of an area in the image frame where the target cell recognition is to be performed.
It should be noted that, when an image frame is acquired, the edges of the image frame will be distorted in shape and color due to the acquisition equipment, the acquisition method, the placement of body fluid, and other factors.
If the target cell is directly identified on the image frame, the accuracy of the target cell identification is affected. Therefore, the identification image is selected from the image frame, and the target cell is identified in the identification image, so that the accuracy of target cell identification is ensured.
The identification image can be selected from the image frame in various ways: a central area can be preset, and the image within that central area taken from the image frame as the identification image; the central-area image can be selected adaptively according to the color (or gray-scale) and texture distribution in the image frame; or the identification image can be selected by other means, which are not limited herein.
In step S120, the recognition image is subjected to image segmentation, and the background and the cells in the recognition image are determined.
The image segmentation is to divide the identification image into a plurality of non-overlapping areas according to the characteristics of color (or gray scale), texture, shape and the like, to ensure that the characteristics present similarity in the same area and obvious difference among different areas, and further to identify the background and cells in the identification image according to the plurality of non-overlapping areas.
The background is the area of the identification image in which no cells are present.
It should be noted that certain features of the identification image differ between areas where cells are present and areas where they are not.
For example, in an image frame acquired from semen, areas where cells are present appear darker, while cell-free areas appear lighter.
In a specific exemplary embodiment, the identification image is firstly subjected to image segmentation by a mean shift method, and then thresholded to obtain a binary image.
The mean shift method searches for the maximum points of the probability density (i.e., the class centers) using kernel density estimation, and then replaces the colors of all pixels of each class with the color of that class center, thereby smoothing the image. Finally, the class centers are clustered as needed, merging regions with too few pixels and regions whose class centers are too close together, to avoid over-segmentation caused by an excessive number of classes.
The principle of the mean shift method is as follows:
In the d-dimensional feature space $R^d$, the basic form of the Mean Shift vector at a point $x$ is:

$$M_h(x) = \frac{1}{k} \sum_{x_i \in S_h} (x_i - x)$$

where $S_h$ is a high-dimensional sphere region of radius $h$ centered on $x$, and $k$ is the number of sample points $x_i$ falling within it.

Considering that each sample point $x_i$ may be of different importance, and that points closer to $x$ are more effective for estimating the statistical properties around $x$, the Mean Shift vector can be generalized to:

$$M_h(x) = \frac{\sum_{i=1}^{n} G\left(\frac{x_i - x}{h}\right) w(x_i)\,(x_i - x)}{\sum_{i=1}^{n} G\left(\frac{x_i - x}{h}\right) w(x_i)}$$

where $w(x_i)$ is the weight of each sample point $x_i$ and $G(x)$ is a kernel function.

The Mean Shift vector $M_h(x)$ is a normalized probability density gradient. The Mean Shift algorithm is a non-parametric feature-space analysis method based on kernel density estimation; through iteration with adaptive step sizes it converges rapidly to local maxima of the probability density function. The iterative process is as follows: calculate the shifted mean of the current point, move the point to that shifted mean, take it as the new starting point, recalculate, and continue moving until a stopping condition is met.
Fig. 2 illustrates a local identification image before and after image segmentation by the mean shift method according to an exemplary embodiment. It can be seen that after segmentation by the mean shift method, the distinction between the background and the cells in the identification image is much clearer.
Fig. 3 shows the gray-level distribution histogram of a local identification image after segmentation by the mean shift method, together with the final binarized image, according to an exemplary embodiment. Since the proportion of background pixels is much larger than that of cell pixels, the highest peak in the histogram corresponds to the background and the other peaks correspond to the cells.
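For illustration only, the segmentation and thresholding described above might be sketched with Python and OpenCV roughly as follows; the input file name and the spatial/color window parameters are assumptions, not values given by this disclosure:

```python
import cv2

# Hypothetical input: one identification image selected from an image frame.
frame = cv2.imread("identification_image.png")

# Mean-shift filtering: each pixel's color is replaced by the color of its
# class center, smoothing regions while keeping boundaries (15 and 20 are
# assumed spatial and color window radii).
smoothed = cv2.pyrMeanShiftFiltering(frame, 15, 20)

# Threshold the smoothed image; Otsu's method places the threshold between
# the dominant background peak and the cell peak of the gray histogram.
# THRESH_BINARY_INV assumes cells appear darker than the background.
gray = cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
```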
In step S130, non-target cells are removed from the cells of the recognition image by morphological analysis, and target cells in the recognition image are determined.
The morphological analysis is an analysis process based on morphological characteristics.
The morphological analysis comprises a morphological filtering method, morphological constraint condition processing and the like.
Morphological filtering includes erosion and dilation; it can fill holes inside cells or separate cells that adhere to one another.
Morphological constraints include, but are not limited to, circularity, major-to-minor axis ratio, perimeter, area, and the like. For example, since sperm are generally round and of broadly similar size, when sperm are identified in semen, elongated bacteria, large spots, small noise points and other non-sperm objects can be eliminated using morphological constraints.
It is to be understood that the cells identified in step S120 include all kinds of cells in the body fluid. Cells whose morphology does not match that of the target cell, such as white blood cells and red blood cells, are removed from the identification image as non-target cells by morphological analysis, while cells matching the target morphology are retained as target cells.
For example, when sperm are identified in semen, the identified cells may include sperm, white blood cells, red blood cells, and so on. According to the morphology of sperm, the non-sperm cells such as white blood cells and red blood cells are removed by morphological analysis, identifying the sperm in the identification image.
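Continuing the sketch above, the morphological analysis of step S130 might look roughly as follows; the structuring element size and the area and circularity bounds are assumptions, chosen only to illustrate the constraints named in the text:

```python
import cv2
import numpy as np

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
# Opening (erosion then dilation) removes small noise points; closing
# (dilation then erosion) fills holes inside cells.
clean = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
clean = cv2.morphologyEx(clean, cv2.MORPH_CLOSE, kernel)

contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
target_cells = []
for c in contours:
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    if perimeter == 0:
        continue
    circularity = 4 * np.pi * area / perimeter ** 2  # 1.0 for a perfect circle
    # Keep roughly round blobs of plausible size; drop elongated bacteria,
    # large spots and tiny noise (bounds are assumptions, not from the patent).
    if 20 < area < 500 and circularity > 0.6:
        target_cells.append(c)
```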
Optionally, after the target cells in the identification image are determined, numbering each pixel in the identification image, where the pixels where the background is located in the identification image are all numbered 0, and the numbers of the target cells are sequentially increased from 1. The pixel where a target cell is located is represented by the number of the target cell, so that the target cell to which the pixel belongs can be known according to the number of the pixel. And finally, outputting the target cell set, and counting the number of the target cells (namely the maximum value of the target cell number).
Optionally, after the pixel where each target cell is located is determined, the edge pixel of each target cell may be obtained according to the position of each pixel in the image frame, and then a graph in a closed shape, that is, a boundary line of the target cell may be drawn according to the edge pixel.
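A minimal sketch of this numbering and boundary-drawing scheme, continuing from the previous snippet (the variable names are assumptions):

```python
import cv2
import numpy as np

# Background pixels are numbered 0; target cells are numbered from 1 upward.
mask = np.zeros(clean.shape, dtype=np.uint8)
cv2.drawContours(mask, target_cells, -1, 255, thickness=cv2.FILLED)
num_labels, labels = cv2.connectedComponents(mask)  # labels[y, x] == cell number
print("number of target cells:", num_labels - 1)    # label 0 is the background

# The boundary line of each target cell is its contour drawn as a closed curve.
outlined = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
cv2.drawContours(outlined, target_cells, -1, (0, 0, 255), 1)
```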
With this method, when a target cell is identified, an identification image for target cell identification is selected from the image frame; after the background and the cells are determined by image segmentation, the target cells are determined by morphological analysis. Selecting the identification image eliminates the interference of the edge portion of the image frame with target cell identification, ensuring accuracy, while also reducing the image size, the amount of data to be computed, and the analysis time, so that target cells are identified conveniently and quickly.
Fig. 4 is a detailed description of step S110 in the target cell identification method according to the corresponding embodiment shown in fig. 1, where step S110 may include the following steps:
in step S111, the image frame is binarized to obtain a binarized image.
It should be noted that, for an 8-bit gray image, the pixel value is between 0 and 255.
Binarization is a process of setting the pixel value of each pixel in an image frame to 0 or 1, and converts the entire image frame into a binarized image having only two colors of black and white by binarization.
When the image frame is binarized, a preset gray threshold value can be adopted, and the pixel value of each pixel is set to be 0 or 1 by comparing the pixel value of each pixel with the preset gray threshold value, so that the image frame is binarized; or calculating an adaptive gray threshold by an adaptive algorithm (for example, a mean shift method, a moment invariant method, a maximum inter-class difference method, etc.), and setting a pixel value of each pixel in the image frame to 0 or 1 according to the adaptive gray threshold; the image frame may also be binarized in other ways.
In the binarized image, the foreground is generally set to white (pixel value of 1) and the background to black (pixel value of 0).
Fig. 5 is a schematic diagram illustrating an image frame before and after binarization according to an exemplary embodiment. Comparing the image frame with the binarized image shows that the background of the image frame maps to black at the center of the binarized image but to white at its edges, while the foreground maps to white at the center and black at the edges.
In step S112, an identification image is determined from the binarized image based on the pixel values of the pixels in the binarized image.
As described above, the pixel value of each pixel in the binarized image is 0 or 1.
The identification image can be determined from the pixel values in the binarized image in various ways: according to the proportion of pixels with value 0 or 1 in images of different areas; according to how much that proportion changes as the area of the selected region changes; by selecting region images of different areas of preset size from the binarized image and comparing the proportion of 0 or 1 pixels in each; or by other means.
By the method, the image frame is binarized to obtain a binarized image, and then the identification image is determined according to the pixel value of each pixel, so that the target cell is identified in the identification image, and the accuracy of target cell identification is ensured.
Fig. 6 is a detailed description of step S112 in the target cell identification method according to the corresponding embodiment of fig. 4. As shown in fig. 6, a plurality of regions to be identified with the same center and different areas are preset in the binarized image, and step S112 may include the following steps:
in step S1121, an initial region image located in the initial region to be recognized is selected from the binarized image according to a preset initial region to be recognized.
The region to be identified is a plurality of regions with the same center and different areas preset in the binary image.
In a specific exemplary embodiment, the center of the region to be recognized is located at the very center of the binarized image.
Optionally, the region to be identified is a rectangular, circular or other shaped region.
The initial region to be recognized is a central region having the smallest area among the regions to be recognized. The initial region image is an image of the initial region to be recognized in the binarized image.
In step S1122, the cell existence probability of the initial region image is calculated from the pixel values of the pixels in the initial region image, and the base probability is obtained.
The cell existence probability is the magnitude of the probability that a cell is present in the image of the selected region.
For example, the pixel value of a pixel in the region image where a cell is present is 1, and the pixel value of a pixel in the region image where a cell is not present is 0. If the number of pixels in the image a is 100, the number of pixels having a pixel value of 1 is 30, and the number of pixels having a pixel value of 0 is 70, the cell existence probability in the image a is 30%.
The base probability is the probability of the presence of a cell in the initial region image.
In step S1123, region images of the regions to be recognized in different areas are sequentially selected from the binarized image according to the area size of the region to be recognized, and the recognition image is determined in the binarized image according to the cell existence probability and the basis probability of the region image.
As described above, the area sizes of the preset regions to be identified are different.
And selecting the area images of different areas to be identified, and determining the identification image in the binary image by comparing the cell existence probability of the area images with the basic probability.
By using the method, the identification image is determined from the binary image by comparing the cell existence probability and the basic probability of the image of the area to be identified, so that the identification image can better reflect the cell characteristics in the actual test body fluid, and the accuracy of identifying the target cell in the identification image is ensured.
Fig. 7 is a detailed description of step S1123 in the target cell identification method according to the corresponding embodiment shown in fig. 6, and as shown in fig. 7, step S1123 in the target cell identification method may include the following steps:
in step S11231, a first region image located in the first region to be recognized is selected from the binarized image according to the first region to be recognized, which is larger than the initial region to be recognized, in the region to be recognized.
In step S11232, the cell existence probability of the first region image is calculated, and the first cell existence probability is obtained.
In step S11233, when the difference between the first cell existence probability and the basic probability is smaller than the preset critical threshold, sequentially selecting the regions to be identified from the preset multiple regions to be identified according to the order of the areas from small to large until the difference between the cell existence probability and the basic probability of the region image located in the selected region to be identified is larger than the critical threshold.
The critical threshold is preset.
In order to make the analysis statistically significant, the area of the identified image should be made as large as possible to contain a sufficient number of target cells. However, when the image frames of the body fluid are acquired, the background colors of the central region and the edge region in the binarized image are greatly different due to uneven illumination of the image, the acquisition method, and the like, and the cell image of the edge region is significantly distorted. Therefore, when determining the recognition image, the recognition image should be located in the central region, while excluding the edge region.
When the selected area to be identified is gradually increased, if the selected area to be identified is still in the central area, the cell existence probability of the area image positioned in the area to be identified is less changed; on the contrary, when the selected region to be identified exceeds the central region, the background color gradually exchanges, and a large number of white regions (actually, the background) appear, so that the calculated cell existence probability is rapidly increased.
Fig. 8 is a diagram illustrating an image frame with a plurality of regions to be identified preset therein according to an exemplary embodiment. As shown in fig. 8, four square regions to be identified are preset in the image frame, and the region images located in them, from small to large, are S0, S1, S2 and S3, where white areas represent cells and black represents the background. The cell existence probabilities of the region images S0, S1, S2 and S3 are P0, P1, P2 and P3 respectively, and the preset critical threshold is T = P0/4. The difference V1 between P1 and P0 is calculated; when V1 > P0/4, the region image S0 is determined as the identification image. Otherwise, the difference V2 between P2 and P1 is calculated; when V2 > P0/4, the region image S1 is determined as the identification image. Otherwise, the difference V3 between P3 and P2 is calculated; when V3 > P0/4, the region image S2 is determined as the identification image; and so on until the identification image is determined.
In step S11234, the region image of the region to be identified immediately preceding the finally selected region to be identified is determined as the identification image.
With this method, the identification image is determined from the binarized image by comparing the cell existence probability of each region image with the basis probability, so that the area of the identification image is as large as possible while the distorted edge area is excluded. The identification image therefore better reflects the cell characteristics of the actual test body fluid, ensuring the accuracy of target cell identification.
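The selection procedure of steps S11231-S11234 might be sketched as follows. This is a sketch under assumptions: square concentric regions, a 0/1 binarized input, and successive-difference comparison as in the example of fig. 8; the region sizes are illustrative.

```python
import numpy as np

def select_identification_image(binary, region_sizes):
    """Pick the largest concentric square region that stays in the central area.

    binary       -- binarized image with pixel values 0/1
    region_sizes -- side lengths of the regions to be identified, smallest first
    """
    h, w = binary.shape
    cy, cx = h // 2, w // 2

    def region(size):
        half = size // 2
        return binary[cy - half:cy + half, cx - half:cx + half]

    def cell_probability(img):
        return np.count_nonzero(img) / img.size  # fraction of 1-pixels

    p0 = cell_probability(region(region_sizes[0]))  # basis probability P0
    threshold = p0 / 4                              # critical threshold T = P0/4
    prev_p, prev_size = p0, region_sizes[0]
    for size in region_sizes[1:]:
        p = cell_probability(region(size))
        if p - prev_p > threshold:
            # The probability jumped: the larger region reaches into the
            # distorted edge area, so keep the previous (still central) region.
            return region(prev_size)
        prev_p, prev_size = p, size
    return region(prev_size)
```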
Optionally, before step S120 in the embodiment corresponding to fig. 1, the method for identifying target cells may further include the following steps:
and carrying out image preprocessing on the identification image.
Typically, the image frames are color images. Through image preprocessing, the image frame is grayed, so that the color image is converted into a gray image, and the calculation amount for target cell identification is greatly reduced. The color image can also be converted from an RGB format to other formats (such as HSV and the like), and then the information of a certain channel (such as an H channel) is only used for subsequent analysis, so that the basically same purpose can be achieved.
Optionally, the image preprocessing further includes, but is not limited to: contrast enhancement, median filtering, image sharpening, etc.
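A minimal preprocessing sketch covering the options above; the choice of channel and the filter parameters are assumptions:

```python
import cv2

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)            # graying the frame
# Alternative: convert to another format and keep one channel, e.g. HSV's H.
h_channel = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)[:, :, 0]

gray = cv2.equalizeHist(gray)   # contrast enhancement
gray = cv2.medianBlur(gray, 3)  # median filtering suppresses impulse noise
# Image sharpening via a simple unsharp mask.
sharp = cv2.addWeighted(gray, 1.5, cv2.GaussianBlur(gray, (0, 0), 3), -0.5, 0)
```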
Fig. 9 is a flowchart illustrating another target cell identification method according to the corresponding embodiment of fig. 1. As shown in fig. 9, the target cell identification method may further include the following steps.
In step S310, two image frames are selected from the video and subjected to inter-frame difference operation to obtain a difference image.
Generally, when a target cell is identified in a body fluid, the body fluid is amplified to a certain degree and then a video is acquired, and the acquired video includes a plurality of image frames.
Optionally, the magnification of the body fluid is 100-200 times, and the frame rate for video acquisition is 25-120 fps.
Since the position of the still cells in the video is always fixed, the still cells can be eliminated by continuously comparing the cell differences among several image frames.
The inter-frame difference operation is to perform difference operation on two image frames selected from the video.
The inter-frame difference operation may be a difference operation performed on two adjacent image frames in the video, or a difference operation performed on any two image frames in the video, which is not limited herein.
Each pixel value in the difference image is derived from the difference between the corresponding pixel values of the two image frames.
It is understood that, after thresholding, the difference image is itself a binarized image.
Fig. 10 is a schematic diagram illustrating a difference image according to an exemplary embodiment.
In step S320, the motion of the target cell is recognized based on the pixel values of the pixels in the difference image.
In a specific exemplary embodiment, the pixel values of corresponding pixels in the two selected image frames are subtracted, absolute values are taken, and the result is thresholded into a binary image, i.e., the difference image. In the difference image, a pixel value of 1 indicates that a cell moved away from that pixel position, or that a cell moved to that position from elsewhere; a pixel value of 0 indicates that the position is either not covered by a moving cell in both image frames or covered by a moving cell in both image frames.
Thus, the motion of the target cell can be recognized by the pixel value of each pixel in the difference image.
By using the method, the difference image is obtained by performing interframe difference operation on the two image frames, and then the motion of the target cell is identified according to the pixel value of each pixel in the difference image, so that the calculation amount of the motion identification of the target cell is greatly reduced, and the motion track of the target cell between the two image frames can be obtained.
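Sketched with OpenCV, the inter-frame difference operation of step S310 might look as follows; the video file name, the choice of adjacent frames, and the threshold value are assumptions:

```python
import cv2

cap = cv2.VideoCapture("body_fluid.avi")  # hypothetical captured video
ok1, f1 = cap.read()
ok2, f2 = cap.read()                      # here: two adjacent image frames
g1 = cv2.cvtColor(f1, cv2.COLOR_BGR2GRAY)
g2 = cv2.cvtColor(f2, cv2.COLOR_BGR2GRAY)

diff = cv2.absdiff(g1, g2)                # per-pixel |f1 - f2|
_, diff_bin = cv2.threshold(diff, 25, 1, cv2.THRESH_BINARY)
# diff_bin == 1 where a moving cell left or entered the pixel; 0 elsewhere.
```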
Optionally, in the target cell identification method shown in the corresponding embodiment of fig. 9, step S320 may further include:
noise is excluded from the differential image.
It can be understood that some noise will exist in the difference image obtained by the inter-frame difference operation, because the recognized edge of the same stationary target cell may differ slightly between the two frames, some target cells move so slowly that the distance traveled between consecutive image frames is small, target cells deform slightly as they move, and so on.
Therefore, before the movement of the target cell is identified, noise is eliminated from the differential image, the noise is prevented from influencing the movement identification of the target cell, and the accuracy of the movement identification of the target cell is further improved.
The noise can be eliminated from the difference image by morphological filtering, median filtering, or the like.
Fig. 11 is a detailed description of step S320 in the target cell identification method according to the corresponding embodiment of fig. 9, in which step S320 may include the following steps:
in step S321, a connected component in the difference image is determined according to the pixel value of each pixel in the difference image.
The connected component is a continuous region of pixel values 1 in the difference image.
FIG. 12 is a schematic diagram of connected domains according to an exemplary embodiment. In diagram a of FIG. 12, two white regions are separated by a black region, so diagram a contains two connected domains. In diagram b, the white regions are not separated by black regions, so diagram b contains only one connected domain. In diagram c, the white region contains a small black region inside it, but the white region itself is not split by black, so diagram c also contains only one connected domain.
In step S322, the difference image and the two image frames are respectively matched according to the connected domain, and the positions of the target cells corresponding to the connected domain in the two image frames are determined.
It is understood that the connected domain is located in a region where the target cell is altered.
Thus, the location of the connected component must be the area covered by the target cell in at least one image frame.
The matching operation is a tracking operation on the target cell.
It can be understood that, in a difference image obtained by performing inter-frame difference operation on two image frames of a video, if a moving distance of a certain target cell is large, a certain pixel point is located in a position range of the target cell C1 in a previous image frame, but is located in a position range of the target cell C2 in a subsequent image frame.
Therefore, to ensure correct tracking of the target cells, the difference image is matched against each of the two image frames to determine the positions, in both frames, of the target cell corresponding to each connected domain; the positions of the target cells across the video are then obtained by repeating this for pairs of image frames selected from the video.
When the difference image is matched against the two image frames, the matching can be performed in several ways: the whole difference image can be matched against each image frame; the image within each connected domain of the difference image can be matched against the image at the corresponding position in each frame; a specific matching region can be selected around each connected domain and the image within it matched against the corresponding position in each frame; or other matching schemes can be used.
In step S323, the movement trajectory of the target cell is determined according to the corresponding positions of the target cell in the two image frames.
It will be appreciated that there is a corresponding temporal relationship of image frames in the video.
Therefore, according to the time relation of each image frame in the video and the corresponding position of the target cell in the two image frames, the motion track of the target cell can be determined. And further, determining the motion track of each target cell in the whole video according to the corresponding position of each target cell in each image frame of the video and the time relation between each image frame in the video.
With this method, the difference image is matched against the two image frames according to the connected domains, and the position of each target cell of the difference image in the two image frames is determined. Motion recognition errors caused by wrong correspondences of target cells across image frames are thereby avoided, greatly improving the accuracy of target cell motion recognition.
Fig. 13 is a detailed description of step S322 in the target cell identification method according to the corresponding embodiment of fig. 11, in which step S322 may include the following steps:
in step S3221, a candidate region is determined from the difference image according to the connected component.
The candidate region is a region including a connected component.
It is understood that a connected domain may contain the entire target cell, or only a portion of it.
Therefore, if only the images in each connected domain are individually matched with the images in the corresponding positions of the two image frames, the accuracy of the matching operation will be affected.
However, if the whole differential image and the two image frames are respectively matched for the target cells corresponding to each connected domain, the calculation amount of the matching operation is greatly increased.
Therefore, in order to further improve the accuracy of the matching operation and avoid increasing the calculation amount of the matching operation, the candidate region is determined according to the connected domain, and then the matching operation is performed on the image in the candidate region.
There are various ways to determine the candidate region from the difference image according to the connected domain: the circumscribed rectangle of the connected domain can be selected as the candidate region according to its contour; a region of preset area can be selected according to the contour; a circular region of a certain area can be selected according to the center and area of the connected domain; or other methods can be used, none of which is limited here.
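As a sketch of one of the options above (padded circumscribed rectangles; the margin value is an assumption), candidate regions might be derived from the connected domains like this:

```python
import cv2
import numpy as np

num, labels = cv2.connectedComponents(diff_bin)  # 0 is the background label
candidates = []
margin = 10  # assumed padding so a partly-covered cell still fits the region
for label in range(1, num):
    ys, xs = np.where(labels == label)
    x0 = max(xs.min() - margin, 0)
    y0 = max(ys.min() - margin, 0)
    x1 = min(xs.max() + margin + 1, diff_bin.shape[1])
    y1 = min(ys.max() + margin + 1, diff_bin.shape[0])
    candidates.append((x0, y0, x1, y1))  # one candidate region per domain
```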
In step S3222, candidate images located in the candidate regions are selected from the two image frames and the difference image according to corresponding positions of the candidate regions in the two image frames and the difference image, respectively.
In step S3223, the candidate images in the difference image and the candidate images in the two image frames are respectively subjected to matching operation, so as to determine the positions of the target cells corresponding to the connected component in the two image frames.
With this method, the candidate region is determined according to the connected domain, and the image within the candidate region of the difference image is then matched against the images at the corresponding positions in the two image frames to determine the position of each target cell in both frames. This avoids motion recognition errors caused by wrong correspondences of target cells across image frames and recognizes target cell motion accurately, while greatly reducing the amount of matching computation and improving the efficiency of target cell motion recognition.
Fig. 14 is a detailed description of step S3223 in the target cell identification method according to the corresponding embodiment of fig. 13, wherein step S3223 may include the following steps:
in step S32231, the candidate images in the two image frames are binarized, respectively.
The binarization manner is similar to that of the image frame in the foregoing embodiment.
In step S32232, the candidate image in the difference image is matched against the binarized candidate images in the two image frames. If the connected domain has a foreground intersection with the candidate images of both image frames (Y), step S32233 is executed; if the connected domain has a foreground intersection with the candidate image of only one of the image frames (N), step S32234 is executed.
In a specific exemplary embodiment, the pixel value of each pixel in the candidate image of the difference image is compared with the pixel value of the corresponding pixel in the candidate image after binarization in the two image frames, and then it is determined whether the connected domain has a foreground intersection with only the candidate image of one of the image frames according to the comparison result (i.e. the pixel values of the same pixel are all 1).
The foreground is a region of pixel value 1.
If the connected domain has a foreground intersection only with the candidate image of one of the image frames (generally, as shown in a and b of fig. 12), the corresponding position of the target cell corresponding to the connected domain in the image frame is determined according to the matching operation result between the candidate image in the difference image and the candidate image in the image frame, i.e. the corresponding position of the target cell in the image frame is determined.
If the connected domain and the candidate images of the two image frames have a foreground intersection (generally, as shown in a c diagram in fig. 12), respectively determining the corresponding positions of the target cells corresponding to the connected domain in the two image frames, i.e., determining the corresponding positions of the target cells in the two image frames, respectively, according to the matching operation result between the candidate images in the difference image and the candidate images in the two image frames.
In step S32233, the positions of the target cells corresponding to the connected components in the two image frames are determined according to the result of the matching operation.
In step S32234, the corresponding positions of the target cells corresponding to the connected component in the image frame are determined according to the result of the matching operation, and then the corresponding positions of the target cells corresponding to the connected component in another image frame are determined through the matching operation based on the mean shift method.
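The branch between steps S32233 and S32234 reduces to a foreground-intersection test. A minimal sketch, assuming `domain`, `cand1` and `cand2` are 0/1 arrays of the same candidate-region size (the names are hypothetical):

```python
import numpy as np

hits1 = np.logical_and(domain == 1, cand1 == 1).any()  # intersection, frame 1
hits2 = np.logical_and(domain == 1, cand2 == 1).any()  # intersection, frame 2

if hits1 and hits2:
    # S32233: locate the target cell in both frames from the matching result.
    pass
elif hits1 or hits2:
    # S32234: locate the cell in the intersecting frame from the matching
    # result, then find it in the other frame by mean-shift matching.
    pass
```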
In a specific exemplary embodiment, the corresponding position of the target cell corresponding to the connected component in the other image frame is determined by a matching operation based on a mean shift method.
The matching operation based on the mean shift method comprises the following steps:
1. According to the reference target center position $y_0$ (for example, when in step S32232 the connected domain has a foreground intersection with the candidate image of only one image frame, the center position of the corresponding target cell in that image frame, as determined by the matching result), calculate the probability density of the reference target, $\{q_u\}_{u=1,\dots,m}$;
2. According to a candidate target center position $y_1$ corresponding to a not-yet-matched candidate image in the other image frame, calculate the probability density of the candidate target, $\{p_u(y_1)\}_{u=1,\dots,m}$;
3. Calculate the similarity between the candidate target and the reference target;
4. Repeat steps 2-3 to calculate the similarity between the reference target and the candidate targets corresponding to all remaining unmatched candidate images;
5. The position of the candidate target with the maximum similarity is the position of the target cell corresponding to the connected domain in the other image frame.
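OpenCV's built-in tracker follows the same idea (a histogram model of the reference target and an iterative search for the most similar candidate window). A compact sketch, reusing f1 and f2 from the earlier snippet; the window coordinates and bin count are assumptions, and the hue channel is just one of the color spaces the text mentions:

```python
import cv2

x, y, w, h = 100, 100, 20, 20  # assumed window around the matched target cell
hsv1 = cv2.cvtColor(f1, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv1[y:y + h, x:x + w]], [0], None, [16], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

# Back-project the reference histogram into the other frame, then let
# cv2.meanShift iterate the window toward the most similar candidate.
hsv2 = cv2.cvtColor(f2, cv2.COLOR_BGR2HSV)
back_proj = cv2.calcBackProject([hsv2], [0], roi_hist, [0, 180], 1)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
_, new_window = cv2.meanShift(back_proj, (x, y, w, h), criteria)
# new_window is the target cell's position in the other image frame.
```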
the principle of the matching operation based on the mean shift method will be described in detail below.
The mean shift method describes a target by a kernel-weighted color histogram. It computes the probability of the feature values of the pixels in the reference target region and in the candidate target region, measures the similarity between the two regions with a similarity function, and selects the candidate target that maximizes the similarity function, obtaining a mean shift vector for that candidate: the vector along which the target cell moves from its initial position toward its correct position. The mean shift vector is computed through continuous iteration and finally converges to the true position of the target, achieving the matching of target cells.
An image frame can be represented by the coordinates and color of each of its pixels. Let $p$ denote the color dimension, i.e., $p = 1$ for a grayscale image and $p = 3$ for a color image; an image frame can then be represented by a $(p+2)$-dimensional vector $x = (x^s, x^r)$, where $x^s$ represents the coordinates of each pixel, i.e., the spatial information of the image, and $x^r$ represents the color information of the image.
It should be noted that matching by the mean shift method is applicable not only to the grayscale space but also to other color spaces, including the RGB, HSV and CMYK color spaces.
At the initial image frame, for each reference target (i.e., target cell), let $z_i$ ($i = 1, \dots, n$) denote the coordinates of each pixel in the reference target, $n$ the number of pixels it contains, and $y_0 = (x_0, x_1)$ the center coordinates of the reference target. The gray space is uniformly divided into $m$ bins, $u = 1, \dots, m$, and $b(z_i)$ denotes the gray bin index of pixel $z_i$. The model of the reference target can then be expressed as:

$$q_u = C \sum_{i=1}^{n} K\left( \left\| \frac{y_0 - z_i}{h} \right\|^2 \right) \delta[b(z_i) - u]$$

$$C = \frac{1}{\sum_{i=1}^{n} K\left( \left\| \frac{y_0 - z_i}{h} \right\|^2 \right)}$$

where $K$ is the kernel function and $h$ is the kernel window size, which determines the weight distribution. $\delta(x)$ is the Kronecker delta function; $\delta[b(z_i) - u]$ determines whether pixel $z_i$ of the reference target belongs to the $u$-th gray bin, taking the value 1 if so and 0 otherwise. $C$ is a normalization coefficient. The kernel function plays a smoothing role in the kernel estimation of the probability density function; common choices include the uniform kernel, the Gaussian kernel, the Epanechnikov kernel, and the double-exponential kernel. One embodiment employs the Epanechnikov kernel, defined as:

$$K_E(x) = \begin{cases} \dfrac{1}{2}\, c_d^{-1} (d+2)\left(1 - \|x\|^2\right), & \|x\| < 1 \\ 0, & \text{otherwise} \end{cases}$$
at the k frame, the center y of the reference object in the k-1 frame is taken0And obtaining the center y of the candidate target for searching the center. By zi(i 1.., n) represents the pixel location in the candidate object, then the model of the candidate object may be represented as:
Figure BDA0001383092840000134
the similarity function is used to describe the similarity between the reference object and the candidate object. One embodiment uses the Bhattacharyya coefficient as the similarity function, which is defined as:
Figure BDA0001383092840000135
the larger the coefficient, the more similar the reference object is to the candidate object. That is, the candidate target center y that maximizes the similarity function is the position of the center of the reference target in the frame. Thus, the problem of finding the optimal candidate object is converted into the problem of finding the optimal position y to maximize the similarity function, which can be done by means of Mean Shift iteration.
Given an initial point $y_0$ and an error tolerance $\varepsilon$, the mean-shift iteration takes the form:

$$y_1 = \frac{\sum_{i=1}^{n_h} z_i\, w_i\, G\left( \left\| \frac{y_0 - z_i}{h} \right\|^2 \right)}{\sum_{i=1}^{n_h} w_i\, G\left( \left\| \frac{y_0 - z_i}{h} \right\|^2 \right)}$$

where

$$w_i = \sum_{u=1}^{m} \sqrt{\frac{q_u}{p_u(y_0)}}\, \delta[b(z_i) - u], \qquad G(x) = -K'(x)$$

The iteration is repeated with $y_0 \leftarrow y_1$ until $\|y_1 - y_0\| < \varepsilon$.
therefore, after the corresponding position of the target cell in one image frame is determined, the search range of the matching operation based on the mean shift method is narrowed from the whole image frame of the other image to the target cell which is not matched, and the operation amount is greatly reduced.
With this method, when the connected domain shares foreground pixels with the candidate image of only one image frame, the position of the corresponding target cell in that image frame is determined from the matching result, and its position in the other image frame is then determined by mean-shift matching starting from that position. Errors in target cell motion recognition caused by wrong correspondences of target cells across image frames are thereby avoided, greatly improving the accuracy of target cell motion recognition.
Fig. 15 shows another target cell identification method according to the embodiment corresponding to fig. 11, as shown in fig. 15, after step S322, the target cell identification method may further include the following steps:
in step S325, a first target cell image and a second target cell image corresponding to the connected component are extracted from the two image frames according to the positions of the target cells corresponding to the connected component in the two image frames.
In step S326, feature extraction is performed on the first target cell image and the second target cell image, respectively, to establish respective feature models.
The feature model is a data model created from the features of the extracted target cell image.
In step S327, a similarity between the first target cell image and the second target cell image is calculated based on the feature models of the first target cell image and the second target cell image.
The similarity between the first target cell image and the second target cell image can be calculated in several ways: the pixel values of the two images can be compared pixel by pixel according to the features recorded in their feature models; gray-level histograms can be extracted from the two images and the similarity calculated from the histograms; or the images can be compared in other ways.
In step S328, the feature model is updated by comparing the similarity with the set threshold.
When the similarity reaches a preset similarity threshold, the feature models of the target cell images of the two image frames are updated according to a preset updating mode.
In a specific exemplary embodiment, the feature models of the first target cell image and the second target cell image are qu and pu(y), respectively. When the similarity between the first target cell image and the second target cell image reaches a preset similarity threshold ρm, the feature model is updated according to a preset updating mode:

qu = α·pu(y) + (1 − α)·qu

where α is a preset update coefficient.
Compared with keeping the feature model of each target cell fixed as the features first extracted from an image frame of the video, updating the feature model in this way when the similarity between the first target cell image and the second target cell image is high reduces the accumulated errors caused by illumination changes, target deformation and the like, and further improves the accuracy of target cell motion recognition.
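A minimal sketch of this update rule follows; the concrete values of the similarity threshold ρm and of the update coefficient α are placeholders chosen for the example.

```python
import numpy as np

def update_feature_model(q, p, similarity, rho_m=0.8, alpha=0.1):
    """Blend the candidate model p_u(y) into the reference model q_u,
    q_u = alpha * p_u(y) + (1 - alpha) * q_u,
    but only when the two target cell images are similar enough."""
    q = np.asarray(q, dtype=float)
    p = np.asarray(p, dtype=float)
    if similarity >= rho_m:
        return alpha * p + (1.0 - alpha) * q
    return q
```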
The following are embodiments of the device of the present invention, which may be used to perform the above-described embodiments of the target cell identification method. For details not disclosed in the device embodiments of the present invention, please refer to the embodiments of the target cell identification method of the present invention.

FIG. 16 is a block diagram illustrating a target cell identification device according to an exemplary embodiment. The device includes, but is not limited to: an identification image selection module 110, an image segmentation module 120 and a target cell determination module 130.
An identification image selecting module 110, configured to select an identification image for identifying a target cell from an image frame;
an image segmentation module 120, configured to perform image segmentation on the identification image, and determine a background and cells in the identification image;
and a target cell determination module 130, configured to remove non-target cells from the cells in the identification image through morphological analysis, and determine the target cells in the identification image.
The implementation process of the functions and actions of each module in the device is specifically described in the implementation process of the corresponding step in the target cell identification method, and is not described herein again.
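Purely as a structural illustration (the class and parameter names below are assumptions, not taken from the embodiment), the three modules could be wired together as follows:

```python
class TargetCellIdentifier:
    """Sketch of the device of Fig. 16: three cooperating modules,
    each passed in as a callable."""

    def __init__(self, select_identification_image, segment_image, morphological_filter):
        self.select_identification_image = select_identification_image  # module 110
        self.segment_image = segment_image                              # module 120
        self.morphological_filter = morphological_filter                # module 130

    def identify(self, frame):
        ident = self.select_identification_image(frame)   # choose identification image
        background, cells = self.segment_image(ident)     # separate background and cells
        return self.morphological_filter(cells)           # remove non-target cells
```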
Optionally, fig. 17 is a block diagram of the identification image selecting module 110 in the target cell identification device shown in fig. 16 according to the corresponding embodiment. As shown in fig. 17, the identification image selecting module 110 includes, but is not limited to: a binarization sub-module 111 and an identification image determining sub-module 112.
A binarization submodule 111, configured to binarize the image frame to obtain a binarized image;
and an identification image determining sub-module 112, configured to determine an identification image from the binarized image according to the pixel value of each pixel in the binarized image.
Optionally, fig. 18 is a block diagram of the identification image determining sub-module 112 in the target cell identification device shown in fig. 17 according to the corresponding embodiment. As shown in fig. 18, a plurality of regions to be identified having the same center and different areas are preset in the binarized image, and the identification image determining sub-module 112 includes, but is not limited to: an initial region image selecting unit 1121, a basic probability obtaining unit 1122, and an identification image determining unit 1123.
An initial region image selecting unit 1121, configured to select an initial region image located in an initial region to be identified from the binarized image according to a preset initial region to be identified;
a basic probability obtaining unit 1122, configured to calculate a cell existence probability of the initial region image according to a pixel value of a pixel in the initial region image, so as to obtain a basic probability;
and an identification image determining unit 1123, configured to sequentially select, according to the area size of the regions to be identified, region images located in the regions to be identified of different areas from the binarized image, and determine an identification image in the binarized image according to the cell existence probability of each region image and the basic probability.
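One possible reading of this selection loop is sketched below; the square concentric regions, the foreground-fraction probability, and the stopping rule (keep enlarging while the cell existence probability does not fall more than a tolerance below the basic probability) are assumptions of the sketch.

```python
import numpy as np

def cell_probability(region):
    """Cell existence probability of a binarized region image,
    here taken simply as the fraction of foreground pixels."""
    return float((np.asarray(region) > 0).mean())

def choose_identification_image(binary, half_sizes, tol=0.1):
    """Scan concentric square regions of growing area around the image
    center; keep enlarging while the region stays dense enough in cells."""
    rc, cc = binary.shape[0] // 2, binary.shape[1] // 2
    regions = [binary[rc - h:rc + h, cc - h:cc + h] for h in sorted(half_sizes)]
    base = cell_probability(regions[0])  # basic probability of the initial region
    chosen = regions[0]
    for region in regions[1:]:
        if cell_probability(region) >= base - tol:
            chosen = region
        else:
            break
    return chosen
```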
Optionally, fig. 19 is a block diagram of another target cell identification device according to the embodiment corresponding to fig. 16. As shown in fig. 19, the target cell identification device of fig. 16 further includes, but is not limited to: an inter-frame difference operation module 310 and a target cell motion recognition module 320.
The inter-frame difference operation module 310 is configured to select two image frames from a video to perform inter-frame difference operation, so as to obtain a difference image;
and a target cell motion recognition module 320, configured to recognize motion of the target cell according to the pixel value of each pixel in the difference image.
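A minimal sketch of the inter-frame difference operation, assuming grayscale frames and a fixed threshold (both are choices of this example, not of the embodiment):

```python
import numpy as np

def frame_difference(frame_a, frame_b, threshold=25):
    """Absolute difference of two image frames, binarized so that pixels
    that changed by more than the threshold become foreground (255)."""
    diff = np.abs(frame_a.astype(int) - frame_b.astype(int))
    return np.where(diff > threshold, 255, 0).astype(np.uint8)
```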
Alternatively, fig. 20 is a block diagram of the target cell motion recognition module 320 in the target cell recognition apparatus according to the corresponding embodiment shown in fig. 19, and as shown in fig. 20, the target cell motion recognition module 320 includes but is not limited to: a connected component determining submodule 321, a position determining submodule 322 and a motion trajectory determining submodule 323.
A connected domain determining submodule 321, configured to determine a connected domain in the difference image according to the pixel value of each pixel in the difference image;
the position determining submodule 322 is used for respectively performing matching operation on the difference image and the two image frames according to the connected domain, and determining the positions of the target cells corresponding to the connected domain in the two image frames respectively;
and the motion track determining submodule 323 is used for determining the motion track of the target cell according to the corresponding positions of the target cell in the two image frames respectively.
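By way of example, the connected-domain step could rely on SciPy's connected-component labelling (the library choice is an assumption of this sketch):

```python
import numpy as np
from scipy import ndimage

def connected_domains(diff_image):
    """Label the connected domains of foreground pixels in the difference
    image and return the centroid of each domain."""
    mask = np.asarray(diff_image) > 0
    labels, count = ndimage.label(mask)
    return ndimage.center_of_mass(mask, labels, range(1, count + 1))

def motion_vector(pos_a, pos_b):
    """Motion of a target cell: displacement between its matched positions
    in the two image frames."""
    return (pos_b[0] - pos_a[0], pos_b[1] - pos_a[1])
```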
Optionally, fig. 21 is a block diagram of the position determination submodule 322 in the target cell identification apparatus according to the corresponding embodiment shown in fig. 20, and as shown in fig. 21, the position determination submodule 322 includes, but is not limited to: a candidate region determining unit 3221, a candidate image selecting unit 3222, and a position determining unit 3223.
A candidate region determining unit 3221, configured to determine a candidate region from the difference image according to the connected domain;

a candidate image selecting unit 3222, configured to select, according to the corresponding positions of the candidate region in the two image frames and the difference image, candidate images located in the candidate region from the two image frames and the difference image, respectively;

and the position determining unit 3223, configured to determine the positions of the target cells corresponding to the connected domain in the two image frames by performing matching operation on the candidate image in the difference image and the candidate images in the two image frames, respectively.
Alternatively, fig. 22 is a block diagram of the position determining unit 3223 in the target cell identification device according to the corresponding embodiment shown in fig. 21. As shown in fig. 22, the position determining unit 3223 includes, but is not limited to: a candidate image binarization sub-unit 32231, a candidate image matching operation sub-unit 32232, a first position determining sub-unit 32233, and a second position determining sub-unit 32234.
A candidate image binarization sub-unit 32231 configured to binarize candidate images in two image frames, respectively;
a candidate image matching operation subunit 32232, configured to perform matching operation on the candidate image in the difference image and the candidate image binarized in the two image frames respectively;
a first position determining subunit 32233, configured to determine, according to a result of the matching operation, a corresponding position of a target cell corresponding to the connected domain in an image frame if the connected domain only has a foreground intersection with a candidate image of one of the image frames, and then determine, by using a mean shift method, a corresponding position of a target cell corresponding to the connected domain in another image frame;
a second position determining sub-unit 32234, configured to determine, according to the result of the matching operation, the positions of the target cells corresponding to the connected domain in the two image frames, if the connected domain has a foreground intersection with the candidate images of both image frames.
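The branch between the two position-determining cases could look like the following sketch; the helper callables `match` and `mean_shift` are hypothetical stand-ins for the matching operation and the mean-shift search described above.

```python
import numpy as np

def has_foreground_intersection(domain_mask, candidate_binary):
    """True if the connected domain overlaps foreground pixels of the
    binarized candidate image."""
    return bool(np.logical_and(np.asarray(domain_mask) > 0,
                               np.asarray(candidate_binary) > 0).any())

def locate_in_both_frames(domain_mask, cand_a, cand_b, match, mean_shift):
    """Case analysis of the first and second position determining sub-units:
    intersection with both candidates -> direct matching in both frames;
    intersection with only one       -> match there, mean-shift in the other."""
    in_a = has_foreground_intersection(domain_mask, cand_a)
    in_b = has_foreground_intersection(domain_mask, cand_b)
    if in_a and in_b:
        return match(domain_mask, cand_a), match(domain_mask, cand_b)
    if in_a:
        pos_a = match(domain_mask, cand_a)
        return pos_a, mean_shift(pos_a, cand_b)  # search frame B around pos_a
    if in_b:
        pos_b = match(domain_mask, cand_b)
        return mean_shift(pos_b, cand_a), pos_b  # search frame A around pos_b
    return None  # the domain matches neither candidate image
```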
Optionally, fig. 23 is another block diagram of the target cell motion recognition module 320 in the target cell identification device according to the corresponding embodiment shown in fig. 20. As shown in fig. 23, the target cell motion recognition module 320 further includes, but is not limited to: a target cell image extraction sub-module 325, a feature model establishing sub-module 326, a similarity calculation sub-module 327, and a feature model updating sub-module 328.
The target cell image extraction submodule 325 is configured to extract a first target cell image and a second target cell image corresponding to the target cell in the connected domain from the two image frames according to the corresponding positions of the target cell in the two image frames;
the feature model establishing submodule 326 is configured to perform feature extraction on the first target cell image and the second target cell image respectively, and establish respective feature models;
a similarity calculation sub-module 327, configured to calculate the similarity between the first target cell image and the second target cell image according to the feature models of the first target cell image and the second target cell image;
and a feature model updating submodule 328 for updating the feature model by comparing the similarity with a set threshold.
Fig. 24 is a block diagram illustrating a terminal 100 according to an example embodiment. Referring to fig. 24, the terminal 100 may include one or more of the following components: a processing component 101, a memory 102, a power component 103, a multimedia component 104, an audio component 105, a sensor component 107 and a communication component 108. The above components are not all necessary, and the terminal 100 may add other components or reduce some components according to its own functional requirements, which is not limited in this embodiment.
The processing component 101 generally controls overall operations of the terminal 100, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 101 may include one or more processors 109 to execute instructions to perform all or a portion of the above-described operations. Further, the processing component 101 may include one or more modules that facilitate interaction between the processing component 101 and other components. For example, the processing component 101 may include a multimedia module to facilitate interaction between the multimedia component 104 and the processing component 101.
The memory 102 is configured to store various types of data to support operations at the terminal 100. Examples of such data include instructions for any application or method operating on the terminal 100. The memory 102 may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as an SRAM (Static Random Access Memory), an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM (Erasable Programmable Read-Only Memory), a PROM (Programmable Read-Only Memory), a ROM (Read-Only Memory), a magnetic memory, a flash memory, a magnetic disk, or an optical disk. Also stored in the memory 102 are one or more modules configured to be executed by the one or more processors 109 to perform all or a portion of the steps of any of the methods illustrated in fig. 1, 4, 6, 7, 9, 11, 13, 14, and 15.
The power supply component 103 provides power to the various components of the terminal 100. The power components 103 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the terminal 100.
The multimedia component 104 includes a screen providing an output interface between the terminal 100 and the user. In some embodiments, the screen may include an LCD (Liquid Crystal Display) and a TP (touch panel). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The audio component 105 is configured to output and/or input audio signals. For example, the audio component 105 includes a microphone configured to receive external audio signals when the terminal 100 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 102 or transmitted via the communication component 108. In some embodiments, audio component 105 also includes a speaker for outputting audio signals.
The sensor assembly 107 includes one or more sensors for providing various aspects of state assessment for the terminal 100. For example, the sensor assembly 107 can detect an open/close state of the terminal 100, a relative positioning of the components, a change in coordinates of the terminal 100 or a component of the terminal 100, and a change in temperature of the terminal 100. In some embodiments, the sensor assembly 107 may also include a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 108 is configured to facilitate wired or wireless communication between the terminal 100 and other devices. The terminal 100 may access a wireless network based on a communication standard, such as WiFi (Wireless Fidelity), 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 108 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 108 further includes an NFC (Near Field Communication) module to facilitate short-range communication. For example, the NFC module may be implemented based on RFID (Radio Frequency Identification) technology, IrDA (Infrared Data Association) technology, UWB (Ultra-Wideband) technology, BT (Bluetooth) technology, and other technologies.
In an exemplary embodiment, the terminal 100 may be implemented by one or more ASICs (Application Specific Integrated Circuits), DSPs (Digital Signal Processors), PLDs (Programmable Logic Devices), FPGAs (Field Programmable Gate Arrays), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
The specific manner in which the processor in the terminal performs the operations in this embodiment has been described in detail in the embodiment related to the target cell identification method, and will not be elaborated upon here.
Optionally, the present invention further provides a terminal for performing all or part of the steps of the target cell identification method shown in any one of fig. 1, fig. 4, fig. 6, fig. 7, fig. 9, fig. 11, fig. 13, fig. 14 and fig. 15. The terminal includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method according to any one of the above exemplary embodiments.
The specific manner in which the processor in the terminal performs the operations in this embodiment has been described in detail in the embodiment related to the target cell identification method, and will not be described in detail here.
In an exemplary embodiment, a storage medium is also provided. The storage medium is a computer-readable storage medium, for example a transitory or non-transitory computer-readable storage medium including instructions. The storage medium includes, for example, the memory 102, whose instructions are executable by the processor 109 of the terminal 100 to perform the target cell identification method described above.
It is to be understood that the invention is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be effected therein by one skilled in the art without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (13)

1. A method for identifying a target cell, the method comprising:
carrying out binarization on the image frame to obtain a binarized image; a plurality of regions to be identified with the same center and different areas are preset in the binarized image;

determining an identification image from the binarized image according to the pixel value of each pixel in the binarized image;
carrying out image segmentation on the identification image, and determining a background and cells in the identification image;
selecting two image frames from a video to perform inter-frame difference operation to obtain a difference image;
identifying the movement of the target cell according to the pixel value of each pixel in the difference image;
removing non-target cells from the cells of the identification image by morphological analysis, and determining target cells in the identification image;
wherein the step of determining an identification image from the binarized image based on the pixel values of the pixels in the binarized image comprises:
selecting an initial region image positioned in the initial region to be identified from the binarized image according to a preset initial region to be identified;
calculating the cell existence probability of the initial region image according to the pixel value of the pixel in the initial region image to obtain a basic probability;
and sequentially selecting region images located in the regions to be identified of different areas from the binarized image according to the area size of the regions to be identified, and determining an identification image in the binarized image according to the cell existence probability of each region image and the basic probability.
2. The method of claim 1, wherein the step of identifying the movement of the target cell according to the pixel value of each pixel in the difference image comprises:
determining a connected domain in the difference image according to the pixel value of each pixel in the difference image;
respectively performing matching operation on the difference image and the two image frames according to the connected domain, and determining the positions of the target cells corresponding to the connected domain in the two image frames respectively;
and determining the motion trail of the target cell according to the corresponding positions of the target cell in the two image frames respectively.
3. The method according to claim 2, wherein the step of performing matching operation on the difference image and the two image frames respectively according to the connected domain, and determining the positions of the target cells corresponding to the connected domain in the two image frames comprises:
determining a candidate region from the difference image according to the connected domain;
according to the corresponding positions of the candidate areas in the two image frames and the difference image, respectively selecting candidate images in the candidate areas from the two image frames and the difference image;
and respectively performing matching operation on the candidate image in the difference image and the candidate images in the two image frames to determine the corresponding positions of the target cells corresponding to the connected domain in the two image frames.
4. The method according to claim 3, wherein the step of determining the corresponding positions of the target cells corresponding to the connected domain in the two image frames by performing matching operation on the candidate image in the difference image and the candidate images in the two image frames respectively comprises:
respectively carrying out binarization on the candidate images in the two image frames;
respectively performing matching operation on the candidate image in the difference image and the candidate image subjected to binarization in the two image frames;
if the connected domain only has foreground intersection with the candidate image of one image frame, determining the corresponding position of the target cell corresponding to the connected domain in the image frame according to the result of the matching operation, and determining the corresponding position of the target cell corresponding to the connected domain in the other image frame by a mean shift method;
and if the connected domain and the candidate images of the two image frames have foreground intersection, determining the corresponding positions of the target cells corresponding to the connected domain in the two image frames according to the result of matching operation.
5. The method according to claim 2, wherein after the step of matching the difference image with the two image frames according to the connected component to determine the corresponding positions of the target cells in the two image frames, the method further comprises:
extracting a first target cell image and a second target cell image from the two image frames respectively according to the positions of the target cells corresponding to the connected domain in the two image frames respectively;
respectively extracting the characteristics of the first target cell image and the second target cell image, and establishing respective characteristic models;
calculating the similarity between the first target cell image and the second target cell image according to the respective feature models of the first target cell image and the second target cell image;
and updating the characteristic model by comparing the similarity with a set threshold value.
6. A target cell identification apparatus, the apparatus comprising:
the binarization submodule is used for carrying out binarization on the image frame to obtain a binarized image; a plurality of regions to be identified with the same center and different areas are preset in the binarized image;

the identification image determining submodule is used for determining an identification image from the binarized image according to the pixel value of each pixel in the binarized image;
the image segmentation module is used for carrying out image segmentation on the identification image and determining a background and cells in the identification image;
the target cell determining module is used for removing non-target cells from the cells of the identification image through morphological analysis and determining the target cells in the identification image;
wherein the recognition image determination sub-module further comprises:
an initial region image selection unit, configured to select, according to a preset initial region to be identified, an initial region image located in the initial region to be identified from the binarized image;
a basic probability obtaining unit, configured to calculate a cell existence probability of the initial region image according to a pixel value of a pixel in the initial region image, so as to obtain a basic probability;
and the identification image determining unit is used for sequentially selecting the area images of the areas to be identified in different areas from the binarized image according to the area size of the area to be identified, and determining the identification image in the binarized image according to the cell existence probability and the basic probability of the area images.
7. The apparatus of claim 6, further comprising:
the inter-frame difference operation module is used for selecting two image frames from the video to perform inter-frame difference operation to obtain a difference image;
and the target cell motion recognition module is used for identifying the motion of the target cell according to the pixel value of each pixel in the difference image.
8. The apparatus of claim 7, wherein the target cell motion recognition module comprises:
a connected domain determining submodule for determining a connected domain in the difference image according to the pixel value of each pixel in the difference image;
the position determining submodule is used for respectively performing matching operation on the difference image and the two image frames according to the connected domain and determining the positions of the target cells corresponding to the connected domain in the two image frames respectively;
and the motion track determining submodule is used for determining the motion track of the target cell according to the positions of the target cell in the two image frames respectively corresponding to each other.
9. The apparatus of claim 8, wherein the location determination submodule comprises:
a candidate region determining unit, configured to determine a candidate region from the difference image according to the connected domain;
a candidate image selecting unit, configured to select candidate images located in the candidate regions from the two image frames and the difference image according to corresponding positions of the candidate regions in the two image frames and the difference image, respectively;
and the position determining unit is used for respectively performing matching operation on the candidate image in the difference image and the candidate images in the two image frames to determine the positions of the target cells corresponding to the connected domain in the two image frames.
10. The apparatus of claim 9, wherein the position determining unit comprises:
a candidate image binarization subunit, configured to perform binarization on the candidate images in the two image frames respectively;
a candidate image matching operation subunit, configured to perform matching operation on the candidate image in the difference image and the candidate image binarized in the two image frames respectively;
the first position determining subunit is used for determining the corresponding position of the target cell corresponding to the connected domain in the image frame according to the result of matching operation if the connected domain only has foreground intersection with the candidate image of one image frame, and then determining the corresponding position of the target cell corresponding to the connected domain in the other image frame by a mean shift method;
and the second position determining subunit is used for determining the positions of the target cells corresponding to the connected domain in the two image frames respectively according to the matching operation result if the connected domain and the candidate images of the two image frames have foreground intersection.
11. The apparatus of claim 8, wherein the target cell motion recognition module comprises:
the target cell image extraction submodule is used for respectively extracting a first target cell image and a second target cell image from the two image frames according to the positions of the target cells corresponding to the connected domain in the two image frames;
the characteristic model establishing submodule is used for respectively extracting the characteristics of the first target cell image and the second target cell image and establishing respective characteristic models;
the similarity calculation operator module is used for calculating the similarity between the first target cell image and the second target cell image according to the respective characteristic models of the first target cell image and the second target cell image;
and the characteristic model updating submodule is used for updating the characteristic model through the comparison between the similarity and a set threshold value.
12. A terminal, characterized in that the terminal comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
13. A computer-readable storage medium storing a program, wherein the program, when executed, causes a server to perform the method of any one of claims 1-5.
CN201710712810.3A 2017-08-18 2017-08-18 Target cell identification method and device and terminal Active CN107527028B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710712810.3A CN107527028B (en) 2017-08-18 2017-08-18 Target cell identification method and device and terminal


Publications (2)

Publication Number Publication Date
CN107527028A CN107527028A (en) 2017-12-29
CN107527028B true CN107527028B (en) 2020-03-24

Family

ID=60681599

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710712810.3A Active CN107527028B (en) 2017-08-18 2017-08-18 Target cell identification method and device and terminal

Country Status (1)

Country Link
CN (1) CN107527028B (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10281386B2 (en) 2016-05-11 2019-05-07 Bonraybio Co., Ltd. Automated testing apparatus
US10852290B2 (en) 2016-05-11 2020-12-01 Bonraybio Co., Ltd. Analysis accuracy improvement in automated testing apparatus
US10324022B2 (en) 2016-05-11 2019-06-18 Bonraybio Co., Ltd. Analysis accuracy improvement in automated testing apparatus
EP3762934A4 (en) * 2018-03-07 2022-04-06 Verdict Holdings Pty Ltd Methods for identifying biological material by microscopy
CN109191470A (en) * 2018-08-18 2019-01-11 北京洛必达科技有限公司 Image partition method and device suitable for big data image
CN110874734A (en) * 2018-08-31 2020-03-10 北京意锐新创科技有限公司 Dynamic two-dimensional code generation method and device
CN110874898A (en) * 2018-08-31 2020-03-10 北京意锐新创科技有限公司 Cash registering method and device based on mobile payment equipment
CN110910592A (en) * 2018-09-18 2020-03-24 北京意锐新创科技有限公司 Mobile payment equipment-based cash registering method and device applied to parking lot
CN109359569B (en) * 2018-09-30 2022-05-13 桂林优利特医疗电子有限公司 Erythrocyte image sub-classification method based on CNN
CN109767450A (en) * 2018-12-14 2019-05-17 杭州大数云智科技有限公司 A kind of mask method for sperm morphology intelligence diagosis system
CN109685814B (en) * 2019-01-02 2020-10-23 兰州交通大学 Full-automatic cholecystolithiasis ultrasonic image segmentation method based on MSPCNN
CN110349217A (en) * 2019-07-19 2019-10-18 四川长虹电器股份有限公司 A kind of target candidate location estimation method and its device based on depth image
CN110866937A (en) * 2019-10-25 2020-03-06 深圳市瑞图生物技术有限公司 Sperm movement track reconstruction and classification method
CN111401137A (en) * 2020-02-24 2020-07-10 中国建设银行股份有限公司 Method and device for identifying certificate column
CN111458269A (en) * 2020-05-07 2020-07-28 厦门汉舒捷医疗科技有限公司 Artificial intelligent identification method for peripheral blood lymph micronucleus cell image
CN112288704B (en) * 2020-10-26 2021-09-28 中国人民解放军陆军军医大学第一附属医院 Visualization method for quantifying glioma invasiveness based on nuclear density function
CN112481346B (en) * 2020-12-03 2022-12-09 中国人民解放军陆军军医大学第二附属医院 Automatic early warning system and method for detecting abnormal cells in peripheral blood cell morphology
CN112435259B (en) * 2021-01-27 2021-04-02 核工业四一六医院 Cell distribution model construction and cell counting method based on single sample learning
CN113610760B (en) * 2021-07-05 2024-03-12 河海大学 Cell image segmentation tracing method based on U-shaped residual neural network
CN113780145A (en) * 2021-09-06 2021-12-10 苏州贝康智能制造有限公司 Sperm morphology detection method, sperm morphology detection device, computer equipment and storage medium
CN115035057B (en) * 2022-05-31 2023-07-11 中国医学科学院生物医学工程研究所 Aqueous humor cell concentration acquisition method, apparatus, storage medium and device for anterior chamber of eye


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101159859A (en) * 2007-11-29 2008-04-09 北京中星微电子有限公司 Motion detection method, device and an intelligent monitoring system
CN102044069A (en) * 2010-12-01 2011-05-04 华中科技大学 Method for segmenting white blood cell image
CN103530894A (en) * 2013-10-25 2014-01-22 合肥工业大学 Video target tracking method based on multi-scale block sparse representation and system thereof
CN106483129A (en) * 2016-09-23 2017-03-08 电子科技大学 A kind of method of the leukorrhea trichomonad automatic detection based on motion estimate


Similar Documents

Publication Publication Date Title
CN107527028B (en) Target cell identification method and device and terminal
US9934571B2 (en) Image processing device, program, image processing method, computer-readable medium, and image processing system
Xia et al. Automatic identification and counting of small size pests in greenhouse conditions with low computational cost
CN111145209B (en) Medical image segmentation method, device, equipment and storage medium
CN110838126B (en) Cell image segmentation method, cell image segmentation device, computer equipment and storage medium
Madhloom et al. An image processing application for the localization and segmentation of lymphoblast cell using peripheral blood images
CN109684981B (en) Identification method and equipment of cyan eye image and screening system
EP3455782A1 (en) System and method for detecting plant diseases
CN106228556B (en) image quality analysis method and device
US10395091B2 (en) Image processing apparatus, image processing method, and storage medium identifying cell candidate area
CN108257124B (en) Image-based white blood cell counting method and system
CN109697716B (en) Identification method and equipment of cyan eye image and screening system
Zhu et al. Automated counting of bacterial colonies on agar plates based on images captured at near-infrared light
WO2019184851A1 (en) Image processing method and apparatus, and training method for neural network model
CN114240978B (en) Cell edge segmentation method and device based on adaptive morphology
Koniar et al. Machine vision application in animal trajectory tracking
CN110599514B (en) Image segmentation method and device, electronic equipment and storage medium
CN113947613B (en) Target area detection method, device, equipment and storage medium
CN115439456A (en) Method and device for detecting and identifying object in pathological image
CN111325773A (en) Method, device and equipment for detecting moving target and readable storage medium
CN111667419A (en) Moving target ghost eliminating method and system based on Vibe algorithm
Romero-Rondón et al. Algorithm for detection of overlapped red blood cells in microscopic images of blood smears
CN107818287B (en) Passenger flow statistics device and system
CN115690092B (en) Method and device for identifying and counting amoeba cysts in corneal confocal image
Vigneron et al. Adaptive filtering and hypothesis testing: Application to cancerous cells detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant