CN109064475A - For the image partition method and device of cervical exfoliated cell image - Google Patents


Info

Publication number
CN109064475A
CN109064475A (application number CN201811058091.9A)
Authority
CN
China
Prior art keywords
image
sub
edge
block
gray
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811058091.9A
Other languages
Chinese (zh)
Inventor
郏东耀
李玉娟
曾强
庄重
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Brilliant Yaoqiang Technology Co Ltd
Original Assignee
Shenzhen Brilliant Yaoqiang Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Brilliant Yaoqiang Technology Co Ltd filed Critical Shenzhen Brilliant Yaoqiang Technology Co Ltd
Priority to CN201811058091.9A priority Critical patent/CN109064475A/en
Publication of CN109064475A publication Critical patent/CN109064475A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

This application discloses an image segmentation method and apparatus for cervical exfoliated cell images. The method includes: dividing an acquired first image of cervical exfoliated cells into several sub-blocks; classifying each sub-block based on its gray values and performing edge extraction on the sub-blocks that contain image edges; stitching all sub-blocks in order into a second image, in which the extracted edges in the sub-blocks form a region of interest; and determining an initial closed contour curve of the region of interest, with the position vector of the initial contour curve converging to the target edge under the gradient vector of the second image. The method of the application first divides the image into sub-blocks and performs edge extraction on each sub-block separately, which reduces the processing complexity of the image processing, while multiple sub-blocks can be analyzed and processed in parallel to increase processing speed; after the images are stitched, the region of interest is segmented a second time, so that adhered cell regions can be separated and the accuracy of segmentation improved.

Description

Image segmentation method and device for cervical exfoliated cell image
Technical Field
The present application relates to the field of image processing, and in particular, to an image segmentation method and apparatus for an image of a cervical exfoliated cell.
Background
Conventional image segmentation algorithms fall into the following categories: edge-based segmentation, threshold segmentation, region segmentation, and genetic-algorithm segmentation. Among machine-learning-based segmentation methods, the prior art includes algorithms based on support vector machines, random forests, Markov random fields, and conditional random fields. However, because of the characteristics of cell images, such as small cell areas, high similarity between objects, and overlapping and occlusion of objects, conventional segmentation methods do not give ideal results on them. In recent years, in the field of cell image segmentation, watershed-transform techniques based on mathematical morphology have been applied to object segmentation, for example the watershed transform based on distance transform, algorithms that reconstruct the distance-transformed image with a fast gray-scale reconstruction algorithm, adaptive distance-transform algorithms, and the watershed transform based on a gradient image. Although these algorithms locate the edges of the target object well, make full use of the edge information of the image, and segment the complete contour of the target object, they produce under-segmentation for targets that have no obvious boundary and are easily confused with the image background. Accurately and quickly segmenting the cells in a cervical exfoliated cell image to obtain a Region of Interest (ROI) is an important basis for the later identification and classification of cervical cells. In the processes of smear preparation and image acquisition, impurities and interference are inevitably mixed into the image, so the image suffers from noise, blur, uneven gray scale, and similar problems, which add difficulty and challenge to accurate image segmentation.
The above method does not segment the cervical cell image well.
Disclosure of Invention
It is an object of the present application to overcome the above problems or to at least partially solve or mitigate the above problems.
According to an aspect of the present application, there is provided an image segmentation method for an image of a cervical exfoliated cell, including:
an image dividing step: dividing the acquired first image of the exfoliated cervical cells into a plurality of sub-blocks;
an edge extraction step: classifying each sub-block based on the gray value, and performing edge extraction on the sub-blocks including the image edges;
image splicing: splicing all the sub-blocks into a second image in sequence, wherein the extracted edges in the sub-blocks form an interested area;
cell population segmentation step: determining an initial contour closed curve of the region of interest, and converging a position vector of the initial contour curve to a target edge based on a gradient vector of the second image.
According to the method, the image can be firstly divided into the sub-blocks, the sub-blocks are respectively subjected to edge extraction, the processing complexity of image processing is reduced, meanwhile, a plurality of sub-blocks can be analyzed and processed in parallel, and the processing speed is improved; after the images are spliced, the interested area is subjected to secondary segmentation, the adhered cells can be distinguished, and the segmentation accuracy is improved.
Optionally, the edge extracting step includes:
gray value calculation: calculating the gray variance and the gray mean of the sub-blocks;
a classification step: comparing the gray variance and the gray mean with a preset first threshold and a preset second threshold, respectively, and dividing the sub-blocks into a foreground category, a background category, and a category containing both foreground and background content;
an edge segmentation step: and for the sub-block of the category containing both foreground and background contents, performing edge segmentation on the sub-block by using a global maximum inter-class variance threshold.
With this method, the sub-blocks can be classified rapidly based on the gray levels of the image; the sub-blocks of the foreground and background categories need no further analysis, and the analysis concentrates on the sub-blocks of the category containing both foreground and background content. The amount of data to be analyzed is therefore greatly reduced, and the analysis speed is improved.
Optionally, in the edge segmentation step:
the global maximum between-class variance threshold K satisfies σ_B²(K) = max σ_B²(k) over all candidate thresholds k, where the between-class variance σ_B² is calculated using the formula:

σ_B²(k) = ω_1(μ_1 − μ_T)² + ω_2(μ_2 − μ_T)²

with p_q = n_q/n, ω_1 = Σ_{q=0..k−1} p_q, ω_2 = Σ_{q=k..L−1} p_q, μ_1 = (1/ω_1) Σ_{q=0..k−1} r_q p_q, μ_2 = (1/ω_2) Σ_{q=k..L−1} r_q p_q, and μ_T = Σ_{q=0..L−1} r_q p_q;

where n is the total number of pixels of the sub-block, L is the total number of gray levels of the sub-block, n_q is the number of pixels with gray level r_q, q = 0, 1, 2, …, L−1; foreground pixels in the sub-block belong to set C1, whose gray levels lie in [0, 1, …, K−1]; background pixels in the sub-block belong to set C2, whose gray levels lie in [K, K+1, …, L−1].
Optionally, the cell population dividing step comprises:
contour determination: determining an initial contour closed curve of the region of interest;
a position vector calculation step: determining a gradient vector of the second image based on an edge of the region of interest, wherein the gradient vector represents an external force; acquiring a position vector of the initial contour closed curve, wherein the position vector represents an internal force; and under the action of the gradient vector, the position of the position vector is continuously changed until the external force and the internal force are balanced, and at the moment, the position vector converges to a target edge.
The method can adopt a segmentation method based on a GVF Snake model to carry out secondary segmentation on the cell population. The main idea of this model is to minimize the energy. That is, first, an initial contour closed curve is roughly defined around the segmented object, and the curve can be represented by an energy functional. And solving the energy functional under a certain approximation rule to enable the initial contour curve to approach the target edge continuously and finally converge to the target edge.
Optionally, after the contour determining step, the method further comprises:
a first judgment step: and executing the position vector calculation step under the condition that the initial contour closed curve is judged not to conform to the basic morphology of the cell.
Optionally, after the position vector calculating step, the method further comprises:
a second judgment step: and judging the region surrounded by the target edge as an impurity when the target edge is judged not to conform to the basic morphology of the cell.
According to another aspect of the present application, there is also provided an image segmentation apparatus for an image of a cervical exfoliated cell, including:
an image dividing module configured to divide the acquired first image of cervical exfoliated cells into a number of sub-blocks;
an edge extraction module configured to classify each sub-block based on the gray values, perform edge extraction on sub-blocks including edges of the image;
an image stitching module configured to stitch all sub-blocks in sequence into a second image, the extracted edges in the sub-blocks constituting a region of interest;
a cell population segmentation module configured to determine an initial contour closed curve of the region of interest, a location vector of the initial contour curve converging to a target edge based on a gradient vector of the second image.
The device can firstly divide the image into the sub-blocks and respectively carry out edge extraction on the sub-blocks, thereby reducing the processing complexity of image processing, simultaneously analyzing and processing a plurality of sub-blocks in parallel and improving the processing speed; after the images are spliced, the interested area is subjected to secondary segmentation, the adhered cells can be distinguished, and the segmentation accuracy is improved.
Optionally, the edge extraction module includes:
a gray value calculation module configured to calculate a gray variance and a gray mean of the sub-blocks;
a classification module configured to compare the gray variance and the gray mean with a preset first threshold and a preset second threshold, respectively, and to classify the subblocks into a foreground category, a background category, and a category containing both foreground and background content;
an edge segmentation module configured to edge segment a sub-block of a class that contains both foreground and background content with a global maximum inter-class variance threshold.
According to another aspect of the application, there is also provided a computer-readable storage medium, preferably a non-volatile readable storage medium, having stored therein a computer program which, when executed by a processor, implements the method as described above.
According to another aspect of the present application, there is also provided a computer program product comprising computer readable code which, when executed by a computing device, causes the computing device to perform the method as described above.
The above and other objects, advantages and features of the present application will become more apparent to those skilled in the art from the following detailed description of specific embodiments thereof, taken in conjunction with the accompanying drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. Some specific embodiments of the present application will be described in detail hereinafter by way of illustration and not limitation with reference to the accompanying drawings. The same reference numbers in the drawings identify the same or similar elements or components. Those skilled in the art will appreciate that the drawings are not necessarily drawn to scale. In the drawings:
FIG. 1 is a diagram illustrating a hardware configuration of a computer device for executing an image segmentation method according to an embodiment of the present application;
FIG. 2 is a schematic flow diagram of an image segmentation method according to an embodiment of the present application;
FIG. 3 is a schematic flow chart diagram of an image segmentation method according to another embodiment of the present application;
FIG. 4 is a schematic flow chart diagram of an image segmentation method according to another embodiment of the present application;
FIGS. 5a to 5f are schematic diagrams of segmentation results according to the image segmentation method of the present application;
FIG. 6 is a schematic block diagram of an image segmentation apparatus according to one embodiment of the present application;
FIG. 7 is a block diagram of one embodiment of a computing device of the present application;
FIG. 8 is a block diagram of one embodiment of a computer-readable storage medium of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
There is also provided, in accordance with an embodiment of the present application, an embodiment of a method for image segmentation of an image of exfoliated cervical cells, it being noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than here.
The method provided by the first embodiment of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Fig. 1 is a schematic diagram of a hardware structure of a computer device for executing an image segmentation method according to an embodiment of the present application. As shown in fig. 1, the computer device 10 (or mobile device 10) may include one or more processors (shown as 102a, 102b, ..., 102n, which may include, but are not limited to, processing devices such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission module for communication functions. In addition, the computer device may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power source, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and does not limit the structure of the electronic device. For example, computer device 10 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
It should be noted that the one or more processors and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuit may be a single stand-alone processing module, or incorporated in whole or in part into any of the other elements in the computer apparatus 10 (or mobile device). As referred to in the embodiments of the application, the data processing circuit acts as a processor control (e.g. selection of a variable resistance termination path connected to the interface).
The memory 104 can be used to store software programs and modules of application software, such as the program instructions/data storage device corresponding to the image segmentation method for cervical exfoliated cell images in the embodiments of the present application. The processor executes various functional applications and data processing by running the software programs and modules stored in the memory 104, that is, it implements the above-described method. The memory 104 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor, which may be connected to the computer device 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of computer device 10. In one example, the transmission device includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer device 10 (or mobile device).
Under the operating environment, the application provides an image segmentation method for an image of a cervical exfoliated cell. Fig. 2 is a schematic flow diagram of a method of image segmentation according to an embodiment of the present application, which may include:
s100, image dividing step: the acquired first image of the exfoliated cervical cells is divided into a number of sub-blocks. For example, it may be divided into N × N sub-blocks. Each sub-block is rectangular or square.
S300, edge extraction: and classifying each sub-block based on the gray value, and performing edge extraction on the sub-blocks including the image edge.
S500, image splicing: and splicing all the sub-blocks into a second image in sequence, wherein the extracted edges in the sub-blocks form an interested area.
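A minimal sketch of the stitching step (row-major block order is assumed; this is not the patent's own code):

```python
import numpy as np

def stitch_blocks(blocks: list, n: int) -> np.ndarray:
    """Reassemble an n x n grid of sub-blocks, listed in row-major
    order, into a single second image (the inverse of the dividing step)."""
    rows = [np.hstack(blocks[i * n:(i + 1) * n]) for i in range(n)]
    return np.vstack(rows)

image = np.arange(36, dtype=np.uint8).reshape(6, 6)
blocks = [image[i * 3:(i + 1) * 3, j * 3:(j + 1) * 3]
          for i in range(2) for j in range(2)]
second_image = stitch_blocks(blocks, 2)
```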
S700 cell population segmentation step: determining an initial contour closed curve of the region of interest, and converging a position vector of the initial contour curve to a target edge based on a gradient vector of the second image.
According to the method, the image can be firstly divided into the sub-blocks, the sub-blocks are respectively subjected to edge extraction, the processing complexity of image processing is reduced, meanwhile, a plurality of sub-blocks can be analyzed and processed in parallel, and the processing speed is improved; after the images are spliced, the interested area is subjected to secondary segmentation, the adhered cells can be distinguished, and the segmentation accuracy is improved.
Optionally, the S300 edge extracting step includes:
gray value calculation: calculating the gray variance and the gray mean of the sub-blocks.
Classification step: comparing the gray variance and the gray mean with a preset first threshold T1 and a preset second threshold T2, respectively, and dividing the sub-blocks into a foreground category, a background category, and a category containing both foreground and background content. T1 and T2 are set according to the actual conditions of the image.
An edge segmentation step: and for the sub-block of the category containing both foreground and background contents, performing edge segmentation on the sub-block by using a global maximum inter-class variance threshold. For the sub-block of the category, the sub-block is regarded as a complete image for segmentation.
With this method, the sub-blocks can be classified rapidly based on the gray levels of the image; the sub-blocks of the foreground and background categories need no further analysis, and the analysis concentrates on the sub-blocks of the category containing both foreground and background content. The amount of data to be analyzed is therefore greatly reduced, and the analysis speed is improved.
Fig. 3 is a schematic flow diagram of an image segmentation method according to another embodiment of the present application. Referring to fig. 3, the classifying step may include:
graying the sub-block, calculating the gray variance V of the sub-block, and comparing V with the first threshold T1;
if V > T1, calculating the gray mean M of the sub-block and comparing M with the second threshold T2;
if M > T2, the sub-block is divided into the foreground category; if not, it is divided into the background category;
if V ≤ T1, the sub-block is divided into the category containing both foreground and background content.
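This decision flow can be sketched as follows; the threshold values are illustrative only (the patent sets T1 and T2 per image), and the branch labels follow the text as given:

```python
import numpy as np

# Illustrative thresholds; the patent sets T1 and T2 per image.
T1, T2 = 100.0, 128.0

def classify_block(block: np.ndarray) -> str:
    """Classify a grayed sub-block from its gray variance V and gray
    mean M, following the branches described above."""
    v = float(block.var())
    if v > T1:
        m = float(block.mean())
        return "foreground" if m > T2 else "background"
    return "foreground+background"

bright = np.tile([200, 255], 8).reshape(4, 4).astype(float)
dark = np.tile([0, 60], 8).reshape(4, 4).astype(float)
uniform = np.full((4, 4), 128.0)
```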
The core idea of the global maximum between-class variance threshold method is that when the threshold value of image segmentation is selected, the variance between the foreground average gray value, the background average gray value and the average gray value of the whole image should be maximized. The method can be used for segmentation in the present application.
Referring to fig. 3, in the edge segmentation step: for a sub-block of the category containing both foreground and background content, a threshold K is calculated with the global maximum between-class variance threshold method, the pixel values F(x, y) of the sub-block image are compared with the threshold K, and each pixel of the sub-block is thereby divided into a foreground pixel or a background pixel, realizing the edge segmentation of the cells in the sub-block.
The threshold K of the global maximum between-class variance satisfies σ_B²(K) = max σ_B²(k) over all candidate thresholds k, where the between-class variance σ_B² is calculated using the formula:

σ_B²(k) = ω_1(μ_1 − μ_T)² + ω_2(μ_2 − μ_T)²

with p_q = n_q/n, ω_1 = Σ_{q=0..k−1} p_q, ω_2 = Σ_{q=k..L−1} p_q, μ_1 = (1/ω_1) Σ_{q=0..k−1} r_q p_q, μ_2 = (1/ω_2) Σ_{q=k..L−1} r_q p_q, and μ_T = Σ_{q=0..L−1} r_q p_q;

where n is the total number of pixels of the sub-block, L is the total number of gray levels of the sub-block, n_q is the number of pixels with gray level r_q, q = 0, 1, 2, …, L−1; foreground pixels in the sub-block belong to set C1, whose gray levels lie in [0, 1, …, K−1]; background pixels in the sub-block belong to set C2, whose gray levels lie in [K, K+1, …, L−1].
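The exhaustive search for K implied by this criterion can be sketched as a straightforward (unoptimized) reference implementation:

```python
import numpy as np

def otsu_threshold(block: np.ndarray, levels: int = 256) -> int:
    """Search the gray level K that maximizes the between-class
    variance sigma_B^2(K) over the sub-block's histogram."""
    hist = np.bincount(block.ravel().astype(int), minlength=levels)
    p = hist / hist.sum()                 # p_q = n_q / n
    r = np.arange(levels, dtype=float)    # gray levels r_q
    mu_t = (r * p).sum()                  # global mean mu_T
    best_k, best_var = 0, -1.0
    for k in range(1, levels):
        w1 = p[:k].sum()                  # weight of C1 = [0 .. K-1]
        w2 = 1.0 - w1                     # weight of C2 = [K .. L-1]
        if w1 == 0.0 or w2 == 0.0:
            continue
        mu1 = (r[:k] * p[:k]).sum() / w1
        mu2 = (r[k:] * p[k:]).sum() / w2
        var_b = w1 * (mu1 - mu_t) ** 2 + w2 * (mu2 - mu_t) ** 2
        if var_b > best_var:
            best_k, best_var = k, var_b
    return best_k

block = np.array([[10, 12, 11, 10],
                  [200, 210, 205, 12],
                  [11, 198, 202, 10],
                  [12, 11, 201, 207]], dtype=np.uint8)
K = otsu_threshold(block)
binary = block >= K   # foreground/background split of the sub-block
```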
The global maximum between-class variance method is fast, but it is sensitive to noise and to the size of the target. In practice, conditions such as uneven illumination and cell overlap frequently occur in cell smear images, and all of them affect the threshold selected in the edge segmentation step. Therefore the image is first divided into blocks, each block is segmented with the maximum between-class variance threshold method, and the segmented blocks are then merged back in block order. This avoids the influence of uneven illumination on the overall gray scale of the image, and hence on the segmentation of individual cells. Because the gray statistics are computed independently for each small sub-block, segmentation errors caused by uneven illumination of the cell smear image are avoided.
Optionally, the S700 cell population dividing step comprises:
contour determination: determining an initial contour closed curve of the region of interest;
a position vector calculation step: determining a gradient vector of the second image based on an edge of the region of interest, wherein the gradient vector represents an external force; acquiring a position vector of the initial contour closed curve, wherein the position vector represents an internal force; and under the action of the gradient vector, the position of the position vector is continuously changed until the external force and the internal force are balanced, and at the moment, the position vector converges to a target edge.
The method can perform secondary segmentation of the cell population using a segmentation method based on the GVF Snake model. The main idea of this model is energy minimization. That is, an initial closed contour curve is first roughly defined around the target to be segmented, and the curve is represented by an energy functional. The energy functional is solved under a certain approximation rule so that the initial contour curve approaches the target edge continuously and finally converges to the target edge, where the energy functional of the curve reaches its minimum. The specific process is as follows:
First, an initial closed contour curve X(s) = [x(s), y(s)], s ∈ [0, 1], is defined around the target to be segmented. The energy functional E_snake of the curve is then defined as:

E_snake = ∫₀¹ [E_internal(X(s)) + E_external(X(s))] ds,

where

E_internal = ½ (α(s)|X′(s)|² + β(s)|X″(s)|²),
E_external = E_ext(X(s)).

Here E_internal is the internal energy; the two terms in its definition are respectively the first derivative and the second derivative of the curve, representing the slope at a pixel point on the curve and the curvature at that point. Together they keep the curve continuous and smooth while it approaches the target curve. α(s) is the elasticity coefficient of the curve; when α(s) = 0 a break point will occur in the curve. β(s) is the stiffness coefficient of the curve; when β(s) = 0 a corner point may occur in the curve. In general, α(s) and β(s) are taken as constants. E_external is the external energy; its value is closely related to the local information of the image and makes the curve approach the target curve continuously. Its concrete formula is:

E_ext(x, y) = −|∇I(x, y)|²

or:

E_ext(x, y) = −|∇(G_σ(x, y) ∗ I(x, y))|²,

where G_σ is a Gaussian function with standard deviation σ, I(x, y) is the original image, and ∇ is the gradient operator.

To make the preset contour curve coincide with the target edge, the energy functional E_snake is minimized; E_snake must satisfy the following Euler equation:

αX″(s) − βX⁗(s) − ∇E_ext = 0.
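For intuition, a discretized version of the energy functional on a closed polygonal contour might look like this (finite differences; the α, β values and the flat external-energy image are illustrative assumptions, not the patent's parameters):

```python
import numpy as np

def snake_energy(X: np.ndarray, ext: np.ndarray,
                 alpha: float = 0.1, beta: float = 0.05) -> float:
    """Discrete E_snake for a closed contour X of shape (N, 2) in
    (x, y) pixel coordinates: internal term from first/second finite
    differences plus the external energy sampled at contour points."""
    d1 = np.roll(X, -1, axis=0) - X                              # ~ X'(s)
    d2 = np.roll(X, -1, axis=0) - 2 * X + np.roll(X, 1, axis=0)  # ~ X''(s)
    internal = 0.5 * (alpha * (d1 ** 2).sum(axis=1)
                      + beta * (d2 ** 2).sum(axis=1))
    xi = np.rint(X).astype(int)
    external = ext[xi[:, 1], xi[:, 0]]
    return float((internal + external).sum())

ext = np.zeros((64, 64))  # flat external energy: only internal terms matter
t = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
smooth = np.stack([32 + 10 * np.cos(t), 32 + 10 * np.sin(t)], axis=1)
r = np.where(np.arange(32) % 2 == 0, 10.0, 13.0)  # oscillating radius
jagged = np.stack([32 + r * np.cos(t), 32 + r * np.sin(t)], axis=1)
```

As the internal-energy terms predict, the smooth circle scores lower than the jagged one, so minimization favors continuous, smooth contours.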
alternatively, the GVF Snake model can be used to perform a second segmentation of the cell population.
The GVF Snake model uses a Gradient Vector Flow (GVF) field in place of the Gaussian potential energy field of the traditional model; its mathematical foundation is the Helmholtz theorem from electromagnetic field theory. The GVF field V(x, y) = (u(x, y), v(x, y)) of the GVF Snake model minimizes the following energy functional:

ε = ∬ [ μ(u_x² + u_y² + v_x² + v_y²) + |∇f|² |V − ∇f|² ] dx dy,

where f(x, y) is the edge map of the image, μ is a regularization parameter, and the subscripts denote partial derivatives. Using the GVF field V(x, y) as the external force in place of the force derived from E_external above, the other calculation formulas are the same as or similar to those of the Snake model.
Compared with a Gaussian potential energy field, the GVF field obtains a gradient vector diagram of the whole image, so that the action range of an external force field is larger. This also means that even if the selected initial contour is far from the target contour, it will eventually converge to the target contour through successive approximation. Meanwhile, after the external force action range is enlarged, the external force action of the concave part at the target contour is enlarged, so that the boundary can be converged to the concave part.
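A minimal sketch of computing the GVF field by explicit gradient descent on the functional above (the μ value, iteration count, and normalized edge map are illustrative assumptions):

```python
import numpy as np

def laplacian(a: np.ndarray) -> np.ndarray:
    """5-point Laplacian with replicated borders."""
    p = np.pad(a, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1]
            + p[1:-1, :-2] + p[1:-1, 2:] - 4 * a)

def gvf_field(f: np.ndarray, mu: float = 0.2, iters: int = 200):
    """Gradient Vector Flow (u, v) for an edge map f, by explicit
    gradient descent on the GVF energy functional."""
    fy, fx = np.gradient(f.astype(float))
    mag2 = fx ** 2 + fy ** 2
    u, v = fx.copy(), fy.copy()
    for _ in range(iters):
        u = u + mu * laplacian(u) - mag2 * (u - fx)
        v = v + mu * laplacian(v) - mag2 * (v - fy)
    return u, v

f = np.zeros((16, 16))
f[6:10, 6:10] = 1.0          # a small bright "cell" as the edge map
u, v = gvf_field(f)
fy, fx = np.gradient(f)
```

The diffusion term spreads the edge forces into flat regions, illustrating the enlarged range of action of the external force field described above.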
Fig. 4 is a schematic flow chart of an image segmentation method according to another embodiment of the present application. Optionally, after the contour determining step, the method may further comprise:
a first judgment step: and executing the position vector calculation step under the condition that the initial contour closed curve is judged not to conform to the basic morphology of the cell.
Optionally, after the position vector calculating step, the method may further include:
a second judgment step: and judging the region surrounded by the target edge as an impurity when the target edge is judged not to conform to the basic morphology of the cell.
The conditions for testing the basic morphology of the cells are:
(1) area test: whether the number of pixels N_p in the ROI (i.e. the area of the ROI) falls within the range [N_min, N_max] of normal cell areas;
(2) deformity test: the deformity degree of the ROI is calculated with the simple formula γ = l/N_p, where l is the perimeter of the ROI. A deformity threshold γ_T is set; the test passes when γ ≤ γ_T.
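The two tests above can be sketched for a single ROI given as a binary mask; the threshold values `n_min`, `n_max`, and `gamma_t`, and the 4-neighbour perimeter estimate, are illustrative assumptions rather than values fixed by the patent:

```python
import numpy as np

def passes_morphology_test(mask, n_min=50, n_max=5000, gamma_t=0.5):
    """Area and deformity test for one ROI given as a boolean mask.

    N_p   : pixel count of the ROI (its area)
    l     : perimeter, estimated here as the number of ROI pixels
            having at least one 4-connected background neighbour
    gamma : l / N_p, the simple deformity measure from the text
    """
    n_p = int(mask.sum())
    if not (n_min <= n_p <= n_max):        # (1) area test
        return False
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    l = n_p - int((mask & interior).sum()) # boundary pixel count
    return (l / n_p) <= gamma_t            # (2) deformity test
```

A compact round region passes both tests, while a thin elongated region (high γ) is rejected as a likely impurity.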
If the test conditions pass, the ROI is a cell image, and its features are extracted for further cell identification. ROI regions that fail the test (possibly cell populations or impurity-block clusters) are subjected to secondary segmentation by the segmentation method based on the GVF Snake model, and the secondary segmentation result is then subjected to the same morphology test. Regions that still fail are judged to be impurities and are directly discarded; regions that pass are cell images, whose features are extracted for further cell identification.
Fig. 5a to 5f are schematic diagrams of segmentation results obtained by the image segmentation method of the present application. Fig. 5a shows a local area of an image of exfoliated cells of the cervix. Fig. 5b is the binarized image produced by the edge extraction step; comparing Fig. 5a and 5b shows that the method separates the background and foreground of the image well, although some impurities are also segmented as foreground. Fig. 5c is a schematic diagram of the binarization after the shape test; image comparison shows that the ROI regions in Fig. 5b with too large or too small an area, or with a high degree of deformity (i.e. the impurities), are eliminated by the morphology test algorithm. Fig. 5d is the edge detection map of Fig. 5c, whose purpose is to define an initial contour closed curve for the GVF Snake model; because this curve largely coincides with the image edges and lies close to the actual cell edges, it accelerates the approximation of the GVF Snake model to the target curve. Fig. 5e is the binarized image obtained after segmentation by the GVF Snake model; since the Snake model solves an energy functional so that the initial contour curve continuously approximates the target edge, and the improved GVF Snake model converges better on concave edges, comparison of Fig. 5c and 5e shows that originally adhered cell groups are accurately segmented. Fig. 5f shows the final segmented foreground images, whose size is set to 70 × 70 for post-processing. In conclusion, after the image is segmented, interference pixels such as impurities are well eliminated, and the segmentation effect on cells is ideal.
The method provided by the application achieves a remarkable segmentation effect on images of cervical exfoliated cells: it segments cells accurately and quickly, effectively screens out cell populations and impurity blocks, converges well on concave edges, and eliminates interference pixels such as impurities, giving a relatively ideal segmentation result for the cells.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
Example 2
According to the embodiment of the present application, there is also provided an image segmentation apparatus for an image of exfoliated cervical cells, which is an apparatus corresponding to the method described in embodiment 1. Fig. 6 is a schematic block diagram of an image segmentation apparatus according to an embodiment of the present application. The apparatus may include:
an image dividing module 100 configured to divide the acquired first image of cervical exfoliated cells into a number of sub-blocks;
an edge extraction module 300 configured to classify each sub-block based on gray values and perform edge extraction on the sub-blocks that include image edges;
an image stitching module 500 configured to stitch all sub-blocks in sequence into a second image, the extracted edges in the sub-blocks constituting a region of interest;
a cell population segmentation module 700 configured for determining an initial contour closed curve of the region of interest, the position vector of the initial contour curve converging to a target edge based on a gradient vector of the second image.
The device can firstly divide the image into the sub-blocks and respectively carry out edge extraction on the sub-blocks, thereby reducing the processing complexity of image processing, simultaneously analyzing and processing a plurality of sub-blocks in parallel and improving the processing speed; after the images are spliced, the interested area is subjected to secondary segmentation, the adhered cells can be distinguished, and the segmentation accuracy is improved.
Optionally, the edge extraction module 300 may include:
a gray value calculation module configured to calculate a gray variance and a gray mean of the sub-blocks.
A classification module configured to compare the gray variance and the gray mean with a preset first threshold and a preset second threshold, respectively, and to classify the sub-blocks into a foreground class, a background class, and a class containing both foreground and background content.
An edge segmentation module configured to edge segment a sub-block of a class that contains both foreground and background content with a global maximum inter-class variance threshold.
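A compact sketch (assuming 8-bit input with 256 gray levels; not the patent's own code) of picking the global maximum between-class variance (Otsu) threshold used by the edge segmentation module:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level K maximizing the between-class variance
    sigma_B^2(K) for a 2-D uint8 image (the Otsu criterion)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                  # gray-level probabilities
    omega = np.cumsum(p)                   # class-1 probability up to level k
    mu = np.cumsum(p * np.arange(256))     # cumulative mean up to level k
    mu_t = mu[-1]                          # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b2 = np.nan_to_num(sigma_b2)     # classes of zero weight score 0
    return int(np.argmax(sigma_b2))
```

The chosen K then splits the sub-block into the foreground set C1 ([0, K−1]) and background set C2 ([K, L−1]).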
Wherein the classification module may be configured to: gray the sub-block and calculate its gray variance V, and compare V with the first threshold T1; if V > T1, calculate the gray mean M of the sub-block and compare M with the second threshold T2; if M > T2, classify the sub-block into the foreground category, otherwise into the background category; if V ≤ T1, classify the sub-block into the category containing both foreground and background content.
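The branching just described can be sketched as follows; the threshold values T1 and T2 are illustrative assumptions, and the branch directions follow the text above:

```python
import numpy as np

def classify_block(block, t1=100.0, t2=128.0):
    """Classify one grayscale sub-block per the scheme in the text:
    variance above T1 -> a pure block, decided foreground/background
    by its mean; variance at or below T1 -> a block containing both
    foreground and background content."""
    v = float(block.var())
    if v > t1:
        return "foreground" if float(block.mean()) > t2 else "background"
    return "mixed"
```

Only the "mixed" sub-blocks then go through the per-block Otsu edge segmentation, which keeps the thresholding work proportional to the number of edge-bearing blocks.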
Optionally, the cell population segmentation module 700 may include:
a contour determination module configured for determining an initial contour closed curve of the region of interest;
a location vector calculation module configured to determine a gradient vector of the second image based on an edge of the region of interest, wherein the gradient vector represents an external force; acquiring a position vector of the initial contour closed curve, wherein the position vector represents an internal force; and under the action of the gradient vector, the position of the position vector is continuously changed until the external force and the internal force are balanced, and at the moment, the position vector converges to a target edge.
Optionally, the apparatus may further include:
a first determining module configured to execute the position vector calculating module if the initial contour closed curve is determined not to conform to the basic morphology of the cell.
Optionally, the apparatus may further include:
and a second judging module configured to judge the region surrounded by the target edge as an impurity, when the target edge is judged not to conform to the basic morphology of the cell.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the part of the technical solution of the present application that contributes in substance over the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, or a magnetic or optical disk.
Example 3
An aspect of embodiments of the present application provides a computing device, referring to fig. 7, comprising a memory 1120, a processor 1110 and a computer program stored in said memory 1120 and executable by said processor 1110, the computer program being stored in a space 1130 for program code in the memory 1120, the computer program realizing, when executed by the processor 1110, a method step 1131 for performing any of the methods according to the present application.
An aspect of embodiments of the present application also provides a computer-readable storage medium. Referring to fig. 8, the computer readable storage medium comprises a storage unit for program code provided with a program 1131' for performing the steps of the method according to the present application, the program being executed by a processor.
An aspect of an embodiment of the present application also provides a computer program product containing instructions, including computer readable code, which when executed by a computing device, causes the computing device to perform the method as described above.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example from one website, computer, server, or data center to another via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium accessible to a computer, or a data storage device such as a server or data center incorporating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be understood by those skilled in the art that all or part of the steps in the method for implementing the above embodiments may be implemented by a program, and the program may be stored in a computer-readable storage medium, where the storage medium is a non-transitory medium, such as a random access memory, a read only memory, a flash memory, a hard disk, a solid state disk, a magnetic tape (magnetic tape), a floppy disk (floppy disk), an optical disk (optical disk), and any combination thereof.
The above description is only for the preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An image segmentation method for an image of exfoliated cervical cells, comprising:
an image dividing step: dividing the acquired first image of the exfoliated cervical cells into a plurality of sub-blocks;
an edge extraction step: classifying each sub-block based on the gray value, and performing edge extraction on the sub-blocks including the image edges;
image splicing: splicing all the sub-blocks into a second image in sequence, wherein the extracted edges in the sub-blocks form an interested area; and
cell population segmentation step: determining an initial contour closed curve of the region of interest, and converging a position vector of the initial contour curve to a target edge based on a gradient vector of the second image.
2. The method of claim 1, wherein the edge extracting step comprises:
gray value calculation: calculating the gray variance and the gray mean of the sub-blocks;
and (3) classification step: respectively comparing the gray variance and the gray mean value with a preset first threshold value and a preset second threshold value, and dividing the subblocks into a foreground category, a background category and a category containing foreground and background contents at the same time; and
an edge segmentation step: and for the sub-block of the category containing both foreground and background contents, performing edge segmentation on the sub-block by using a global maximum inter-class variance threshold.
3. The method according to claim 2, wherein in the edge segmentation step:
the global maximum between-class variance threshold K satisfies the varianceWherein the varianceCalculated using the formula:
wherein n is the total number of pixels of the sub-block, L is the total number of gray levels of the sub-block, nqTo a gray level of rqQ is 0,1,2, … L-1; foreground pixels in the sub-block belong to set C1The value range is [0,1, … K-1 ]](ii) a The background pixels in the sub-block belong to the set C2,C2The value range is [ K,K+1,…L-1]。
4. the method of any one of claims 1 to 3, wherein the cell population partitioning step comprises:
contour determination: determining an initial contour closed curve of the region of interest; and
a position vector calculation step: determining a gradient vector of the second image based on an edge of the region of interest, wherein the gradient vector represents an external force; acquiring a position vector of the initial contour closed curve, wherein the position vector represents an internal force; and under the action of the gradient vector, the position of the position vector is continuously changed until the external force and the internal force are balanced, and at the moment, the position vector converges to a target edge.
5. The method of claim 4, wherein after the contouring step, the method further comprises:
a first judgment step: and executing the position vector calculation step under the condition that the initial contour closed curve is judged not to conform to the basic morphology of the cell.
6. The method of claim 5, wherein after the step of computing the position vector, the method further comprises:
a second judgment step: and judging the region surrounded by the target edge as an impurity when the target edge is judged not to conform to the basic morphology of the cell.
7. An image segmentation apparatus for an image of exfoliated cervical cells, comprising:
an image dividing module configured to divide the acquired first image of cervical exfoliated cells into a number of sub-blocks;
an edge extraction module configured to classify each sub-block based on gray values and perform edge extraction on the sub-blocks that include image edges;
an image stitching module configured to stitch all sub-blocks in sequence into a second image, the extracted edges in the sub-blocks constituting a region of interest; and
a cell population segmentation module configured to determine an initial contour closed curve of the region of interest, a location vector of the initial contour curve converging to a target edge based on a gradient vector of the second image.
8. The apparatus of claim 7, wherein the edge extraction module comprises:
a gray value calculation module configured to calculate a gray variance and a gray mean of the sub-blocks;
a classification module configured to compare the gray variance and the gray mean with a preset first threshold and a preset second threshold, respectively, and to classify the subblocks into a foreground category, a background category, and a category containing both foreground and background content; and
an edge segmentation module configured to edge segment a sub-block of a class that contains both foreground and background content with a global maximum inter-class variance threshold.
9. A computer-readable storage medium, preferably a non-volatile readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 6.
10. A computer program product comprising computer readable code which, when executed by a computing device, causes the computing device to perform the method of any of claims 1 to 6.
CN201811058091.9A 2018-09-11 2018-09-11 For the image partition method and device of cervical exfoliated cell image Pending CN109064475A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811058091.9A CN109064475A (en) 2018-09-11 2018-09-11 For the image partition method and device of cervical exfoliated cell image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811058091.9A CN109064475A (en) 2018-09-11 2018-09-11 For the image partition method and device of cervical exfoliated cell image

Publications (1)

Publication Number Publication Date
CN109064475A true CN109064475A (en) 2018-12-21

Family

ID=64761324

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811058091.9A Pending CN109064475A (en) 2018-09-11 2018-09-11 For the image partition method and device of cervical exfoliated cell image

Country Status (1)

Country Link
CN (1) CN109064475A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111353407A (en) * 2020-02-24 2020-06-30 中南大学湘雅医院 Medical image processing method, apparatus, computer device and storage medium
CN112837307A (en) * 2021-02-24 2021-05-25 北京博清科技有限公司 Method, device, processor and system for determining welding bead profile
CN113228101A (en) * 2018-12-25 2021-08-06 浙江大华技术股份有限公司 System and method for image segmentation
CN114359378A (en) * 2021-12-31 2022-04-15 四川省自贡运输机械集团股份有限公司 Method for positioning inspection robot of belt conveyor
CN114419074A (en) * 2022-03-25 2022-04-29 青岛大学附属医院 4K medical image processing method
CN114972209A (en) * 2022-05-05 2022-08-30 清华大学 Cervical pathology image processing method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463892A (en) * 2014-12-24 2015-03-25 福州大学 Bacterial colony image segmentation method based on level set and GVF Snake accurate positioning
CN104680498A (en) * 2015-03-24 2015-06-03 江南大学 Medical image segmentation method based on improved gradient vector flow model
CN104992435A (en) * 2015-06-24 2015-10-21 广西师范大学 Cervix uteri single cell image segmentation algorithm
CN106504261A (en) * 2016-10-31 2017-03-15 北京奇艺世纪科技有限公司 A kind of image partition method and device
CN107808381A (en) * 2017-09-25 2018-03-16 哈尔滨理工大学 A kind of unicellular image partition method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463892A (en) * 2014-12-24 2015-03-25 福州大学 Bacterial colony image segmentation method based on level set and GVF Snake accurate positioning
CN104680498A (en) * 2015-03-24 2015-06-03 江南大学 Medical image segmentation method based on improved gradient vector flow model
CN104992435A (en) * 2015-06-24 2015-10-21 广西师范大学 Cervix uteri single cell image segmentation algorithm
CN106504261A (en) * 2016-10-31 2017-03-15 北京奇艺世纪科技有限公司 A kind of image partition method and device
CN107808381A (en) * 2017-09-25 2018-03-16 哈尔滨理工大学 A kind of unicellular image partition method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
吴娱: "《数字图像处理》", 31 October 2017, 北京邮电大学出版社 *
蒋晓悦 等: ""一种改进的活动轮廓图像分割技术"", 《中国图象图形学报》 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113228101A (en) * 2018-12-25 2021-08-06 浙江大华技术股份有限公司 System and method for image segmentation
CN113228101B (en) * 2018-12-25 2024-05-10 浙江大华技术股份有限公司 System and method for image segmentation
US12008767B2 (en) 2018-12-25 2024-06-11 Zhejiang Dahua Technology Co., Ltd. Systems and methods for image segmentation
CN111353407A (en) * 2020-02-24 2020-06-30 中南大学湘雅医院 Medical image processing method, apparatus, computer device and storage medium
CN111353407B (en) * 2020-02-24 2023-10-31 中南大学湘雅医院 Medical image processing method, medical image processing device, computer equipment and storage medium
CN112837307A (en) * 2021-02-24 2021-05-25 北京博清科技有限公司 Method, device, processor and system for determining welding bead profile
CN114359378A (en) * 2021-12-31 2022-04-15 四川省自贡运输机械集团股份有限公司 Method for positioning inspection robot of belt conveyor
CN114419074A (en) * 2022-03-25 2022-04-29 青岛大学附属医院 4K medical image processing method
CN114419074B (en) * 2022-03-25 2022-07-12 青岛大学附属医院 4K medical image processing method
CN114972209A (en) * 2022-05-05 2022-08-30 清华大学 Cervical pathology image processing method and device

Similar Documents

Publication Publication Date Title
CN109886997B (en) Identification frame determining method and device based on target detection and terminal equipment
CN109064475A (en) For the image partition method and device of cervical exfoliated cell image
CN109978890B (en) Target extraction method and device based on image processing and terminal equipment
CN111462086B (en) Image segmentation method and device, and training method and device of neural network model
CN110084150B (en) Automatic white blood cell classification method and system based on deep learning
CN109117773B (en) Image feature point detection method, terminal device and storage medium
CN112561080B (en) Sample screening method, sample screening device and terminal equipment
CN109376786A (en) A kind of image classification method, device, terminal device and readable storage medium storing program for executing
CN112348765A (en) Data enhancement method and device, computer readable storage medium and terminal equipment
US11501431B2 (en) Image processing method and apparatus and neural network model training method
CN108846842B (en) Image noise detection method and device and electronic equipment
CN112132206A (en) Image recognition method, training method of related model, related device and equipment
Li et al. Automatic comic page segmentation based on polygon detection
CN111553215A (en) Personnel association method and device, and graph convolution network training method and device
CN109146891A (en) A kind of hippocampus dividing method, device and electronic equipment applied to MRI
CN108133218A (en) Infrared target detection method, equipment and medium
CN113158773A (en) Training method and training device for living body detection model
CN109978903B (en) Identification point identification method and device, electronic equipment and storage medium
CN111062927A (en) Method, system and equipment for detecting image quality of unmanned aerial vehicle
CN113052162B (en) Text recognition method and device, readable storage medium and computing equipment
EP3391335A1 (en) Automatic nuclear segmentation
CN110659631A (en) License plate recognition method and terminal equipment
CN111062984B (en) Method, device, equipment and storage medium for measuring area of video image area
CN111161789B (en) Analysis method and device for key areas of model prediction
CN115881304B (en) Risk assessment method, device, equipment and medium based on intelligent detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181221
