CN112150461B - Method and apparatus for assessing head-to-tail sharpness of a cell image - Google Patents
- Publication number
- CN112150461B (application number CN202011118230.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- tail
- head
- region
- sharpness
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10056—Microscopic image
- G06T2207/10061—Microscopic image from scanning electron microscope
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30024—Cell structures in vitro; Tissue sections in vitro
Abstract
The application discloses a method and an apparatus for evaluating the head-to-tail sharpness of a cell image. It relates to the field of artificial intelligence, in particular to computer vision and deep learning, and can be used in intelligent medical image analysis scenarios. The specific implementation scheme is as follows: acquiring an image of a cell comprising a head and a tail; detecting a head region from the image; filling the head region with the pixel mean value of the image and then reducing noise to obtain the tail region; and calculating the sharpness of the head region and the sharpness of the tail region separately. This embodiment enables assessment of the head-to-tail sharpness of the cell image.
Description
Technical Field
The present application relates to the field of artificial intelligence, and in particular to aspects of computer vision and deep learning.
Background
When observing cells under a microscope, some cells (e.g., sperm) have heads and tails of very different thickness, so the head and the tail come into focus at different focal lengths, and no single focal length yields a sharp image of the whole cell. At present, cell pictures are evaluated manually, which is subjective, inconsistent, and time-consuming. There is therefore a need to detect the head and tail positions in large numbers of cell pictures quickly and accurately, and to screen out the pictures with the highest sharpness.
Disclosure of Invention
The present disclosure provides a method, apparatus, device and storage medium for assessing head-to-tail sharpness of a cell image.
According to a first aspect of the present disclosure, there is provided a method for assessing head-to-tail sharpness of a cell image, comprising: acquiring an image of a cell comprising a head and a tail; detecting a head region from the image; filling the head region with the pixel mean value of the image, and then reducing noise to obtain the tail region; the sharpness of the head region and the sharpness of the tail region are calculated separately.
According to a second aspect of the present disclosure, there is provided an apparatus for assessing head-to-tail sharpness of a cell image, comprising: an acquisition unit configured to acquire an image of a cell including a head and a tail; a head detection unit configured to detect a head region from an image; the tail detection unit is configured to fill the head region by using the pixel mean value of the image and then to reduce noise to obtain a tail region; and a calculation unit configured to calculate the sharpness of the head region and the sharpness of the tail region, respectively.
According to a third aspect of the present disclosure, there is provided an electronic apparatus, characterized by comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the first aspects.
According to a fourth aspect of the present disclosure there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of the first aspects.
The technique of the present disclosure solves the problem of evaluating the head and tail sharpness of cells, and improves the speed and accuracy of that evaluation.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for better understanding of the present solution and do not constitute a limitation of the present application. Wherein:
FIG. 1 is an exemplary system architecture diagram in which an embodiment of the present disclosure may be applied;
FIG. 2 is a flow chart of one embodiment of a method for assessing the sharpness of a cell image in the head-to-tail, according to the present disclosure;
FIG. 3 is a flow chart of yet another embodiment of a method for assessing the sharpness of a cell image in accordance with the present disclosure;
FIG. 4 is a schematic diagram of a cell for use in a method of assessing the sharpness of the head-to-tail of a cell image according to the present disclosure;
FIG. 5 is a schematic structural view of one embodiment of an apparatus for assessing the sharpness of a cell image in accordance with the present disclosure;
fig. 6 is a block diagram of an electronic device for implementing a method for assessing the head-to-tail sharpness of a cell image in accordance with an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present application to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
FIG. 1 illustrates an exemplary system architecture 100 to which embodiments of the methods of the present disclosure for assessing the head-to-tail sharpness of a cell image, or of the apparatus for assessing the head-to-tail sharpness of a cell image, may be applied.
As shown in fig. 1, a system architecture 100 may include a microscope 101, a network 102, and a server 103. Network 102 is the medium used to provide a communication link between microscope 101 and server 103. Network 102 may include various connection types, such as wired links, wireless communication links, or fiber optic cables.
Microscope 101 may be various types of microscopes including, but not limited to: polarizing microscope, optical microscope, electron microscope and digital microscope.
The microscope can adjust its focal length and capture cell images at different focal lengths. The cell images are then sent to the server 103 via the network.
The server 103 may be a server providing various services, such as a background analysis server that analyzes the cell images transmitted from the microscope 101. The background analysis server can process the received cell images, identify the head and tail of the cell, and calculate the sharpness of the head and tail. The sharpest head image and the sharpest tail image can then be selected according to the calculated sharpness and stitched into a complete, sharp cell image.
The server may be hardware or software. When the server is hardware, the server may be implemented as a distributed server cluster formed by a plurality of servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules (e.g., a plurality of software or software modules for providing distributed services), or as a single software or software module. The present invention is not particularly limited herein. The server may also be a server of a distributed system or a server that incorporates a blockchain. The server can also be a cloud server, or an intelligent cloud computing server or an intelligent cloud host with artificial intelligence technology.
It should be noted that the method for evaluating the head-to-tail sharpness of a cell image provided by the embodiments of the present disclosure is generally performed by the server 103, and accordingly the device for evaluating the head-to-tail sharpness of a cell image is generally disposed in the server 103.
It should be understood that the number of microscopes, networks, and servers in fig. 1 is merely illustrative. Any number of microscopes, networks, and servers may be provided as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method for assessing the head-to-tail sharpness of a cell image according to the present disclosure is shown. The method for evaluating the head-to-tail sharpness of a cell image comprises the following steps:
in step 201, an image of a cell including a head and a tail is acquired.
In this embodiment, an execution subject of the method for evaluating the head-to-tail sharpness of a cell image (e.g., the server shown in fig. 1) may receive an image of a cell from a microscope. The cell comprises a head and a tail whose thicknesses differ considerably: at a focal length where the head is sharp the tail is blurred, and at a focal length where the tail is sharp the head is blurred.
Step 202, a head region is detected from an image.
In this embodiment, the head region may be detected by a pre-trained neural network model.
In some alternative implementations of this embodiment, the cell head region in the image is detected by a color model filtering method. The HSV (Hue, Saturation, Value) color model is a color space created from the visual properties of colors, also called the hexagonal pyramid model. After converting the image from the RGB color space to HSV, a region with pixel values between [90,50,60] and [140,255,180] may be considered the head region; the two sets of bounds are derived from statistical analysis of the data set. Tail pixel values do not fall in this interval, so the head region can be distinguished by pixel value alone, which is simpler and faster than a neural network model.
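The HSV thresholding described above can be sketched in a few lines. The following is a hypothetical pure-Python illustration, not the patented implementation: the bounds follow the text, while the function name and toy pixel data are invented for the example.

```python
def detect_head_mask(hsv_pixels, lower=(90, 50, 60), upper=(140, 255, 180)):
    """Mark pixels whose (H, S, V) values all fall inside the head interval.

    The default bounds are the ones quoted in the text (OpenCV-style HSV
    scale); the function name and toy data are invented for illustration."""
    return [
        [1 if all(lo <= c <= hi for c, lo, hi in zip(px, lower, upper)) else 0
         for px in row]
        for row in hsv_pixels
    ]

# 1x3 toy "image": one in-range head pixel followed by two out-of-range pixels
hsv = [[(120, 200, 100), (30, 40, 200), (100, 30, 70)]]
print(detect_head_mask(hsv))  # → [[1, 0, 0]]
```

In practice the RGB-to-HSV conversion and thresholding would be done with vectorized image operations; the list-based version above only shows the per-pixel test.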
Step 203, filling the head region with the pixel mean value of the image, and then performing noise reduction to obtain the tail region.
In this embodiment, the tail region may be detected by processing the head region as noise, and then identifying the tail region by a noise reduction method. Filling the head region with the pixel mean of the image may turn the head region into noise. The noise reduction may be performed using conventional image filtering algorithms, such as spatial domain pixel feature denoising algorithms and transform domain denoising algorithms.
In some alternative implementations of the present embodiment, the head region is filled with the pixel mean of the image, resulting in an intermediate image; calculating the maximum connected domain of the intermediate image by using an image connected domain operator to obtain a binarized mask region; expanding the boundary of the mask region through an expansion operator, and performing AND operation on the mask region after expanding the boundary and the intermediate image to obtain a tail region; and filling the region outside the mask region after the boundary expansion with the pixel mean value of the image to obtain the tail image after noise reduction. The noise reduction method is simpler and faster than the filtering method, and the image processing speed is improved.
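The mask construction described above can be illustrated on a toy binary grid. In the sketch below, a hand-rolled 4-connected component search and a one-step dilation stand in for the image connected-domain and expansion operators named in the text; all names and data are invented for the illustration.

```python
from collections import deque

def largest_component(grid):
    """Binary mask of the largest 4-connected component of 1-pixels."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for sy in range(h):
        for sx in range(w):
            if grid[sy][sx] and not seen[sy][sx]:
                comp, q = [], deque([(sy, sx)])
                seen[sy][sx] = True
                while q:                      # breadth-first flood fill
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    mask = [[0] * w for _ in range(h)]
    for y, x in best:
        mask[y][x] = 1
    return mask

def dilate(mask):
    """One step of 4-neighbour dilation (expands the mask boundary)."""
    h, w = len(mask), len(mask[0])
    return [[1 if mask[y][x] or any(
                 0 <= y + dy < h and 0 <= x + dx < w and mask[y + dy][x + dx]
                 for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))) else 0
             for x in range(w)] for y in range(h)]

grid = [[1, 0, 0, 0],
        [0, 0, 1, 1],
        [0, 0, 1, 0]]
mask = largest_component(grid)  # keeps the 3-pixel tail blob, drops the lone noise pixel
print(dilate(mask)[0])          # → [0, 0, 1, 1]
```

The dilated mask would then be AND-ed with the intermediate image, and everything outside it filled with the pixel mean, as the text describes.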
In step 204, the sharpness of the head region and the sharpness of the tail region are calculated, respectively.
In this embodiment, image sharpness refers to the clarity of fine detail and boundaries in the image. In general, for a particular imaging system, the sharpness of its imaging is indicative of the focus state of the system. When focusing is good, the image is sharp, contour details are rich, and distinctive feature information is prominent in the spatial domain or the frequency domain. For example, in the spatial domain the gray value of the image is the main feature information, while in the frequency domain it is the high-frequency components. Sharpness can therefore be calculated from gray values, contrast, and the like.
In some alternative implementations of the present embodiment, the sharpness of the head region and the sharpness of the tail region are calculated separately by sharpness operators. Sharpness operators may include, but are not limited to: the SMD function, the Tenengrad function, the image energy function, the energy gradient function, the Brenner function, and the Laplacian function. Calculating sharpness with such operators improves both the speed and the accuracy of the sharpness calculation.
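As an illustration of one such operator, here is a minimal sketch of the Brenner focus measure on a grayscale image stored as nested lists; the toy images are invented for the example.

```python
def brenner(gray):
    """Brenner focus measure: sum of squared differences between pixels
    two columns apart; larger values indicate a sharper image."""
    h, w = len(gray), len(gray[0])
    return sum((gray[y][x + 2] - gray[y][x]) ** 2
               for y in range(h) for x in range(w - 2))

sharp  = [[0, 0, 255, 255]]   # hard edge
blurry = [[0, 85, 170, 255]]  # same edge, smoothed
print(brenner(sharp) > brenner(blurry))  # → True
```

The other operators listed above follow the same pattern: a local derivative or energy term accumulated over the region, so any of them could be swapped in here.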
The method provided by the embodiment of the disclosure can simply and quickly identify the head area and the tail area of the cell image, and can calculate the definition of the head area and the tail area. Therefore, photos with clear head or photos with clear tail can be selected quickly, manual screening is not needed, labor is saved, and accuracy is improved.
With further reference to fig. 3, a flow 300 of yet another embodiment of a method for assessing the sharpness of a cell image is shown. The process 300 of the method for assessing the head-to-tail sharpness of a cell image comprises the steps of:
step 301, a set of images of different focal lengths of cells are acquired.
In this embodiment, in order to stitch together an image in which both the head and tail are sharp, a group of images of the cell at different focal lengths may be obtained (as shown in fig. 4); the image with the sharpest head and the image with the sharpest tail are then selected from them and stitched together. Each image in the group corresponds to one focal length, the images are arranged in order of increasing or decreasing focal length, and each image has a number. For example, image 1 corresponds to a 5 cm focal length, image 2 to a 10 cm focal length, image 3 to a 15 cm focal length, and so on.
In step 302, the sharpness of the head region and the sharpness of the tail region of each image in the set of images are calculated.
In this embodiment, the sharpness of the head region and the sharpness of the tail region of each image are calculated according to the methods of steps 201-204. The sharpness values can be recorded as (head, tail) tuples, such as (3, 2), (3, 3), (4, 3), in the order of the image numbers.
Step 303, normalizing the sharpness of the head area of each image in the set of images to obtain a set of column vectors, normalizing the sharpness of the tail area of each image in the set of images to obtain a set of row vectors, and constructing a matrix according to the set of column vectors and the set of row vectors.
In this embodiment, each element in the matrix is the product of a row vector element and a column vector element. The sharpness of the head region and the sharpness of the tail region of each image in the group are normalized separately, so that the head sharpness values and the tail sharpness values in each group each sum to one. The column vector of head scores of each group of pictures is denoted h, the row vector of the corresponding tail scores is denoted b, and the product r of the vectors is calculated: r = h * b.
For example, the head sharpness of the 4 pictures in fig. 4 is [7555.5, 18202.6, 40514.4, 10497.2] and the tail sharpness is [13, 141, 35, 24]. Normalizing each separately gives:

h = [[0.0984], [0.2371], [0.5277], [0.1367]]
b = [0.0610, 0.6620, 0.1643, 0.1127]

r = h * b =
[[0.00600645, 0.06514690, 0.01617122, 0.01108883],
 [0.01447114, 0.15695617, 0.03896075, 0.02671594],
 [0.03220985, 0.34935294, 0.08671882, 0.05946433],
 [0.00834543, 0.09051582, 0.02246847, 0.01540695]]
step 304, determining a row index and a column index corresponding to the largest element in the matrix, and determining a first image with the clearest head area and a second image with the clearest tail area corresponding to the row index and the column index.
In this embodiment, row and column indices start from 0. The row and column subscript corresponding to the maximum value of r (0.34935294) in the above matrix is (2, 1), so the number of the image with the sharpest head is 3 and the number of the image with the sharpest tail is 2.
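Steps 303-304 can be reproduced with the worked numbers from the example. The following hypothetical sketch (variable names invented) normalizes the two sharpness lists, forms the product matrix r, and takes the argmax:

```python
heads = [7555.5, 18202.6, 40514.4, 10497.2]  # head sharpness per image
tails = [13, 141, 35, 24]                    # tail sharpness per image

h = [v / sum(heads) for v in heads]  # normalised column vector
b = [v / sum(tails) for v in tails]  # normalised row vector

# r[i][j] = h[i] * b[j]: joint score for "head from image i+1, tail from image j+1"
r = [[hi * bj for bj in b] for hi in h]

row, col = max(((i, j) for i in range(len(r)) for j in range(len(r))),
               key=lambda ij: r[ij[0]][ij[1]])
print(row + 1, col + 1)  # → 3 2: sharpest head in image 3, sharpest tail in image 2
```

Because every element of r is the product of one head score and one tail score, maximizing r jointly picks the image pair with the best combined head and tail sharpness.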
In some alternative implementations of this embodiment, the head region of the first image is stitched to the tail region of the second image to generate a complete, clear cell image. If the focal length of the first image and the focal length of the second image are not different, the head area of the first image and the tail area of the second image can be directly spliced to generate a complete and clear cell image. If the focal length difference is larger, the first image or the second image can be zoomed according to the focal length ratio and then spliced. Therefore, clear images of the head and the tail of the cell can be constructed, and the subsequent observation and analysis of the whole cell are convenient.
In some optional implementations of this embodiment, image sets at different focal lengths are acquired for a predetermined number of sample cells of the same class, where each sample cell corresponds to a group of images arranged in order of focal length; for each sample cell, the row index and column index corresponding to the largest element in that sample cell's matrix are determined according to the methods of steps 301-304; and the association between the row indices and column indices corresponding to the largest elements in the matrices of the predetermined number of sample cells is counted. For example, images of 20 sample cells are taken, each sample cell yielding a set of images at different focal lengths. According to the methods of steps 301-304, the row index and column index corresponding to the largest element in each sample cell's matrix can be determined, and the association between row index and column index obtained. Since the focal length sequence used during shooting is fixed, the head of a cell is thicker than its tail, and the head and tail thicknesses of different cells follow a normal distribution, the numbers of the sharpest head and tail pictures in each sample group show a significant association: in more than 99% of cases, either head number = tail number or head number = tail number + 1. Given this association between the sharpest head image and the sharpest tail image, once the sharpest head image is found the sharpest tail image can be located directly, without calculating tail sharpness or identifying the tail region, which speeds up image screening.
In some optional implementations of this embodiment, the row index and column index corresponding to the largest element in the matrix are searched for locally, according to the association described above. In the above example, only the elements of the matrix r whose row subscript equals the column subscript, or whose row subscript equals the column subscript plus 1, are examined; if the maximum of these elements has row subscript row and column subscript col, the number of the sharpest head picture is row + 1 and the number of the sharpest tail picture is col + 1. Restricting the search to this local range reduces the search space, speeds up the search, and avoids failing to find the sharpest head and tail when the sharpness measure is not accurate enough.
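The restricted search can be sketched as follows, using rounded values from the example matrix; the function name is invented for illustration.

```python
def restricted_argmax(r):
    """Search only the elements where row == col or row == col + 1,
    reflecting the observed head/tail picture-number association."""
    n = len(r)
    candidates = [(i, j) for i in range(n) for j in range(n)
                  if i == j or i == j + 1]
    return max(candidates, key=lambda ij: r[ij[0]][ij[1]])

# Rounded values of the example matrix from step 303
r = [[0.0060, 0.0651, 0.0162, 0.0111],
     [0.0145, 0.1570, 0.0390, 0.0267],
     [0.0322, 0.3494, 0.0867, 0.0595],
     [0.0083, 0.0905, 0.0225, 0.0154]]
print(restricted_argmax(r))  # → (2, 1)
```

For an n-image group this inspects only 2n - 1 candidates instead of n² matrix elements.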
As can be seen from fig. 3, compared to the corresponding embodiment of fig. 2, the flow 300 of the method for evaluating the head-to-tail sharpness of a cell image in this embodiment embodies the step of finding the sharpest cell head image and cell tail image. Therefore, the scheme described in the embodiment can rapidly and accurately screen out the clearest cell head image and the clearest cell tail image, splice the images and provide a basis for subsequent image recognition.
With further reference to fig. 5, as an implementation of the method shown in the above figures, the present disclosure provides an embodiment of an apparatus for evaluating the head-to-tail sharpness of a cell image, which corresponds to the embodiment of the method shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 5, the apparatus 500 for evaluating the head-to-tail sharpness of a cell image of the present embodiment includes: an acquisition unit 501, a head detection unit 502, a tail detection unit 503, and a calculation unit 504. Wherein the acquisition unit 501 is configured to acquire an image of a cell including a head and a tail; a head detection unit 502 configured to detect a head region from an image; a tail detection unit 503 configured to fill the head region with the pixel mean value of the image and then perform noise reduction to obtain a tail region; a calculation unit 504 configured to calculate the sharpness of the head region and the sharpness of the tail region, respectively.
In this embodiment, specific processes of the acquisition unit 501, the head detection unit 502, the tail detection unit 503, and the calculation unit 504 of the apparatus 500 for evaluating the head-to-tail sharpness of a cell image may refer to steps 201, 202, 203, 204 in the corresponding embodiment of fig. 2.
In some optional implementations of the present embodiment, the apparatus 500 further comprises a determining unit (not shown in the drawings) configured to: acquiring a group of images with different focal lengths of cells, wherein each image in the group of images corresponds to one focal length, and the images are arranged in order from large to small or from small to large; the sharpness of the head region and the sharpness of the tail region of each image in the set of images are calculated according to steps 201-204. Normalizing the definition of the head region of each image in the set of images to obtain a set of column vectors, normalizing the definition of the tail region of each image in the set of images to obtain a set of row vectors, and constructing a matrix according to the set of column vectors and the set of row vectors, wherein each element in the matrix is the product of one row vector and one column vector. And determining a row index and a column index corresponding to the largest element in the matrix, and determining a first image with the clearest head area and a second image with the clearest tail area corresponding to the row index and the column index.
In some optional implementations of the present embodiment, the apparatus 500 further includes a splicing unit (not shown in the drawings) configured to: and splicing the head region of the first image and the tail region of the second image to generate a complete and clear cell image.
In some optional implementations of the present embodiment, the apparatus 500 further includes a statistics unit (not shown in the drawings) configured to: acquiring image sets of different focal lengths of a predetermined number of sample cells of the same class, wherein each sample cell corresponds to a group of images arranged according to the order of the focal lengths; for each sample cell, the row index and column index corresponding to the largest element in the matrix corresponding to the sample cell are determined according to steps 301-304. And counting the relevance between the row index and the column index corresponding to the largest element in the matrix corresponding to the preset number of sample cells.
In some optional implementations of the present embodiment, the determining unit is further configured to: and according to the relevance, locally searching a row index and a column index corresponding to the largest element in the matrix.
In some optional implementations of the present embodiment, the head detection unit 502 is further configured to: the cell head region in the image was detected by means of color model filtering.
In some optional implementations of the present embodiment, the tail detection unit 503 is further configured to: filling the head region with the pixel mean value of the image to obtain an intermediate image; calculating the maximum connected domain of the intermediate image by using an image connected domain operator to obtain a binarized mask region; expanding the boundary of the mask region through an expansion operator, and performing AND operation on the mask region after expanding the boundary and the intermediate image to obtain a tail region; and filling the region outside the mask region after the boundary expansion with the pixel mean value of the image to obtain the tail image after noise reduction.
In some optional implementations of the present embodiment, the computing unit 504 is further configured to: calculate the sharpness of the head region and the sharpness of the tail region, respectively, with a sharpness operator.
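The patent leaves the sharpness operator unspecified; variance of the Laplacian is one common focus measure and serves here purely as an illustrative sketch:

```python
import numpy as np

def laplacian_sharpness(gray):
    """Variance-of-Laplacian focus measure, one common 'sharpness operator'
    (the patent does not name a specific operator; this choice is
    illustrative). Higher values indicate a sharper region."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])   # 4-neighbor Laplacian
    return float(lap.var())
```

Applied to the head region and the noise-reduced tail image of each focal plane, this yields the two score lists that are normalized into the column and row vectors of the matrix step.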
According to embodiments of the present application, an electronic device and a readable storage medium are also provided.
Fig. 6 shows a block diagram of an electronic device for the method of assessing the head-to-tail sharpness of a cell image according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 6, the electronic device includes: one or more processors 601, a memory 602, and interfaces for connecting the components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to an interface. In other embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Likewise, multiple electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 601 is taken as the example in fig. 6.
The memory 602 is a non-transitory computer-readable storage medium provided herein. The memory stores instructions executable by the at least one processor, so as to cause the at least one processor to perform the method provided herein for assessing the head-to-tail sharpness of a cell image. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform the method provided herein for assessing the head-to-tail sharpness of a cell image.
As a non-transitory computer-readable storage medium, the memory 602 may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the method for assessing the head-to-tail sharpness of a cell image in the embodiments of the present application (e.g., the acquisition unit 501, the head detection unit 502, the tail detection unit 503, and the calculation unit 504 shown in fig. 5). The processor 601 performs the various functional applications and data processing of the server by running the non-transitory software programs, instructions, and modules stored in the memory 602, i.e., implements the method for assessing the head-to-tail sharpness of a cell image in the above method embodiment.
The memory 602 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application program required for a function; the data storage area may store data created through the use of the electronic device for assessing the head-to-tail sharpness of the cell image, and the like. In addition, the memory 602 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 602 optionally includes memory located remotely relative to the processor 601, which may be connected via a network to the electronic device for assessing the head-to-tail sharpness of a cell image. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for the method of assessing the head-to-tail sharpness of a cell image may further comprise: an input device 603 and an output device 604. The processor 601, the memory 602, the input device 603, and the output device 604 may be connected by a bus or in other ways; connection by a bus is taken as the example in fig. 6.
The input device 603 may receive input numeric or character information and generate key signal inputs related to user settings and function controls of the electronic device for assessing the head-to-tail sharpness of a cell image; examples include a touch screen, keypad, mouse, trackpad, pointing stick, one or more mouse buttons, trackball, and joystick. The output device 604 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also referred to as programs, software, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solution of the embodiments of the present application, the head region and the tail region can be found quickly and accurately in the cell image, and the sharpness of the head region and the tail region can be calculated accurately, so that a sharp image of the cell head and a sharp image of the cell tail can be screened out quickly and accurately.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions disclosed in the present application can be achieved, and are not limited herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.
Claims (16)
1. A method for assessing head-to-tail sharpness of a cell image, comprising:
acquiring a set of images of a cell comprising a head and a tail at different focal lengths, wherein each image in the set of images corresponds to one focal length, and the set of images is arranged in descending or ascending order of focal length;
calculating the sharpness of the head region and the sharpness of the tail region of each image in the set of images;
normalizing the sharpness of the head region of each image in the set of images to obtain a set of column vectors, normalizing the sharpness of the tail region of each image in the set of images to obtain a set of row vectors, and constructing a matrix from the set of column vectors and the set of row vectors, wherein each element in the matrix is the product of an element of a row vector and an element of a column vector;
determining the row index and column index corresponding to the largest element in the matrix, and determining the first image, whose head region is sharpest, and the second image, whose tail region is sharpest, corresponding to the row index and the column index;
wherein said calculating the sharpness of the head region and the sharpness of the tail region of each image in the set of images comprises:
each image in the set of images is processed as follows:
acquiring an image of a cell comprising a head and a tail;
detecting a head region from the image;
filling the head region with the pixel mean value of the image, and then reducing noise to obtain a tail region;
the sharpness of the head region and the sharpness of the tail region are calculated separately.
2. The method of claim 1, the method further comprising:
splicing the head region of the first image and the tail region of the second image to generate a complete and sharp cell image.
3. The method of claim 1, the method further comprising:
acquiring image sets of different focal lengths of a predetermined number of sample cells of the same class, wherein each sample cell corresponds to a group of images arranged according to the order of the focal lengths;
for each sample cell, determining a row index and a column index corresponding to the largest element in the matrix corresponding to the sample cell according to the method of claim 1;
and computing the correlation between the row indices and column indices corresponding to the largest elements in the matrices of the predetermined number of sample cells.
4. The method according to claim 3, wherein said determining the row index and column index corresponding to the largest element in the matrix comprises:
searching locally, according to the correlation, for the row index and column index corresponding to the largest element in the matrix.
5. The method of claim 1, the detecting a head region from the image comprising:
detecting the head region of the cell in the image by a color model filtering method.
6. The method of claim 1, wherein the filling the head region with the pixel mean of the image and then reducing noise to obtain a tail region comprises:
filling the head region by using the pixel mean value of the image to obtain an intermediate image;
calculating the maximum connected domain of the intermediate image by using an image connected domain operator to obtain a binarized mask region;
expanding the boundary of the mask region through an expansion operator, and performing AND operation on the mask region after expanding the boundary and the intermediate image to obtain a tail region;
and filling the area outside the mask area after the boundary expansion with the pixel mean value of the image to obtain the tail image after noise reduction.
7. The method of claim 1, the computing the sharpness of the head region and the sharpness of the tail region, respectively, comprising:
calculating the sharpness of the head region and the sharpness of the tail region, respectively, with a sharpness operator.
8. An apparatus for assessing head-to-tail sharpness of a cell image, comprising:
a determination unit configured to:
acquiring a set of images of a cell comprising a head and a tail at different focal lengths, wherein each image in the set of images corresponds to one focal length, and the set of images is arranged in descending or ascending order of focal length;
calculating the sharpness of the head region and the sharpness of the tail region of each image in the set of images;
normalizing the sharpness of the head region of each image in the set of images to obtain a set of column vectors, normalizing the sharpness of the tail region of each image in the set of images to obtain a set of row vectors, and constructing a matrix from the set of column vectors and the set of row vectors, wherein each element in the matrix is the product of an element of a row vector and an element of a column vector;
determining the row index and column index corresponding to the largest element in the matrix, and determining the first image, whose head region is sharpest, and the second image, whose tail region is sharpest, corresponding to the row index and the column index;
the apparatus further comprises:
an acquisition unit configured to acquire an image of a cell including a head and a tail;
a head detection unit configured to detect a head region from the image;
the tail detection unit is configured to fill the head region by using the pixel mean value of the image and then to reduce noise to obtain a tail region;
and a calculation unit configured to calculate the sharpness of the head region and the sharpness of the tail region, respectively.
9. The apparatus of claim 8, further comprising a stitching unit configured to:
splicing the head region of the first image and the tail region of the second image to generate a complete and sharp cell image.
10. The apparatus of claim 8, the apparatus further comprising a statistics unit configured to:
acquiring image sets of different focal lengths of a predetermined number of sample cells of the same class, wherein each sample cell corresponds to a group of images arranged according to the order of the focal lengths;
for each sample cell, determining a row index and a column index corresponding to the largest element in the matrix corresponding to the sample cell according to the method of claim 1;
and computing the correlation between the row indices and column indices corresponding to the largest elements in the matrices of the predetermined number of sample cells.
11. The apparatus of claim 10, the determining unit further configured to:
search locally, according to the correlation, for the row index and column index corresponding to the largest element in the matrix.
12. The apparatus of claim 8, the head detection unit further configured to:
the cell head region in the image is detected by means of color model filtering.
13. The apparatus of claim 8, the tail detection unit further configured to:
filling the head region by using the pixel mean value of the image to obtain an intermediate image;
calculating the maximum connected domain of the intermediate image by using an image connected domain operator to obtain a binarized mask region;
expanding the boundary of the mask region through an expansion operator, and performing AND operation on the mask region after expanding the boundary and the intermediate image to obtain a tail region;
and filling the area outside the mask area after the boundary expansion with the pixel mean value of the image to obtain the tail image after noise reduction.
14. The apparatus of claim 8, the computing unit further configured to:
calculate the sharpness of the head region and the sharpness of the tail region, respectively, with a sharpness operator.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011118230.XA CN112150461B (en) | 2020-10-19 | 2020-10-19 | Method and apparatus for assessing head-to-tail sharpness of a cell image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112150461A CN112150461A (en) | 2020-12-29 |
CN112150461B true CN112150461B (en) | 2024-01-12 |
Family
ID=73953363
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011118230.XA Active CN112150461B (en) | 2020-10-19 | 2020-10-19 | Method and apparatus for assessing head-to-tail sharpness of a cell image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112150461B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118037739B (en) * | 2024-04-15 | 2024-07-02 | 深圳市生强科技有限公司 | Definition evaluation method of digital pathological image and application thereof |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001066265A (en) * | 1999-08-31 | 2001-03-16 | Matsushita Electric Ind Co Ltd | Image processing method |
CN104915927A (en) * | 2014-03-11 | 2015-09-16 | 株式会社理光 | Parallax image optimization method and apparatus |
CN106780414A (en) * | 2016-12-11 | 2017-05-31 | 天津汉铭科技发展有限公司 | A kind of processing method of multiple-target remote sensing image clouds |
CN106803070A (en) * | 2016-12-29 | 2017-06-06 | 北京理工雷科电子信息技术有限公司 | A kind of port area Ship Target change detecting method based on remote sensing images |
WO2019143633A1 (en) * | 2018-01-18 | 2019-07-25 | Nantomics, Llc | Real-time whole slide pathology image cell counting |
CN110488481A (en) * | 2019-09-19 | 2019-11-22 | 广东工业大学 | A kind of microscope focusing method, microscope and relevant device |
CN111724339A (en) * | 2020-04-21 | 2020-09-29 | 广州番禺职业技术学院 | Happy fruit head and tail recognition device based on multi-channel information fusion and recognition method thereof |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8042954B2 (en) * | 2007-01-24 | 2011-10-25 | Seiko Epson Corporation | Mosaicing of view projections |
Non-Patent Citations (4)
Title |
---|
Defect detection for PCBs based on gradient direction information entropy; Li Yunfeng et al.; full text *
Rough-set filtering enhancement of digital chest X-ray images using multi-resolution analysis; Chen Zhencheng, Zhang Feng, Jiang Dazong, Ni Lili, Wang Hongyan; Chinese Journal of Biomedical Engineering (Issue 06); full text *
Zebrafish embryo image analysis technique based on the Hough transform; Xu Xiaoyan, Xia Shunren, WONG Stephen T C; Journal of Zhejiang University (Engineering Science) (Issue 11); full text *
Improved sperm image sequence segmentation method based on Kalman filtering; Yu Dong, Huang Wenming, Wen Peizhi, Ning Ruhua, Huang Jinfang; Computer Engineering and Science (Issue 07); full text *
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |