WO2014066218A2 - Cast recognition method and device, and urine analyzer - Google Patents
Cast recognition method and device, and urine analyzer
- Publication number
- WO2014066218A2 (PCT/US2013/065856)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- cast
- image
- candidate
- classification
- current
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/69—Microscopic objects, e.g. biological cells or cellular parts
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/20—Measuring for diagnostic purposes; Identification of persons for measuring urological functions restricted to the evaluation of the urinary system
- A61B5/201—Assessing renal or kidney functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10056—Microscopic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30024—Cell structures in vitro; Tissue sections in vitro
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30084—Kidney; Renal
Definitions
- the present invention relates to a cast recognition method and device, in particular to a cast recognition method and cast recognition device capable of increasing the precision of cast recognition, and furthermore to a urine analyzer.
- protein and cells or fragments filtered out by the kidneys can, after solidifying in the kidney tubules and collecting tubes, form cylindrical agglomerations of protein which are discharged in the urine, and these are known as casts. They may be detected with the aid of a microscope. Casts are a highly significant constituent of sediment in urine.
- Fig. 1 shows examples of the appearance of casts. Casts themselves have complex and diverse shapes, and are susceptible to interference from image noise and shadows from other large cells. It can be seen from Fig. 1 that casts may have completely fuzzy edges, or half-fuzzy/half-clear edges, or part of their edges may disappear.
- a Chinese patent application (no. 200910217867.1) has presented a classification system for the recognition of sediment in urine.
- the method uses a neural network as a training framework.
- Various features are used to achieve a reasonable degree of precision. For example, these features may include area, grayscale covariance matrix, etc.
- the present invention shall present a cast recognition method, to further improve the precision of cast recognition.
- the present invention shall also present a cast recognition device, for improving the precision of cast recognition.
- the present invention shall also present a urine analyzer comprising the cast recognition device.
- a cast recognition method comprising the following steps :
- an image acquisition step for acquiring an input image to be processed
- a segmentation step for segmenting a current-level image to generate a first image indicating a cast candidate, wherein in an initial state, the current-level image is the input image
- a classification step for calculating multiple features for each cast candidate on the basis of the first image and/or a grayscale image of the current-level image, and classifying each cast candidate on the basis of the multiple features to determine whether it is a cast.
- the method further comprises a scale conversion step, for determining whether a predetermined condition is satisfied when no cast is detected in the classification step; and if the predetermined condition is satisfied, subjecting the current-level image to scale conversion to obtain a next-level image, and performing the segmentation step and classification step again on the next-level image; if the predetermined condition is not satisfied, processing ends.
- the segmentation step further comprises the following steps:
- the multiple features comprise at least one of the following features: area, average luminance, average gradient, percentage of green or dark areas, shape ratio, area saturation, average edge luminance, radius contrast, and grayscale covariance matrix.
- classification is carried out on the basis of a tree structure according to the multiple features, in order to determine whether each cast candidate is a cast.
- the predetermined condition is whether at least one of cast candidate area, largest cast candidate average gradient and transparency lies within a predetermined threshold range.
- a cast recognition device comprising:
- an image acquisition component for acquiring an input image to be processed
- a segmentation component for generating a first image indicating a cast candidate on the basis of a current-level image, wherein in an initial state, the current-level image is an input image
- a classification component for calculating multiple features of each cast candidate on the basis of the first image and/or a grayscale image of the current-level image, and performing classification on the basis of the multiple features to determine whether each cast candidate is a cast.
- the cast recognition device further comprises a scale conversion component, for determining whether a predetermined condition is satisfied when no cast is detected in the classification component; if the predetermined condition is satisfied, the current-level image is subjected to scale conversion to obtain a next-level image which is inputted to the segmentation component, in order to perform segmentation processing and classification processing again, but if the predetermined condition is not satisfied, processing ends.
- the segmentation component comprises:
- an edge filtering component for subjecting the current-level image to edge filtering, to obtain an image indicating an edge
- a first predetermined processing component for subjecting the image obtained by edge filtering to a first predetermined processing, to obtain a binary image in which the background is black and the foreground objects are multiple white cast candidates;
- a second predetermined processing component for subjecting the binary image to a second predetermined processing, to obtain a first image in which the background is black and the foreground objects are multiple cast candidates with different luminance values, wherein different luminance values mark different cast candidates.
- the multiple features comprise at least one of the following features: area, average luminance, average gradient, percentage of green or dark areas, shape ratio, area saturation, average edge luminance, radius contrast, and grayscale covariance matrix.
- the classification component performs classification on the basis of a tree structure according to the multiple features, in order to determine whether each cast candidate is a cast.
- the predetermined condition is whether at least one of cast candidate area, largest cast candidate average gradient and transparency lies within a predetermined threshold range.
- a urine analyzer comprising any one of the cast recognition devices described above.
- the precision of cast recognition can be further improved, helping to reduce the incidence of false negatives.
- the embodiments of the present invention employ multiple features in addition to area and grayscale covariance matrix, and thus help to further increase the precision of cast classification.
- the embodiments of the present invention can further increase the speed of recognition while increasing the precision of recognition.
- Fig. 1 is a picture showing examples of the appearance of casts.
- Fig. 2 is a flow chart showing the procedure of the cast recognition method according to the embodiments of the present invention.
- Fig. 3 is a picture showing an example of the input image acquired in step S201 in Fig. 2.
- Fig. 4 is a schematic diagram showing classification by means of a tree structure.
- Fig. 5 is a flow chart showing the specific procedure of step S202 in Fig. 2.
- Fig. 6 is a picture showing an image obtained by step S2021 in Fig. 5.
- Fig. 7 is a picture showing an image obtained by step S2022 in Fig. 5.
- Fig. 8 is a picture showing an image obtained by step S2023 in Fig. 5.
- Fig. 9 is a schematic diagram showing a Gaussian pyramid.
- Fig. 10 is a block diagram showing the configuration of the cast recognition device according to the embodiments of the present invention.
- Fig. 11 is a block diagram showing the specific configuration of the segmentation component shown in Fig. 10.
DETAILED DESCRIPTION OF THE DRAWINGS
- the cast recognition method comprises the following steps.
- In step S201, an input image to be processed, i.e. a urine sediment image, is acquired.
- Fig. 3 shows an example of the input image acquired, a black and white image containing casts.
- the input image may instead be a color image.
- In step S202, a current-level image is segmented, to produce a first image indicating cast candidates, wherein in an initial state, the current-level image is the input image.
- In step S203, multiple features are calculated for each cast candidate separately, based on the first image and/or a grayscale image of the current-level image.
- the multiple features may comprise the following parameters:
- Area (A): the number of pixels in a single cast candidate. This may be calculated on the basis of the first image generated in step S202.
- Average luminance (AI): AI = Σ I_cell / A, wherein AI is the average luminance, I_cell is the luminance of a particular pixel in the cast candidate, A is the area, i.e. the number of pixels in a single cast candidate, and Σ represents summation.
- Average gradient (AG): AG = Σ G_cell / A, wherein AG is the average gradient, G_cell is the gradient of a particular pixel, A is the area, i.e. the number of pixels in a single cast candidate, and Σ represents summation.
- Percentage of green or dark areas: the percentage of the candidate region occupied by green or dark pixels.
- Shape ratio (SR): a bounding box (whose four edges respectively intersect the edges of the candidate region) is fitted to the candidate region, and SR is defined as the ratio of the height to the width of the bounding box.
- Area saturation (AS).
- Average edge luminance (AEI): AEI = Σ I_edge / A_edge, wherein AEI is the average edge luminance, I_edge is the luminance of a particular edge pixel, A_edge is the number of pixels in the edge, and Σ represents summation.
- Grayscale covariance matrix (GLCM): a GLCM is created by calculating the ratio of pixels with luminance (grayscale) value i to pixels with value j.
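As a rough illustration, a few of the features listed above (area, average luminance, average gradient, shape ratio) can be computed from the first image and the grayscale image. This is only a sketch under our own assumptions: the function name, the use of NumPy's gradient as G_cell, and the dictionary layout are illustrative choices, not taken from the patent.

```python
import numpy as np

def candidate_features(label_img, gray_img, label):
    """Compute a few per-candidate features (illustrative helper)."""
    mask = label_img == label
    area = int(mask.sum())                      # Area (A): pixel count
    avg_lum = gray_img[mask].mean()             # Average luminance: sum(I_cell)/A
    # Average gradient: mean gradient magnitude over the candidate pixels
    gy, gx = np.gradient(gray_img.astype(float))
    grad_mag = np.hypot(gx, gy)
    avg_grad = grad_mag[mask].mean()
    # Shape ratio (SR): height/width of the axis-aligned bounding box
    ys, xs = np.nonzero(mask)
    sr = (ys.max() - ys.min() + 1) / (xs.max() - xs.min() + 1)
    return {"area": area, "avg_luminance": avg_lum,
            "avg_gradient": avg_grad, "shape_ratio": sr}
```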
- In step S204, each cast candidate is classified on the basis of the multiple features, to determine whether it is a cast.
- this classification processing may be carried out by simple threshold judgment. For example, if feature 1 is greater than a first threshold and feature 2 is less than a second threshold or feature 3 is less than a third threshold, then it is determined that the cast candidate is a cast. Such a method of classification is easily implemented, but precision is not high.
- the classification processing may be carried out by constructing a tree structure.
- the tree structure classification method has higher precision and a faster processing speed than the method based on simple threshold judgment.
- Fig. 4 is a schematic diagram showing tree structure classification.
- the tree structure is trained using a learning algorithm (e.g. AdaBoost, probability boosting, etc.), or a tree structure may be preset based on experience.
- Each criterion represents a specific threshold for a particular feature. It must be noted that ordinarily, the ranking of the criteria is determined according to the role played by each feature in classification. Specifically, the more important a particular feature is, the higher the position in the tree structure of the criterion corresponding to it.
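The tree-structured classification described above can be sketched as a cascade of threshold criteria, with the most important feature tested first at the root. The specific thresholds and feature choices below are placeholders for illustration only; the patent does not disclose these values.

```python
def classify_candidate(features):
    """Tree-structured classification sketch: root criterion first.
    Thresholds are illustrative placeholders, not patent values."""
    if features["area"] < 50:            # root: most important feature
        return False                     # too small to be a cast
    if features["avg_gradient"] > 12.0:  # sharp texture is atypical of casts
        return False
    if features["shape_ratio"] < 0.2 or features["shape_ratio"] > 8.0:
        return False                     # implausibly elongated or flat
    return True
```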
- In step S205, a judgment is made on whether a cast has been detected. If it is determined in step S205 that a cast has been detected, processing ends. On the other hand, if it is determined in step S205 that a cast has not been detected, processing continues to step S206.
- In step S206, the current-level image is subjected to scale conversion (e.g. downsampling) to obtain a next-level image. Processing then returns to step S202, in order to subject the next-level image obtained by scale conversion to the processing of steps S202 - S205.
- The segmentation processing in step S202 and the various parameters used in the classification processing in step S204 will be different for images of different scales (i.e. images of different levels).
- Step S202 further comprises the following steps S2021 - S2023.
- In step S2021, the current-level image is subjected to edge filtering.
- For edge filtering, low-level edge detectors or edge filters are used. Many such methods exist in the prior art, such as the Sobel filter or the Canny filter.
- As an example of the edge filtering step, a method of performing edge detection using a Canny filter is described below.
- a Gaussian filter is used to smooth the image, so as to reduce noise and undesired details and texture .
- the gradient is calculated using any type of gradient operator (Roberts, Sobel, Prewitt, etc.), wherein M(m,n) = sqrt(gm(m,n)^2 + gn(m,n)^2) represents the gradient magnitude of the image after undergoing Gaussian filtering, gm(m,n) represents the gradient in the direction of the m-axis at point (m,n), and gn(m,n) represents the gradient in the direction of the n-axis at point (m,n).
- a threshold T is set, and this threshold T is used to limit M(m,n), thereby obtaining the following expression: MT(m,n) = M(m,n) if M(m,n) > T, and MT(m,n) = 0 otherwise.
- the non-maximum pixels among the edges in MT obtained above are suppressed so as to leave a thin edge ridge, because the edges may have been widened in the step of smoothing the image using a Gaussian filter.
- an examination is made as to whether each non-zero MT(m,n) is greater than two adjacent values thereof in the gradient direction. If it is, MT(m,n) is kept unchanged, otherwise it is set to 0.
- the aim of this processing is to refine the binary image, so as to generate a binary image with single-pixel edges.
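The non-maximum suppression step above can be sketched as follows: each pixel is kept only if it is at least as large as its two neighbours along the (quantised) gradient direction. The function name and the coarse 0/45/90/135-degree quantisation are our own illustrative choices, not details from the patent.

```python
import numpy as np

def nonmax_suppress(mag, gx, gy):
    """Thin edges: keep a pixel only if it is >= both neighbours
    along the gradient direction (quantised to 0/45/90/135 deg)."""
    out = np.zeros_like(mag)
    angle = (np.rad2deg(np.arctan2(gy, gx)) + 180.0) % 180.0
    offs = {0: (0, 1), 45: (1, 1), 90: (1, 0), 135: (1, -1)}
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            # pick the quantised direction closest to the local angle
            d = min(offs, key=lambda a: min(abs(angle[i, j] - a),
                                            180 - abs(angle[i, j] - a)))
            di, dj = offs[d]
            if mag[i, j] >= mag[i + di, j + dj] and mag[i, j] >= mag[i - di, j - dj]:
                out[i, j] = mag[i, j]
    return out
```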
- the MT(m,n) obtained above is limited by means of two different thresholds T1 and T2 (wherein T1 < T2), so as to obtain binary images PT1 and PT2. It must be noted that compared to PT1, PT2 has lower noise and fewer false edges, but larger gaps between edge segments.
- edge segments in PT2 are connected together to form a continuous edge.
- each edge segment in PT2 is traced back to its endpoint, and a segment adjacent thereto in PT1 is then sought, so as to seek an edge segment in PT1 to bridge the gap, until another edge segment in PT2 is reached .
- the aim of this processing is to use two thresholds to increase the continuity of edges. Each non-zero point on the single-pixel binary image undergoes iterative processing.
- a pixel value < T1 indicates a non-edge point, while a pixel value > T2 indicates an edge point; for a pixel value between T1 and T2, the point is an edge point if the connected points surrounding it include edge points, and a non-edge point otherwise.
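The two-threshold edge linking described above (hysteresis) can be sketched compactly with connected-component labelling: a weak region survives only if it touches a strong pixel. This is a sketch, not the patent's exact segment-bridging implementation; the function name and use of `scipy.ndimage` are our own assumptions.

```python
import numpy as np
from scipy import ndimage

def hysteresis_edges(grad_mag, t1, t2):
    """Two-threshold edge linking (t1 < t2): pixels > t2 are strong
    edges; pixels between t1 and t2 become edges only if connected
    to a strong edge."""
    strong = grad_mag > t2
    weak = grad_mag > t1                 # includes the strong pixels
    labels, n = ndimage.label(weak)      # connected regions of weak pixels
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True  # regions containing a strong pixel
    keep[0] = False                      # background label is never an edge
    return keep[labels]
```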
- Fig. 6 shows a binary image obtained by edge filtering. It can be seen from Fig. 6 that the binary image contains a large number of line segments indicating edges. A large number of such line segments constitutes a possible cast candidate, but such a binary image still requires further processing .
- In step S2022, the image obtained by edge filtering is subjected to a first predetermined processing, to obtain a binary image in which the background is black and the objects in the foreground are multiple white cast candidates.
- the first predetermined processing may be a morphological operation, to ensure that each foreground cast candidate is a non-empty, filled region.
- the binary image shown in Fig. 7 is obtained.
- the binary image has a black background and a white foreground, and unlike the large number of discontinuous line segments in the foreground of Fig. 6, the interior of the multiple cast candidates in the foreground of Fig. 7 has been filled in with white.
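A plausible sketch of such a first predetermined processing: a morphological closing to bridge small gaps between edge segments, followed by hole filling so each candidate becomes a solid white region on a black background. The particular operations and the 3x3 structuring element are illustrative assumptions; the patent does not specify them.

```python
import numpy as np
from scipy import ndimage

def fill_candidates(edge_img):
    """Close small gaps in the edge map, then fill enclosed interiors
    so each cast candidate becomes a solid foreground region."""
    closed = ndimage.binary_closing(edge_img, structure=np.ones((3, 3)))
    return ndimage.binary_fill_holes(closed)
```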
- Since the current-level image may contain multiple different cast candidates, the following processing must be performed in order to facilitate individual classification of each of the multiple different cast candidates.
- In step S2023, the binary image is subjected to a second predetermined processing, to obtain a first image in which the background is black while the objects in the foreground are multiple cast candidates with different luminance values, wherein different luminance values mark different cast candidates.
- the result of the processing performed in this step is shown in Fig. 8.
- the marking processing in step S2023 is as follows: starting at the top-left corner of the image, the first cast candidate is marked 1, the second cast candidate is marked 2, and so on. In this way, parts with identical marked values (i.e. the same cast candidate) may be classified together in the subsequent classification processing.
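This marking processing is connected-component labelling: each candidate receives a distinct integer "luminance" value (1, 2, ...), scanned from the top-left. A minimal sketch, assuming `scipy.ndimage.label` as the labelling routine (the patent does not name an implementation):

```python
import numpy as np
from scipy import ndimage

# Binary image with two separate white candidates on a black background
binary = np.array([[1, 1, 0, 0],
                   [1, 1, 0, 1],
                   [0, 0, 0, 1]], dtype=bool)

# first_image marks the first candidate 1, the second candidate 2, ...
first_image, num_candidates = ndimage.label(binary)
```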
- The specific processing procedure of step S206 in Fig. 2 will now be described.
- In step S206, it is first determined whether a predetermined condition is satisfied.
- the predetermined condition is determined based on the possibility of there still being a cast in the current- level image.
- the predetermined condition may be whether at least one of cast candidate area, largest cast candidate average gradient and transparency lies within a predetermined threshold range.
- transparency can be calculated via the following features: aspect ratio of elliptical fitting, area saturation of elliptical fitting, area saturation of minimum bounding rectangle, average gradient, angular difference between elliptical fitting and minimum bounding rectangle fitting, and color difference between the green channel and the two other channels. It must be explained that the predetermined condition is by no means limited to the above.
- the current-level image is convolved with a Gaussian function having parameter σ.
- the image obtained by Gaussian filtering is subjected to a reducing operation.
- the reducing operation is a processing step in which the image is sampled to reduce its size.
- a common sampling factor is 2.
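The scale conversion described above (Gaussian smoothing followed by the reducing operation) can be sketched as a single pyramid step. The value sigma=1.0 is an illustrative default; the patent specifies only that the image is convolved with a Gaussian of parameter σ and sampled, commonly by a factor of 2.

```python
import numpy as np
from scipy import ndimage

def reduce_level(img, sigma=1.0):
    """One Gaussian-pyramid step: smooth with a Gaussian of parameter
    sigma, then downsample by the common factor of 2."""
    smoothed = ndimage.gaussian_filter(img.astype(float), sigma)
    return smoothed[::2, ::2]
```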
- a Gaussian kernel is used above to obtain a scale-space pyramid, but in fact many other methods can serve this function, such as a bilateral filter method, wherein the filter weighting is affected not only by variation in pixel values, but also by distance from the center pixel. In such a situation, the processed image will retain its edges while containing very little noise. Compared to the method which uses a Gaussian kernel, it has better performance but a heavier calculation load. Although similar results can be obtained by many scale-space methods, we have disclosed the method which uses a Gaussian kernel as an example in this description, but this should not be interpreted as limiting our solution to the Gaussian kernel alone.
- the predetermined condition may also comprise a threshold n for the number of levels.
- The judgment described above relating to e.g. cast candidate area, largest cast candidate average gradient and transparency is only carried out if the number of scale conversions performed so far is less than the threshold n for the number of levels. If n scale conversions have already been performed at the present time, processing ends.
- the threshold for the number of levels is generally influenced by the image noise characteristics and the size of casts in the image, etc. However, in practice, 3 levels are generally sufficient for cast recognition.
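The overall multi-scale flow of Fig. 2 can be sketched as follows. Here `segment`, `classify` and `should_continue` stand in for step S202, steps S203-S204, and the predetermined condition of step S206 respectively; they are placeholder callables of our own devising, not functions specified by the patent.

```python
def recognize_casts(input_image, segment, classify, should_continue,
                    reduce_level, max_levels=3):
    """Multi-scale cast recognition loop (sketch of Fig. 2)."""
    current = input_image
    for level in range(max_levels):          # 3 levels generally suffice
        first_image = segment(current)       # S202: candidate mask
        casts = classify(first_image, current)  # S203-S204: features + tree
        if casts:                            # S205: cast detected -> done
            return casts
        if not should_continue(first_image, current):  # S206 condition
            break
        current = reduce_level(current)      # scale conversion (downsample)
    return []
```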
- a cast recognition device 1000 comprises an image acquisition component 1001, a segmentation component 1002, a classification component 1003 and a scale conversion component 1004.
- the image acquisition component 1001 acquires an input image to be processed, and supplies the input image to the segmentation component 1002.
- the segmentation component 1002 segments the image inputted thereto, to generate a first image indicating cast candidates, and supplies both the pre-segmentation and post-segmentation images to the classification component 1003.
- the classification component 1003 calculates multiple features of each cast candidate, on the basis of the images supplied by the segmentation component, and performs classification on the basis of the multiple features, so as to determine whether each cast candidate is a cast. Specific details relating to the multiple features and the classification method have already been given above, and are not repeated here in the interests of conciseness.
- the pre-segmentation and post-segmentation images are supplied to the scale conversion component 1004.
- the scale conversion component 1004 determines whether a predetermined condition is satisfied. If the predetermined condition is satisfied, the current-level image is subjected to scale conversion to obtain a next-level image, which is inputted to the segmentation component, in order to carry out segmentation processing and classification processing again; if the predetermined condition is not satisfied, processing ends. Specific details relating to the predetermined condition mentioned here have already been given above, and are not repeated here in the interests of conciseness.
- the segmentation component 1002 further comprises: an edge filtering component 111, a first predetermined processing component 112 and a second predetermined processing component 113.
- the edge filtering component 111 is used to subject the current-level image to edge filtering, and to input the image obtained by edge filtering to the first predetermined processing component 112.
- the first predetermined processing component 112 is used to subject an image supplied thereto which is obtained by edge filtering to a first predetermined processing, to obtain a binary image in which the background is black while the objects in the foreground are multiple white cast candidates, and supply this binary image to the second predetermined processing component 113.
- the second predetermined processing component 113 subjects the image supplied thereto to a second predetermined processing, to obtain a first image in which the background is black and the foreground objects are multiple cast candidates having different luminance values, wherein different cast candidates are marked by different luminance values.
- a urine analyzer comprising any one of the cast recognition devices described above.
- Cast recognition methods and devices according to the embodiments of the present invention have been described in detail above with reference to Figs. 1 to 11.
- a multi-scale segmentation algorithm is used to process weak edges of casts, as a result of which the precision of cast recognition can be improved further.
- several new features in addition to area and grayscale covariance matrix are used, helping to further improve the precision of cast classification.
- the adoption of a tree structure classification mechanism enables the speed of recognition to be further increased while increasing the precision of recognition.
Abstract
The present application discloses a cast recognition method and a cast recognition method device, and a urine analyzer. The method comprises the following steps: an image acquisition step, for acquiring an input image to be processed; a segmentation step, for segmenting a current-level image, to generate a first image indicating cast candidates, wherein in an initial state, the current-level image is the input image; and a classification step, for calculating multiple features for each cast candidate separately on the basis of the first image and/or a grayscale image of the current-level image, and classifying each cast candidate on the basis of the multiple features so as to determine whether it is a cast. The technical solution of the present application can increase the precision of cast recognition.
Description
CAST RECOGNITION METHOD
AND DEVICE, AND URINE ANALYZER
PRIORITY STATEMENT
[001] This application claims benefit under 35 U.S.C. §119 of Chinese Patent Application Number CN 201210418837.9 filed October 26, 2012, the entire contents of which are hereby incorporated herein by reference.
TECHNICAL FIELD
[002] The present invention relates to a cast recognition method and device, in particular to a cast recognition method and cast recognition device capable of increasing the precision of cast recognition, and furthermore to a urine analyzer.
BACKGROUND OF THE INVENTION
[003] In certain conditions, protein and cells or fragments filtered out by the kidneys can, after solidifying in the kidney tubules and collecting tubes, form cylindrical agglomerations of protein which are discharged in the urine, and these are known as casts. They may be detected with the aid of a microscope. Casts are a highly significant constituent of sediment in urine.
[004] Fig. 1 shows examples of the appearance of casts. Casts themselves have complex and diverse shapes, and are susceptible to interference from image noise and shadows from other large cells. It can be seen from Fig. 1 that casts may have completely fuzzy edges, or half-fuzzy/half-clear edges, or part of their edges may disappear.
[005] Many researchers have already worked hard in the area of recognition of other components of urine sediment (such as red cells and white cells), but little contribution has been made to the area of cast recognition. However, casts are very important in urinary diagnosis, and are closely linked to kidney problems.
[006] A Chinese patent application (no. 200910217867.1) has presented a classification system for the recognition of sediment in urine. The method uses a neural network as a training framework. Various features are used to achieve a reasonable degree of precision. For example, these features may include area, grayscale covariance matrix, etc.
[007] The non-patent document "Automatic detecting and recognition of casts in urine sediment images" (Proceedings of the 2009 International Conference on Wavelet Analysis and Pattern Recognition, July 2009, Chunyan Li et al.) has presented a single-scale segmentation technique, wherein a 4-directional variance mapping image is obtained from a grayscale image, an adaptive dual-threshold segmentation algorithm is then applied to this mapping image to obtain a binary image, and finally five texture and shape characteristics are extracted from the grayscale image and binary image, and casts and other particles in the images are distinguished from each other by means of a decision tree classifier.
[008] However, the precision of automatic cast recognition still awaits further improvement.
SUMMARY OF THE INVENTION
[009] In view of the above, the present invention presents a cast recognition method, to further improve the precision of cast recognition. The present invention also presents a cast recognition device, for improving the precision of cast recognition. The present invention also presents a urine analyzer comprising the cast recognition device.
[0010] According to an embodiment of the present invention, a cast recognition method is provided, comprising the following steps :
[0011] an image acquisition step, for acquiring an input image to be processed;
[0012] a segmentation step, for segmenting a current-level image to generate a first image indicating a cast candidate, wherein in an initial state, the current-level image is the input image;
[0013] a classification step, for calculating multiple features for each cast candidate on the basis of the first image and/or a grayscale image of the current-level image, and classifying each cast candidate on the basis of the multiple features to determine whether it is a cast.
[0014] Preferably, the method further comprises a scale conversion step, for determining whether a predetermined condition is satisfied when no cast is detected in the classification step; and if the predetermined condition is satisfied, subjecting the current-level image to scale conversion to obtain a next-level image, and performing the segmentation step and classification step again on the next-level image; if the predetermined condition is not satisfied, processing ends.
[0015] Preferably, in the cast recognition method according to the embodiments of the present invention, the segmentation step further comprises the following steps:
[0016] subjecting the current-level image to edge filtering;
[0017] subjecting the image obtained by edge filtering to a first predetermined processing, to obtain a binary image in
which the background is black and the foreground objects are multiple white cast candidates;
[0018] subjecting the binary image to a second predetermined processing, to obtain a first image in which the background is black and the foreground objects are multiple cast candidates with different luminance values, wherein different luminance values mark different cast candidates.
[0019] Preferably, the multiple features comprise at least one of the following features: area, average luminance, average gradient, percentage of green or dark areas, shape ratio, area saturation, average edge luminance, radius contrast, and grayscale covariance matrix.
[0020] Preferably, in the classification step, classification is carried out on the basis of a tree structure according to the multiple features, in order to determine whether each cast candidate is a cast.
[0021] Preferably, the predetermined condition is whether at least one of cast candidate area, largest cast candidate average gradient and transparency lies within a predetermined threshold range.
[0022] According to another aspect of the embodiments of the present invention, a cast recognition device is provided, comprising :
[0023] an image acquisition component, for acquiring an input image to be processed;
[0024] a segmentation component, for generating a first image indicating a cast candidate on the basis of a current-level image, wherein in an initial state, the current-level image is an input image;
[0025] a classification component, for calculating multiple features of each cast candidate on the basis of the first image and/or a grayscale image of the current-level image, and performing classification on the basis of the multiple features to determine whether each cast candidate is a cast.
[0026] Preferably, the cast recognition device further comprises a scale conversion component, for determining whether a predetermined condition is satisfied when no cast is detected in the classification component; if the predetermined condition is satisfied, the current-level image is subjected to scale conversion to obtain a next-level image which is inputted to the segmentation component, in order to perform segmentation processing and classification processing again, but if the predetermined condition is not satisfied, processing ends.
[0027] Preferably, the segmentation component comprises:
[0028] an edge filtering component, for subjecting the current-level image to edge filtering, to obtain an image indicating an edge;
[0029] a first predetermined processing component, for subjecting the image obtained by edge filtering to a first predetermined processing, to obtain a binary image in which the background is black and the foreground objects are multiple white cast candidates;
[0030] a second predetermined processing component, for subjecting the binary image to a second predetermined processing, to obtain a first image in which the background is black and the foreground objects are multiple cast candidates with different luminance values, wherein different luminance values mark different cast candidates.
[0031] Preferably, the multiple features comprise at least one of the following features: area, average luminance, average gradient, percentage of green or dark areas, shape ratio, area saturation, average edge luminance, radius contrast, and grayscale covariance matrix.
[0032] Preferably, the classification component performs classification on the basis of a tree structure according to the multiple features, in order to determine whether each cast candidate is a cast.
[0033] Preferably, the predetermined condition is whether at least one of cast candidate area, largest cast candidate average gradient and transparency lies within a predetermined threshold range.
[0034] According to another aspect of the embodiments of the present invention, a urine analyzer is provided, comprising any one of the cast recognition devices described above.
[0035] It can be seen from the above solution that since weak edges of casts are processed using a multi-scale segmentation algorithm in the embodiments of the present invention, the precision of cast recognition can be further improved, helping to reduce the incidence of false negatives. Furthermore, the embodiments of the present invention employ multiple features in addition to area and grayscale covariance matrix, and thus help to further increase the precision of cast classification. Moreover, by adopting a tree structure classification mechanism, the embodiments of the present invention can further increase the speed of recognition while increasing the precision of recognition.
BRIEF DESCRIPTION OF THE DRAWINGS
[0036] Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings, to give those skilled in the art a clearer
understanding of the abovementioned and other features and advantages of the present invention. In the drawings:
[0037] Fig. 1 is a picture showing examples of the appearance of casts.
[0038] Fig. 2 is a flow chart showing the procedure of the cast recognition method according to the embodiments of the present invention.
[0039] Fig. 3 is a picture showing an example of the input image acquired in step S201 in Fig. 2.
[0040] Fig. 4 is a schematic diagram showing classification by means of a tree structure.
[0041] Fig. 5 is a flow chart showing the specific procedure of step S202 in Fig. 2.
[0042] Fig. 6 is a picture showing an image obtained by step S2021 in Fig. 5.
[0043] Fig. 7 is a picture showing an image obtained by step S2022 in Fig. 5.
[0044] Fig. 8 is a picture showing an image obtained by step S2023 in Fig. 5.
[0045] Fig. 9 is a schematic diagram showing a Gaussian pyramid.
[0046] Fig. 10 is a block diagram showing the configuration of the cast recognition device according to the embodiments of the present invention.
[0047] Fig. 11 is a block diagram showing the specific configuration of the segmentation component shown in Fig. 10.
DETAILED DESCRIPTION OF THE DRAWINGS
[0048] First of all, the cast recognition method according to the embodiments of the present invention is described with reference to Fig. 2. As Fig. 2 shows, the cast recognition method comprises the following steps.
[0049] In step S201, an input image to be processed, i.e. a urine sediment image, is acquired. Fig. 3 shows an example of the input image acquired, a black and white image containing casts. Of course, the input image may instead be a color image .
[0050] Next, in step S202, a current-level image is segmented, to produce a first image indicating cast candidates, wherein in an initial state, the current-level image is the input image.
[0051] Next, in step S203, multiple features are calculated for each cast candidate separately, based on the first image and/or a grayscale image of the current-level image. Here, the multiple features may comprise the following parameters:
[0052] Area (A) : the number of pixels in a single cast candidate. This may be calculated on the basis of the first image generated in step S202.
[0053] Average luminance (AI): before luminance calculation is applied, the current-level image is first converted to a grayscale image. The formula for calculating the average luminance is: AI = (∑ I_cell) / A, wherein AI is the average luminance, I_cell is the luminance of a particular pixel, A is area, i.e. the number of pixels in a single cast candidate, and ∑ represents summing.
[0054] Average gradient (AG): AG = (∑ G_cell) / A, wherein AG is the average gradient, G_cell is the gradient of a particular pixel, A is area, i.e. the number of pixels in a single cast candidate, and ∑ represents summing.
[0055] Percentage of green or dark areas: the proportion of pixels in the candidate region that are green or dark.
[0056] Shape ratio (SR) : a bounding box (with four edges intersecting with the edges of the candidate region, respectively) is added to a candidate region, and SR is defined as the ratio of the height to the width of the bounding box.
[0057] Area saturation (AS) : used to describe the concavity of a subject. It is defined as the ratio of the area of the subject to the area of the bounding box thereof.
[0058] Average edge luminance (AEI): a binary image is subjected to erosion once, and an edge area of each cast candidate is obtained; AEI is then calculated by the following formula: AEI = (∑ I_edge) / A_edge, wherein AEI is the average edge luminance, I_edge is the luminance of a particular edge pixel, A_edge is the number of pixels in the edge, and ∑ represents summing.
[0059] Radius contrast (RC): the ratio of the minimum radius to the maximum radius, calculated by the following formula: RC = min(radius) / max(radius), wherein RC is the radius contrast, min(radius) is the minimum radius, and max(radius) is the maximum radius.
[0060] Grayscale covariance matrix (GLCM): a GLCM is created by counting how often a pixel with luminance (grayscale) value i occurs adjacent to a pixel with value j.
[0061] Of course, in addition to the abovementioned features, those skilled in the art could think of many other features, such as mean, variance, uniformity, energy, contrast, angular second moment, entropy, etc. The present invention is by no means limited to the abovementioned features.
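Several of the listed features can be computed directly from the first (marker) image and the grayscale image. The following is a minimal NumPy sketch; the function and variable names are our own, and the gradient operator and bounding-box convention are assumptions rather than details from the description:

```python
import numpy as np

def cast_features(label_img, gray_img, label):
    """Compute a few of the listed features for one cast candidate.

    label_img : integer image where each candidate is marked by its own value
    gray_img  : grayscale image of the current-level image
    label     : marker value of the candidate of interest
    """
    mask = label_img == label
    area = int(mask.sum())                      # A: pixel count of the candidate
    ai = float(gray_img[mask].mean())           # AI: average luminance
    gy, gx = np.gradient(gray_img.astype(float))
    grad = np.hypot(gx, gy)                     # gradient magnitude per pixel
    ag = float(grad[mask].mean())               # AG: average gradient

    ys, xs = np.nonzero(mask)                   # bounding box of the candidate
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    sr = h / w                                  # SR: shape ratio (height / width)
    as_ = area / (h * w)                        # AS: area saturation
    return {"A": area, "AI": ai, "AG": ag, "SR": sr, "AS": as_}
```

The remaining features (AEI, RC, GLCM) follow the same pattern but need the eroded edge mask, radii, and a co-occurrence count respectively.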
[0062] Next, in step S204, each cast candidate is classified on the basis of the multiple features, to determine whether it is a cast.
[0063] It must be pointed out that this classification processing may be carried out by simple threshold judgment. For example, if feature 1 is greater than a first threshold and feature 2 is less than a second threshold or feature 3 is less than a third threshold, then it is determined that the cast candidate is a cast. Such a method of classification is easily implemented, but precision is not high.
[0064] Alternatively, the classification processing may be carried out by constructing a tree structure. The tree structure classification method has higher precision and a faster processing speed than the method based on simple threshold judgment.
[0065] Fig. 4 is a schematic diagram showing tree structure classification. The tree structure is trained by a learning algorithm (e.g. AdaBoost, probability boosting, etc.), or a tree structure may be preset based on experience. In Fig. 4, each criterion represents a specific threshold for a particular feature. It must be noted that, ordinarily, the ranking of the criteria is determined according to the role a feature plays in classification: the greater the importance of a particular feature, the higher the position in the tree structure of the criterion corresponding to it.
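As an illustration only, such a threshold tree can be written as nested comparisons. The features checked, their order, and the threshold values below are all hypothetical, not taken from the patent:

```python
def classify_candidate(f, t):
    """Walk a hand-set threshold tree over the feature dict f.

    f : dict of features (area A, shape ratio SR, average gradient AG,
        area saturation AS) for one cast candidate
    t : dict of thresholds; all values here are placeholders
    Returns True if the candidate is judged to be a cast.
    """
    # The most important feature sits at the root, as in Fig. 4.
    if f["A"] < t["min_area"]:
        return False                  # too small to be a cast
    if f["SR"] < t["min_shape_ratio"]:
        return False                  # candidate is not elongated enough
    if f["AG"] > t["max_avg_gradient"]:
        return False                  # edge too strong for a (weak-edged) cast
    return f["AS"] >= t["min_area_saturation"]
```

Because a candidate is rejected as soon as one criterion fails, the tree is faster on average than evaluating every feature, which matches the speed claim made for the tree mechanism.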
[0066] Next, in step S205, a judgment is made on whether a cast has been detected. If it is determined in step S205 that a cast has been detected, processing ends. On the other hand, if it is determined in step S205 that a cast has not been detected, processing continues to step S206.
[0067] In step S206, the current-level image is subjected to scale conversion (e.g. downsampling) to obtain a next-level image. Processing then returns to step S202, in order to subject the next-level image obtained by scale conversion to the processing of steps S202 - S205.
[0068] It must be pointed out that the segmentation processing in step S202 and the various parameters used in the classification processing in step S204 will be different for images of different scales (i.e. images of different levels) .
[0069] Next, the specific procedure of step S202 in Fig. 2 will be described with reference to Fig. 5. As Fig. 5 shows, step S202 further comprises the following steps S2021 - S2023.
[0070] In step S2021, the current-level image is subjected to edge filtering. In this step, we use low-level edge detectors or edge filters. Many such methods exist in the prior art, such as the Sobel filter or the Canny filter. In this edge filtering step, a method of performing edge detection using a Canny filter is described as an example.
[0071] First of all, a Gaussian filter is used to smooth the image, so as to reduce noise and undesired details and texture .
[0072] Next, the gradient is calculated using any type of gradient operator (Roberts, Sobel, Prewitt, etc.): M(m,n) = sqrt(gm(m,n)² + gn(m,n)²), wherein M(m,n) represents the gradient magnitude of the image after undergoing Gaussian filtering, gm(m,n) represents the gradient in the direction of the m-axis at point (m,n), and gn(m,n) represents the gradient in the direction of the n-axis at point (m,n).
[0073] Next, a threshold T is set, and this threshold T is used to limit M(m,n), thereby obtaining the following expression: MT(m,n) = M(m,n), if M(m,n) > T; 0, otherwise.
[0074] The non-maximum pixels in the edges in MT obtained above are then suppressed, leaving a thin edge ridge, because the edges may have been widened in the step of smoothing the image with the Gaussian filter. For this purpose, an examination is made as to whether each non-zero MT(m,n) is greater than its two adjacent values in the gradient direction. If it is, MT(m,n) is kept unchanged; otherwise it is set to 0. The aim of this processing is to thin the edges, so as to generate a binary image with single-pixel-wide edges.
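A simplified sketch of this non-maximum suppression, with the gradient direction quantized to the two image axes only (full Canny implementations also handle the diagonal directions), might look as follows; the function name and the axis convention are our own:

```python
import numpy as np

def thin_edges(MT, gm, gn):
    """Non-maximum suppression: a non-zero pixel of MT survives only if
    it is >= its two neighbours along the dominant gradient axis.

    MT     : thresholded gradient magnitude image
    gm, gn : per-pixel gradients along the m-axis (rows) and n-axis (cols)
    """
    out = np.zeros_like(MT)
    for i in range(1, MT.shape[0] - 1):
        for j in range(1, MT.shape[1] - 1):
            if MT[i, j] == 0:
                continue
            if abs(gm[i, j]) >= abs(gn[i, j]):   # gradient mostly along rows
                a, b = MT[i - 1, j], MT[i + 1, j]
            else:                                # gradient mostly along columns
                a, b = MT[i, j - 1], MT[i, j + 1]
            if MT[i, j] >= a and MT[i, j] >= b:
                out[i, j] = MT[i, j]             # local maximum: keep the ridge
    return out
```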
[0075] Next, the MT(m,n) obtained above is limited by means of two different thresholds Tl and T2 (wherein Tl < T2), so as to obtain binary images PT1 and PT2. It must be noted that compared to PT1, PT2 has lower noise and fewer wrong edges, but larger gaps between edge segments.
[0076] Next, the edge segments in PT2 are connected together to form a continuous edge. For this purpose, each edge segment in PT2 is traced back to its endpoint, and a segment adjacent thereto in PT1 is then sought, so as to seek an edge segment in PT1 to bridge the gap, until another edge segment in PT2 is reached .
[0077] The aim of this processing is to use two thresholds to increase the continuity of edges. Each non-zero point on the single-pixel binary image undergoes iterative processing. A pixel value < Tl indicates a non-edge point, while a pixel value > T2 indicates an edge point; for a pixel value between Tl and T2, if the surrounding connecting points are edge points, then this point is an edge point, otherwise it is a non-edge point.
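The double-threshold linking of the preceding paragraphs can be sketched with connected-component labeling instead of explicit endpoint tracing (an assumption on our part — the description traces segments from PT2 into PT1, but the kept set of pixels is the same: weak segments connected to at least one strong pixel). SciPy is assumed:

```python
import numpy as np
from scipy import ndimage as ndi

def hysteresis(M, t1, t2):
    """Double-threshold edge linking: pixels > t2 are edge points,
    pixels <= t1 are discarded, and in-between pixels survive only
    if their segment touches a strong edge point."""
    weak = M > t1                               # candidate edge pixels (PT1-like)
    strong = M > t2                             # reliable edge pixels (PT2-like)
    labels, n = ndi.label(weak)                 # connected weak-edge segments
    keep = np.zeros(n + 1, bool)
    keep[np.unique(labels[strong])] = True      # segments containing a strong pixel
    keep[0] = False                             # background label stays off
    return keep[labels]
```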
[0078] Fig. 6 shows a binary image obtained by edge filtering. It can be seen from Fig. 6 that the binary image contains a large number of line segments indicating edges. A large number of such line segments constitutes a possible cast candidate, but such a binary image still requires further processing .
[0079] Next, in step S2022, the image obtained by edge filtering is subjected to a first predetermined processing, to obtain a binary image in which the background is black and the objects in the foreground are multiple white cast candidates. Specifically, the first predetermined processing may be a morphological operation, to ensure that the segmented cast candidates are solid (non-hollow) connected regions. For this purpose, the image is first subjected to m dilations and n erosions.
[0080] After the processing performed in step S2022, the binary image shown in Fig. 7 is obtained. By comparing Fig. 6 with Fig. 7, it can be seen that after undergoing morphological processing, the binary image has a black background and a white foreground, and unlike the large number of discontinuous line segments in the foreground of Fig. 6, the interior of the multiple cast candidates in the foreground of Fig. 7 has been filled in with white.
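A sketch of this morphological step is given below. The patent only specifies m dilations and n erosions; the explicit hole-filling call and the default values of m and n are our additions, used here to guarantee solid candidates on small examples:

```python
import numpy as np
from scipy import ndimage as ndi

def fill_candidates(edge_binary, m=3, n=3):
    """Turn a broken-edge binary image into solid white candidates:
    m dilations join the discontinuous edge segments, hole filling
    solidifies the interiors (our addition), and n erosions restore
    the approximate original outline. m and n are hypothetical."""
    img = edge_binary.astype(bool)
    for _ in range(m):
        img = ndi.binary_dilation(img)
    img = ndi.binary_fill_holes(img)
    for _ in range(n):
        img = ndi.binary_erosion(img)
    return img
```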
[0081] However, since the current-level image may contain multiple different cast candidates, the following processing
must be performed, in order to facilitate individual classification of each of the multiple different cast candidates .
[0082] In step S2023, the binary image is subjected to a second predetermined processing, to obtain a first image in which the background is black while the objects in the foreground are multiple cast candidates with different luminance values, wherein different luminance values mark different cast candidates. The result of the processing performed in this step is shown in Fig. 8. Specifically, the marking processing step in step S2023 is as follows: starting at the top-left corner of the image, the first cast candidate is marked 1, the second cast candidate is marked 2, .... In this way, parts with identical marked values (i.e. the same cast candidate) may be classified in the subsequent classification processing.
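This marking step is a standard connected-component labeling. A minimal SciPy sketch follows; `ndimage.label` scans the image row by row from the top-left corner, so the first candidate encountered receives marker 1, the second marker 2, and so on, matching the description above:

```python
import numpy as np
from scipy import ndimage as ndi

def mark_candidates(binary):
    """Give each connected white region its own marker value 1, 2, ...
    so candidates can be classified individually; the background (black)
    keeps the value 0. Returns the marker image and the candidate count."""
    labels, count = ndi.label(binary)
    return labels, count
```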
[0083] The specific processing procedure of step S206 in Fig. 2 will be described.
[0084] If no cast is detected in the classification step, the processing of step S206 is performed. In step S206, it is first determined whether a predetermined condition is satisfied. The predetermined condition is determined based on the possibility of there still being a cast in the current- level image. For example, the predetermined condition may be whether at least one of cast candidate area, largest cast candidate average gradient and transparency lies within a predetermined threshold range. Of these, transparency can be calculated via the following features: aspect ratio of elliptical fitting, area saturation of elliptical fitting, area saturation of minimum bounding rectangle, average gradient, angular difference between elliptical fitting and minimum bounding rectangle fitting, color difference between the green channel and two other channels. It must be explained
that the predetermined condition is by no means limited to the above .
[0085] The current-level image is subjected to scale conversion. First of all, we must provide a definition of scale in image processing. Image I is convolved with a Gaussian kernel having parameter δ, to obtain Iδ = I * Gδ, wherein the Gaussian distribution having parameter δ is Gδ(x, y) = (1 / (2πδ²)) exp(−(x² + y²) / (2δ²)). The image resulting from such processing is often referred to as an image under scale δ. Fig. 9 shows an example of a Gaussian pyramid. The calculation procedure is as follows:
[0086] First of all, the current-level image is convolved with a Gaussian function having parameter δ. Next, the image obtained by Gaussian filtering is subjected to a reducing operation. Here, the reducing operation is a processing step in which the image is sampled to reduce its size. A common sampling factor is 2. After this step, we will obtain an image resulting from scale conversion, i.e. a next-level image of the current-level image.
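One pyramid step — Gaussian smoothing followed by the reducing operation — can be sketched as follows; the default values of delta and the sampling factor are assumptions (factor 2 being the common choice mentioned in the text):

```python
import numpy as np
from scipy import ndimage as ndi

def next_level(img, delta=1.0, factor=2):
    """One step of the Gaussian pyramid: convolve the current-level
    image with a Gaussian of parameter delta, then keep every
    `factor`-th sample in each direction to reduce the size."""
    smoothed = ndi.gaussian_filter(img.astype(float), sigma=delta)
    return smoothed[::factor, ::factor]
```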
[0087] It must be pointed out that several methods exist for generating a scale-space pyramid. A Gaussian kernel is used above to obtain the scale-space pyramid, but many other methods can serve the same function, such as a bilateral filter, wherein the filter weighting is affected not only by variation in pixel values, but also by distance from the center pixel. In that case, the processed image preserves edges while retaining very little noise. Compared to the method which uses a Gaussian kernel, the bilateral filter has better performance but a heavier calculation load. Although similar results can be obtained by many scale-space methods, we have disclosed the method which uses a Gaussian kernel as an example in this description, but this should not be interpreted as limiting our solution to the Gaussian kernel alone.
[0088] In this step, more preferably, the predetermined condition may also comprise a threshold n for the number of levels. In other words, before it is determined whether at least one of e.g. cast candidate area, largest cast candidate average gradient and transparency lies within a predetermined threshold range as described above, it is also determined whether n scale conversions have already been carried out. The judgment described above relating to e.g. cast candidate area, largest cast candidate average gradient and transparency is only carried out if it is determined that the number of times that scale conversion has been performed at the present time is less than the threshold for the number of levels. If n scale conversions have already been performed at the present time, processing ends. The threshold for the number of levels is generally influenced by the image noise characteristics and the size of casts in the image, etc. However, in practice, 3 levels are generally sufficient for cast recognition.
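The overall control flow of Fig. 2 — segment, classify, and fall back to a coarser scale until a cast is found, the predetermined condition fails, or the level threshold n is reached — can be sketched as below. All helper names are placeholders for the components described in the text, not APIs from the patent:

```python
def recognize_casts(image, max_levels=3, segment=None, classify=None,
                    may_contain_cast=None, downscale=None):
    """Multi-scale cast recognition loop.

    segment          : steps S2021-S2023, returns the first (marker) image
    classify         : steps S203-S204, returns the list of detected casts
    may_contain_cast : the predetermined condition of step S206
    downscale        : one Gaussian-pyramid reduction step
    """
    current = image
    for level in range(max_levels):             # threshold n on the level count
        first_image = segment(current)
        casts = classify(first_image, current)
        if casts:
            return casts                        # step S205: cast detected, done
        if not may_contain_cast(current):       # predetermined condition fails
            break
        current = downscale(current)            # step S206: next-level image
    return []                                   # no cast found at any level
```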
[0089] Cast recognition methods according to the embodiments of the present invention have been described above with reference to Figs. 1 to 9. Next, a cast recognition device according to another embodiment of the present invention will be described with reference to Fig. 10.
[0090] As Fig. 10 shows, a cast recognition device 1000 comprises an image acquisition component 1001, a segmentation component 1002, a classification component 1003 and a scale conversion component 1004.
[0091] The image acquisition component 1001 acquires an input image to be processed, and supplies the input image to the segmentation component 1002.
[0092] The segmentation component 1002 segments the image inputted thereto, to generate a first image indicating cast
candidates, and supplies both the pre-segmentation and post- segmentation images to the classification component 1003.
[0093] The classification component 1003 calculates multiple features of each cast candidate, on the basis of the images supplied by the segmentation component, and performs classification on the basis of the multiple features, so as to determine whether each cast candidate is a cast. Specific details relating to the multiple features and the classification method have already been given above, and are not repeated superfluously here in the interests of conciseness .
[0094] When no cast is detected in the classification component 1003, the pre-segmentation and post-segmentation images are supplied to the scale conversion component 1004. The scale conversion component 1004 determines whether a predetermined condition is satisfied. If the predetermined condition is satisfied, the current-level image is subjected to scale conversion to obtain a next-level image, which is inputted to the segmentation component, in order to carry out segmentation processing and classification processing again; if the predetermined condition is not satisfied, processing ends. Specific details relating to the predetermined condition mentioned here have already been given above, and are not repeated superfluously here in the interests of conciseness.
[0095] Next, the specific configuration of the segmentation component in the cast recognition device shown in Fig. 10 is described with reference to Fig. 11. As Fig. 11 shows, the segmentation component 1002 further comprises: an edge filtering component 111, a first predetermined processing component 112 and a second predetermined processing component 113.
[0096] The edge filtering component 111 is used to subject the current-level image to edge filtering, and to input the
image obtained by edge filtering to the first predetermined processing component 112.
[0097] The first predetermined processing component 112 is used to subject an image supplied thereto which is obtained by edge filtering to a first predetermined processing, to obtain a binary image in which the background is black while the objects in the foreground are multiple white cast candidates, and supply this binary image to the second predetermined processing component 113.
[0098] The second predetermined processing component 113 subjects the image supplied thereto to a second predetermined processing, to obtain a first image in which the background is black and the foreground objects are multiple cast candidates having different luminance values, wherein different cast candidates are marked by different luminance values.
[0099] According to another aspect of the embodiments of the present invention, a urine analyzer is provided, comprising any one of the cast recognition devices described above.
[00100] Cast recognition methods and devices according to the embodiments of the present invention have been described in detail above with reference to Figs. 1 to 11. In the cast recognition methods and devices according to the embodiments of the present invention, a multi-scale segmentation algorithm is used to process weak edges of casts, as a result of which the precision of cast recognition can be improved further. Moreover, several new features in addition to area and grayscale covariance matrix are used, helping to further improve the precision of cast classification. Furthermore, the adoption of a tree structure classification mechanism enables the speed of recognition to be further increased while increasing the precision of recognition.
[00101] The present application discloses a cast recognition method, a cast recognition device, and a urine analyzer. The method comprises the following steps: an image acquisition step, for acquiring an input image to be processed; a segmentation step, for segmenting a current-level image, to generate a first image indicating cast candidates, wherein in an initial state, the current-level image is the input image; and a classification step, for calculating multiple features for each cast candidate separately on the basis of the first image and/or a grayscale image of the current-level image, and classifying each cast candidate on the basis of the multiple features so as to determine whether it is a cast. The technical solution of the present application can increase the precision of cast recognition.
[00102] The above embodiments are merely preferred embodiments of the present invention, and are by no means intended to limit it; any amendments, equivalent substitutions or improvements etc. made without departing from the spirit and principles of the present invention should be included in the scope of protection thereof.
Claims
1. A cast recognition method, comprising the following steps:
an image acquisition step, for acquiring an input image to be processed;
a segmentation step, for segmenting a current-level image to generate a first image indicating a cast candidate, wherein in an initial state, the current-level image is the input image;
a classification step, for calculating multiple features for each cast candidate on the basis of the first image and/or a grayscale image of the current-level image, and classifying each cast candidate on the basis of the multiple features to determine whether it is a cast.
2. The cast recognition method as claimed in claim 1, characterized by further comprising the following steps:
a scale conversion step, for determining whether a predetermined condition is satisfied when no cast is detected in the classification step; and if the predetermined condition is satisfied, subjecting the current-level image to scale conversion to obtain a next-level image, and performing the segmentation step and classification step again on the next- level image; if the predetermined condition is not satisfied, processing ends.
3. The cast recognition method as claimed in claim 1, characterized in that the segmentation step comprises:
subjecting the current-level image to edge filtering, to obtain an image indicating an edge;
subjecting the image obtained by edge filtering to a first predetermined processing, to obtain a binary image in which the background is black and the foreground objects are multiple white cast candidates;
subjecting the binary image to a second predetermined processing, to obtain a first image in which the background is black and the foreground objects are multiple cast candidates
with different luminance values, wherein different luminance values mark different cast candidates.
4. The cast recognition method as claimed in claim 1, characterized in that
the multiple features comprise at least one of the following features: area, average luminance, average gradient, percentage of green or dark areas, shape ratio, area saturation, average edge luminance, radius contrast, and grayscale covariance matrix.
5. The cast recognition method as claimed in claim 1, characterized in that
in the classification step, classification is carried out on the basis of a tree structure according to the multiple features, in order to determine whether each cast candidate is a cast.
6. The cast recognition method as claimed in claim 2, characterized in that
the predetermined condition is whether at least one of cast candidate area, largest cast candidate average gradient and transparency lies within a predetermined threshold range.
7. A cast recognition device, comprising:
an image acquisition component, for acquiring an input image to be processed;
a segmentation component, for generating a first image indicating a cast candidate on the basis of a current-level image, wherein in an initial state, the current-level image is an input image;
a classification component, for calculating multiple features of each cast candidate on the basis of the first image and/or a grayscale image of the current-level image, and performing classification on the basis of the multiple features to determine whether each cast candidate is a cast.
8. The cast recognition device as claimed in claim 7, characterized by further comprising:
a scale conversion component, for determining whether a predetermined condition is satisfied when no cast is detected in the classification component; if the predetermined condition is satisfied, the current-level image is subjected to scale conversion to obtain a next-level image which is inputted to the segmentation component, in order to perform segmentation processing and classification processing again, but if the predetermined condition is not satisfied, processing ends.
9. The cast recognition device as claimed in claim 7, characterized in that the segmentation component comprises: an edge filtering component, for subjecting the current- level image to edge filtering, to obtain an image indicating an edge;
a first predetermined processing component, for subjecting the image obtained by edge filtering to a first predetermined processing, to obtain a binary image in which the background is black and the foreground objects are multiple white cast candidates;
a second predetermined processing component, for subjecting the binary image to a second predetermined processing, to obtain a first image in which the background is black and the foreground objects are multiple cast candidates with different luminance values, wherein different luminance values mark different cast candidates.
10. The cast recognition device as claimed in claim 7, characterized in that
the multiple features comprise at least one of the following features: area, average luminance, average gradient, percentage of green or dark areas, shape ratio, area saturation, average edge luminance, radius contrast, and grayscale covariance matrix.
11. The cast recognition device as claimed in claim 7, characterized in that
the classification component performs classification on the basis of a tree structure according to the multiple features, in order to determine whether each cast candidate is a cast.
12. The cast recognition device as claimed in claim 8, characterized in that the predetermined condition is whether at least one of cast candidate area, largest cast candidate average gradient and transparency lies within a predetermined threshold range.
13. A urine analyzer, comprising the cast recognition device as claimed in any one of claims 7 - 12.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210418837.9 | 2012-10-26 | ||
CN201210418837.9A CN103793902A (en) | 2012-10-26 | 2012-10-26 | Casts identification method and device, and urine analyzer |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2014066218A2 true WO2014066218A2 (en) | 2014-05-01 |
WO2014066218A3 WO2014066218A3 (en) | 2014-07-10 |
Family
ID=50545448
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2013/065856 WO2014066218A2 (en) | 2012-10-26 | 2013-10-21 | Cast recognition method and device, and urine analyzer |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN103793902A (en) |
WO (1) | WO2014066218A2 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105760878A (en) * | 2014-12-19 | 2016-07-13 | 西门子医疗保健诊断公司 | Method and device for selecting urinary sediment microscope image with optimal focusing performance |
CN109447119A (en) * | 2018-09-26 | 2019-03-08 | 电子科技大学 | Cast recognition method in urine sediment combining morphological segmentation and SVM |
CN110473167B (en) * | 2019-07-09 | 2022-06-17 | 哈尔滨工程大学 | Deep learning-based urinary sediment image recognition system and method |
CN112508854B (en) * | 2020-11-13 | 2022-03-22 | 杭州医派智能科技有限公司 | Renal tubule detection and segmentation method based on UNET |
CN116883415B (en) * | 2023-09-08 | 2024-01-05 | 东莞市旺佳五金制品有限公司 | Thin-wall zinc alloy die casting quality detection method based on image data |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050251347A1 (en) * | 2004-05-05 | 2005-11-10 | Pietro Perona | Automatic visual recognition of biological particles |
US20110002516A1 (en) * | 2008-04-07 | 2011-01-06 | Chihiro Manri | Method and device for dividing area of image of particle in urine |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1645139A (en) * | 2004-12-27 | 2005-07-27 | 长春迪瑞实业有限公司 | Method for analysing non-centrifugal urine by image identifying system |
CN101447080B (en) * | 2008-11-19 | 2011-02-09 | 西安电子科技大学 | Method for segmenting HMT image on the basis of nonsubsampled Contourlet transformation |
- 2012-10-26: CN application CN201210418837.9A filed; published as CN103793902A (status: pending)
- 2013-10-21: PCT application PCT/US2013/065856 filed; published as WO2014066218A2 (status: application filing)
Non-Patent Citations (2)
Title |
---|
HANS, C ET AL.: 'Decision Fusion for Urine Particle Classification in Multispectral Images.' ICVGIP'10 PROCEEDINGS OF THE SEVENTH INDIAN CONFERENCE ON COMPUTER VISION, GRAPHICS AND IMAGE PROCESSING, [Online] December 2010, Retrieved from the Internet: <URL:http://qil.uh.edu/qilAvebsitecontent/pdf/2011-8.pdf> [retrieved on 2014-03-27] * |
LI, CY ET AL.: 'AUTOMATIC DETECTING AND RECOGNITION OF CASTS IN URINE SEDIMENT IMAGES' ICWAPR 2009. INTERNATIONAL CONFERENCE ON WAVELET ANALYSIS AND PATTERN RECOGNITION, [Online] July 2009, Retrieved from the Internet: <URL:http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5207456&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5207456> [retrieved on 2014-03-27] * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108665889A (en) * | 2018-04-20 | 2018-10-16 | 百度在线网络技术(北京)有限公司 | Speech endpoint detection method, device, equipment and storage medium |
CN108665889B (en) * | 2018-04-20 | 2021-09-28 | 百度在线网络技术(北京)有限公司 | Voice signal endpoint detection method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN103793902A (en) | 2014-05-14 |
WO2014066218A3 (en) | 2014-07-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6660313B2 (en) | Detection of nuclear edges using image analysis | |
Nandi | Detection of human brain tumour using MRI image segmentation and morphological operators | |
Pratikakis et al. | ICFHR 2012 competition on handwritten document image binarization (H-DIBCO 2012) | |
Kumar et al. | Review on image segmentation techniques | |
Promdaen et al. | Automated microalgae image classification | |
WO2014066218A2 (en) | Cast recognition method and device, and urine analyzer | |
CN109636824A (en) | A kind of multiple target method of counting based on image recognition technology | |
US7574304B2 (en) | Chromatin segmentation | |
Mohan et al. | Video image processing for moving object detection and segmentation using background subtraction | |
Hidayatullah et al. | Automatic sperms counting using adaptive local threshold and ellipse detection | |
Abdelsamea | An automatic seeded region growing for 2d biomedical image segmentation | |
CN109716355B (en) | Particle boundary identification | |
Kumar et al. | A comparative study of various filtering techniques | |
CN113850792A (en) | Cell classification counting method and system based on computer vision | |
Chen et al. | An improved edge detection in noisy image using fuzzy enhancement | |
Upadhyay et al. | Fast segmentation methods for medical images | |
Abdelsamea | An enhancement neighborhood connected segmentation for 2D-cellular image | |
Gim et al. | A novel framework for white blood cell segmentation based on stepwise rules and morphological features | |
Prathusha et al. | Enhanced image edge detection methods for crab species identification | |
Sanap et al. | License plate recognition system for Indian vehicles | |
Wang et al. | Improved cell segmentation with adaptive bi-Gaussian mixture models for image contrast enhancement pre-processing | |
Pise et al. | Segmentation of nuclei in cytological images of breast FNAC sample: case study | |
Al-Amaren et al. | Edge Map Extraction of an Image Based on the Gradient of its Binary Versions | |
Al-Shammaa et al. | Extraction of connected components Skin pemphigus diseases image edge detection by Morphological operations | |
Hoang et al. | A marker-free watershed approach for 2d-ge protein spot segmentation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13849546 Country of ref document: EP Kind code of ref document: A2 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 13849546 Country of ref document: EP Kind code of ref document: A2 |