CN103793902A - Casts identification method and device, and urine analyzer - Google Patents

Casts identification method and device, and urine analyzer

Info

Publication number
CN103793902A
CN103793902A (application number CN201210418837.9A)
Authority
CN
China
Prior art keywords
cast
image
candidate
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201210418837.9A
Other languages
Chinese (zh)
Inventor
苏子华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Siemens Healthcare Diagnostics GmbH Germany
Siemens Healthcare Diagnostics Inc
Original Assignee
Siemens Healthcare Diagnostics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens Healthcare Diagnostics Inc filed Critical Siemens Healthcare Diagnostics Inc
Priority to CN201210418837.9A priority Critical patent/CN103793902A/en
Priority to PCT/US2013/065856 priority patent/WO2014066218A2/en
Publication of CN103793902A publication Critical patent/CN103793902A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/69 Microscopic objects, e.g. biological cells or cellular parts
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/20 Measuring for diagnostic purposes; Identification of persons for measuring urological functions restricted to the evaluation of the urinary system
    • A61B 5/201 Assessing renal or kidney functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10056 Microscopic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30024 Cell structures in vitro; Tissue sections in vitro
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30084 Kidney; Renal

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Investigating Or Analysing Biological Materials (AREA)

Abstract

The application discloses a cast identification method, a cast identification device, and a urine analyzer. The method comprises the following steps: an image acquisition step for obtaining an input image to be processed; a segmentation step for segmenting a current-level image and producing a first image indicating cast candidates, the current-level image being the input image in the initial state; and a classification step for computing, based on a grayscale image of the current-level image and/or the first image, a plurality of features for each cast candidate, and classifying each cast candidate based on the plurality of features to determine whether it is a cast. The technical solution of the application can improve the accuracy of cast identification.

Description

Cast recognition method and device, and urine analyzer
Technical field
The present invention relates to a cast recognition method and device, in particular to a cast recognition method and a cast recognition device that can improve the accuracy of cast identification, and to a urine analyzer.
Background art
Under certain conditions, protein filtered by the kidney solidifies together with cells or cell fragments in the renal tubules and collecting ducts, forming cylindrical protein aggregates that are discharged with the urine; these are called casts. They can be observed by microscopic examination. Casts are a clinically significant component of urine sediment.
Fig. 1 shows examples of cast appearance. Casts themselves have complex and varied shapes, and they are easily affected by image noise and by interference such as the shadows of other large cells. As can be seen from Fig. 1, a cast may have a completely blurred edge, an edge that is partly blurred and partly clear, or a partly missing edge.
Many researchers have made efforts to identify other components of urine sediment (e.g., red blood cells, white blood cells), but few results have been achieved for cast identification. Casts, however, are very important in urine diagnosis and are closely related to kidney disorders.
Chinese patent application No. 200910217867.1 proposes a classification system for identifying urine sediment objects. The method uses a neural network as the training framework and achieves reasonable accuracy by using various features, for example area and the gray-level co-occurrence matrix.
The non-patent document "Automatic detecting and recognition of casts in urine sediment images" (Chunyan Li et al., Proceedings of the 2009 International Conference on Wavelet Analysis and Pattern Recognition, July 2009) proposes a single-scale segmentation technique in which four directional variance maps are computed from the grayscale image, an adaptive double-threshold segmentation algorithm is applied to these maps to obtain a binary map, and five texture and shape features are then extracted from both the grayscale image and the binary map; a decision-tree classifier finally distinguishes casts from other particles in the image.
However, the accuracy of automatic cast identification still needs to be improved further.
Summary of the invention
In view of this, the present invention aims to provide a cast recognition method that further improves the accuracy of cast identification. The present invention also aims to provide a cast recognition device for improving the accuracy of cast identification, as well as a urine analyzer comprising such a cast recognition device.
According to one embodiment of the present invention, a cast recognition method is provided, comprising the following steps:
an image acquisition step for obtaining an input image to be processed;
a segmentation step for segmenting a current-level image and producing a first image indicating cast candidates, wherein in the initial state the current-level image is the input image;
a classification step for computing, based on a grayscale image of the current-level image and/or the first image, a plurality of features for each cast candidate, and classifying each cast candidate based on the plurality of features to determine whether it is a cast.
Preferably, the method further comprises a scale transformation step for, when no cast is detected in the classification step, determining whether a predetermined condition is satisfied; if the predetermined condition is satisfied, a scale transformation is applied to the current-level image to obtain a next-level image, and the segmentation step and the classification step are performed again on the next-level image; if the predetermined condition is not satisfied, the processing ends.
Preferably, in the cast recognition method according to the embodiment of the present invention, the segmentation step further comprises:
performing edge filtering on the current-level image;
performing a first predetermined process on the edge-filtered image to obtain a binary map in which the background is black and the foreground objects are a plurality of white cast candidates;
performing a second predetermined process on the binary map to obtain a first image in which the background is black and the foreground objects are a plurality of cast candidates with different brightness values, different cast candidates being marked with different brightness values.
Preferably, the plurality of features comprises at least one of the following features: area, mean brightness, average gradient, percentage of green or dark areas, shape ratio, region saturation, average edge brightness, radius contrast and gray-level co-occurrence matrix.
Preferably, in the classification step, each cast candidate is classified based on a tree structure according to the plurality of features to determine whether it is a cast.
Preferably, the predetermined condition is whether at least one of the area of the cast candidates, the average gradient of the largest cast candidate and the transparency is within a predetermined threshold range.
According to another aspect of the embodiments of the present invention, a cast recognition device is provided, comprising:
an image acquisition component for obtaining an input image to be processed;
a segmentation component for producing, based on a current-level image, a first image indicating cast candidates, wherein in the initial state the current-level image is the input image;
a classification component for computing, based on a grayscale image of the current-level image and/or the first image, a plurality of features for each cast candidate, and classifying each cast candidate based on the plurality of features to determine whether it is a cast.
Preferably, the cast recognition device further comprises a scale transformation component for, when no cast is detected by the classification component, determining whether a predetermined condition is satisfied; if the predetermined condition is satisfied, a scale transformation is applied to the current-level image to obtain a next-level image, and the next-level image is input to the segmentation component so that the segmentation processing and the classification processing are performed again; if the predetermined condition is not satisfied, the processing ends.
Preferably, the segmentation component comprises:
an edge filtering component for performing edge filtering on the current-level image to obtain an image indicating edges;
a first predetermined processing component for performing a first predetermined process on the edge-filtered image to obtain a binary map in which the background is black and the foreground objects are a plurality of white cast candidates;
a second predetermined processing component for performing a second predetermined process on the binary map to obtain a first image in which the background is black and the foreground objects are a plurality of cast candidates with different brightness values, different cast candidates being marked with different brightness values.
Preferably, the plurality of features comprises at least one of the following features: area, mean brightness, average gradient, percentage of green or dark areas, shape ratio, region saturation, average edge brightness, radius contrast and gray-level co-occurrence matrix.
Preferably, the classification component classifies each cast candidate based on a tree structure according to the plurality of features to determine whether it is a cast.
Preferably, the predetermined condition is whether at least one of the area of the cast candidates, the average gradient of the largest cast candidate and the transparency is within a predetermined threshold range.
According to yet another aspect of the embodiments of the present invention, a urine analyzer is provided, comprising any one of the above cast recognition devices.
As can be seen from the above solutions, because the embodiments of the present invention adopt a multi-scale segmentation algorithm to handle the weak edges of casts, the accuracy of cast identification can be further improved, which helps reduce false negatives. In addition, the embodiments of the present invention adopt several new features besides area and the gray-level co-occurrence matrix, which helps further improve the accuracy of cast classification. Furthermore, by adopting a tree-structure classification mechanism, the embodiments of the present invention can further improve recognition speed while improving recognition accuracy.
Brief description of the drawings
The above and other features and advantages of the present invention will become clearer to those of ordinary skill in the art from the following detailed description of preferred embodiments of the present invention with reference to the accompanying drawings, in which:
Fig. 1 shows examples of cast appearance.
Fig. 2 is a flowchart of a cast recognition method according to an embodiment of the present invention.
Fig. 3 shows an example of the input image obtained in step S201 of Fig. 2.
Fig. 4 is a schematic diagram of classification using a tree structure.
Fig. 5 is a flowchart showing the detailed flow of step S202 in Fig. 2.
Fig. 6 shows the image obtained by step S2021 in Fig. 5.
Fig. 7 shows the image obtained by step S2022 in Fig. 5.
Fig. 8 shows the image obtained by step S2023 in Fig. 5.
Fig. 9 is a schematic diagram of a Gaussian pyramid.
Fig. 10 is a block diagram showing the configuration of a cast recognition device according to an embodiment of the present invention.
Fig. 11 is a block diagram showing the detailed configuration of the segmentation component shown in Fig. 10.
Detailed description of embodiments
First, a cast recognition method according to an embodiment of the present invention is described with reference to Fig. 2. As shown in Fig. 2, the cast recognition method comprises the following steps.
At step S201, an input image to be processed, i.e., a urine sediment image, is obtained. Fig. 3 shows an example of the obtained input image: a black-and-white image containing a cast. Alternatively, the input image may also be a color image.
Then, at step S202, the current-level image is segmented to produce a first image indicating cast candidates, wherein in the initial state the current-level image is the input image.
Next, at step S203, a plurality of features is computed for each cast candidate based on the grayscale image of the current-level image and/or the first image. The plurality of features described here may include the following parameters:
Area (A): the number of pixels in a single cast candidate. It can be computed from the first image produced in step S202.
Mean brightness (AI): before the brightness calculation is applied, the current-level image is first converted to a grayscale image. The mean brightness is calculated as
AI = (1/A) · Σ I_i,
where AI is the mean brightness, I_i is the brightness of the i-th pixel in the cast candidate, A is the number of pixels in the cast candidate (its area), and Σ denotes summation over the pixels of the candidate.
Average gradient (AG): calculated as AG = (1/A) · Σ G_i, where AG is the average gradient, G_i is the gradient at the i-th pixel in the cast candidate, A is the number of pixels in the cast candidate, and Σ denotes summation.
Percentage of green or dark areas: the percentage of the candidate that lies in dark regions or green regions.
Shape ratio (SR): a bounding box (each of whose four sides touches the edge of the candidate region) is fitted around the candidate region, and SR is defined as the ratio of the length of this bounding box to its width.
Region saturation (AS): describes the concavity of the object; it is defined as the ratio of the area of the object to the area of its bounding box.
Average edge brightness (AEI): the binary map is eroded once to obtain the edge region of each cast candidate, and AEI is then calculated as
AEI = (1/A_e) · Σ I_edge,
where AEI is the average edge brightness, I_edge is the brightness of an edge pixel, A_e is the number of edge pixels, and Σ denotes summation over the edge pixels.
Radius contrast (RC): the ratio of the minimum radius to the maximum radius, calculated as RC = min(radius) / max(radius), where min(radius) is the minimum radius and max(radius) is the maximum radius.
Gray-level co-occurrence matrix (GLCM): the GLCM is created by counting how often a pixel with gray-level (brightness) value i occurs in a particular spatial relationship with a pixel with value j.
Of course, besides the above features, those skilled in the art may conceive of various other features, such as mean, variance, homogeneity, energy, contrast, angular second moment, entropy, and so on. The embodiments of the present invention are not limited to the features listed above.
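For illustration, the following NumPy sketch shows one way a few of the listed features could be computed for a single labelled cast candidate. The helper name, the gradient estimate and the choice to measure the radii from the candidate centroid to its boundary pixels are assumptions made here for concreteness; the green/dark-area percentage, the transparency and the GLCM are omitted, and the sketch is not the patent's own implementation.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def candidate_features(gray, label_map, label):
    """Illustrative per-candidate features; 'gray' is the grayscale current-level
    image and 'label_map' is the first image with one integer label per candidate."""
    mask = (label_map == label)
    area = int(mask.sum())                               # Area (A): number of pixels

    mean_brightness = float(gray[mask].mean())           # AI = (1/A) * sum of pixel brightness

    gy, gx = np.gradient(gray.astype(float))             # simple gradient estimate
    avg_gradient = float(np.hypot(gx, gy)[mask].mean())  # AG = (1/A) * sum of pixel gradients

    ys, xs = np.nonzero(mask)                            # bounding box of the candidate
    h = int(ys.max() - ys.min() + 1)
    w = int(xs.max() - xs.min() + 1)
    shape_ratio = max(h, w) / min(h, w)                  # SR: box length / box width
    region_saturation = area / float(h * w)              # AS: object area / bounding-box area

    boundary = mask & ~binary_erosion(mask)              # one-pixel edge of the candidate
    avg_edge_brightness = float(gray[boundary].mean())   # AEI over the edge pixels

    cy, cx = ys.mean(), xs.mean()                        # radii measured from the centroid (assumption)
    by, bx = np.nonzero(boundary)
    radii = np.hypot(by - cy, bx - cx)
    radius_contrast = float(radii.min() / radii.max()) if radii.max() > 0 else 0.0

    return {"area": area, "mean_brightness": mean_brightness, "avg_gradient": avg_gradient,
            "shape_ratio": shape_ratio, "region_saturation": region_saturation,
            "avg_edge_brightness": avg_edge_brightness, "radius_contrast": radius_contrast}
```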
Then, at step S204, each cast candidate is classified based on the plurality of features to determine whether it is a cast.
It should be pointed out that the classification processing here may be carried out by simple thresholding. For example, if feature 1 is greater than a first threshold and feature 2 is less than a second threshold, or feature 3 is less than a third threshold, the cast candidate is judged to be a cast. Such a classification method is easy to implement, but its accuracy is not high.
Alternatively, the classification processing here may be carried out by constructing a tree structure. Compared with simple threshold-based judgement, the tree-structure classification method is more accurate and faster.
Fig. 4 shows a schematic diagram of tree-structure classification. The tree structure may be trained with a training scheme (e.g., AdaBoost, probability boosting), or it may be preset from experience. In Fig. 4, each criterion represents a specific threshold for a certain feature. It should be noted that, under normal circumstances, the rank of a feature in the tree is determined by its role in the classification; specifically, the more important a feature is, the higher the position of its corresponding criterion in the tree structure.
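As a concrete illustration of such a preset tree, the small hand-written classifier below checks one feature per node, with a more important feature nearer the root. The feature ordering and every threshold value in it are hypothetical placeholders chosen for the sketch; the patent does not disclose the actual tree or its thresholds.

```python
def classify_candidate(f):
    """Toy tree-structure classifier over a feature dictionary 'f' such as the one
    returned by candidate_features(); all thresholds are invented placeholders."""
    if f["area"] < 800:                  # root: area assumed most discriminative here
        return False                     # too small to be a cast
    if f["shape_ratio"] < 2.0:           # casts are typically elongated
        return False
    if f["avg_gradient"] > 40.0:         # strong texture suggests another particle...
        return f["region_saturation"] > 0.6   # ...unless the region is well filled
    return f["mean_brightness"] < 200.0  # accept a pale, low-contrast interior
```

In practice such a tree would more likely be obtained by training (e.g., with AdaBoost) on labelled candidates, as the description notes.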
Next, at step S205, it is determined whether a cast has been detected. If it is determined at step S205 that a cast has been detected, the processing ends. On the other hand, if it is determined at step S205 that no cast has been detected, the processing proceeds to step S206.
At step S206, a scale transformation (for example, down-sampling) is applied to the current-level image to obtain a next-level image. The processing then returns to step S202, and steps S202 to S205 are performed on the scale-transformed next-level image.
It should be pointed out that, for images of different scales (i.e., images of different levels), the parameters used in the segmentation processing of step S202 and in the classification processing of step S204 are different.
Next, the detailed flow of step S202 in Fig. 2 is described with reference to Fig. 5. As shown in Fig. 5, step S202 further comprises steps S2021 to S2023.
At step S2021, edge filtering is performed on the current-level image. In this step, some basic edge-detection or boundary filters are adopted. Many such algorithms exist in the prior art, such as the Sobel filter and the Canny filter. In this edge filtering step, edge detection with the Canny filter is described as an example.
First, the image is smoothed with a Gaussian filter to reduce noise and unwanted details and textures.
Then, the gradient g(m, n) is computed using any gradient operator (Roberts, Sobel, Prewitt, etc.):
M(m, n) = sqrt(g_m²(m, n) + g_n²(m, n)),
where M(m, n) is the gradient magnitude of the Gaussian-filtered image, g_m(m, n) is the gradient of point (m, n) along the m axis, and g_n(m, n) is the gradient of point (m, n) along the n axis.
Next, a threshold T is set, and M(m, n) is thresholded with T to obtain
M_T(m, n) = M(m, n) if M(m, n) > T, and M_T(m, n) = 0 otherwise.
Non-maximum pixels along the edges in the M_T obtained above are then suppressed to thin the edge ridges, because the edges may have been widened in the Gaussian smoothing step. To this end, each non-zero M_T(m, n) is checked to see whether it is greater than its two neighbours along the gradient direction. If so, M_T(m, n) is kept unchanged; otherwise it is set to 0. The purpose of this processing is to thin the binary image, so that a binary map with single-pixel edges is generated.
Then, the M_T(m, n) obtained above is thresholded with two different thresholds T1 and T2 (where T1 < T2) to obtain binary maps PT1 and PT2. It should be noted that, compared with PT1, PT2 has less noise and fewer false edges, but larger gaps between edge segments.
Next, the edge segments in PT2 are connected to form continuous edges. To this end, each edge segment in PT2 is traced to its end point, and the neighbouring segment is then searched for in PT1, so that edge segments bridging the gap are found in PT1 until another edge segment in PT2 is reached.
The purpose of this processing is to improve the continuity of the edges by using the two thresholds. Each non-zero point in the single-pixel binary map is processed iteratively: a pixel whose value is below T1 is a non-edge point, a pixel whose value is above T2 is an edge point, and a pixel whose value lies between T1 and T2 is an edge point if a connected neighbouring point is an edge point, and a non-edge point otherwise.
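Since the Canny procedure above (Gaussian smoothing, gradient magnitude, non-maximum suppression and double-threshold hysteresis) is available off the shelf, step S2021 can be sketched with OpenCV as below. The kernel size, σ and the two thresholds are illustrative values rather than the patent's, and cv2.Canny performs the gradient computation, non-maximum suppression and hysteresis internally.

```python
import cv2

def edge_filter(gray, t1=50, t2=150):
    """Edge filtering of the current-level grayscale image (step S2021).
    Gaussian smoothing first reduces noise and texture; cv2.Canny then applies
    gradient computation, non-maximum suppression and T1 < T2 hysteresis.
    Parameters here are illustrative, not the patent's values."""
    smoothed = cv2.GaussianBlur(gray, (5, 5), 1.4)            # Gaussian filter
    return cv2.Canny(smoothed, threshold1=t1, threshold2=t2)  # single-pixel edge map (0/255)
```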
Fig. 6 shows the binary map obtained after edge filtering. As can be seen from Fig. 6, the binary map contains a large number of line segments representing edges. These line segments form possible cast candidates, but such a binary map still needs further processing.
Next, at step S2022, a first predetermined process is performed on the edge-filtered image to obtain a binary map in which the background is black and the foreground objects are a plurality of white cast candidates. Specifically, for example, the first predetermined process may be a morphological operation intended to ensure that the segmented cast candidates are solid, connected regions. To this end, m dilations and n erosions are first performed on the image.
After the processing of step S2022, a binary map as shown in Fig. 7 is obtained. Comparing Fig. 6 and Fig. 7, it can be seen that after the morphological operations the background of the binary map is black and the foreground is white, and that, unlike the many scattered line segments in the foreground of Fig. 6, the cast candidates in the foreground of Fig. 7 are solid white regions.
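A sketch of such a first predetermined process with OpenCV is shown below. The structuring element and the iteration counts m and n are placeholders; the patent only states that m dilations followed by n erosions are applied to connect the edge segments into solid candidates.

```python
import cv2

def solidify_candidates(edge_map, m=3, n=3):
    """First predetermined process (step S2022): m dilations then n erosions on the
    edge map of Fig. 6, aiming at solid white candidates on a black background
    as in Fig. 7. Kernel shape/size and m, n are illustrative assumptions."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    dilated = cv2.dilate(edge_map, kernel, iterations=m)   # bridge gaps between edge segments
    return cv2.erode(dilated, kernel, iterations=n)        # shrink regions back towards object size
```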
However, since the current-level image may contain several different cast candidates, the following processing is also needed so that each of the different cast candidates can be classified separately.
At step S2023, a second predetermined process is performed on the binary map to obtain a first image in which the background is black and the foreground objects are a plurality of cast candidates with different brightness values, different cast candidates being marked with different brightness values. The result of this step is shown in Fig. 8. Specifically, the labelling of step S2023 proceeds as follows: starting from the upper-left corner of the image, the first cast candidate is labelled 1, the second cast candidate is labelled 2, and so on. In this way, in the subsequent classification processing, the pixels sharing the same label value (i.e., the same cast candidate) can be classified together.
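The labelling of step S2023 corresponds to connected-component labelling, sketched below with OpenCV. cv2.connectedComponents assigns integer labels 1, 2, ... to the foreground regions while the background stays 0; the exact label order is implementation-defined, which does not matter for the later per-label classification.

```python
import cv2

def label_candidates(binary_map):
    """Second predetermined process (step S2023): mark each connected foreground
    region of the binary map with its own integer value, producing the 'first
    image' of Fig. 8 (background 0, candidates 1, 2, ...)."""
    num_labels, first_image = cv2.connectedComponents(binary_map)
    return num_labels - 1, first_image   # number of candidates, labelled image
```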
The processing flow of step S206 in Fig. 2 is described in detail below.
If no cast is detected in the classification step, the processing of step S206 is carried out. At step S206, it is first determined whether a predetermined condition is satisfied. The predetermined condition is set based on the possibility that a cast is still present in the current-level image. For example, the predetermined condition may be whether at least one of the area of the cast candidates, the average gradient of the largest cast candidate and the transparency is within a predetermined threshold range. The transparency may be calculated from the following features: the aspect ratio of an ellipse fit, the region saturation of the ellipse fit, the region saturation of the minimum bounding rectangle, the average gradient, the angle difference between the ellipse fit and the minimum-bounding-rectangle fit, and the colour differences between the green channel and the other two channels. It should be noted that the predetermined condition is not limited to these.
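A sketch of how such a predetermined condition could be evaluated is given below. The three threshold ranges are invented placeholders (the patent does not disclose values), and the transparency, which the patent derives from ellipse-fit and bounding-rectangle features, is assumed to be precomputed and stored with each candidate.

```python
AREA_RANGE = (500, 50_000)          # placeholder threshold ranges, not the patent's values
GRADIENT_RANGE = (5.0, 60.0)
TRANSPARENCY_RANGE = (0.2, 0.9)

def in_range(value, bounds):
    lo, hi = bounds
    return lo <= value <= hi

def worth_another_level(candidates):
    """Predetermined condition of step S206 (sketch): try a further scale level only
    if at least one of candidate area, the largest candidate's average gradient
    and its transparency lies within a predetermined range."""
    if not candidates:
        return False
    largest = max(candidates, key=lambda c: c["area"])
    transparency = largest.get("transparency", 0.0)   # ellipse/rectangle-fit based; not sketched here
    return (in_range(largest["area"], AREA_RANGE)
            or in_range(largest["avg_gradient"], GRADIENT_RANGE)
            or in_range(transparency, TRANSPARENCY_RANGE))
```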
In order to apply a scale transformation to the current-level image, we first need to give the definition of scale in image processing. By convolving an image I with a Gaussian kernel with parameter δ, I_δ is obtained, where the Gaussian distribution with parameter δ is
G_δ(x, y) = 1/(2πδ²) · exp(−(x² + y²)/(2δ²)).
The image processed in this way is commonly referred to as the image at scale δ. Fig. 9 shows an example of a Gaussian pyramid. The computation proceeds as follows:
First, the current-level image is convolved with the Gaussian function with parameter δ. Then, a reduce operation is applied to the Gaussian-filtered image. The reduce operation here makes the image smaller by subsampling; a common decimation factor is 2. After this step, the image obtained by the scale transformation is taken as the next-level image of the current-level image.
It should be pointed out that there are several methods for producing a scale-space pyramid. Above, a Gaussian kernel is used to obtain the scale-space pyramid, but many other methods can accomplish the same function, such as the bilateral-filter method, in which the filter weights are affected not only by differences in pixel value but also by the distance to the centre pixel. In that case, the processed image retains edges while containing little noise; compared with the Gaussian-kernel method it performs better but has a heavier computational load. Although many scale-space methods can obtain similar results, in this description the Gaussian-kernel method is disclosed as an example, and this should not be interpreted as limiting our solution to the Gaussian kernel.
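As a sketch, the Gaussian-kernel variant of this scale transformation can be written with OpenCV in a single call, since cv2.pyrDown combines the Gaussian convolution with the reduce operation at decimation factor 2. Using pyrDown (and its built-in kernel) is a convenience assumption, not the patent's own implementation.

```python
import cv2

def next_level(current_image):
    """Scale transformation of step S206: Gaussian smoothing followed by subsampling
    by a factor of 2, i.e. one step down the Gaussian pyramid of Fig. 9."""
    return cv2.pyrDown(current_image)   # roughly halves each image dimension

# Building a three-level pyramid (level threshold n = 3) from an input image:
# levels = [image]
# for _ in range(2):
#     levels.append(next_level(levels[-1]))
```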
In this step, more preferably, the predetermined condition may further include a level threshold n. In other words, as noted above, before it is determined whether at least one of, for example, the area of the cast candidates, the average gradient of the largest cast candidate and the transparency is within a predetermined threshold range, it is also determined whether n scale transformations have already been performed. Only when the number of scale transformations performed so far is less than the level threshold is the above judgement on, for example, the area of the cast candidates, the average gradient of the largest cast candidate and the transparency carried out. If n scale transformations have already been performed, the processing ends. The level threshold is usually affected by factors such as the amount of noise in the image, the image size and the cast size; in practice, however, 3 levels are normally sufficient for cast identification.
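Putting the sketches above together, the overall multi-scale flow of Fig. 2 can be outlined as follows. It reuses the illustrative helpers edge_filter, solidify_candidates, label_candidates, candidate_features, classify_candidate, worth_another_level and next_level defined in the earlier sketches; per-level parameter tuning and the transparency feature are simplified away, so this outline is an assumption-laden reading of the flow rather than the patent's implementation.

```python
import cv2

def recognize_casts(input_image, max_levels=3):
    """Outline of steps S201-S206 of Fig. 2 using the helper sketches above."""
    current = input_image                                     # S201: level-0 current image
    for level in range(max_levels):                           # level threshold n (3 is usually enough)
        gray = cv2.cvtColor(current, cv2.COLOR_BGR2GRAY) if current.ndim == 3 else current
        edges = edge_filter(gray)                             # S2021: edge filtering
        solid = solidify_candidates(edges)                    # S2022: first predetermined process
        num, first_image = label_candidates(solid)            # S2023: labelled "first image"
        candidates = [candidate_features(gray, first_image, lab)     # S203: per-candidate features
                      for lab in range(1, num + 1)]
        casts = [c for c in candidates if classify_candidate(c)]     # S204: tree classification
        if casts:                                             # S205: stop once casts are found
            return casts
        if not worth_another_level(candidates):               # predetermined condition of S206
            break
        current = next_level(current)                         # S206: go to the next scale level
    return []
```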
A cast recognition method according to an embodiment of the present invention has been described above with reference to Figs. 1 to 9. Next, a cast recognition device according to another embodiment of the present invention is described with reference to Fig. 10.
As shown in Fig. 10, the cast recognition device 1000 comprises an image acquisition component 1001, a segmentation component 1002, a classification component 1003 and a scale transformation component 1004.
The image acquisition component 1001 obtains the input image to be processed and provides the input image to the segmentation component 1002.
The segmentation component 1002 segments the image input to it, produces a first image indicating cast candidates, and provides the images before and after segmentation to the classification component 1003.
The classification component 1003 computes a plurality of features for each cast candidate based on the images provided by the segmentation component, and classifies each cast candidate based on the plurality of features to determine whether it is a cast. The details of the plurality of features and of the classification method have been given above and, for brevity, are not repeated here.
When no cast is detected by the classification component 1003, the images before and after segmentation are provided to the scale transformation component 1004. The scale transformation component 1004 determines whether a predetermined condition is satisfied. If the predetermined condition is satisfied, a scale transformation is applied to the current-level image to obtain a next-level image, and the next-level image is input to the segmentation component so that the segmentation processing and the classification processing are performed again; if the predetermined condition is not satisfied, the processing ends. The details of the predetermined condition have been given above and, for brevity, are not repeated here.
Next, the detailed configuration of the segmentation component in the cast recognition device shown in Fig. 10 is described with reference to Fig. 11. As shown in Fig. 11, the segmentation component 1002 further comprises an edge filtering component 111, a first predetermined processing component 112 and a second predetermined processing component 113.
The edge filtering component 111 performs edge filtering on the current-level image and inputs the edge-filtered image to the first predetermined processing component 112.
The first predetermined processing component 112 performs the first predetermined process on the edge-filtered image provided to it, obtains a binary map in which the background is black and the foreground objects are a plurality of white cast candidates, and provides it to the second predetermined processing component 113.
The second predetermined processing component 113 performs the second predetermined process on the image provided to it, obtaining a first image in which the background is black and the foreground objects are a plurality of cast candidates with different brightness values, different cast candidates being marked with different brightness values.
According to yet another aspect of the embodiments of the present invention, a urine analyzer is provided, comprising any one of the above cast recognition devices.
The cast recognition method and device according to various embodiments of the present invention have been described in detail above with reference to Figs. 1 to 11. In the cast recognition method and device according to the embodiments of the present invention, a multi-scale segmentation algorithm is adopted to handle the weak edges of casts, so the accuracy of cast identification can be further improved; in addition, several new features besides area and the gray-level co-occurrence matrix are adopted, which helps further improve the accuracy of cast classification. Furthermore, by adopting a tree-structure classification mechanism, the recognition speed can be further improved while the recognition accuracy is improved.
The application discloses a cast recognition method, a cast recognition device and a urine analyzer. The method comprises the following steps: an image acquisition step for obtaining an input image to be processed; a segmentation step for segmenting a current-level image and producing a first image indicating cast candidates, wherein in the initial state the current-level image is the input image; and a classification step for computing, based on a grayscale image of the current-level image and/or the first image, a plurality of features for each cast candidate, and classifying each cast candidate based on the plurality of features to determine whether it is a cast. The technical solution of the application can improve the accuracy of cast identification.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (13)

1. A cast recognition method, comprising the following steps:
an image acquisition step for obtaining an input image to be processed;
a segmentation step for segmenting a current-level image and producing a first image indicating cast candidates, wherein in an initial state the current-level image is the input image;
a classification step for computing, based on a grayscale image of the current-level image and/or the first image, a plurality of features for each cast candidate, and classifying each cast candidate based on the plurality of features to determine whether it is a cast.
2. The cast recognition method according to claim 1, characterized by further comprising:
a scale transformation step for, when no cast is detected in the classification step, determining whether a predetermined condition is satisfied; if the predetermined condition is satisfied, applying a scale transformation to the current-level image to obtain a next-level image and performing the segmentation step and the classification step again on the next-level image; and, if the predetermined condition is not satisfied, ending the processing.
3. The cast recognition method according to claim 1, characterized in that the segmentation step comprises:
performing edge filtering on the current-level image to obtain an image indicating edges;
performing a first predetermined process on the edge-filtered image to obtain a binary map in which the background is black and the foreground objects are a plurality of white cast candidates;
performing a second predetermined process on the binary map to obtain a first image in which the background is black and the foreground objects are a plurality of cast candidates with different brightness values, different cast candidates being marked with different brightness values.
4. The cast recognition method according to claim 1, characterized in that
the plurality of features comprises at least one of the following features: area, mean brightness, average gradient, percentage of green or dark areas, shape ratio, region saturation, average edge brightness, radius contrast and gray-level co-occurrence matrix.
5. The cast recognition method according to claim 1, characterized in that
in the classification step, each cast candidate is classified based on a tree structure according to the plurality of features to determine whether it is a cast.
6. The cast recognition method according to claim 2, characterized in that
the predetermined condition is whether at least one of the area of the cast candidates, the average gradient of the largest cast candidate and the transparency is within a predetermined threshold range.
7. A cast recognition device, comprising:
an image acquisition component for obtaining an input image to be processed;
a segmentation component for producing, based on a current-level image, a first image indicating cast candidates, wherein in an initial state the current-level image is the input image;
a classification component for computing, based on a grayscale image of the current-level image and/or the first image, a plurality of features for each cast candidate, and classifying each cast candidate based on the plurality of features to determine whether it is a cast.
8. The cast recognition device according to claim 7, characterized by further comprising:
a scale transformation component for, when no cast is detected by the classification component, determining whether a predetermined condition is satisfied; if the predetermined condition is satisfied, applying a scale transformation to the current-level image to obtain a next-level image and inputting the next-level image to the segmentation component so that the segmentation processing and the classification processing are performed again; and, if the predetermined condition is not satisfied, ending the processing.
9. The cast recognition device according to claim 7, characterized in that the segmentation component comprises:
an edge filtering component for performing edge filtering on the current-level image to obtain an image indicating edges;
a first predetermined processing component for performing a first predetermined process on the edge-filtered image to obtain a binary map in which the background is black and the foreground objects are a plurality of white cast candidates;
a second predetermined processing component for performing a second predetermined process on the binary map to obtain a first image in which the background is black and the foreground objects are a plurality of cast candidates with different brightness values, different cast candidates being marked with different brightness values.
10. The cast recognition device according to claim 7, characterized in that
the plurality of features comprises at least one of the following features: area, mean brightness, average gradient, percentage of green or dark areas, shape ratio, region saturation, average edge brightness, radius contrast and gray-level co-occurrence matrix.
11. The cast recognition device according to claim 7, characterized in that
the classification component classifies each cast candidate based on a tree structure according to the plurality of features to determine whether it is a cast.
12. The cast recognition device according to claim 8, characterized in that
the predetermined condition is whether at least one of the area of the cast candidates, the average gradient of the largest cast candidate and the transparency is within a predetermined threshold range.
13. A urine analyzer, comprising a cast recognition device according to any one of claims 7-12.
CN201210418837.9A 2012-10-26 2012-10-26 Casts identification method and device, and urine analyzer Pending CN103793902A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201210418837.9A CN103793902A (en) 2012-10-26 2012-10-26 Casts identification method and device, and urine analyzer
PCT/US2013/065856 WO2014066218A2 (en) 2012-10-26 2013-10-21 Cast recognition method and device, and urine analyzer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210418837.9A CN103793902A (en) 2012-10-26 2012-10-26 Casts identification method and device, and urine analyzer

Publications (1)

Publication Number Publication Date
CN103793902A true CN103793902A (en) 2014-05-14

Family

ID=50545448

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210418837.9A Pending CN103793902A (en) 2012-10-26 2012-10-26 Casts identification method and device, and urine analyzer

Country Status (2)

Country Link
CN (1) CN103793902A (en)
WO (1) WO2014066218A2 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108665889B (en) * 2018-04-20 2021-09-28 百度在线网络技术(北京)有限公司 Voice signal endpoint detection method, device, equipment and storage medium


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050251347A1 (en) * 2004-05-05 2005-11-10 Pietro Perona Automatic visual recognition of biological particles
CN1645139A (en) * 2004-12-27 2005-07-27 长春迪瑞实业有限公司 Method for analysing non-centrifugal urine by image identifying system
WO2009125678A1 (en) * 2008-04-07 2009-10-15 株式会社日立ハイテクノロジーズ Method and device for dividing area of image of particle in urine
CN101447080A (en) * 2008-11-19 2009-06-03 西安电子科技大学 Method for segmenting HMT image on the basis of nonsubsampled Contourlet transformation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHUN-YAN LI ET AL.: "AUTOMATIC DETECTING AND RECOGNITION OF CASTS IN URINE SEDIMENT IMAGES", 《PROCEEDINGS OF THE 2009 INTERNATIONAL CONFERENCE ON WAVELET ANALYSIS AND PATTERN RECOGNITION》 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105760878A (en) * 2014-12-19 2016-07-13 西门子医疗保健诊断公司 Method and device for selecting urinary sediment microscope image with optimal focusing performance
CN109447119A (en) * 2018-09-26 2019-03-08 电子科技大学 Cast recognition methods in the arena with SVM is cut in a kind of combining form credit
CN110473167A (en) * 2019-07-09 2019-11-19 哈尔滨工程大学 A kind of urine sediment image identifying system and method based on deep learning
CN110473167B (en) * 2019-07-09 2022-06-17 哈尔滨工程大学 Deep learning-based urinary sediment image recognition system and method
CN112508854A (en) * 2020-11-13 2021-03-16 杭州医派智能科技有限公司 Renal tubule detection and segmentation method based on UNET
CN112508854B (en) * 2020-11-13 2022-03-22 杭州医派智能科技有限公司 Renal tubule detection and segmentation method based on UNET
CN116883415A (en) * 2023-09-08 2023-10-13 东莞市旺佳五金制品有限公司 Thin-wall zinc alloy die casting quality detection method based on image data
CN116883415B (en) * 2023-09-08 2024-01-05 东莞市旺佳五金制品有限公司 Thin-wall zinc alloy die casting quality detection method based on image data

Also Published As

Publication number Publication date
WO2014066218A3 (en) 2014-07-10
WO2014066218A2 (en) 2014-05-01

Similar Documents

Publication Publication Date Title
Viswanathan Fuzzy C means detection of leukemia based on morphological contour segmentation
WO2018107939A1 (en) Edge completeness-based optimal identification method for image segmentation
CN103793902A (en) Casts identification method and device, and urine analyzer
Moussa et al. A new technique for automatic detection and parameters estimation of pavement crack
CN110008932A (en) A kind of vehicle violation crimping detection method based on computer vision
Hari et al. Separation and counting of blood cells using geometrical features and distance transformed watershed
CN103778627A (en) Sea oil spill detection method based on SAR image
Suryani et al. Identification and counting white blood cells and red blood cells using image processing case study of leukemia
Hidayatullah et al. Automatic sperms counting using adaptive local threshold and ellipse detection
Deshmukh et al. Segmentation of microscopic images: A survey
CN103049788A (en) Computer-vision-based system and method for detecting number of pedestrians waiting to cross crosswalk
Plissiti et al. Automated segmentation of cell nuclei in PAP smear images
CN105787912A (en) Classification-based step type edge sub pixel localization method
CN113962994B (en) Method for detecting cleanliness of lock pin on three-connecting-rod based on image processing
CN109543498A (en) A kind of method for detecting lane lines based on multitask network
Deepa et al. Improved watershed segmentation for apple fruit grading
CN115330792A (en) Sewage detection method and system based on artificial intelligence
JP2016115084A (en) Object detection device and program
Gim et al. A novel framework for white blood cell segmentation based on stepwise rules and morphological features
Abraham et al. A fuzzy based automatic bridge detection technique for satellite images
Yu et al. Crack detection algorithm of complex bridge based on image process
Hussain et al. An analytical study on different image segmentation techniques for malaria parasite detection
CN111191534A (en) Road extraction method in fuzzy aerial image
Al-Muhairy et al. Automatic white blood cell segmentation based on image processing
Al-Shammaa et al. Extraction of connected components Skin pemphigus diseases image edge detection by Morphological operations

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140514