EP1038267A1 - Method and device for detecting and processing images of biological tissue - Google Patents
Method and device for detecting and processing images of biological tissue
- Publication number
- EP1038267A1 (application EP98965797A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- image
- color
- mask
- tissue section
- edge
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration using histogram techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/92—Dynamic range modification of images or parts thereof based on global image properties
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20004—Adaptive image processing
- G06T2207/20008—Globally adaptive
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30088—Skin; Dermal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Definitions
- the invention relates to methods for capturing and processing images of biological tissue, in particular for processing tissue images, e.g. dermatoscopic images, in order to identify tissue features or to identify and/or evaluate tissue changes.
- the invention also relates to devices for implementing such methods.
- SVM (scaling vector method) is a method for capturing target patterns in a texture, with which a system state can be represented as a distribution of points in an n-dimensional space.
- in SVM, anisotropic scaling indices are determined which characterize how the projection of the point density onto a certain coordinate depends on the distance to a point under consideration.
- SVM also includes a comparison of the distribution of the anisotropic scaling indices with predetermined comparison distributions.
- the object of the invention is to provide improved image recording and image processing methods for imaging biological tissue which are characterized by high reproducibility, which enable standardization, and which expand the field of application of the conventional techniques mentioned.
- the object of the invention is also to provide devices for implementing such improved image processing methods.
- the solution to the above-mentioned object is based on the idea of carrying out individual or all steps of image acquisition, image preprocessing and image analysis, depending on the specific requirements for the processing result, under standardized conditions that guarantee comparability of the captured digital images and/or their characteristic image parameters independently of time and device. For this purpose, a series of corrections of the image recording conditions as well as normalizations, transformations and/or projections of the recorded images into point spaces are carried out, in which parameter analyses can be performed in a standardized and reproducible manner.
- the measures according to the invention for improving the recording on the one hand and the image processing (image preprocessing and image analysis) on the other hand can be provided individually or together, depending on the requirements of a specific application. Even if, in the application of the invention explained in detail below (the acquisition and processing of dermatoscopic images), the correction steps in the image acquisition interact particularly advantageously with the standardization and transformation steps in the image processing, it is also possible in other applications, which are likewise mentioned below, to provide either only the improvement of the image recording according to the invention or only the improvement of the image processing according to the invention.
- a particular advantage of the invention results from the novel image processing, in particular from mask processing, which provides reproducible results that are compatible with practical experience from non-automatic tissue recognition.
- Fig. 3 Histogram representations (N(α) spectra) for the application of the scaling index method for image cleaning;
- Fig. 8a, 8b schematic edge representations to illustrate the analysis of the image parameters which are characteristic of edge properties
- Fig. 9a, 9b overview representations of the procedure for determining image parameters which are characteristic of the color properties of an examined tissue section
- the image acquisition comprises the process steps which are carried out to deliver a brightness- and color-compensated digital image (e.g. an RGB image), which is the starting point of the image processing described below.
- the image acquisition includes, in particular, adjustment measurements for determining brightness and color correction quantities, the acquisition of images of an object to be examined, and correction steps for dynamic correction of the acquired raw image on the basis of the brightness and color correction quantities.
- the adjustment measurements are required once for a specific measurement configuration, which is characterized by unchanged camera conditions, lighting conditions and a constant imaging scale. Details of the image acquisition are shown schematically in FIG. 1.
- the adjustment measurements 10 include a black image recording 11, a white image recording 12, a color reference recording 13 and the correction size determination 14.
- the adjustment measurements 10 are first carried out with a signal adaptation of the black and white levels to the video system used. With the lighting switched on and the image focused, the aperture or a corresponding shutter device is opened during the acquisition of a white image far enough that the channel level is fully utilized. All lighting, focusing, diaphragm and amplification settings of the recording system are saved as a configuration file. This signal adaptation is repeated whenever one of the settings changes due to operational changes or instabilities.
- the camera lens is completely darkened and a signal level is set for the black level.
- the signal level is set in such a way that the gray value distributions in the three color channels are equal to one another and lie below predetermined limits. If the image is recorded, for example, with a digital video system with a signal resolution of 256 levels, the predetermined limit can be selected within the lower levels 0 to 15.
- the black images S (x, y) of the color channels recorded with the determined signal level settings serve as black reference images.
- in the white image recording 12, a white or gray reference image is recorded accordingly, and the aperture and/or the camera amplification is set in such a way that the signal levels in the three color channels are identical to one another and lie above a predetermined white limit.
- the white limit can be in the range of the upper signal levels 200 to 255.
- the white images of the color channels recorded with the determined aperture and/or gain values serve as white reference images W(x, y).
- the white image recording 12 is carried out for each enlargement setting in which the actual image acquisition takes place later.
- An average black value S_m and an average white value W_m are then determined from the black and white images.
- a so-called shading matrix is formed from the black and white images.
- the color reference recording 13 is provided in order to be able to correct camera-specific color errors of the recorded raw images.
- the color reference recording 13 is therefore not absolutely necessary if, for an intended application, the comparison of images recorded with different cameras is not of interest.
- the color reference recording 13 takes place in such a way that first color references are acquired as target values according to a reference card with predetermined, known color values.
- the detection takes place, for example, by storing the setpoints in the camera system.
- the reference card is then recorded with the camera actually used.
- the color values obtained serve as actual values, the deviation of which from the target values is determined.
- the target-actual comparison is carried out using one or more known regression methods (e.g. based on the least squares calculation) or using neural networks.
- the result of the calculation is a 3×3 correction matrix K (k11, k12, k13, k21, ..., k33) with which a recorded RGB image can be converted into a corrected R'G'B' image.
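- by way of illustration, the target-actual comparison via least squares can be sketched as follows (a minimal numpy sketch; the reference-card patch values are invented for the example):

```python
import numpy as np

# Hypothetical reference-card data: each row is one color patch.
# 'actual' holds RGB values measured with the camera, 'target' the
# known reference values of the card.
actual = np.array([[52., 40., 38.], [180., 175., 170.], [200., 60., 55.],
                   [60., 150., 70.], [55., 65., 160.], [120., 120., 120.]])
target = np.array([[50., 40., 40.], [185., 180., 178.], [210., 55., 50.],
                   [55., 160., 65.], [50., 60., 170.], [122., 122., 122.]])

# Least-squares solution of actual @ X = target (one linear model per
# output channel); K = X.T so that a single pixel is corrected as K @ rgb.
X, *_ = np.linalg.lstsq(actual, target, rcond=None)
K = X.T

def color_correct(image, K):
    """Convert an H x W x 3 RGB image into the corrected R'G'B' image."""
    return np.clip(image @ K.T, 0, 255)
```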
- the reference card on the basis of which the color reference recording 13 takes place can be a standard reference card known per se. However, it is also possible, as an alternative, to integrate the color reference card in an application-specific manner into the recording system. For example, in the case of dermatological examinations in which the skin area to be examined lies against a transparent plate through which the image is taken, it is possible to attach color reference markings to the plate. This is particularly advantageous since the color reference recording 13 can then be carried out under exactly the same conditions as the actual image acquisition 20.
- the correction quantity determination 14 (see FIG. 1) comprises the provision of the shading matrix and the color correction matrix K in a data format suitable for further processing.
- the image capture 20 comprises all steps of capturing a digital video image of the object to be examined.
- object lighting is based on halogen lighting systems known per se (possibly in combination with light guides) or on light-emitting diodes. In the latter case, for example, LED systems arranged in a ring are used. These so-called LED rings comprise, for example, 4 up to over 100 LEDs.
- the advantages of the LED rings are the simplified handling, the increased service life and stability, the reduced problems with the light supply, since no sensitive light guides are required, the problem-free voltage control and the significantly reduced temperature effects.
- the video image is recorded with a camera system that is set up for image magnification in the range of approximately 1 to 20 times.
- the magnification is selected depending on the application and is preferably in the range from 3 to 5 when examining melanomas.
- the magnification is chosen, for example, in such a way that an area of 11.8 × 11.8 mm (skin section) is captured entirely on the camera chip in a square pixel field of 512 × 512 pixels.
- the resulting pixel size of 0.023 mm defines the resolution of the digital video image.
- the camera system is to be operated under remote control so that all settings can be controlled with a computer.
- the captured raw image is subjected to the correction steps 30 for providing the captured image, which is the subject of the image processing.
- the correction steps 30 include a shading correction or background compensation 31 and possibly a color correction 32.
- the shading correction 31 comprises an additive correction that is preferably used if inhomogeneities of the lighting predominate, or a multiplicative correction that is preferably used if inhomogeneities of the sensors predominate.
- the background compensation for determining the additively (B_A) or multiplicatively (B_M) corrected image from the raw image B is carried out according to:
- B_M(x, y) = B(x, y) · W_m / (W(x, y) − S_m) (multiplicative).
- the corrected image B_A or B_M is multiplied by the correction matrix K in accordance with the color correction form specified above (color correction 32).
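- both correction steps together can be sketched as follows (a minimal numpy sketch, assuming the multiplicative form as reconstructed above, with scalar mean values S_m and W_m):

```python
import numpy as np

def correct_raw_image(raw, black, white, K):
    """Multiplicative shading correction followed by color correction.

    raw, black, white: H x W x 3 float arrays (raw image, black and
    white reference images); K: 3 x 3 color correction matrix.
    """
    s_m = black.mean()                              # average black value S_m
    w_m = white.mean()                              # average white value W_m
    shading = w_m / np.maximum(white - s_m, 1e-6)   # per-pixel "shading matrix"
    b_m = raw * shading                             # background-compensated image B_M
    return np.clip(b_m @ K.T, 0, 255)               # color correction 32
```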
- the image existing after the shading and color correction according to the invention has the advantage that the pixel values (brightness, colors) are independent of specific properties of the camera system and the recording conditions, so that the comparability of the image with images recorded at other times and locations is guaranteed, and that the captured image represents a reproducible, standardized input quantity for the subsequent image processing.
- with remote-controlled CCD cameras, all settings are carried out and registered under computer control. Deviations from the setpoint settings are thus recognized.
- the image processing provides image features that are the starting point for a subsequent identification and evaluation of skin changes with regard to their dignity and development.
- the image processing according to the invention comprises, after the acquisition of a digital image, which has been described with reference to FIG. 1, an image preprocessing 100 and an evaluation-relevant image processing (image analysis) 200. Further processing steps follow with the visualization 300 and classification 400.
- the image preprocessing 100 is generally aimed at producing excerpts of feature-specific regions ("regions of interest") of the digital image (so-called "masks"), which are intended to form input variables for the linear or non-linear methods of image processing used within the framework of the subsequent image analysis 200.
- the required mask types depend on the features to be analyzed and the intended linear or non-linear image processing methods, such as the scaling index method, the determination of the fractal dimension of image structures, the determination of generalized or local entropy dimensions, the use of statistical methods or the like.
- the masks are in binary form.
- the masks provided as a result of the image preprocessing 100 include an artifact mask for marking disturbances in the recorded image of the examined biological tissue, an object mask for binary marking of a specific tissue section within the recorded image, a contour mask that represents the one-dimensional (line-like) borderline of the object mask, an edge mask for the two-dimensional representation of the edge area of the examined tissue section, and a color image reduced to prototypical colors (a so-called "color symbol") with a quantized color representation of the captured image.
- further results of the image preprocessing 100 are the size normalization factor f (also called scaling factor) and color transformations of the captured image.
- the image evaluation is broken down into elementary classes of image characteristics which are independent of one another, namely geometry, coloring and structure properties.
- the masks and the normalization and transformation quantities are saved for later image analysis.
- a particular advantage of this procedure is that the mask formation and normalization (size, color) assigns to each captured image a derived image, so that the image analysis is then carried out for all captured and reference images under comparable, standardized conditions.
- the image preprocessing 100 contains an image correction 101.
- the image correction 101 is provided to remove non-tissue image elements from the captured image.
- Such disturbances (or: artifacts) caused by non-tissue elements are formed, for example, by hair, light reflections in the image, bubbles in an immersion liquid, other non-tissue particles or the like.
- after the image correction 101, which is described in detail below, segmentation 102 of the recorded image takes place.
- the segmentation 102 is aimed at separating a specific tissue section of interest (image detail) from the remaining tissue.
- the segmentation is aimed, for example, at determining the points (pixels) of the digital image which are part of the skin change and making them available for further processing, and at separating from the image the other points, which depict the surrounding skin surface.
- following the segmentation 102, which is described in detail below, the object mask is determined; the contour mask results from a contour calculation 103, the size normalization factor f from a normalization 104, and the edge mask from an edge determination 105.
- the color processing 106, 107 takes place before the image cleaning 101 and segmentation 102 or parallel to these steps and includes a color quantization 106 and a color transformation 107.
- the color transformation is not necessarily part of the image preprocessing 100, but rather can also be carried out as part of the image analysis.
- the image preprocessing 100 can include further steps, which include application-dependent or target-oriented measures for the standardized representation of the captured image. For example, additional geometric transformations can be provided, with which distortions caused by the exposure are eliminated.
- an equalization step is provided, in which a curved tissue area is projected onto a two-dimensional plane, taking into account the curvature properties and the optical recording parameters.
- the recorded image is first converted into a gray-scale image, which is then processed further in accordance with one of the following procedures.
- the gray value image is processed with a line detector known per se (an algorithm for connecting gray value gradient maxima), which provides information about the position, direction and width of lines in the recorded image. Subsequently, very short lines with strongly varying direction information (so-called phantoms) are detected by comparing the determined lines with predetermined limit values and are excluded from further image correction. Finally, gaps between line fragments are closed and the resulting line image is saved as the artifact mask.
- the scaling index method (SIM) mentioned above is used after increasing the contrast of the gray-scale image. When increasing the contrast, bright areas are assigned an increased brightness and dark areas a further reduced brightness by multiplication with a sigmoid function referenced to an average gray value.
- the nine gray values of the 3×3 environment of each pixel are combined with the two location coordinates of the pixel to form an 11-dimensional space.
- the scaling indices are determined by means of SIM, the line-characterizing band is extracted from the spectrum of the indices, and the associated pixels are marked in a digital image.
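- a minimal sketch of this embedding and of the extraction of the line band could look as follows (the query radii, the per-dimension scaling and the band limits 1 to 2 are illustrative choices, not values from the patent):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from scipy.spatial import cKDTree

def sim_line_mask(gray, r1=1.0, r2=2.0):
    h, w = gray.shape
    # Nine gray values of the 3x3 environment of each interior pixel.
    patches = sliding_window_view(gray, (3, 3)).reshape(h - 2, w - 2, 9)
    ys, xs = np.mgrid[1:h - 1, 1:w - 1]
    # 11-dimensional points: gray values plus the two location coordinates.
    points = np.concatenate([patches.reshape(-1, 9),
                             np.stack([ys.ravel(), xs.ravel()], axis=1)],
                            axis=1).astype(float)
    points /= points.std(axis=0) + 1e-9   # bring dimensions to comparable scales
    tree = cKDTree(points)
    # Scaling index: local growth rate of the neighbor count with radius.
    n1 = tree.query_ball_point(points, r1, return_length=True)
    n2 = tree.query_ball_point(points, r2, return_length=True)
    alpha = np.log(np.maximum(n2, 1) / np.maximum(n1, 1)) / np.log(r2 / r1)
    alpha = alpha.reshape(h - 2, w - 2)
    # Extract the line-characterizing band of the N(alpha) spectrum.
    return (alpha >= 1.0) & (alpha <= 2.0)
```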
- an example of a spectrum display according to the SIM is shown in FIG. 3. It can be seen that there is a large cluster for scaling indices larger than three (right maximum). The pixels with scaling indices in the range between 1 and 2 are clearly separated from this cluster and are assigned to the line structures.
- the digital image of the line structures determined with the SIM is subjected to a standard erosion.
- the eroded digital image of the line structures is combined by means of an AND operation with a binarized gray-scale image of the recorded image, which yields the artifact mask.
- the binarized gray-scale image is generated by subjecting the recorded image to a binary median filter, which sets each pixel to black or white depending on whether its gray-scale value lies below or above the median of the surrounding pixels.
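- a compact sketch of this combination (assuming dark line artifacts, e.g. hair, on a lighter background; the neighborhood size of the median filter is an illustrative choice):

```python
import numpy as np
from scipy import ndimage

def artifact_mask(gray, line_mask):
    """AND-combination of eroded SIM line structures with a binarized image."""
    eroded = ndimage.binary_erosion(line_mask)        # standard erosion
    local_median = ndimage.median_filter(gray, size=9)
    binarized = gray < local_median                   # binary median filter
    return eroded & binarized                         # yields the artifact mask
```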
- the artifact mask can be subjected to a phantom cleaning, in which particularly small line structures are removed from the artifact mask by so-called scrabbling.
- the explained image correction method can also be applied to other image disturbances whose scaling indices differ significantly from those of the tissue image, so that the corresponding pixels can be detected in the recorded image by simple observation of the spectra.
- FIG. 4 shows an example of an artifact mask for a skin image, in which hair is arranged around the lesion to be examined (middle part of the image).
- the image correction 101 according to the invention is particularly advantageous since image disturbances are detected with high speed, selectivity and completeness. Furthermore, the artifact mask supplies an exact position determination of the image disturbances.
- the segmentation 102 serves to separate the depicted lesion from the intact skin background.
- the image is first transformed from the RGB space (image acquisition) into a suitable other color space (e.g. via a principal axis transformation).
- the target color space is selected so that, after the color transformation, the transformed image can be projected (reduced) onto a color plane in which the segmentation can be carried out particularly effectively and reliably by means of a threshold value separation. If pigment changes of the skin are to be detected, the transformed image is preferably reduced to a blue plane.
- the histogram of the color-transformed image cleaned of artifact elements (here, for example, of the blue values) is then created, and an iterative histogram analysis is used to determine the color threshold value which separates the image segment to be segmented (the lesion) from the image background (intact skin).
- the histogram is a frequency distribution of the blue values that fall within certain reference intervals.
- in the iterative histogram analysis according to FIG. 5, a histogram with relatively wide reference intervals is formed first.
- the reference interval with the minimum blue value frequency is determined.
- within the reference interval that contained the minimum in the previous histogram, a new histogram with finer intervals is formed and the interval with the minimum frequency is searched for again.
- the histogram according to FIG. 5d with a fine reference interval width shows a first maximum at low blue values (corresponding to low brightness), which corresponds to the dark lesion area, and a second maximum at higher blue values (higher brightness), which corresponds to the surrounding skin.
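- the coarse-to-fine minimum search can be sketched as follows (bin counts and the number of refinement levels are illustrative; details such as restricting the search to the range between the two maxima are omitted):

```python
import numpy as np

def iterative_threshold(values, coarse_bins=16, fine_bins=8, levels=3):
    """Iterative histogram analysis on a 1-D array of (e.g. blue) values."""
    lo, hi = float(values.min()), float(values.max())
    bins = coarse_bins
    for _ in range(levels):
        counts, edges = np.histogram(values, bins=bins, range=(lo, hi))
        i = int(np.argmin(counts))    # reference interval with minimum frequency
        lo, hi = edges[i], edges[i + 1]
        bins = fine_bins              # finer intervals inside that interval
    return 0.5 * (lo + hi)            # threshold: center of the final interval
```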
- the histogram analysis can be combined with a determination of the number of objects and the selection of the object closest to the center of the image.
- the object mask (see FIG. 2) is determined by combining all points of the transformed image with a blue value less than or equal to the determined threshold value and subjecting them to an area growth (region growing) method. This means that additional pixels in the neighborhood of the ascertained pixels are added if they satisfy a predetermined homogeneity criterion within tolerance limits.
- a further post-processing step can include the closing of gaps and a smoothing of the outer boundary of the determined pixels.
- the digital image obtained after the area growth is the object mask according to FIG. 2.
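- a minimal region-growing sketch (the homogeneity criterion used here, a fixed tolerance around the mean seed value, is one possible choice; the patent leaves the criterion application-specific):

```python
import numpy as np
from scipy import ndimage

def object_mask(blue, threshold, tol=10.0):
    seed = blue <= threshold                     # pixels below the threshold
    candidate = blue <= blue[seed].mean() + tol  # homogeneity criterion
    grown = seed.copy()
    struct = ndimage.generate_binary_structure(2, 1)
    while True:                                  # area growth into the neighborhood
        nxt = ndimage.binary_dilation(grown, struct) & candidate
        if (nxt == grown).all():
            break
        grown = nxt
    grown = ndimage.binary_closing(grown, iterations=2)   # close gaps
    return ndimage.binary_opening(grown, iterations=1)    # smooth the boundary
```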
- the iterative histogram analysis according to the invention is particularly advantageous since, with a simple basic assumption (dark object (lesion), light surroundings (skin)), a reproducible segmentation limit is reliably determined as a minimum between light and dark areas; the implementation with iterative histogram generation at increasing interval resolution avoids fixing the segmentation limit (threshold value) at the location of local minima. Another advantage is that with this algorithm the threshold separation can be fully automated and carried out at high speed.
- the object mask is used to determine the peripheral edge (segmentation limit, contour mask) by simply determining the edge pixels.
- an edge mask is also determined according to the invention, from which characteristic properties of the edge width and homogeneity can be derived in a subsequent image analysis.
- the edge determination 105 (see FIG. 2) for determining the edge mask comprises a first step of edge definition and a second step of edge scaling. Both steps are carried out under the general standardization requirement for establishing comparability with reference images.
- the edge definition is based on the histogram analysis mentioned above: an edge interval is defined around the determined threshold value (segmentation limit), and all pixels are determined whose blue values fall within this edge interval.
- alternatively, it is possible to calculate a skin similarity measure for each pixel of the determined lesion, to sort the lesion pixels according to their skin similarity, and to define a certain proportion of the most skin-like points as the lesion edge.
- the skin similarity measure is in turn given by the color value (eg blue value).
- the proportion of the image points defined as the edge can comprise, for example, the 10% most skin-like pixels, although this value can be optimized depending on the application and the results of the image processing.
- the determined edge area can then be subjected to a size normalization in order to obtain the edge mask, by scaling the binary edge image determined by the edge definition with the size normalization factor f.
- the size normalization factor f results from the normalization step 104 (see FIG. 2), in which the object mask of the lesion is scaled relative to a specific reference area (with a specific number of pixels) that is uniform for reference purposes.
- an example of an edge mask from an image of a pigment change processed according to the invention is shown in FIG. 6. It can be seen that the determination of the edge mask creates a two-dimensional structure which, in addition to the purely geometric position of the lesion edge, contains additional information about its extent and structure, which will be discussed in more detail below.
- the normalization factor f is determined by f = (A_ref / A_a)^(1/2), where A_ref denotes an object reference area and A_a the area of the current object.
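- in code, the size normalization might look as follows (assuming the exponent 1/2, so that linear dimensions scale with the square root of the area ratio; the reference area A_ref is an arbitrary example value):

```python
import numpy as np
from scipy import ndimage

def size_normalize(mask, a_ref=10000.0):
    a_a = float(mask.sum())              # area A_a of the current object
    f = np.sqrt(a_ref / a_a)             # size normalization factor f
    scaled = ndimage.zoom(mask.astype(float), f, order=1) > 0.5
    return scaled, f
```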
- the color quantization 106 is aimed at generating an image (color symbol) quantized in relation to prototypical colors from the recorded image, from which color features can be derived which are reproducibly comparable with corresponding color features of reference images.
- the formation of the color symbol represents a coding and comprises the projection of the colored image onto a set of prototypical colors.
- the prototypical colors result from a separate application-specific definition of color clusters in a suitable color space.
- the color clusters form a partitioned feature space and are determined using a hierarchical fuzzy clustering method from a training set of a large number of recorded color images.
- a color space is selected which, with regard to the evaluation accuracy, delivers particularly good, class-uniform results for the later derivation of color features.
- the so-called YUV color space, in which two axes are formed by color differences and one axis by a brightness measure, has proven suitable for processing skin images.
- the color palette of the captured image is preferably reduced to approx. 10 to 30 prototypical color clusters.
- the projection onto the clusters takes place by determining the Euclidean distance to all clusters for each pixel, determining the minimum distance to a nearest cluster and assigning the respective pixel to this nearest cluster (so-called "nearest neighbor matching").
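- a minimal sketch of this projection (the prototype clusters are assumed to be given, e.g. from a prior clustering of training images in the chosen color space):

```python
import numpy as np

def color_symbol(image, prototypes):
    """Assign each pixel of an H x W x 3 image to its nearest prototype.

    prototypes: M x 3 array of prototypical colors; returns an H x W
    index image (the color symbol).
    """
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3).astype(float)
    # Euclidean distance of every pixel to every cluster center.
    d = np.linalg.norm(pixels[:, None, :] - prototypes[None, :, :], axis=2)
    return d.argmin(axis=1).reshape(h, w)    # "nearest neighbor matching"
```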
- the mask processing takes place as a function of correction variables which are derived from the image analysis 200, the visualization 300 and/or the classification 400.
- the A, B, C and D components of the image parameters determined according to the invention, described below, are not diagnosis results. In applications of the invention other than the melanoma examination given by way of example, it is possible to determine in the image analysis 200 only individual parameters described below, or further parameters derived therefrom.
- Global components (index in the figures: G)
- Local components (index in the figures: L)
- Local components are thus location-related, quantitative parameters that relate to specific image coordinates with an application-specific, definable spatial resolution.
- the spatial resolution can correspond to one image pixel or a large number of image pixels. Up to 100 image pixels can typically be combined to form image tiles, as can also be seen from FIG. 13d (see below).
- the simultaneous detection of local and global components for the image parameters represents a particular advantage of the invention.
- local parameters can also be specified and, as illustrated in FIGS. 7, 9, 10 and 11, visualized. This enables the user of the system (e.g. the dermatologist) to identify the local image features on the basis of which the image processing system delivers certain global image features, and then to carry out the diagnosis on the basis of this finding.
- the determination of the B component(s) (step 220, FIG. 2) for evaluating the transition or edge of the lesion is described below with reference to FIG. 7.
- FIG. 7 schematically shows details of the border determination 220 on the basis of the features of the edge mask, the contour mask and the size normalization factor f.
- the border determination 220 comprises an edge point classification 221, in which the pixels of the edge mask are assigned to different submasks depending on their respective scaling index; a homogeneity determination 222, in which a parameter of the border homogeneity is derived from the spatial arrangement of the edge mask; an edge profile determination 223, in which features of the abruptness of the lesion transition are derived; a detection of the edge direction continuity 224 by means of variance features of the edge mask; and a detection of the contour roughness 225 by means of the fractal dimension of the contour mask.
- the edge point classification 221 is aimed at separating lesions with a flat, extended edge region, as is typical for melanomas, from lesions with a sharply delimited edge region.
- the result of the boundary point classification 221 is shown visually as follows or represented by a parameter B1.
- the scaling index is calculated for each point of the edge mask.
- the number of pixels whose scaling index lies above a predetermined limit and the number of remaining pixels are then determined.
- the pixels are assigned to submasks below or above the limit, which are then visualized on a display for further assessment.
- in the case of FIG. 8A, the submasks, which each represent the pixels with α ≤ 1 or α > 1, comprise areas of almost the same size, whereas in the case of FIG. 8B the submask with α > 1 is much more pronounced.
- the lesion shown in FIG. 8B has a larger number of pixels in the edge region which have a relatively large scaling index.
- the edge area according to FIG. 8A is more sharply delimited, so that the lesion is assigned not to a malignant melanoma but to a benign pigment change.
- the visualized result can also be expressed with the B1 parameter according to:
- B1 = n_(α>1) / N_edge (1), where n_(α>1) denotes the number of pixels with a scaling index α > 1 and N_edge the total number of pixels of the edge mask.
- a measure B2 is determined to record the fluctuations of the border thickness along the extent of the lesion.
- the parameter B2 results from the parameter B1 according to equation (1) and a large number of local parameters b1_i:
- the local parameters b1_i result from covering the edge mask with a grid and calculating the local parameter b1_i analogously to equation (1) for all n grid windows which contain a minimum number of pixels.
- the grid size and the minimum number of image points are selected appropriately depending on the size of the lesion.
- the parameter for the edge homogeneity B2 grows if there are large fluctuations of the local parameters b1_i with respect to the global parameter B1 along the circumference.
- a larger parameter B2 thus means a larger border inhomogeneity.
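- a sketch of the computation of B1 and of a B2-like inhomogeneity measure (since the exact form of equation (2) is not reproduced here, B2 is approximated as the rms deviation of the local values b1_i from B1; grid and minimum pixel counts are illustrative):

```python
import numpy as np

def b1_b2(alpha, edge_mask, tile=32, min_pixels=20):
    """alpha: image of scaling indices; edge_mask: boolean edge mask."""
    b1 = (alpha[edge_mask] > 1.0).mean()              # equation (1)
    h, w = edge_mask.shape
    locals_ = []
    for y in range(0, h, tile):                       # grid covering the mask
        for x in range(0, w, tile):
            m = edge_mask[y:y + tile, x:x + tile]
            if m.sum() >= min_pixels:
                a = alpha[y:y + tile, x:x + tile][m]
                locals_.append((a > 1.0).mean())      # local parameter b1_i
    b2 = float(np.sqrt(np.mean((np.array(locals_) - b1) ** 2)))
    return float(b1), b2
```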
- the edge profile determination 223 serves to further characterize the abruptness of the lesion transition.
- a local environment (window) is defined for each pixel of the edge mask, and the orientation of the edge within this local environment is determined.
- the alignment results from a linear regression of the neighboring points within the window.
- the edge angle of the regression line with respect to the horizontal is determined, and the set of pixels in the window under consideration is rotated into the horizontal by this edge angle. This is illustrated by way of example in FIG. 6.
- the anisotropic scaling indices α_y in the y direction are then determined using the scaling vector method (SVM), as described in DE 196 33 693.
- n(α_y < 0.3) denotes the number of pixels with an anisotropic scaling index α_y < 0.3.
- another suitable limit value can also be selected instead of the limit value 0.3.
- the quantity n refers to the entire edge.
- One or more of the following alternative options I, II and III can be implemented for the detection of the edge direction continuity 224 (see FIG. 7).
- according to alternative I, an edge angle is first calculated for each pixel of the edge mask in accordance with its local environment. The respective edge angle value is assigned to each pixel.
- the scaling indices α of the angle value set are then calculated according to the isotropic SIM, as is known from DE 43 17 746.
- the B parameter B4 then results analogously to equation (3), with the scaling indices of the angle value set taken into account here.
- the distribution of the local angle variances in alternatives II and III is quantified by their statistical moments (mean, variance, etc.) and/or by an entropy measure and expressed by the parameter B4.
- according to alternative II, the edge angle is calculated for each pixel in its local environment (window).
- the variance of the edge angles in a window ("sliding window") centered on the point under consideration is calculated for the total number of points.
- the parameter B4 then results as the entropy of the normalized frequency distribution of the angle variances over all pixels of the edge mask.
- the parameter B4 (entropy) is a measure of the occupation of the intervals of the histogram of the angle variances and grows with the width of the variance distributions.
- according to alternative III, the parameter B4 is the entropy of the normalized frequency distribution of the angle differences over all pixels of the edge mask. The more rugged the edge mask is, the more the angles associated with neighboring pixels differ, so that the parameter B4 is again a statistical measure of the regularity of the border.
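- alternative II might be sketched as follows (window and histogram bin sizes are illustrative choices):

```python
import numpy as np

def b4_entropy(edge_angles, edge_mask, win=7, bins=32):
    """Entropy of the angle-variance histogram over all edge pixels."""
    r = win // 2
    variances = []
    for y, x in zip(*np.nonzero(edge_mask)):
        m = edge_mask[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
        a = edge_angles[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1][m]
        if a.size > 1:
            variances.append(a.var())      # variance in the "sliding window"
    p, _ = np.histogram(variances, bins=bins)
    p = p[p > 0] / p.sum()                 # normalized frequency distribution
    return float(-(p * np.log(p)).sum())   # parameter B4
```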
- the roughness determination 225 does not refer to the edge mask but to the contour mask (the one-dimensional edge of the object mask; see FIG. 2).
- the parameter B5 results from the determination of the fractal dimension (hereinafter: FD)
- the image of the contour mask can first be subjected to a size normalization taking into account the normalization factor f.
- the FD of the contour is calculated according to the Flook method, as described in detail by A. G. Flook in "Powder Technology" (Vol. 21, 1978, p. 295 ff.); see also E. Claridge et al., "Shape analysis for classification of malignant melanoma", J. Biomed. Eng. 1992, Vol. 14, p. 229.
- the FD is a measure of the complexity of the contour: if the curve of the contour mask is approximated by a polygon, the number of steps required depends on the respective step size, and the FD is a measure of this dependency. Depending on the parameters of the Flook method, a textural and/or a structural FD can be determined.
- the textural FD describes fine irregularities of the contour, while the structural FD captures larger fluctuations.
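- a divider-method sketch in the spirit of the Flook approach (not its exact parameters; smaller step sizes probe the textural, larger ones the structural FD range):

```python
import numpy as np

def fractal_dimension(contour, strides=(2, 4, 8, 16, 32)):
    """contour: ordered N x 2 array of contour points (closed curve)."""
    sizes, lengths = [], []
    for s in strides:
        pts = contour[::s]
        if len(pts) < 3:
            continue
        seg = np.diff(np.vstack([pts, pts[:1]]), axis=0)  # closed polygon
        seglen = np.linalg.norm(seg, axis=1)
        sizes.append(seglen.mean())       # effective step size
        lengths.append(seglen.sum())      # polygon length at this step size
    # For a fractal curve L ~ step**(1 - FD); FD from the log-log slope.
    slope = np.polyfit(np.log(sizes), np.log(lengths), 1)[0]
    return 1.0 - slope
```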
- the parameters B1-B5 are passed on to the classification 400, in which an evaluation algorithm can be processed. Furthermore, the parameters B1-B4 are visualized together with the edge mask, so that an operator of the image processing system can view the local image parameters and, if necessary, manually change boundary conditions and parameters of the statistical processing, such as grid or window sizes or scaling limit values.
- the local image parameters are preferably visualized with a false color representation, with which different local parameters, identifying for example a particularly uniform, narrow edge or a particularly non-uniform, wide edge, are made visible.
- the step of color evaluation 230 (see FIG. 2) is shown in detail in FIG. 9.
- the color assessment 230 comprises a color diversity determination 231 and a color distribution determination 232.
- the color diversity determination 231 is aimed at determining the presence and the frequency distribution of the prototypical colors defined in the color quantization 106 (see FIG. 2) in the lesion.
- the color distribution determination 232 is aimed at determining the geometric arrangement of the prototypical colors in the lesion.
- the parameter C1 results from the color diversity determination 231 by determining the statistical weights p_i of the M prototypical colors according to equation (4) and the color coding entropy derived therefrom according to equation (5).
- the statistical weight p_i of the color cluster i corresponds to the ratio of the number of pixels n_i of the color cluster i to the total number N of the lesion pixels (N corresponds to the number of pixels of the object mask): p_i = n_i / N (4).
- the measure C1 for the color diversity of the pigment change results from the color coding entropy according to:
- C1 = (−1 / ln(M)) · Σ_i p_i ln(p_i) (5). C1 is thus a measure of the frequency distribution with which the individual clusters are occupied.
- the color coding entropy C1 is first calculated as a global feature in accordance with equation (5). Then the examined lesion is covered with a regular grid, the grid size being selected depending on the lesion size. The respective local color coding entropy c_i is calculated analogously to equation (5) for all n grid windows which are filled at least one third with lesion pixels (pixels of the object mask). The parameter C2 then results from the color coding variance according to equation (6).
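- a sketch of C1 and of a C2-like measure (equation (6) is not reproduced here, so C2 is approximated as the variance of the local entropies c_i):

```python
import numpy as np

def c1_c2(symbol, object_mask, n_colors, tile=32):
    """symbol: color symbol index image; object_mask: boolean lesion mask."""
    def coding_entropy(labels):
        p = np.bincount(labels, minlength=n_colors) / labels.size  # eq. (4)
        p = p[p > 0]
        return float(-(p * np.log(p)).sum() / np.log(n_colors))    # eq. (5)

    c1 = coding_entropy(symbol[object_mask])
    h, w = object_mask.shape
    local = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            m = object_mask[y:y + tile, x:x + tile]
            if m.sum() >= m.size / 3:    # at least one third lesion pixels
                local.append(coding_entropy(symbol[y:y + tile, x:x + tile][m]))
    return c1, float(np.var(local))
```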
- the image analysis 200 comprises a structure description 240 (see FIG. 2), the details of which are shown in FIG. 10.
- the structure description 240 contains structure recognition methods 241, 242 and a position determination 243 in which the spatial arrangement of the structures determined during the structure recognition is recorded.
- the structure recognition methods 241, 242 represent alternative image processing that can be implemented individually or together.
- the first structure recognition method 241 initially includes a color transformation 107, if it has not already been carried out in the image preprocessing (see FIG. 2).
- in the color transformation, the recorded image is transformed into a suitable color space which allows projection onto a color plane in which the structures to be detected have a particularly high contrast.
- the recorded image is projected (possibly in an adjusted form) onto the red plane for the detection of point-like structures, or onto a gray value (brightness) plane for the detection of network-like structures.
- the scaling indices α are determined for all image points of the transformed image, and the frequency spectrum N(α) is determined for the total number of points. Using the structure-ordering properties of the N(α) spectrum, the pixels belonging to certain structures are identified and recorded quantitatively.
- in the second structure recognition method 242, the structure recognition is carried out using conventional image processing methods.
- the recorded image is transformed, for example, into the red plane and subjected to an increase in contrast and a selective smoothing.
- selective smoothing means that a mean value calculation of the red values takes place which, however, only includes pixels whose red value lies within a certain interval around the mean value.
- this applies to red value densities or, more generally, to gray value densities.
- the color image is converted into a gray value image and subjected to an increase in contrast, a line detection and a subsequent phantom cleaning.
- certain image points within the lesion can thus be assigned to structural elements such as points, network structures or clods.
- the distinction between points and clods is made by determining the number of pixels belonging to the respective structure.
- the D components D1 and D2, which are determined in the structure recognition methods 241, 242, can, for example, each comprise the areal proportion of the respective structure class (point, network or clod) of the total lesion surface and the number of structures in the various structure classes.
- the compactness c can be calculated depending on a length dimension (e.g. maximum extension) of the structure and the area of the structure.
- the details of the symmetry evaluation 210 within the image analysis 200 are described in detail with reference to FIG. 11.
- the symmetry ratings 211-215 relate to the geometric properties of the contour, the border, the object mask, the color distribution and the structure distribution. Individual analysis steps from the analysis methods 220, 230 and 240, as described above, are thus adopted in the symmetry determination 210 in identical or modified form; quantities from the rest of the image analysis are reused.
- the contour symmetry determination 211 gives as a parameter an angle ⁇ with respect to a reference axis (for example the horizontal), which corresponds to the inclination of the axis of symmetry of the outer contour of the lesion with respect to the reference axis.
- the contour mask is divided into two segments of equal size in a plurality of steps, and the fractal dimensions FD_1, FD_2 of the segments are calculated as described above.
- an intermediate parameter B_φ is calculated for each division in accordance with equation (7).
- the angle of rotation which corresponds to a minimum value among the set of intermediate parameters B_φ provides the inclination of the axis of maximum asymmetry with respect to the reference axis in terms of the fractal dimensions FD_1, FD_2.
- the boundary symmetry determination 212 and the mask symmetry determination 213 are derived on the basis of symmetry properties of the local parameters from the method step 220.
- the color symmetry determination 214 comprises a color space transformation of the recorded image, the determination of axes of symmetry of the transformed image and the derivation of color parameters of the pixels in relation to the axes of symmetry.
- the color symmetry is calculated from the mean color difference of axisymmetric pixels with respect to the object symmetry axis.
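- a minimal sketch of this measure (the inclination theta of the object symmetry axis is assumed to come from the symmetry determination, e.g. the contour symmetry determination 211):

```python
import numpy as np

def color_symmetry(image, mask, theta):
    """Mean color difference of axis-symmetric pixels.

    image: H x W x 3 color-transformed image; mask: boolean object mask;
    theta: inclination of the symmetry axis (radians).
    """
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                      # object centroid
    d = np.array([np.cos(theta), np.sin(theta)])       # axis direction
    rel = np.stack([ys - cy, xs - cx], axis=1)
    mirrored = 2 * np.outer(rel @ d, d) - rel          # reflect across the axis
    my = np.clip(np.rint(mirrored[:, 0] + cy).astype(int), 0, image.shape[0] - 1)
    mx = np.clip(np.rint(mirrored[:, 1] + cx).astype(int), 0, image.shape[1] - 1)
    diff = image[ys, xs].astype(float) - image[my, mx].astype(float)
    return float(np.linalg.norm(diff, axis=1).mean())
```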
- the texture symmetry determination 215 in turn takes place on the basis of local features that were determined in method step 240 of the image analysis.
- the explained method steps of the image analysis 200 are particularly advantageous since all processes can be completely automated, are capable of high accuracy and reproducibility, and can easily be adapted to subsequent diagnostic procedures.
- the parameter acquisitions are also objectified and standardized and can be carried out at high speed.
- the visualization 300 (see FIG. 1) is provided in all phases of the image preprocessing 100 and the image analysis 200.
- the visualization 300 comprises, in the image preprocessing 100, a false color representation, an artifact marking, an object marking and an edge marking (in each case as an overlay), and, in the image analysis 200, the marking of edge elements, the representation of the color diversity and color homogeneity with false color representations corresponding to the assignment to the prototypical color clusters, the visualization of structural elements (overlay method) and of symmetry features (e.g. the symmetry axes).
- This visualization has the advantage that parameters of the statistical processing methods can be manually optimized by the operator of the image processing arrangement and the image processing is made transparent and comprehensible.
- the above-described steps of image preprocessing, image analysis, visualization and classification are not limited to the processing of skin images given as an example, but are generally applicable in the examination of tissue images (for example wound healing images for examining cosmetic active ingredients, images for assessing ulcers, images for quantifying hair loss, images for investigating the effects of cosmetics, images of tissue traces in criminal applications and the like).
- the invention is not limited to optically recorded tissue images, but can generally be used advantageously in all results of imaging methods with complex structured images.
- the figures 12a-f and 13a-d illustrate the application of the invention using a specific example, namely image processing for a malignant melanoma. The illustrations serve only to illustrate the example, and for reasons of printing technology, gray tones or color tones are not reproduced.
- Fig. 12a is a schematic representation of a captured image, which in the original is a digital video image with color tones and brightness gradations.
- the framed area corresponds to the image recorded with a CCD camera chip (512 × 512 pixels), which represents a skin area of 11.8 × 11.8 mm.
- the figures 12b-f show the object mask derived from the lesion, the artifact mask, which represents only the two hairs, the line-like contour mask, the two-dimensional edge mask and the color symbol image.
- the visualization of the local image parameters was made in grayscale or false color, permitting a differentiation between areas with larger local image parameters b1_i, which correspond to a wider and more uneven border, and areas with smaller local image parameters b1_i, which correspond to a more uniform, narrower border.
- the display of the local image parameters thus shows which marginal areas make which contribution to the global image parameters.
- for the determination of the border homogeneity 222 (see FIG. 7), the parameter B2 relates to the fluctuations of the border thickness along the circumference of the lesion.
- the local image parameters for individual image tiles are determined accordingly; this is illustrated in Fig. 13a.
- the tiles with a lighter border area refer to an even, narrow border, whereas the tiles with darker border areas emphasize an uneven, wider border.
- the local image parameters can be visualized in a gray value or color selective manner.
- the global image parameters B4 G corresponding to the three alternatives I, II and III are 0.1598, 0.8823 and 0.6904. The differences between the alternatives have been explained above.
- One or more of these image parameters are selected depending on the application.
- the local image parameters are visualized.
- the figures 13b-d show, by way of example, results of the structure recognition methods 241, 242 and of the position determination 243 (see FIG. 10).
- the image extract of the detected network structures is shown in FIG. 13b and that of the detected point structures in FIG. 13c.
- the global parameter D2 G is 0.1213.
- the local visualization of the image parameter D3 according to the method for determining the spatial arrangement of the structures 243 (see FIG. 10) is shown in FIG. 13d.
- Bright image tiles mean the presence of a particularly homogeneous area, while darker image tiles indicate more structured image areas.
- the global image parameter D3 G is 0.04104.
- the C image parameters derived from the color symbol according to FIG. 12f can likewise be determined and displayed globally and locally.
- the local parameters are not illustrated for printing reasons. In real visualization there are false color representations, which indicate areas with few colors and areas with many colors, for example.
- the invention also relates to a device for implementing the method described above, which comprises in particular a recording and lighting device, a camera device, storage devices, data processing devices and a display device.
- the advantages of the invention result from the provision of a high quality and standardized image acquisition.
- a quantitative recording of image parameters is possible which correspond to the image parameters of a visually evaluating expert and are based on his viewing and interpretation habits.
- the image information content is broken down into elementary, reproducible categories, with a high degree of intra- and inter-individual comparability.
- the described steps of image acquisition and image processing are used for the first time in the assessment of biological tissue.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE19754909 | 1997-12-10 | ||
DE19754909A DE19754909C2 (en) | 1997-12-10 | 1997-12-10 | Method and device for acquiring and processing images of biological tissue |
PCT/EP1998/008020 WO1999030278A1 (en) | 1997-12-10 | 1998-12-09 | Method and device for detecting and processing images of biological tissue |
Publications (1)
Publication Number | Publication Date |
---|---|
EP1038267A1 true EP1038267A1 (en) | 2000-09-27 |
Family
ID=7851454
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP98965797A Withdrawn EP1038267A1 (en) | 1997-12-10 | 1998-12-09 | Method and device for detecting and processing images of biological tissue |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP1038267A1 (en) |
AU (1) | AU2159999A (en) |
DE (1) | DE19754909C2 (en) |
WO (1) | WO1999030278A1 (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE10021431C2 (en) * | 2000-05-03 | 2002-08-22 | Inst Neurosimulation Und Bildt | Method and device for classifying optically observable skin or mucosal changes |
EP1231565A1 (en) * | 2001-02-09 | 2002-08-14 | GRETAG IMAGING Trading AG | Image colour correction based on image pattern recognition, the image pattern including a reference colour |
DE10307454B4 (en) * | 2003-02-21 | 2010-10-28 | Vistec Semiconductor Systems Gmbh | Method for optical inspection of a semiconductor substrate |
DE102004002918B4 (en) * | 2004-01-20 | 2016-11-10 | Siemens Healthcare Gmbh | Device for the examination of the skin |
DE102005045907B4 (en) * | 2005-09-26 | 2014-05-22 | Siemens Aktiengesellschaft | Device for displaying a tissue containing a fluorescent dye |
- US9240043B2 (en) | 2008-09-16 | 2016-01-19 | Novartis Ag | Reproducible quantification of biomarker expression |
DE102008059788B4 (en) | 2008-12-01 | 2018-03-08 | Olympus Soft Imaging Solutions Gmbh | Analysis and classification of biological or biochemical objects on the basis of time series images, applicable to cytometric time-lapse cell analysis in image-based cytometry |
DE102017107348B4 (en) * | 2017-04-05 | 2019-03-14 | Olympus Soft Imaging Solutions Gmbh | Method for the cytometric analysis of cell samples |
US20220237810A1 (en) * | 2019-05-09 | 2022-07-28 | H. Lee Moffitt Cancer Center And Research Institute, Inc. | Systems and methods for slide image alignment |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5016173A (en) * | 1989-04-13 | 1991-05-14 | Vanguard Imaging Ltd. | Apparatus and method for monitoring visually accessible surfaces of the body |
JPH06169395A (en) * | 1992-11-27 | 1994-06-14 | Sharp Corp | Image forming device |
DE4317746A1 (en) * | 1993-05-27 | 1994-12-01 | Max Planck Gesellschaft | Method and device for spatial filtering |
DE4329672C1 (en) * | 1993-09-02 | 1994-12-01 | Siemens Ag | Method for suppressing noise in digital images |
DE19633693C1 (en) * | 1996-08-21 | 1997-11-20 | Max Planck Gesellschaft | Method of detecting target pattern in static or dynamic systems |
WO1998037811A1 (en) * | 1997-02-28 | 1998-09-03 | Electro-Optical Sciences, Inc. | Systems and methods for the multispectral imaging and characterization of skin tissue |
1997
- 1997-12-10 DE DE19754909A patent/DE19754909C2/en not_active Expired - Fee Related
1998
- 1998-12-09 AU AU21599/99A patent/AU2159999A/en not_active Abandoned
- 1998-12-09 WO PCT/EP1998/008020 patent/WO1999030278A1/en not_active Application Discontinuation
- 1998-12-09 EP EP98965797A patent/EP1038267A1/en not_active Withdrawn
Non-Patent Citations (1)
Title |
---|
See references of WO9930278A1 * |
Also Published As
Publication number | Publication date |
---|---|
AU2159999A (en) | 1999-06-28 |
DE19754909C2 (en) | 2001-06-28 |
WO1999030278A1 (en) | 1999-06-17 |
DE19754909A1 (en) | 1999-06-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
DE60307583T2 (en) | Evaluation of the sharpness of an image of the iris of an eye | |
DE69906403T2 (en) | Method and device for detecting a face-like area | |
DE60123378T2 (en) | Digital image processing method with different ways of detecting eyes | |
EP2130174B1 (en) | Method and device for determining a cell contour of a cell | |
DE102010061505B4 (en) | Method for inspection and detection of defects on surfaces of disc-shaped objects | |
- DE60310267T2 | MEASUREMENT OF MITOSIS ACTIVITY | |
- DE69322095T2 | METHOD AND DEVICE FOR IDENTIFYING AN OBJECT BY MEANS OF AN ORDERED SEQUENCE OF BOUNDARY PIXEL PARAMETERS | |
DE112012004493B4 (en) | Color lighting control method to improve image quality in an imaging system | |
EP1797533B1 (en) | Method and device for segmenting a digital representation of cells | |
DE112008000020B4 (en) | Device and program for correcting the iris color | |
- DE112019005143T5 | SYSTEM FOR CO-REGISTRATION OF MEDICAL IMAGES USING A CLASSIFIER | |
DE60313662T2 (en) | HISTOLOGICAL EVALUATION OF NUCLEAR PREPARATION | |
DE102019133685A1 (en) | Information processing system and procedures | |
DE102021100444A1 (en) | MICROSCOPY SYSTEM AND METHOD FOR EVALUATION OF IMAGE PROCESSING RESULTS | |
DE19754909C2 (en) | Method and device for acquiring and processing images of biological tissue | |
- DE112019004112T5 | SYSTEM AND METHOD FOR ANALYSIS OF MICROSCOPIC IMAGE DATA AND FOR GENERATING AN ANNOTATED DATA SET FOR TRAINING CLASSIFIERS | |
WO2000079471A2 (en) | Method and device for segmenting a point distribution | |
DE102005049017B4 (en) | Method for segmentation in an n-dimensional feature space and method for classification based on geometric properties of segmented objects in an n-dimensional data space | |
DE102016105102A1 (en) | Method for examining distributed objects | |
EP1105843B1 (en) | Method and device for detecting colours of an object | |
DE19834718C2 (en) | Digital image processing for a quality control system | |
DE112018001054T5 (en) | Image processing apparatus, image forming method and program | |
DE19726226C2 (en) | Process for the automated recognition of structures in sections through biological cells or biological tissue | |
EP3663976A1 (en) | Method for detecting fingerprints | |
EP2581878A2 (en) | Method and apparatus for quantification of damage to a skin tissue section |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
|  | PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
|  | 17P | Request for examination filed | Effective date: 20000609 |
|  | AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): DE FR GB IT |
|  | GRAG | Despatch of communication of intention to grant | Free format text: ORIGINAL CODE: EPIDOS AGRA |
|  | 17Q | First examination report despatched | Effective date: 20010924 |
|  | GRAG | Despatch of communication of intention to grant | Free format text: ORIGINAL CODE: EPIDOS AGRA |
|  | GRAH | Despatch of communication of intention to grant a patent | Free format text: ORIGINAL CODE: EPIDOS IGRA |
|  | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|  | 18D | Application deemed to be withdrawn | Effective date: 20020719 |