EP1938270A2 - Method for segmentation in an n-dimensional characteristic space and method for classification on the basis of geometric characteristics of segmented objects in an n-dimensional data space - Google Patents
Info
- Publication number
- EP1938270A2 (application EP06806044A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- objects
- dimensional
- space
- classes
- steps
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/69—Microscopic objects, e.g. biological cells or cellular parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10056—Microscopic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30024—Cell structures in vitro; Tissue sections in vitro
Definitions
- The invention relates to a method for segmenting objects in an n-dimensional feature space, which is present as a data space, and to a method for classification on the basis of geometric properties of segmented objects in an n-dimensional data space.
- The dimension n of the data space can be any natural number.
- An example of a 2-dimensional data space is the data set associated with a phase-contrast image in microscopy; an example of a 3-dimensional data space is the data set associated with a color image with the color channels R, G and B; and an example of a 16-dimensional data space is the data set associated with a radar image with 16 spectral channels.
- The object of providing a method for segmentation is achieved by a method having the features of claim 1.
- Advantageous embodiments of this method result from the features of the dependent claims 2 to 4 and 8 to 13.
- Claim 14 specifies a suitable computer system and claim 15 specifies a suitable computer program product.
- The segmentation method according to claim 1 comprises the following method steps: In a first step, the user selects a single data area in the n-dimensional feature space. This selected data area is always interpreted by the system as containing at least two classes of objects to be segmented. In subsequent steps, the system first determines a separation function in the n-dimensional feature space to distinguish the at least two classes and subsequently applies this separation function to the entire data space or a larger subset of the data space. This segmentation result is then visually presented to the user in real time.
- The results can be optimized in a real-time feedback loop using the pattern-recognition capabilities of the user. Furthermore, additional features of the objects to be segmented, such as the number of objects in the image, the relative area of the objects, minimum and maximum size, shape factors such as eccentricity, the fractal dimension as a measure of the smoothness of the boundary lines, or other suitable properties, can be specified. This information can be specified by the user as well as automatically extracted from the image by suitable methods. If the user is not yet satisfied with the segmentation result achieved, he can subsequently change the selected data area and thus visually display, in real time, the segmentation result changed by the system due to the then likewise changed separation function.
- A larger number of classes can also be specified.
- The number of classes can also be determined by an automatic procedure.
- In this case, a data area should be selected by the user which contains pixels or data points of a corresponding number of classes.
- The separation function can be determined by first determining, for each dimension of the feature space, a reference point of the features (Si) by a mathematical method, subsequently projecting all data points onto all combinations of two-dimensional subspaces of the n-dimensional feature space, and finally determining, by a two-dimensional error-minimization method, a phase value and an amplitude value for a predetermined wave function such that a suitably defined approximation error for this wave function is minimized.
- For sinusoidal or cosine wave functions, a suitable approximation error is the sum of the squared differences, which is minimized by the least-squares method.
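- The following is a minimal sketch of such a least-squares fit (not the patent's own code; the function name fit_wave and the linear-model formulation are illustrative choices). A k-fold wave amp·cos(kθ − phase) plus an offset is fitted to sample values:

```python
# Minimal sketch (assumption, not the patent's code): least-squares fit of
# a k-fold periodic wave function to sample values y at angles theta.
import numpy as np

def fit_wave(theta, y, k):
    """Fit y ~ offset + amp*cos(k*theta - phase) by linear least squares."""
    # Linear reparametrization: y = a*cos(k*theta) + b*sin(k*theta) + c
    A = np.column_stack([np.cos(k * theta), np.sin(k * theta),
                         np.ones_like(theta)])
    (a, b, c), *_ = np.linalg.lstsq(A, y, rcond=None)
    amp = np.hypot(a, b)        # amplitude value of the fitted wave
    phase = np.arctan2(b, a)    # phase value: a*cos + b*sin = amp*cos(k*t - phase)
    return amp, phase, c

# Quick check on a noisy two-fold wave: recovers roughly (0.8, 0.5, 0.0)
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 500)
y = 0.8 * np.cos(2 * theta - 0.5) + 0.05 * rng.standard_normal(500)
print(fit_wave(theta, y, k=2))
```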
- The wave function is an integer-periodic, continuous function.
- The wave function can be k-fold periodic with k greater than two, and the resulting k classes can then be combined into at least two classes.
- For example, the wave function may be three-fold periodic, and the resulting three classes may be combined into two classes.
- In this case, the classes with the largest local variance of the gray-value distribution can be combined.
- The feature space can be three-dimensional and contain RGB image data of a digital color image.
- The feature space may also be four-dimensional and contain image data of four fluorescence channels acquired with four detectors at different wavelengths of light.
- The method can be successively applied to different locations of an object by successively selecting different data areas belonging to the same object. By interpreting each of these data areas such that it contains at least two classes of objects to be segmented, and determining the separation function on that basis, several classes then result, at least two of which are subsequently reunited. In this way, for example, an object that is embedded in two or more different backgrounds can be segmented so that the different backgrounds are distinguished.
- The above method, in which the separation function is determined from a wave function, can also be applied within a method for classification on the basis of geometric properties of objects in an n-dimensional data space that have previously been segmented by any method.
- For this purpose, in a first step at least two objects are selected as representatives of two different classes; subsequently a number (m) of geometric features per object is calculated by computing wave functions of different integer periodicities; and finally the objects are classified on the basis of the determined geometric features or subsets thereof.
- The previously required segmentation of the objects can in principle be carried out by means of an arbitrary method, but particularly advantageously by a method according to the present invention.
- Phase values and amplitude values can be calculated from the wave functions, wherein the amplitude values characterize the shape of the objects and the phase values characterize the orientations of the objects.
- The amplitude values calculated from the wave functions describe the shape of the objects in a size-invariant, translation-invariant and rotation-invariant manner.
- A computer system suitable for carrying out a method according to the invention should have means (5) for interactively inputting and selecting image areas, a monitor (1) for real-time visualization of the results achieved, and a processor and a data memory for a computer program with software code by means of which the method according to the invention can be implemented.
- FIG. 1 a schematic diagram of a system suitable for carrying out the invention
- FIG. 2 shows a block diagram of the method steps taking place in the segmentation method according to the invention
- FIG. 3 shows a block diagram of the method steps taking place in the classification method according to the invention
- FIG. 4 shows a microscope image with cells as a starting point for explaining the method according to the invention
- FIG. 5 shows the segmented image generated by the system from the image in FIG. 4;
- FIGS. 6 to 8 show dumbbell representations determined by the system from the image in FIG. 4;
- FIG. 9 explanations of the generation of the dumbbells in FIGS. 6 to 8
- FIG. 10 a three-dimensional representation of a separation surface derived from FIGS. 6 to 8;
- FIG. 11 A phase-contrast image as an example of a texture image
- FIGS. 12 and 13 feature spaces generated by the segmentation method that are used for the detection of the textures;
- FIG. 14 a two-dimensional histogram of the images in FIGS. 12 and 13;
- FIG. 15 shows a threefold wave function;
- FIG. 16 the image from FIG. 11 and the feature space of the texture information for a selected pointing region;
- FIG. 17 Images for explaining the shape recognition
- FIG. 1 shows a system comprising a computer (2) with a built-in main memory (4) and processor (3).
- Into the main memory (4), a program can be loaded which enables the processor (3) to carry out the method according to the invention.
- A monitor (1) and an input means (5), e.g. a mouse, are connected to the computer (2). Using the mouse, the user can move a pointing area superimposed on the image displayed on the monitor (1) relative to the image.
- The pointing area is here provided with the reference numeral (24).
- The pointing area (24) should be positioned relative to the image such that it comprises a part of the object (23) to be segmented and a part of the image background (22).
- The segmentation process will be explained below with reference to FIG. 2 using the example of a color image.
- The starting point in this case is the image (7), which exists as color brightness information in the three primary colors red, green and blue as a function of the two spatial coordinates x, y.
- In a first step (8), the user selects a region in the image (7) which should contain the object to be segmented and image background, or image parts of two objects to be distinguished.
- The image information in this selected pointing region is interpreted by the system in a subsequent step (9) such that at least two classes of objects are contained in the selected pointing region.
- A separation function is then determined on the basis of the image information in the pointing area.
- For this purpose, suitable features such as the color brightness values in the three primary colors are analyzed, and in a step (10) a reference point of the various features is determined. Then, in a step (11), all data points are projected onto all two-dimensional subspaces of the feature space. From this projection, which will be described in more detail below, phase values and amplitude values that determine the separation function as a wave function result in a subsequent step (12). In a subsequent step (13), this separation function is applied to the entire data space - or the subspace thereof to be segmented - and the result of the segmentation is displayed in real time in a step (14). As already stated above, the user first roughly points to the boundary between an object (23) and the background (22), or to the boundary between two different adjoining objects, as indicated in FIG. 4.
- For this purpose, the user selects an arbitrarily shaped pointing area (24), e.g. a circle, and positions that pointing area so as to cover a portion of the object (23) and at the same time a portion of the background (22).
- The segmentation system can therefore assume that the boundary of an object runs within the circle.
- It can also be assumed that both the texture/color of the object and the texture/color of the background are present in the pointing area. This is thus not only a simplification of the necessary work steps, but at the same time an increase in the information content, namely the information about the different textures/colors on the one hand and about the border between the textures/colors on the other.
- The set of selected pixels should be such that it can be decomposed into two (or more) disjoint subsets, each of which can be assigned to exactly one of several objects or to the background.
- This increased information content can, for example, be exploited so that the recognition of an object can take place online while the user moves the pointing area over the image.
- The pointing area can be controlled, for example, by a mouse, a touchpad, a touch screen, a trackball, a joystick or another pointing device used to move the cursor on computers; a separation function is then determined based on the texture/color in the pointing area.
- The result of the segmentation is shown in FIG. 5.
- For the separation function, a wave function is preferably used; the determination of a suitable separation function is described below in more detail by way of an example.
- In the method according to the invention, the user receives, while moving the pointing area over the image, immediate feedback from the computer program (in real time) as to whether it has correctly detected the sought object or whether corrections are necessary at some edge points of the object.
- By shifting the pointing region, the mean value important for the classification can be optimized, and likewise the representative selection of the relevant pixel subsets can be optimized.
- Such methods are: a) the selection according to the distance of the local point to the known separation planes or their centers of gravity, b) the interpolation between the planes using the relative image coordinates of the example objects and the current picture element (morphing), c) the use of self-learning approaches such as linear or non-linear neural networks or d) the application of all separation planes and the use of the maximum distance.
- The present invention has the further advantage that it works in arbitrarily high-dimensional feature spaces.
- An example is the three-dimensional color space with the colors red, green and blue.
- Another, even higher-dimensional example would be radar images with 16 or more spectral channels.
- The method is not limited to image processing; it works in any higher-dimensional feature space described by locally changing scalar fields.
- The constructed classifier for the separation surfaces is by construction invariant against translation, rotation and scaling. With suitable further processing, this also applies to the shape recognition described in more detail later.
- With conventional methods, such invariance properties can only be achieved by complex mathematical treatments such as local Fourier analyses or Gabor wavelets, whereby the decisive advantage of real-time capability is lost.
- From the image, the color values of a certain neighborhood, for which a separation is to be calculated, are taken.
- These are the 3 color channels with the indices 1, 2, 3. In this example there are thus 3 dimensions and, for each dimension, a number of measured values corresponding to the number of image pixels in the pointing area.
- In the general case, the features Snm, ordered by measured value and dimension, form an n × m matrix.
- In calculation step 2, the coefficients of a phase matrix and an amplitude matrix are calculated from the features Sij for all pairwise relationships ij with j > i between the dimensions.
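- A hedged sketch of this pairwise analysis follows. The text states only that phase and amplitude coefficients are computed per dimension pair; the concrete fit target used here (a two-fold wave fitted to the radial distance as a function of the polar angle about the reference point) is an assumption made for illustration:

```python
# Sketch (fit target assumed, see lead-in): fill the amplitude and phase
# matrices over all dimension pairs (i, j) with j > i.
import numpy as np

def fit_wave(theta, y, k):
    A = np.column_stack([np.cos(k * theta), np.sin(k * theta),
                         np.ones_like(theta)])
    (a, b, _), *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.hypot(a, b), np.arctan2(b, a)

def pair_matrices(S, k=2):
    """S: m x n matrix of features, one row per pixel of the pointing area."""
    n = S.shape[1]
    mean = S.mean(axis=0)               # reference point (calculation step 1)
    amp = np.zeros((n, n))
    phase = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):       # all pairs ij with j > i
            dx = S[:, i] - mean[i]
            dy = S[:, j] - mean[j]
            theta = np.arctan2(dy, dx)  # polar angle in the (i, j) subspace
            r = np.hypot(dx, dy)        # radial distance from the mean
            amp[i, j], phase[i, j] = fit_wave(theta, r, k)
    return mean, amp, phase
```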
- The dumbbells arise by plotting two sine oscillations over a circle of radius r instead of in Cartesian coordinates.
- The dumbbells arise in this form if r is chosen equal to the amplitude amp of the sine wave.
- The rotation of the dumbbell relative to the axes results from the respectively associated phase values.
- The dumbbell value d as a function of the angle φ about the center of rotation of the polar coordinates is then (in the form implied by the preceding lines): d(φ) = r + amp · sin(2φ + phase).
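- A short sketch of this dumbbell construction, using the formula as reconstructed above (the numeric amplitude and phase are arbitrary example values):

```python
# Plot the dumbbell d(phi) = r + amp*sin(2*phi + phase) in polar
# coordinates with r = amp (formula reconstructed from the description).
import numpy as np
import matplotlib.pyplot as plt

amp, phase = 1.0, 0.6                      # arbitrary example values
phi = np.linspace(0, 2 * np.pi, 400)
d = amp + amp * np.sin(2 * phi + phase)    # r = amp yields the dumbbell shape

ax = plt.subplot(projection="polar")
ax.plot(phi, d)
plt.show()
```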
- In calculation step 3, a separation surface (31) in the 3-dimensional color space is now calculated.
- This separation surface is shown in FIG. 10 in 3D.
- The separation surface is determined by the condition that the mean value from calculation step 1 lies in this surface.
- The following operations then determine the normal vector of the separation surface.
- The components of the normal vector of the separation surface result from the maxima of the amplitudes in the amplitude matrix.
- The vector plane lies in the corresponding two-dimensional subspace of the feature space.
- The result of calculation steps 1 to 3 are 2 vectors, namely the vector mean from the coordinate origin to the mean value on the separation surface and the vector plane perpendicular to the separation surface. This makes it possible to decide for each pixel, on the basis of its color value Sk, on which side of the separation surface it lies, i.e. to which object to be segmented it belongs. For this purpose, in calculation step 4, a threshold is first calculated, which is given by the scalar product of the two vectors mean and plane:
- threshold = scalarproduct(mean, plane)
- The color value lies in front of the separation surface if the scalar product (Sk, plane) is smaller than the threshold, and the color value lies behind the separation surface if the scalar product (Sk, plane) is greater than or equal to the threshold.
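- As a minimal sketch of this decision rule (the numeric values of mean and plane are arbitrary examples, not taken from the patent):

```python
# Calculation step 4 as a sketch: decide for each color value Sk on which
# side of the separation surface it lies.
import numpy as np

mean = np.array([120.0, 80.0, 60.0])   # example vector to the mean on the surface
plane = np.array([0.6, -0.7, 0.3])     # example normal vector of the surface
threshold = np.dot(mean, plane)        # threshold = scalarproduct(mean, plane)

def in_front(Sk):
    """True if the color value Sk lies in front of the separation surface."""
    return np.dot(Sk, plane) < threshold

# Vectorized over a whole RGB image of shape (h, w, 3):
# mask = img.astype(float) @ plane < threshold
```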
- In the one-dimensional case, calculation steps 2 and 3 can be omitted.
- The threshold from calculation step 4 is then equal to the mean value from calculation step 1, since the plane from calculation step 3 thereby shrinks to a point.
- In the two-dimensional case, the plane of calculation step 3 shrinks to a straight line.
- In the one-dimensional case, the above method corresponds to the result of the known method for segmenting gray-scale images by determining an optimized threshold (calculation of the mean value, division of the pixel set into two subsets with the mean value as threshold, calculation of the mean values of the two subsets; these mean values correspond to the centers of gravity in the procedure indicated above).
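- The referenced known gray-value procedure can be sketched as follows (one pass, exactly as the parenthesis above describes it):

```python
# Known gray-value thresholding: the global mean is the threshold, and the
# means of the two resulting subsets are the centroids of the two classes.
import numpy as np

def gray_threshold(pixels):
    t = pixels.mean()                   # calculation step 1 in 1D
    low = pixels[pixels < t].mean()     # centroid of the dark class
    high = pixels[pixels >= t].mean()   # centroid of the bright class
    return t, low, high
```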
- With calculation steps 1 to 4, objects that are distinguishable from the background by their color or gray value can be segmented. There are, however, images in which the objects cannot be distinguished from the background by their color or gray value, but only by their brightness distribution.
- An example of this is the image of cancer cells shown in FIG. 11. In normal light these cancer cells are largely transparent; polarized light makes the cells visible in so-called phase contrast. The objects are distinguished from the homogeneous background by their brightness dynamics and not by a brightness range.
- For this task, an adapted feature space is generated.
- The image is transformed before the processing in such a way that the real-time capability is retained; three classes are used.
- For each pixel i, j of the image to be processed, a pointing region of a size of, for example, 3 by 3 pixels or larger is automatically applied.
- For each pixel of this pointing region, its distance from the separation surface is calculated. All these distances are summed and assigned to the location of the pointing area.
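- Assuming that the distance of a pixel to the separation surface is the scalar product with plane minus the threshold (as in calculation step 4), this summation amounts to a box filter over the signed-distance image; the second texture feature (Plen, which appears below) is not specified in this excerpt. A sketch:

```python
# Sketch: per-pixel texture feature as the sum of signed distances to the
# separation surface over a 3x3 pointing region around each pixel.
import numpy as np
from scipy.ndimage import uniform_filter

def texture_feature(img, plane, threshold, size=3):
    """img: (h, w, c) image; plane and threshold as in calculation step 4."""
    dist = img.astype(float) @ plane - threshold      # signed distance per pixel
    return uniform_filter(dist, size=size) * size**2  # summed over the window
```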
- FIGS. 12 and 13 now illustrate the two-dimensional feature space which is used for the detection of the textures.
- The feature space must also be normalized so that both dimensions of the feature space cover the same number range; the range between 0 and 1 has proven particularly favorable.
- A two-dimensional feature space can be visualized as a two-dimensional histogram, which is shown in FIG. 14.
- The pixels form a triangle in the feature space: at the top are the pixels of the background, at the lower right the bright pixels within the objects, and at the lower left the dark pixels of the cells.
- Such a triangular shape can be recognized with a wave function of period 3.
- For this purpose, a cosine (3φ) is fitted according to exactly the same scheme as in the above calculation step 2.
- Such a wave function of period 3 is shown in FIG. 15, on the left in Cartesian coordinates and on the right in polar coordinates.
- FIG. 16 shows the feature values Dist and Plen for the pointing region (31) in the left-hand part of FIG. 16.
- The pixels in the phase range of the upper oscillation belong to the background, and the pixels in the phase ranges of the two lower oscillations belong to the object.
- The amplitude of the oscillation is a measure of the quality of the detection.
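- One way to realize this three-class split, sketched under the assumption that the fitted wave has the form amp·cos(3φ − phase) and that its three maxima mark the class centers:

```python
# Sketch: assign each data point to the nearest of the three maxima of
# cos(3*phi - phase); the maxima lie at phi = (phase + 2*pi*m) / 3.
import numpy as np

def three_class_assign(theta, phase):
    """theta: polar angles of the data points in the 2D feature space."""
    m = np.round((3 * theta - phase) / (2 * np.pi))  # index of nearest maximum
    return (m % 3).astype(int)                       # class labels 0, 1, 2
```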
- A possible alternative is the use of a Kohonen neural network with 3 classes, as described in T. Kohonen, Self-Organizing Maps, Springer Verlag, ISBN 3-540-62017-6.
- The background is then the class with the smallest class variance, and the object comprises the other two classes.
- The disadvantage here is that no quality measure of the classification is obtained.
- The method of using wave functions can also be used for shape recognition, i.e. for the classification of objects.
- The corresponding method is shown in FIG. 3.
- The starting point (15) is the already segmented image.
- In a subsequent step (16), at least two objects to be distinguished are selected by the user.
- In a subsequent step (17), a number of geometric features is calculated by computing wave functions.
- Phase values and amplitude values are again calculated from the wave functions in a subsequent step (18), and the objects are subsequently classified in a step (19) on the basis of the geometric features.
- The classification result is visualized in a step (20) in real time.
- This classification method can be used, for example, for shape recognition.
- In FIG. 17, worms in a phase-contrast recording are recognized via texture segmentation as described above. These worms occur in two forms: stretched (alive, see left part of the picture) and rolled up (dead, see right part of the picture).
- The pointing area required for the wave functions is the object itself, and the feature space is given directly by the pixel coordinates of the object.
- The wave functions used are fits of a cosine (kφ) for the periods k = 2 to 7, i.e. from a fit of a cosine (2φ) up to a fit of a cosine (7φ), yielding six shape coefficients.
- Let xc, yc be the center of gravity of the object.
- It is calculated according to calculation step 1 as the mean value in the individual dimensions x and y.
- Calculation step 2 as a program: for each pixel x, y of the object, the wave functions are evaluated, as sketched below.
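- The original pixel loop is not reproduced in this text; the following is a hedged reconstruction in which a k-fold cosine is fitted to the distance of each object pixel from the center of gravity as a function of its polar angle, and the normalized amplitude is taken as the shape coefficient:

```python
# Hedged reconstruction of calculation step 2 for shape recognition:
# fit cos(k*phi) waves, k = 2..7, to the pixel distances from the center
# of gravity and keep the normalized amplitudes as shape coefficients.
import numpy as np

def shape_coefficients(xs, ys, ks=range(2, 8)):
    """xs, ys: arrays of pixel coordinates of one segmented object."""
    xc, yc = xs.mean(), ys.mean()        # center of gravity (calculation step 1)
    dx, dy = xs - xc, ys - yc
    phi = np.arctan2(dy, dx)             # polar angle of each pixel
    r = np.hypot(dx, dy)                 # distance from the center of gravity
    coeffs = []
    for k in ks:
        a = np.mean(r * np.cos(k * phi)) # Fourier-style projection onto cos
        b = np.mean(r * np.sin(k * phi)) # ... and onto sin
        amp = 2 * np.hypot(a, b)         # amplitude of the k-fold wave
        coeffs.append(amp / r.mean())    # normalization gives size invariance
    return np.array(coeffs)              # six coefficients, typically in [0, 1]
```

- Because only the amplitudes are retained (the phase drops out) and they are normalized by the mean radius, such coefficients are unchanged under translation, rotation and scaling of the object, in line with the invariance statement above.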
- The shape coefficients (numbers between 0 and 1) are displayed as a bar chart.
- This bar chart in turn represents a six-dimensional feature space (shape space).
- The above segmentation method can now be applied again by calculating, by means of wave functions, the separation surface between the worm forms in this six-dimensional shape space.
- One worm form then lies on one side of the separation surface and the other worm form on the other side.
- The above classification is invariant to translation, rotation, mirroring, enlargement and reduction, without a previous elaborate transformation of the image space into an invariant property space.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102005049017A DE102005049017B4 (en) | 2005-10-11 | 2005-10-11 | Method for segmentation in an n-dimensional feature space and method for classification based on geometric properties of segmented objects in an n-dimensional data space |
PCT/EP2006/009623 WO2007042195A2 (en) | 2005-10-11 | 2006-10-05 | Method for segmentation in an n-dimensional characteristic space and method for classification on the basis of geometric characteristics of segmented objects in an n-dimensional data space |
Publications (1)
Publication Number | Publication Date |
---|---|
EP1938270A2 true EP1938270A2 (en) | 2008-07-02 |
Family
ID=37525867
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP06806044A Withdrawn EP1938270A2 (en) | 2005-10-11 | 2006-10-05 | Method for segmentation in an n-dimensional characteristic space and method for classification on the basis of geometric characteristics of segmented objects in an n-dimensional data space |
Country Status (4)
Country | Link |
---|---|
US (1) | US8189915B2 (en) |
EP (1) | EP1938270A2 (en) |
DE (1) | DE102005049017B4 (en) |
WO (1) | WO2007042195A2 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2352076B (en) * | 1999-07-15 | 2003-12-17 | Mitsubishi Electric Inf Tech | Method and apparatus for representing and searching for an object in an image |
CN102027490B (en) * | 2008-05-14 | 2016-07-06 | 皇家飞利浦电子股份有限公司 | Image classification based on image segmentation |
JP5415730B2 (en) * | 2008-09-04 | 2014-02-12 | 任天堂株式会社 | Image processing program, image processing apparatus, image processing method, and image processing system |
CN102800050B (en) * | 2011-05-25 | 2016-04-20 | 国基电子(上海)有限公司 | Connectivity of N-dimensional characteristic space computing method |
CN102592135B (en) * | 2011-12-16 | 2013-12-18 | 温州大学 | Visual tracking method of subspace fusing target space distribution and time sequence distribution characteristics |
US11972078B2 (en) * | 2017-12-13 | 2024-04-30 | Cypress Semiconductor Corporation | Hover sensing with multi-phase self-capacitance method |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5313532A (en) * | 1990-01-23 | 1994-05-17 | Massachusetts Institute Of Technology | Recognition of patterns in images |
JPH07123447A (en) * | 1993-10-22 | 1995-05-12 | Sony Corp | Method and device for recording image signal, method and device for reproducing image signal, method and device for encoding image signal, method and device for decoding image signal and image signal recording medium |
US5793888A (en) * | 1994-11-14 | 1998-08-11 | Massachusetts Institute Of Technology | Machine learning apparatus and method for image searching |
FR2740220B1 (en) * | 1995-10-18 | 1997-11-21 | Snecma | METHOD FOR THE AUTOMATIC DETECTION OF EXPERTISABLE ZONES IN IMAGES OF MECHANICAL PARTS |
US6526168B1 (en) * | 1998-03-19 | 2003-02-25 | The Regents Of The University Of California | Visual neural classifier |
US6480627B1 (en) * | 1999-06-29 | 2002-11-12 | Koninklijke Philips Electronics N.V. | Image classification using evolved parameters |
DE10017551C2 (en) * | 2000-04-08 | 2002-10-24 | Carl Zeiss Vision Gmbh | Process for cyclic, interactive image analysis and computer system and computer program for executing the process |
US20020122491A1 (en) * | 2001-01-03 | 2002-09-05 | Marta Karczewicz | Video decoder architecture and method for using same |
US20020164070A1 (en) * | 2001-03-14 | 2002-11-07 | Kuhner Mark B. | Automatic algorithm generation |
KR100446083B1 (en) * | 2002-01-02 | 2004-08-30 | 삼성전자주식회사 | Apparatus for motion estimation and mode decision and method thereof |
CN1830004A (en) * | 2003-06-16 | 2006-09-06 | 戴纳皮克斯智能成像股份有限公司 | Segmentation and data mining for gel electrophoresis images |
-
2005
- 2005-10-11 DE DE102005049017A patent/DE102005049017B4/en not_active Expired - Fee Related
-
2006
- 2006-10-05 WO PCT/EP2006/009623 patent/WO2007042195A2/en active Application Filing
- 2006-10-05 EP EP06806044A patent/EP1938270A2/en not_active Withdrawn
-
2008
- 2008-04-11 US US12/081,142 patent/US8189915B2/en not_active Expired - Fee Related
Non-Patent Citations (1)
Title |
---|
See references of WO2007042195A2 * |
Also Published As
Publication number | Publication date |
---|---|
WO2007042195A3 (en) | 2007-09-07 |
US20080253654A1 (en) | 2008-10-16 |
US8189915B2 (en) | 2012-05-29 |
WO2007042195A2 (en) | 2007-04-19 |
DE102005049017B4 (en) | 2010-09-23 |
DE102005049017A1 (en) | 2007-04-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
DE60215063T2 (en) | SYSTEM AND METHOD FOR DETERMINING IMAGE LENGTH | |
DE60034668T2 (en) | METHOD FOR TEXTURE ANALYSIS OF DIGITAL IMAGES | |
DE69428089T2 (en) | Device and method for image analysis | |
DE69031774T2 (en) | Adaptive grouper | |
DE60114469T2 (en) | Method and device for determining interesting images and for image transmission | |
DE102017220307B4 (en) | Device and method for recognizing traffic signs | |
DE60109278T2 (en) | Method and device for locating characters in images from a digital camera | |
DE69805798T2 (en) | FINGERPRINT CLASSIFICATION BY MEANS OF SPACE FREQUENCY PARTS | |
EP1316057B1 (en) | Evaluation of edge direction information | |
DE69322095T2 (en) | METHOD AND DEVICE FOR IDENTIFYING AN OBJECT BY MEANS OF AN ORDERED SEQUENCE OF LIMIT PIXEL PARAMETERS | |
DE112015000964T5 (en) | Image processing apparatus, image processing method and image processing program | |
DE69231049T2 (en) | Image processing | |
DE102019127282A1 (en) | System and method for analyzing a three-dimensional environment through deep learning | |
DE60303138T2 (en) | COMPARING PATTERNS | |
DE102005049017B4 (en) | Method for segmentation in an n-dimensional feature space and method for classification based on geometric properties of segmented objects in an n-dimensional data space | |
WO2010133204A1 (en) | Apparatus and method for identifying the creator of a work of art | |
DE60217748T2 (en) | Method and device for displaying a picture space | |
DE102022201780A1 (en) | Visual analysis system to evaluate, understand and improve deep neural networks | |
DE19928231C2 (en) | Method and device for segmenting a point distribution | |
DE102020215930A1 (en) | VISUAL ANALYSIS PLATFORM FOR UPDATING OBJECT DETECTION MODELS IN AUTONOMOUS DRIVING APPLICATIONS | |
DE60033580T2 (en) | METHOD AND APPARATUS FOR CLASSIFYING AN IMAGE | |
DE10017551C2 (en) | Process for cyclic, interactive image analysis and computer system and computer program for executing the process | |
DE112019004112T5 (en) | SYSTEM AND PROCEDURE FOR ANALYSIS OF MICROSCOPIC IMAGE DATA AND FOR GENERATING A NOTIFIED DATA SET FOR TRAINING THE CLASSIFICATORS | |
DE102019105293A1 (en) | Estimation of the movement of an image position | |
EP2096578A2 (en) | Method and device for characterising the formation of paper |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20080308 |
|
AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR |
|
17Q | First examination report despatched |
Effective date: 20101122 |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: CARL ZEISS MICROIMAGING GMBH |
|
DAX | Request for extension of the european patent (deleted) | ||
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: CARL ZEISS MICROSCOPY GMBH |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20170503 |