WO2001086585A1 - Method and arrangement for determining an object in an image (Verfahren und Anordnung zum Ermitteln eines Objekts in einem Bild) - Google Patents
- Publication number
- WO2001086585A1 (application PCT/DE2001/001744)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- image
- local resolution
- partial area
- recorded
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
- G06V10/7515—Shifting the patterns to accommodate for positional errors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/24—Character recognition characterised by the processing or recognition method
- G06V30/248—Character recognition characterised by the processing or recognition method involving plural approaches, e.g. verification by template match; Resolving confusion among similar patterns, e.g. "O" versus "Q"
- G06V30/2504—Coarse or fine approaches, e.g. resolution of ambiguities or multiscale approaches
Definitions
- The invention relates to a method for determining an object in an image and to arrangements for determining an object in an image.
- If the object is recognized, the method ends and the object for which the extracted features have been formed is output as the recognized object.
- The method is carried out iteratively for different partial areas of the image until the object has been determined or a predetermined termination criterion is fulfilled, for example a predetermined number of iterations or recognition of the sought object with sufficient accuracy.
- A disadvantage of this known procedure is, in particular, the very large computing time required to determine an object in the image under examination. This is due above all to the fact that all partial areas of the image are treated in the same way, that is to say the local resolution is the same for all partial areas of the image during object detection.
- Two-dimensional Gabor transformations are basis functions that, as local spatial bandpass filters, achieve the theoretical optimum of joint resolution in the spatial and frequency domains, that is, in the two-dimensional spatial domain and in the two-dimensional frequency domain.
- The invention is based on the problem of determining an object in an image with a statistically lower computing-time requirement. The invention is further based on the problem of training a learning-capable arrangement in such a way that it can be used for determining an object in an image, so that less computing time is required to determine the object using the trained learning-capable arrangement than with the known procedure.
- In a method for determining an object in an image, information is acquired from the image with a first local resolution.
- A first feature extraction is carried out on the acquired information.
- At least one partial area in which the object could be located is selected from the image.
- Information is then acquired from the selected partial area with a second local resolution.
- The second local resolution is higher than the first local resolution.
- A second feature extraction is carried out on the information acquired with the second local resolution, and it is checked whether a predetermined criterion regarding the features extracted by the second feature extraction is fulfilled.
- If the predetermined criterion is not met, information is acquired iteratively from at least one sub-region of the selected partial area, each time with a higher local resolution, and it is checked whether the information acquired with the respectively higher local resolution fulfills the predetermined criterion, until the criterion is met. Alternatively, a further partial area is selected from the image and information is acquired from that further partial area with a second local resolution, or the method is ended.
- The information can be brightness information and/or color information assigned to pixels of a digitized image.
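The iterative coarse-to-fine procedure described above can be sketched in a toy form. All helper names and the brightness-based scoring are illustrative assumptions, not part of the patent; a real implementation would use the feature extraction and criterion described in this document.

```python
import numpy as np

def quadrants(img, top, left, size):
    """Yield the four square sub-areas (top, left, size) of a region."""
    h = size // 2
    for dt in (0, h):
        for dl in (0, h):
            yield top + dt, left + dl, h

def find_object(img, threshold=0.9, min_size=4):
    """Refine only the most promising sub-area, at ever higher resolution,
    until the region is small enough or the criterion fails."""
    top, left, size = 0, 0, img.shape[0]
    while size > min_size:                       # iterative refinement
        # stand-in for "acquire information with higher local resolution":
        # pick the quadrant with the highest mean brightness
        best = max(quadrants(img, top, left, size),
                   key=lambda q: img[q[0]:q[0]+q[2], q[1]:q[1]+q[2]].mean())
        top, left, size = best
        patch = img[top:top+size, left:left+size]
        if patch.max() < threshold:              # predetermined criterion fails
            return None                          # object not in this sub-area
    return top, left, size

img = np.zeros((32, 32))
img[20:24, 9:13] = 1.0                           # a small bright "object"
found = find_object(img)
```

In the full method, a failed criterion would trigger either a different sub-area (from the priority map) or termination; this sketch only follows the single most promising branch.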
- The invention achieves considerable computing-time savings in determining an object in an image.
- The invention is clearly based on the insight that, in the visual perception of a living being, a hierarchical procedure, in which individual areas of different sizes are perceived with different local resolution, usually leads to the goal of recognizing a sought object.
- The invention can clearly be seen in the fact that, to determine an object in an image, hierarchical partial areas and sub-partial-areas are selected, each of which is acquired with a different resolution at each hierarchy level and, after feature extraction, is compared with features of the object to be recognized. If the object is recognized with sufficient certainty, it is output as the recognized object. If not, there are two alternatives: either a further sub-area of the current partial area is selected and information is acquired from it with a further increased local resolution, or a different partial area is selected and in turn examined for the object to be recognized.
- An image that contains an object to be determined is captured.
- The position of the object to be recognized within the image and the object itself are predefined.
- Several feature extractions are carried out for the object, each with a different local resolution.
- The further refinements relate to the methods, the arrangements, the computer-readable storage medium and the computer program element alike.
- The predetermined criterion can be the test of whether the information acquired with the respective local resolution is sufficient to determine the object with sufficient accuracy.
- The predetermined criterion can also be a predetermined number of iterations, that is to say a maximum number of iterations, in each of which a sub-area is selected and examined with increased local resolution.
- The predetermined criterion can likewise be a predetermined or maximum number of partial areas to be examined.
- The feature extraction can take place by means of a transformation with different local resolutions.
- A wavelet transformation is preferably used as the transformation, preferably a two-dimensional Gabor transformation (2D Gabor transformation).
- In this way the image information is encoded optimally in both the spatial domain and the spectral domain; that is to say, an optimal compromise between spatial-domain coding and frequency-domain coding is achieved in reducing redundant information.
- Any transformation that meets, in particular, the following requirements can be used:
- The aspect ratio of the elliptical Gaussian envelope should be essentially 2:1;
- The plane wave should propagate along the shorter axis of the elliptical Gaussian envelope;
- The half-amplitude bandwidth of the frequency response should span approximately 1 to 1.5 octaves along the optimal direction;
- The mean value of the transformation should be zero in order to ensure an admissible functional basis for the wavelet transformation.
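A complex Gabor kernel satisfying these constraints (2:1 elliptical envelope, plane wave along the shorter axis, zero mean) might be constructed as follows. The size and wavelength values are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def gabor_kernel(size=31, wavelength=8.0, theta=0.0):
    """Complex 2D Gabor kernel; a sketch, not the patent's exact filter."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    # rotate coordinates; u is the wave-propagation axis
    u = x * np.cos(theta) + y * np.sin(theta)
    v = -x * np.sin(theta) + y * np.cos(theta)
    sigma_u = wavelength / 2.0          # shorter envelope axis, along the wave
    sigma_v = 2.0 * sigma_u             # 2:1 aspect ratio of the envelope
    envelope = np.exp(-(u**2 / (2 * sigma_u**2) + v**2 / (2 * sigma_v**2)))
    wave = np.exp(1j * 2 * np.pi * u / wavelength)
    g = envelope * wave
    # subtract an envelope-weighted offset so the kernel sums to zero,
    # giving an admissible wavelet basis function
    g -= envelope * (g.sum() / envelope.sum())
    return g

g = gabor_kernel()
```

The final correction enforces the zero-mean requirement exactly, which a raw Gabor function only satisfies approximately.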
- The transformation can take place by means of a neural network or a plurality of neural networks, preferably by means of a recurrent neural network.
- In this way a very fast transformation arrangement is used that can be adapted to the object to be recognized or to the correspondingly acquired image information.
- In one refinement, a plurality of partial areas is determined in the image, and for each partial area a probability is determined that the corresponding partial area contains the object to be recognized.
- The iterative procedure is then carried out for the partial areas in order of decreasing probability of containing the object to be determined. This results in a further reduction of the required computing time, since a statistically optimal order for determining the object to be recognized is followed.
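The probability-ordered examination can be sketched as follows. Scoring a sub-area by its mean brightness is a placeholder assumption for whatever probability the recognition stage actually assigns.

```python
import numpy as np

def priority_map(img, size=8):
    """Return candidate sub-areas sorted by descending 'object probability'."""
    areas = []
    for top in range(0, img.shape[0], size):
        for left in range(0, img.shape[1], size):
            patch = img[top:top+size, left:left+size]
            score = float(patch.mean())          # placeholder probability
            areas.append((score, top, left))
    areas.sort(reverse=True)                     # most promising first
    return areas

img = np.zeros((16, 16))
img[8:12, 0:4] = 1.0                             # bright region to find first
ranked = priority_map(img)
```

Examining `ranked` front to back realizes the statistically optimal order described above: the most promising sub-area is refined first, and later entries are only visited if earlier ones fail the criterion.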
- At least one neural network can be used as an arrangement capable of learning.
- The neurons of the neural network are preferably arranged topographically.
- FIG. 1 is a block diagram showing the architecture of the arrangement according to the exemplary embodiment;
- FIG. 2 is a block diagram showing in detail the structure of the module for carrying out the two-dimensional Gabor transformation;
- FIG. 3 is a block diagram showing in detail the recognition module from FIG. 1 according to the exemplary embodiment;
- FIG. 4 is a block diagram relating to the priority map with selected partial areas;
- FIGS. 5a and 5b show sketches of an image with different objects from which the object to be determined is to be ascertained, FIG. 5a showing the different captured objects and FIG. 5b the recognition results at different local resolutions;
- FIG. 6 is a flowchart showing the individual steps of the method according to the exemplary embodiment of the invention.
- FIG. 1 shows a sketch of an arrangement 100 with which the object to be determined is determined.
- The arrangement 100 has a visual field 101.
- A detection unit 102 is provided, with which information from the image can be acquired via the visual field 101 with different local resolutions.
- As FIG. 1 shows, the detection unit 102 has a multiplicity of feature extraction units 103, each of which acquires information from the image with a different local resolution.
- Features extracted from the acquired image information are fed by the feature extraction units 103 as a feature vector 105 to the recognition module, that is to say the recognition unit 104.
- There, a pattern comparison of the feature vector 105 with a previously formed feature vector is carried out in the manner explained in more detail below.
- The recognition result is fed to a control unit 106, which decides which partial area or sub-partial-area of the image is selected (as will be explained in more detail below) and with which local resolution the respective area is examined.
- The control unit 106 also has a decision unit that checks whether a predetermined criterion regarding the extracted features is met.
- Arrows 107 symbolically indicate that the individual recognition units 104 are "switched", depending on control signals from the control unit 106, to acquire information in different detection areas 108, 109, 110, each with a different local resolution.
- Each acquired frequency is referred to as an octave.
- Each octave is referred to below as a local resolution.
- Each unit that performs a wavelet transformation at a given local resolution has an arrangement of neurons whose detection range corresponds to a two-dimensional Gabor function and depends on a specific orientation.
- Each feature extraction unit 103 has a recurrent neural network 200, as shown in FIG. 2.
- The image 201 has n × n pixels, with n = 128; according to the exemplary embodiment the image thus has 16384 pixels.
- Each pixel is assigned a brightness value I_ij between 0 and a maximum brightness value.
- The brightness value I_ij denotes the brightness of the pixel at position (i, j) of the image 201.
- For the pixels of the image 201 that lie in the respective detection area, an average brightness value DC is determined from their brightness values I_ij, and this average brightness value DC is subtracted from the brightness value I_ij of each pixel by a contrast correction unit 202.
- The result is a set of brightness values that are contrast-invariant.
- The DC-free brightness values of the pixels in the detection area are formed in accordance with the following rule:
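The contrast-correction step amounts to subtracting the mean ("DC") brightness of the detection area from every pixel, so that a uniform brightness shift of the whole area leaves the coded values unchanged. A minimal sketch:

```python
import numpy as np

def remove_dc(brightness):
    """Subtract the average brightness DC of the detection area."""
    dc = brightness.mean()               # average brightness value DC
    return brightness - dc               # DC-free, contrast-invariant values

patch = np.array([[0.2, 0.4],
                  [0.6, 0.8]])           # toy detection area
centered = remove_dc(patch)
```

Adding a constant offset to `patch` yields the same `centered` result, which is exactly the invariance the contrast correction unit 202 provides.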
- The DC-free brightness values are fed to a neuron layer 203, whose neurons carry out an extraction of simple features.
- The neurons in the neuron layer 203 have receptive fields in the shape of Gabor wavelets.
- The frequency bandwidth of the Gabor wavelet is determined by the constant K.
- A family of discrete 2D Gabor wavelets G_kpql(x, y) can be obtained by discretizing the frequencies, orientations and centers of the continuous wavelet function (3) according to the following rule:
- G_kpql(x, y) = a^(-k) Ψ_l(a^(-k) x - pb, a^(-k) y - qb),   (7)
- Ψ_l(x, y) = Ψ(x cos(lθ_0) + y sin(lθ_0), -x sin(lθ_0) + y cos(lθ_0)),   (8)
- The activation of a neuron in the neuron layer 203 is denoted r_kpql.
- The activation r_kpql depends on a certain local frequency, determined by the octave k, on a preferred orientation, determined by the rotation index l, and on an excitation at the center, determined by the indices p and q.
- The activation r_kpql of a neuron in the neuron layer 203 is defined as the convolution of the corresponding receptive field with the image, that is to say with the brightness values of the pixels, according to the following rule:
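One activation r_kpql under this convolution rule can be sketched as the inner product of a receptive-field kernel with the brightness patch beneath its center. The uniform stand-in kernel is an illustrative assumption; a real implementation would use the Gabor receptive field G_kpql.

```python
import numpy as np

def activation(img, kernel, center):
    """Activation of a neuron with receptive field `kernel` at `center`."""
    r, c = center
    h = kernel.shape[0] // 2
    patch = img[r - h:r + h + 1, c - h:c + h + 1]
    return np.sum(patch * np.conj(kernel))   # complex for a Gabor kernel

rng = np.random.default_rng(0)
img = rng.random((64, 64))
kernel = np.ones((5, 5)) / 25.0             # stand-in receptive field
r = activation(img, kernel, (32, 32))
```

With a complex Gabor kernel the result is a complex number, which matches the text's use of two neurons per "logical" neuron: one coding `r.real`, one coding `r.imag`.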
- The detection unit with the corresponding local resolution is designated by k.
- The activation r_kpql of a neuron is a complex number, which is why the exemplary embodiment uses two neurons for coding a brightness value I_ij: one neuron for the real part and one neuron for the imaginary part of the transformed brightness information I_ij.
- The neurons 206 of the neuron layer 205, which receive the transformed brightness signal 204, generate a neuron output signal 207.
- A reconstructed image 209 is formed in an image reconstruction unit 208 by means of the neuron output signal 207.
- The image reconstruction unit 208 has neurons, connected to one another in a feed-forward structure, that perform a Gabor wavelet transformation and correspond to a Gabor receptive field.
- A constant C denotes the density of the wavelet basis used. Owing to the non-orthogonality of the Gabor wavelet basis functions, rule (13) and its linear superposition do not guarantee a minimum of a reconstruction error E, which is formed according to the following rule:
- A correction of rule (14) can be obtained by dynamically optimizing the reconstruction error E by means of a feedback connection.
- A feedback correction term is formed for each neuron 206 of the neuron layer 205.
- The dynamics of the recurrent neural network 200 are defined such that a dynamic reconstruction error is formed according to the following rule:
- The dynamic reconstruction error of the recurrent neural network 200 is thereby minimized.
- The constant C is formed according to the following rule:
- The reconstruction error signal 214 is formed by means of a differential unit 210.
- The contrast-free brightness signal 211 and the reconstructed brightness signal 212 are fed to the differential unit 210.
- There, a reconstruction error value 213 is formed, which is fed back to the receptive field, that is to say the Gabor filter.
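The feedback dynamics described here can be sketched, under simplifying assumptions, as gradient-style error feedback over a small non-orthogonal basis. Random vectors stand in for the Gabor wavelets, and the step size `eta` is an assumption; the point is that projecting the reconstruction error back through the basis drives the error below its feed-forward starting value.

```python
import numpy as np

rng = np.random.default_rng(1)
basis = rng.standard_normal((6, 16))          # 6 non-orthogonal basis vectors
basis /= np.linalg.norm(basis, axis=1, keepdims=True)
signal = rng.standard_normal(16)              # stand-in brightness signal

coeffs = basis @ signal                       # feed-forward ("Gabor") coding
init_err = np.linalg.norm(signal - coeffs @ basis)

eta = 0.2                                     # assumed feedback step size
for _ in range(200):                          # recurrent error feedback
    recon = coeffs @ basis                    # linear superposition (recon.)
    error = signal - recon                    # dynamic reconstruction error
    coeffs += eta * (basis @ error)           # feedback correction term

final_err = np.linalg.norm(signal - coeffs @ basis)
```

Because the basis is not orthogonal, the initial feed-forward coefficients are suboptimal; the loop plays the role of the recurrent network's dynamics, minimizing the reconstruction error over time.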
- A training method according to rule (16) is carried out in the feature extraction unit 103 described above for each object to be determined, that is to say to be recognized, from a set of objects, and for each local resolution.
- The recognition unit 104 stores the extracted feature vectors 105 in the weights of its neurons, individually for each local resolution.
- Different feature extraction units 103 are thus trained for each object to be determined, one per local resolution, as indicated in FIG. 1 by the different feature extraction units 103.
- The positions of the centers of the receptive fields are discretized and, for a local resolution of degree k, result as follows:
- The receptive fields cover the entire detection area in the same way at every local resolution, that is to say they always overlap in the same way.
- A feature extraction unit 103 thus has the local resolution k.
- The Gabor neurons are uniquely identified by the index kpql and the activation r_kpql which, as described above, is given by the convolution of the corresponding receptive field with the brightness values I_ij of the pixels of the detection area.
- In the feature extraction unit 103 preferably used, the feed-forward Gabor connections quickly determine a sufficiently good set of wavelet basis functions for coding the brightness values, which is then greatly improved by the recurrent dynamic analysis of the reconstruction error value 213, so that fewer iterations are needed to reach the minimum of the reconstruction error value 213.
- According to the exemplary embodiment, the fed-back reconstruction error E is used to dynamically improve the feed-forward Gabor representation of the image 201, in the sense that the redundancy problem in the description of the image information set out above, which arises from the non-orthogonality of the Gabor wavelets, is dynamically corrected.
- The redundancy of the Gabor feature description is therefore considerably reduced dynamically by improving the reconstruction in accordance with the internal representation of the image information.
- The number of iterations required to achieve optimal predictive coding of the image information can be further reduced by using an over-complete set of Gabor neurons for the feature coding.
- An over-complete basis allows a larger number of basis vectors than input signals.
- For the feature coding corresponding to the octave, at least the number given by the local resolution K is used.
- The neurons of the neuron layer 205 are explained in detail below (see FIG. 3).
- Each neuron 206 (one neuron 300 for the real part and one neuron 301 for the imaginary part of the Gabor transformation, as explained above, that is to say two neurons per "logical" neuron) stores, as weight information in its connections to the respective feature extraction unit 103, the description by means of feature vectors of an object for a specific local resolution and a specific position of the object in the detection area.
- The neurons 206 of the neuron layer 205 are arranged in columns, so that the neurons are arranged topographically.
- The receptive fields of the recognition neurons are set up such that only a limited square detection area of the neuron input values around a certain center is transmitted.
- The size of the square receptive fields of the recognition neurons is constant, and the recognition neurons are set up such that only the signals from those neurons 206 of the neuron layer 205 that lie within the detection range of the respective recognition neurons 300, 301 are taken into account.
- The center of the receptive field lies at the brightness center of the respective object.
- Translation invariance is achieved in that, for each object to be learned, that is to say to be recognized in the application phase, copies of identical recognition neurons, i.e. neurons that share the same weights but have different centers, are distributed over the entire detection area.
- Rotation invariance is achieved by storing, at each position, the sum of the wavelet coefficients along the different orientations.
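The orientation-sum idea can be illustrated with a toy example: a rotation that merely shifts responses between the discrete orientation channels leaves their sum unchanged. The four-channel response vector is an illustrative assumption.

```python
import numpy as np

# Responses of one position's wavelet coefficients at 4 discrete orientations.
coeffs = np.array([0.9, 0.1, 0.4, 0.2])

# Rotating the object by one orientation step permutes the channels.
rotated = np.roll(coeffs, 1)

summed = coeffs.sum()        # orientation-summed feature (stored value)
summed_rot = rotated.sum()   # same feature after the rotation
```

Only the per-orientation pattern changes under rotation; the stored sum, and hence the recognition neuron's input, does not.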
- For each new object to be learned during the learning phase, a separate set of recognition neurons is provided, which store in their weights the corresponding wavelet-based internal description of the respective object, that is to say the feature vectors that describe the object.
- For each local resolution, a recognition neuron is generated whose internal description corresponds to the respective octave, that is to say the corresponding local resolution, and the respective recognition neuron is distributed over the entire detection area for all center positions.
- The recognition neurons are linear neurons that output a linear correlation coefficient between their input weights and the input signal formed by the neurons 206 of the neuron layer in the feature extraction unit 103.
- During learning, each object is presented one at a time in the detection area at a predetermined, freely definable position.
- The recognition neurons store the wavelet-based information in their weights. For a given position, that is, a center with the pixel coordinates (c_x, c_y), two recognition neurons are provided for each object to be learned: one for storing the real part and one for storing the imaginary part of the internal wavelet description.
- Re(·) denotes the real part and Im(·) the imaginary part, and for the indices p and q the following applies:
- R is the width of the receptive field in acquired pixels.
- According to the exemplary embodiment, R = 32 pixels is selected.
- The center (c_x, c_y) is formed by the brightness center of the respective object, which is given by:
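The brightness center given by this rule (the equation itself is not reproduced in this text) amounts to a brightness-weighted mean pixel coordinate. A minimal sketch, with a uniform toy object as an illustrative assumption:

```python
import numpy as np

def brightness_center(img):
    """Brightness-weighted mean coordinate (c_y, c_x) of the detection area."""
    rows, cols = np.indices(img.shape)
    total = img.sum()
    return (rows * img).sum() / total, (cols * img).sum() / total

img = np.zeros((8, 8))
img[2:4, 5:7] = 1.0                  # uniform bright "object"
cy, cx = brightness_center(img)
```

For the uniform block above, the center falls at the geometric middle of the bright region.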
- Neurons activated by excitation at another center are formed in the same way; the same weights are used for recognizing the same object at a shifted position within the detection range.
- Each recognition neuron outputs a correlation coefficient that describes the correlation between its weights and the output of the neurons 206 of the neuron layer 205.
- The output of a recognition neuron in the recognition unit 104 at a local resolution k, based on the real parts of the neurons 206 of the neuron layer 205 and related to the center (c_x, c_y), is given by:
- ⟨a⟩ denotes the mean value and σ_a the standard deviation of a variable a over the detection range, i.e. over all indices p and q.
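With the mean ⟨a⟩ and standard deviation σ_a taken over all indices p, q, the recognition neuron's output is a Pearson correlation between its stored weights and the current activations. A sketch with an illustrative random weight pattern:

```python
import numpy as np

def recognition_output(weights, activations):
    """Linear correlation coefficient over the detection range (all p, q)."""
    w = weights - weights.mean()          # subtract mean over the range
    a = activations - activations.mean()
    return float((w * a).sum() / np.sqrt((w**2).sum() * (a**2).sum()))

rng = np.random.default_rng(2)
w = rng.standard_normal((8, 8))           # stored weights of one neuron
same = recognition_output(w, w)           # object matches the stored pattern
opp = recognition_output(w, -w)           # inverted pattern
```

A perfect match yields 1, an anti-correlated input -1, which is what makes the subsequent winner-takes-all comparison between recognition neurons meaningful.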
- The neurons at every local resolution are activated depending on the detection of the same object, but also at different positions, since the same weights are stored for the object at different positions.
- The different recognition units 104 are thus activated serially by the control unit 106, as described below.
- A check is carried out to determine whether the predetermined criterion is met, the recognition neuron with the greatest activation being determined over the octaves greater than or equal to the current octave, that is to say by taking into account only the recognition units 104 activated at the respective time.
- A so-called winner-takes-all strategy is used to decide which recognition neuron is selected: the selected recognition neuron, which is assigned to a specific center and a specific object, is analyzed by the control unit 106.
- The control unit 106 can further decide whether the identification of the corresponding object is sufficiently precise or whether a more precise analysis of the object, by selecting a smaller, more detailed area with a higher local resolution, is required. If so, further neurons are activated in the further feature extraction units 103 and recognition units 104, so that the local resolution is increased.
- For the detection area with the coarsest local resolution, a priority map is formed by the recognition unit 104; the priority map indicates individual priority areas of the image, and each partial area is assigned a probability indicating how likely it is that the object to be recognized lies in that partial area (see FIG. 4).
- A partial area 401 is characterized by a center 402 of the partial area 401.
- A serial feedback mechanism is provided for masking the detection areas, whereby successive further detection units 102, feature extraction units 103 and recognition units 104 are activated in accordance with the respectively selected increased resolution k; that is to say, the control unit 106 controls the position and size of the detection area in which visual information is received by the system and processed further.
- At this coarse local resolution, usually only the position of the object is practically recognizable, and only a very rough determination of the global shape of an object is made.
- The control unit stores the result of the recognition unit as a priority map and selects a partial area of the image in which, as described below, image information is examined further.
- The corresponding selection of the partial area is fed back through the same feedback connections through the activated wavelet module.
- The selection of the sub-area depends on the pixels that describe the object at the last activated local resolution.
- The corresponding pixels are selected on the basis of the pixels that enable a good reconstruction, that is to say a reconstruction with a small reconstruction error, and of the pixels that do not correspond to a filtered black background.
- The attention mechanism is object-based in the sense that only the areas in which the object lies are further analyzed serially with a higher local resolution.
- The attention mechanism is described mathematically using a matrix G_ij, whose elements have the value 1 if the corresponding pixels are to be taken into account and the value 0 if they are not.
- Once the priority map has been generated, the control unit 106 decides which object is to be analyzed in more detail in a further step, so that only the pixels that lie in the selected partial area are taken into account at the next higher local resolution.
- The first condition is that the reconstructed image has brightness values I_ij > 0, and the second condition is that the reconstruction error is not greater than a predetermined threshold, that is to say:
- If the control unit 106 thus decides that the object is to be analyzed in more detail at a center (c_x, c_y) in the priority map, the mask given by the matrix G_ij is updated according to the following rules:
- The attention feedback acts between the local resolution k and the subsequent local resolution k - 1, i.e. the increased local resolution.
- A new matrix value G_ij is therefore defined, according to the exemplary embodiment, for the activation of the next, increased local resolution k - 1 according to the following rule:
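The mask update can be sketched as follows: after the control unit picks a center (c_x, c_y), only pixels inside the selected square sub-area keep G_ij = 1, and the next, higher local resolution processes only those pixels. The half-width `R` and the hard square window are illustrative assumptions.

```python
import numpy as np

def update_mask(shape, center, R):
    """Attention mask G_ij: 1 inside the selected sub-area, 0 elsewhere."""
    G = np.zeros(shape, dtype=int)
    cy, cx = center
    G[max(0, cy - R):cy + R + 1, max(0, cx - R):cx + R + 1] = 1
    return G

G = update_mask((16, 16), (8, 8), 3)     # select a 7x7 area around (8, 8)
masked_pixels = int(G.sum())
```

Multiplying the brightness values elementwise by `G` before the next feature extraction realizes the object-based attention: everything outside the selected sub-area is ignored at the increased resolution.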
- a first object 501 has a global form of an H and has object components of the form T as local elements, which is why the first object is called Ht.
- the second object 502 has a global H-shape and also H-shaped components as local object components, which is why the second object 502 is referred to as Hh.
- a third object 503 has a global and also a local T-shaped structure, which is why the third object 503 is referred to as Tt.
- a fourth object 504 has a global T-shape and a local H-shape of the individual object components, which is why the fourth object 504 is referred to as Th.
- FIG. 5b shows the recognition results of a device according to the invention for different local resolutions for the first object 501 (recognized object at first local resolution 510, at second local resolution 511, at third local resolution 512, at fourth local resolution 513).
- FIG. 5b also shows the recognition results for the second object 502 (recognized object at first local resolution 520, at second local resolution 521, at third local resolution 522, at fourth local resolution 523).
- FIG. 5b also shows the recognition results for the third object 503 (recognized object at first local resolution 530, at second local resolution 531, at third local resolution 532, at fourth local resolution 533).
- FIG. 5b also shows the recognition results for the fourth object 504 (recognized object at first local resolution 540, at second local resolution 541, at third local resolution 542, at fourth local resolution 543).
- The respective object is in each case recognized with very good, at least sufficient, accuracy at the highest local resolution.
- In a step 601, the image is captured, that is, the brightness values of the pixels are acquired.
- A feature extraction with a first local resolution is carried out on the captured image (step 602).
- A first partial area Tb_i is formed from the image (step 603).
- For each partial area, a probability is determined that the object to be determined lies in the corresponding partial area Tb_i.
- The result is a priority map that contains the respective assignments of probability and partial area (step 604).
- In a test step it is checked whether the object has been recognized with sufficient certainty (step 608).
- If this is the case, the object is output as the recognized object (step 609). If not, a further test step (step 610) checks whether a predetermined termination criterion has been met, according to the exemplary embodiment whether a predetermined number of iterations has been reached.
- If this is the case, the method is ended (step 611).
- Otherwise, a further test step (step 612) checks whether a further sub-area should be selected.
- If so, the sub-area is selected (step 613) and the method continues in step 606 by incrementing the local resolution for the corresponding sub-area.
- Otherwise, a further partial area Tb_i+1 is selected from the priority map (step 614), and the method is continued in a further step (step 605).
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Databases & Information Systems (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP01940216A EP1281157A1 (de) | 2000-05-09 | 2001-05-07 | Verfahren und anordnung zum ermitteln eines objekts in einem bild |
JP2001583457A JP2003533785A (ja) | 2000-05-09 | 2001-05-07 | 画像中の物体を求めるための方法および装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE10022480 | 2000-05-09 | ||
DE10022480.6 | 2000-05-09 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2001086585A1 true WO2001086585A1 (de) | 2001-11-15 |
Family
ID=7641256
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/DE2001/001744 WO2001086585A1 (de) | 2000-05-09 | 2001-05-07 | Verfahren und anordnung zum ermitteln eines objekts in einem bild |
Country Status (5)
Country | Link |
---|---|
US (1) | US20030133611A1 (de) |
EP (1) | EP1281157A1 (de) |
JP (1) | JP2003533785A (de) |
CN (1) | CN1440538A (de) |
WO (1) | WO2001086585A1 (de) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3863775B2 (ja) * | 2001-12-25 | 2006-12-27 | 株式会社九州エレクトロニクスシステム | 画像情報圧縮方法及び画像情報圧縮装置並びに画像情報圧縮プログラム |
US7733465B2 (en) * | 2004-05-26 | 2010-06-08 | Bae Systems Information And Electronic Systems Integration Inc. | System and method for transitioning from a missile warning system to a fine tracking system in a directional infrared countermeasures system |
US8370755B2 (en) * | 2007-12-27 | 2013-02-05 | Core Wireless Licensing S.A.R.L. | User interface controlled by environmental cues |
US9015093B1 (en) | 2010-10-26 | 2015-04-21 | Michael Lamport Commons | Intelligent control with hierarchical stacked neural networks |
US8775341B1 (en) | 2010-10-26 | 2014-07-08 | Michael Lamport Commons | Intelligent control with hierarchical stacked neural networks |
US10192327B1 (en) * | 2016-02-04 | 2019-01-29 | Google Llc | Image compression with recurrent neural networks |
CN109074665B (zh) | 2016-12-02 | 2022-01-11 | 阿文特公司 | 用于经由医学成像系统导航到目标解剖对象的系统和方法 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5579439A (en) * | 1993-03-24 | 1996-11-26 | National Semiconductor Corporation | Fuzzy logic design generator using a neural network to generate fuzzy logic rules and membership functions for use in intelligent systems |
US6714665B1 (en) * | 1994-09-02 | 2004-03-30 | Sarnoff Corporation | Fully automated iris recognition system utilizing wide and narrow fields of view |
US6263122B1 (en) * | 1998-09-23 | 2001-07-17 | Hewlett Packard Company | System and method for manipulating regions in a scanned image |
US6639998B1 (en) * | 1999-01-11 | 2003-10-28 | Lg Electronics Inc. | Method of detecting a specific object in an image signal |
2001
- 2001-05-07 US US10/276,069 patent/US20030133611A1/en not_active Abandoned
- 2001-05-07 JP JP2001583457A patent/JP2003533785A/ja not_active Withdrawn
- 2001-05-07 WO PCT/DE2001/001744 patent/WO2001086585A1/de not_active Application Discontinuation
- 2001-05-07 EP EP01940216A patent/EP1281157A1/de not_active Withdrawn
- 2001-05-07 CN CN01812200A patent/CN1440538A/zh active Pending
Non-Patent Citations (2)
Title |
---|
CONCEPCION V ET AL: "Detection and localization of objects in time-varying imagery using attention, representation and memory pyramids", PATTERN RECOGNITION, PERGAMON PRESS INC. ELMSFORD, N.Y, US, vol. 29, no. 9, 1 September 1996 (1996-09-01), pages 1543 - 1557, XP004008857, ISSN: 0031-3203 * |
SAJDA P ET AL: "Integrating Neural Networks with Image Pyramids to Learn Target Context", NEURAL NETWORKS, ELSEVIER SCIENCE PUBLISHERS, BARKING, GB, vol. 8, no. 7, 1995, pages 1143 - 1152, XP004000076, ISSN: 0893-6080 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2003053231A2 (de) * | 2001-12-20 | 2003-07-03 | Siemens Aktiengesellschaft | Erstellen eines interessenprofils einer person mit hilfe einer neurokognitiven einheit |
WO2003053231A3 (de) * | 2001-12-20 | 2003-09-25 | Siemens Ag | Erstellen eines interessenprofils einer person mit hilfe einer neurokognitiven einheit |
CN107728143A (zh) * | 2017-09-18 | 2018-02-23 | 西安电子科技大学 | 基于一维卷积神经网络的雷达高分辨距离像目标识别方法 |
Also Published As
Publication number | Publication date |
---|---|
CN1440538A (zh) | 2003-09-03 |
EP1281157A1 (de) | 2003-02-05 |
JP2003533785A (ja) | 2003-11-11 |
US20030133611A1 (en) | 2003-07-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP0780002B1 (de) | Verfahren und vorrichtung zur rekonstruktion von in rasterform vorliegenden linienstrukturen | |
DE60130742T2 (de) | Mustererkennung mit hierarchischen Netzen | |
DE69516733T2 (de) | Verfahren und System mit neuronalem Netzwerk zur Bestimmung der Lage und der Orientierung | |
DE69031774T2 (de) | Adaptiver Gruppierer | |
DE69610689T2 (de) | System zum Klassifizieren von Fingerabdrücken | |
DE69919464T2 (de) | Elektronische Vorrichtung zur Bildausrichtung | |
DE102017220307B4 (de) | Vorrichtung und Verfahren zum Erkennen von Verkehrszeichen | |
WO2020192849A1 (de) | Automatische erkennung und klassifizierung von adversarial attacks | |
DE4406020C1 (de) | Verfahren zur automatisierten Erkennung von Objekten | |
EP3847578A1 (de) | Verfahren und vorrichtung zur klassifizierung von objekten | |
DE60037416T2 (de) | Drehkorrektur und duplikatbildern detektion mit musterkorrelation mittels diskreter fourier-transform | |
DE112020000448T5 (de) | Kameraselbstkalibrierungsnetz | |
DE102019209644A1 (de) | Verfahren zum Trainieren eines neuronalen Netzes | |
DE69805280T2 (de) | Gerät und verfahren zur mustererkennung. | |
WO2001086585A1 (de) | Verfahren und anordnung zum ermitteln eines objekts in einem bild | |
DE69230940T2 (de) | Verfahren zum Ableiten der Merkmale von Zeichen in einem Zeichenerkennungssystem | |
EP1180258A1 (de) | Mustererkennung mittels prüfung zusätzlicher merkmale nach teilverarbeitung | |
DE102021207613A1 (de) | Verfahren zur Qualitätssicherung eines Systems | |
DE102018100315A1 (de) | Erzeugen von Eingabedaten für ein konvolutionelles neuronales Netzwerk | |
DE102019127622B4 (de) | Abwehrgenerator, Verfahren zur Verhinderung eines Angriffs auf eine KI-Einheit und computerlesbares-Speichermedium | |
WO2021180470A1 (de) | Verfahren zur qualitätssicherung eines beispielbasierten systems | |
EP0981802B1 (de) | Verfahren zur identifizierung von fingerabdrücken | |
EP1359539A2 (de) | Neurodynamisches Modell der Verarbeitung visueller Informationen | |
DE10126375B4 (de) | Verfahren und System zur Erkennung von Objekten | |
DE10361838B3 (de) | Verfahren zur Bewertung von Ähnlichkeiten realer Objekte |
Legal Events
Code | Title | Description |
---|---|---|
AK | Designated states | Kind code of ref document: A1; Designated state(s): CN JP US |
AL | Designated countries for regional patents | Kind code of ref document: A1; Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR |
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
WWE | Wipo information: entry into national phase | Ref document number: 2001940216; Country of ref document: EP |
WWE | Wipo information: entry into national phase | Ref document number: 10276069; Country of ref document: US |
WWE | Wipo information: entry into national phase | Ref document number: 018122000; Country of ref document: CN |
WWP | Wipo information: published in national office | Ref document number: 2001940216; Country of ref document: EP |
WWW | Wipo information: withdrawn in national office | Ref document number: 2001940216; Country of ref document: EP |