IL98843A - Method and device for the characterization and localization in real time of singular features in a digitalized image

Method and device for the characterization and localization in real time of singular features in a digitalized image

Info

Publication number
IL98843A
Authority
IL
Israel
Prior art keywords
image
function
dots
attribute
gray level
Prior art date
Application number
IL9884391A
Other languages
Hebrew (he)
Other versions
IL98843A0 (en)
Original Assignee
Thomson Trt Defense
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Trt Defense filed Critical Thomson Trt Defense
Publication of IL98843A0 publication Critical patent/IL98843A0/en
Publication of IL98843A publication Critical patent/IL98843A/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20164Salient point detection; Corner detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation
    • G06T2207/30184Infrastructure

Description

METHOD AND DEVICE FOR THE CHARACTERIZATION AND LOCALIZATION IN REAL TIME OF SINGULAR FEATURES IN A DIGITALIZED IMAGE, NOTABLY FOR THE RECOGNITION OF SHAPES IN A SCENE ANALYSIS PROCESSING OPERATION

THOMSON TRT DEFENSE C. 84012

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a method and a device to characterize and localize certain singular features of a digitalized image in real time.
The invention shall be described chiefly in the context of an application to the analysis of a scene in which it is sought to recognize characteristic shapes corresponding to contours with high curvature, for example the corners of a polygonal contour or small-sized regions.
Once these singular features have been characterized (namely distinguished from the rest of the image, notably from the neighboring contour dots) and localized (i.e. referenced by their coordinates in the image), they will constitute "characteristic dots" corresponding to "primitives" of the processed image which could lend themselves to a certain number of processing operations such as, for example, image segmentation operations (wherein a region is enlarged from the "seed dots" formed by the characteristic dots recognized beforehand) or for applications making use of techniques for the placing of characteristic dots in correspondence with one another.
It is thus, for example, that when it is sought to localize man-made objects (roads, bridges, railways, canals etc.) in a natural environment, in infrared images given by a camera placed on board an aircraft, such objects when observed generally appear in a polygonal shape. As a typical application, we might cite path-correction operations in aircraft navigation.
This application to shape recognition does not, however, restrict the scope of the present invention, which can also be used for other applications necessitating the extraction of characteristic dots, for example stereovision applications, motion analysis etc.

2. Description of the Prior Art

In general, if the image is considered to be a restriction to N² of a mathematical function F(x,y) with two variables and with real values, hereinafter called a "gray level function", the principle of the method consists in the preparation, from this function F (unprocessed image function), of another function enabling the reliable characterization and localization of the characteristic dots of the image. This other function shall be called the "attribute".
Indeed, it is necessary to use an attribute such as this for, if we considered only dots such that the value of the gray level function F(x,y) were to be extreme in a neighborhood of varying size, then the characteristic dots sought would have to correspond to simple peaks in the gray level of the image, and this would entail a particularly restrictive hypothesis. Furthermore, such a test would be extremely sensitive to the noise spikes in the image.
Hence, when an attribute has been defined and when its value for all the pixels of the image has been computed (in the characterization step), the characteristic dots are found (in the localization step) by the application of a simple criterion to this attribute, for example a test of maximality in the entire image, in a sub-image (local test) or else in a neighborhood of a given central pixel.
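The localization step described above (a maximality test on the attribute in a neighborhood of a central pixel) can be sketched as follows. This is a minimal illustration, not the patent's circuitry; the 3 x 3 neighborhood and the threshold are illustrative choices:

```python
import numpy as np

def local_maxima(attribute, threshold=0.0):
    """Return a boolean mask of pixels that strictly maximize `attribute`
    in their 3x3 neighborhood and exceed `threshold`."""
    h, w = attribute.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            neighborhood = attribute[i - 1:i + 2, j - 1:j + 2]
            center = attribute[i, j]
            # strict maximum: above threshold, not exceeded by any neighbor,
            # and equal to no other value in the neighborhood
            if center > threshold and center >= neighborhood.max() \
               and (neighborhood == center).sum() == 1:
                mask[i, j] = True
    return mask

# A single bright peak should be the only local maximum found.
img = np.zeros((7, 7))
img[3, 4] = 5.0
peaks = local_maxima(img, threshold=1.0)
```

A production implementation would vectorize this scan, but the double loop makes the local character of the test explicit.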
One of the first types of attribute proposed has been a statistical attribute, namely an attribute constituted by an operator carrying out, at each dot of the image, an estimation of the local variance in the oriented neighborhoods centered on the dot processed. In this respect, reference may be made to the work by H.P. Moravec, developed in "Towards Automatic Visual Obstacle Avoidance" in Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 1977 and M.J. Hannah in Bootstrap Zero, in Proceedings of the Image Understanding Workshop, DARPA Conference, 1980. These techniques are characterized by a volume of computations that are very costly in terms of physical implementation.
Another approach consists in the use of an attribute that is no longer statistical but differential, i.e. first of all, an approximation is made of the function of the gray level of the image (i.e. the vector formed at each dot by the partial derivatives of the function F is computed), then the vector function thus obtained is analyzed.
Thus, for each pixel, a complex value (in other terms, a vector) will be defined, said vector containing, for each dot, two information elements, namely:
- a measurement of the local transition of the gray levels in the vicinity of this dot, represented by the norm ‖G(x,y)‖ of the gradient vector G(x,y), and
- an estimation of the direction in which this transition is made, represented by the argument Φ(x,y) of the gradient vector G(x,y); should a contour be effectively present in the neighborhood of this dot, this gradient direction will be perpendicular to the direction of the contour.
It is then possible, from the measurement of the local transition of the gray level, to extract only the contour dots, i.e. to keep only the dots corresponding to local maxima of this function (i.e. maxima in a given neighborhood V, for example a 3 x 3 neighborhood) in thus keeping only the "peak lines" of the gray level function of the image.
This condition may be formulated as follows: a given dot M(x,y) is a contour dot if and only if the following relationship is verified:

G(x,y) > G(x',y') for all (x',y') ∈ V(x,y) ∩ D(x,y),

D(x,y) designating the straight line with orientation Φ(x,y) and V(x,y) designating the given neighborhood of (x,y).

The gradient information cannot, however, be used directly for the search for the characteristic dots. Indeed, a dot for which the gradient amplitude is locally the maximum is necessarily recognized as being a contour dot, and it is therefore difficult to distinguish it from its neighbors located on the same contour.
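The contour-dot condition above can be sketched numerically. This is a hedged illustration, not the patent's hardware; quantizing the gradient direction onto four neighbor pairs is one common way of realizing the test along D(x,y):

```python
import numpy as np

def contour_dots(norm, angle):
    """Keep a pixel only if its gradient norm is maximal among the two
    neighbors taken along the (quantized) gradient direction."""
    h, w = norm.shape
    out = np.zeros((h, w), dtype=bool)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = angle[i, j] % np.pi
            # quantize the gradient direction to one of 4 neighbor pairs
            if a < np.pi / 8 or a >= 7 * np.pi / 8:      # horizontal gradient
                n1, n2 = norm[i, j - 1], norm[i, j + 1]
            elif a < 3 * np.pi / 8:                      # diagonal
                n1, n2 = norm[i - 1, j + 1], norm[i + 1, j - 1]
            elif a < 5 * np.pi / 8:                      # vertical gradient
                n1, n2 = norm[i - 1, j], norm[i + 1, j]
            else:                                        # other diagonal
                n1, n2 = norm[i - 1, j - 1], norm[i + 1, j + 1]
            out[i, j] = norm[i, j] >= max(n1, n2) and norm[i, j] > 0
    return out

# A vertical step edge: the gradient is horizontal, and the ridge of the
# gradient norm should survive as a one-pixel-wide line of contour dots.
norm = np.zeros((5, 5))
norm[:, 2] = 2.0          # gradient norm peaks on column 2
norm[:, 1] = norm[:, 3] = 1.0
angle = np.zeros((5, 5))  # gradient points along +x everywhere
edges = contour_dots(norm, angle)
```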
Certain techniques have been proposed to make a search, after the contour dots have been thus detected, for those dots for which the variation in the direction of the gradient is locally the maximum (angle, turning-back point etc.). Such techniques are notably described by P.R. Beaudet in "Rotational Invariant Image Operators" in Proceedings of the International Joint Conference on Pattern Recognition (IJCPR), 1978; L. Kitchen and A. Rosenfeld in "Gray Level Corner Detection" in Pattern Recognition Letters, Volume 1, 1982; O.A. Zuniga and R. Haralick in "Corner Detection Using the Facet Model" in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1983; and R.L. Dreschler and H.H. Nagel in "On the Selection of Critical Points and Local Curvature Extrema of Region Boundaries for Interframe Matching" in Image Sequence Processing and Dynamic Scene Analysis, T.S. Huang ed., NATO ASI Series, Volume F2, Springer Verlag, 1983.
These techniques are, however, relatively unwieldy to implement for they are all based on the use and combination of the first order and second order partial derivatives of the gray level function of the image; it is therefore necessary to know (and hence to compute) these derivatives for all the pixels of the image.
Another technique of analysis has been proposed by J.Y. Dufour and H. Waldburger in "Recalage d'images par association de primitives, construction et analyses d'histogrammes multidimensionnels" (Registration of Images by Association of Primitives, Construction and Analyses of Multidimensional Histograms), in Actes du Congrès GRETSI, Antibes, 1989.
In this technique, a search is made for dots maximizing a criterion of local radiality of the gradient vector field (which amounts more or less to searching for the centers of curvature of the contour at the positions where this curvature is the greatest). It will be noted that these dots that are searched for, which are at the intersection of the supports of the vectors of their neighborhoods, are dots close to a contour but not located on it.
This latter technique, like the preceding ones, is characterized however by costly layout in terms of the number of circuits to be used.
SUMMARY OF THE INVENTION

One of the aims of the invention is to propose a particular processing operation, with an adapted, simplified architecture, enabling the real-time performance of this processing operation for the characterization and localization of the singular features of a digitalized image.
It will be seen, in particular, that the processing operations done all have a local character, namely that the analysis of a given pixel is done exclusively as a function of the pixels located around it in a limited neighborhood. This makes it possible to achieve a relative simplicity of the processing to be done, unlike in prior art methods which generally make it necessary to consider the totality of the image, thus making it necessary to provide for an architecture that is relatively complex and costly in terms of circuits (large-capacity frame memories, large volume of computations etc.).
The detailed description of the invention will also highlight the adaptive character of the processing operations with respect to the content of the image: this feature will notably enable the quality of these processing operations to be improved.
To this effect, the present invention proposes a method to characterize and localize the characteristic dots of a digitalized image, notably for the recognition of shapes in a scene analysis processing operation, these characteristic dots being dots of contours with high curvature such as corners or small-sized regions, said image being formed by a two-dimensional frame of pixels, each having a determined gray level, wherein said method comprises the steps of:
(a) the approximating, for each pixel to be analyzed, of the second order partial derivatives of the gray level function of the image (F(x,y)),
(b) the determining, from these derivatives, of an attribute (λ₂) representing the characteristic sought and the assigning of this attribute to the pixel thus analyzed, and
(c) the searching, from among the pixels of the image thus analyzed, for the dots maximizing said attribute.
Very advantageously, said approximation is done by convoluting the gray level function of the image (F(x,y)) with functions corresponding to the second order partial derivatives of a smoothing function, this smoothing function enabling the noise present in the image to be attenuated.
Another object of the invention is a device that is constituted by means enabling these functions to be implemented, and in which the characterization and localization of the characteristic dots are then advantageously done in real time at the video rate.
Advantageously, said attribute is the second inherent value of the matrix of said partial derivatives:

λ₂(x,y) = |Fxx + Fyy| - [(Fxx - Fyy)² + 4·Fxy²]^(1/2),

with Fxx(x,y) = ∂²F/∂x², Fyy(x,y) = ∂²F/∂y², and Fxy(x,y) = ∂²F/∂x∂y.
Also advantageously, said smoothing function is a Gaussian function, notably a centered Gaussian function:

G(x,y) = (2π·|Σ|^(1/2))⁻¹ · exp[-1/2 · (x,y) · Σ⁻¹ · (x,y)ᵗ],

where Σ is the covariance matrix of the Gaussian function.
BRIEF DESCRIPTION OF THE DRAWINGS

An embodiment of the invention shall now be described with reference to the appended drawings.
Figure 1 is a block diagram illustrating a preferred architecture enabling the real-time implementation of the processing method of the invention.
Figure 2 gives a schematic illustration of a first embodiment of the convolution circuit of figure 1.
Figure 3 gives a schematic illustration of a second, more elaborate embodiment of the convolution circuit of figure 1.
Figure 4 gives a schematic illustration of an embodiment of the first function circuit of figure 1.
Figure 5 gives a schematic illustration of an embodiment of the second function circuit of figure 1.
DESCRIPTION OF THE PREFERRED EMBODIMENT

General Presentation of the Processing Method

The method consists in processing an unprocessed image formed by a two-dimensional frame of pixels, each having a determined gray level, for example the image delivered by a video camera such as an air-ground infrared camera on board an aircraft.
From this unprocessed image, first of all, for each pixel, the second order derivatives of the gray level function F(x,y) of the analyzed image (or an approximation of these derivatives) shall be determined. It is assumed, naturally, that the gray level function F(x,y) can be twice differentiated.
These second derivatives may be presented in matrix form (the Hessian transform of the function), the notations used being the following:

H(x,y) = | Fxx(x,y)  Fxy(x,y) |
         | Fxy(x,y)  Fyy(x,y) |

with Fxx(x,y) = ∂²F/∂x², Fyy(x,y) = ∂²F/∂y² and Fxy(x,y) = ∂²F/∂x∂y.
The idea that forms the starting point of this invention relates to the use of an attribute based on the approximation, at each point, of the function constituted by the second inherent value of the Hessian transform H(x,y) of the function F(x,y), which is:

λ₂(x,y) = |Fxx + Fyy| - [(Fxx - Fyy)² + 4·Fxy²]^(1/2).

The choice of the second inherent value corresponds to a placing in a direction orthogonal to that where Fxx is the maximum, i.e. in a relative reference position where the variation in x is the most marked.
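The attribute above can be sketched numerically with finite-difference derivatives. This is an illustrative reading of the formula, not the patent's implementation (which, as described below, operates on a smoothed image); the grid, the test image and the axis naming are assumptions:

```python
import numpy as np

def lambda2(F):
    """Second-inherent-value attribute |Fxx + Fyy| - sqrt((Fxx - Fyy)^2
    + 4*Fxy^2), from finite-difference second derivatives of F."""
    Fx, Fy = np.gradient(F)          # first derivatives along the two axes
    Fxx = np.gradient(Fx)[0]
    Fxy = np.gradient(Fx)[1]
    Fyy = np.gradient(Fy)[1]
    return np.abs(Fxx + Fyy) - np.sqrt((Fxx - Fyy) ** 2 + 4.0 * Fxy ** 2)

# On a radially symmetric bump, Fxx == Fyy and Fxy == 0 at the center, so
# the attribute there reduces to |Fxx + Fyy| (the absolute Laplacian).
x, y = np.meshgrid(np.arange(9) - 4.0, np.arange(9) - 4.0)
F = np.exp(-(x ** 2 + y ** 2) / 4.0)
A = lambda2(F)
```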
The direct approximation of the partial derivatives of the image function proves, however, to be most usually insufficient to obtain satisfactory results in the applications usually envisaged.
Indeed, apart from the high sensitivity to noise of such operators, the neighborhood used for the approximation is highly limited, and the information extracted therefore has an excessively marked local character.
To overcome this drawback, the present invention proposes operating not on the original image (function F(x,y)), but on a modified image (function F'(x,y)) obtained by convoluting the original image with an appropriate "smoothing" function G(x,y) .
There will thus be: F'(x,y) = (F * G)(x,y). The smoothing function should naturally be positive, capable of being integrated and at least twice continuously differentiable.
As a general rule it is possible to adopt, for example, for this function G, a centered Gaussian function:

G(x,y) = (2π·|Σ|^(1/2))⁻¹ · exp[-1/2 · (x,y) · Σ⁻¹ · (x,y)ᵗ],

Σ being the covariance matrix of this function (which may or may not be diagonal).
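Sampling this centered Gaussian on a discrete grid can be sketched as follows; `cov` plays the role of the covariance matrix Σ (it may be non-diagonal, giving a non-isotropic kernel), and the 9 x 9 size is an illustrative choice:

```python
import numpy as np

def gaussian_kernel(size, cov):
    """Sample the centered 2-D Gaussian with covariance `cov` on a
    size x size grid, using the density (2*pi*|cov|^0.5)^-1 * exp(...)."""
    half = size // 2
    inv = np.linalg.inv(cov)
    det = np.linalg.det(cov)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(det))
    kernel = np.empty((size, size))
    for i in range(size):
        for j in range(size):
            v = np.array([i - half, j - half], dtype=float)
            kernel[i, j] = norm * np.exp(-0.5 * v @ inv @ v)
    return kernel

# Isotropic example: diagonal covariance with variance 2 on each axis.
K = gaussian_kernel(9, np.array([[2.0, 0.0], [0.0, 2.0]]))
```

The kernel peaks at the center and, since the continuous density integrates to 1, its discrete sum is close to 1 up to truncation.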
It will be noted that this type of function does not restrict the scope of the invention and that it is also possible, for the same purpose, to use other smoothing functions, whether Gaussian or not, and whether isotropic or non-isotropic.
If the function G is appropriately chosen (as is the case with the above Gaussian function), the convolution operation brings two advantages.
First of all, it "smooths" the image, i.e. it attenuates the noise and rounds out the angular contours.
Secondly, it facilitates the derivation operations (which are very cumbersome from the viewpoint of the volume of computations, hence of the complexity of the circuits to be used); indeed, it may be observed that:

∂^(n+m)/(∂x^n ∂y^m) (F * G) = F * ∂^(n+m)/(∂x^n ∂y^m) G.
The function λ'₂ that will be used as an attribute will then be defined by:

λ'₂(x,y) = |F * LG| - [(F * T1G)² + (F * T2G)²]^(1/2),

with the following notations:
LG = Gxx + Gyy (namely the Laplace operator of the function G),
T1G = Gxx - Gyy,
T2G = 2·Gxy.

With an attribute such as this, the dots for which the absolute value of λ'₂ is high are dots located in the neighborhood of a contour of the image F' having a curvature.
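The attribute computation can be sketched end to end: build the masks LG, T1G and T2G as sampled second derivatives of a Gaussian, convolve the image with them, and combine. This is a hedged illustration, not the patent's circuits; the kernel size, the value σ = 1.5 and the choice T2G = 2·Gxy (taken so that the combination matches the λ'₂ formula) are assumptions:

```python
import numpy as np

def gaussian_second_derivative_masks(size=9, sigma=1.5):
    """Sampled Gxx, Gyy, Gxy of an isotropic Gaussian, combined into the
    three masks LG = Gxx + Gyy, T1G = Gxx - Gyy, T2G = 2*Gxy."""
    half = size // 2
    x, y = np.meshgrid(np.arange(size) - half, np.arange(size) - half,
                       indexing="ij")
    G = np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    Gxx = (x**2 / sigma**4 - 1 / sigma**2) * G
    Gyy = (y**2 / sigma**4 - 1 / sigma**2) * G
    Gxy = (x * y / sigma**4) * G
    return Gxx + Gyy, Gxx - Gyy, 2.0 * Gxy

def convolve2d(F, K):
    """Plain 'valid'-mode 2-D sliding-window filtering."""
    kh, kw = K.shape
    h, w = F.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (F[i:i + kh, j:j + kw] * K).sum()
    return out

def attribute(F):
    LG, T1G, T2G = gaussian_second_derivative_masks()
    L = convolve2d(F, LG)
    T1 = convolve2d(F, T1G)
    T2 = convolve2d(F, T2G)
    return np.abs(L) - np.sqrt(T1**2 + T2**2)

# On a radially symmetric blob, T1 and T2 vanish at the center by symmetry,
# so the attribute there is the absolute Laplacian response and is maximal.
x, y = np.meshgrid(np.arange(21) - 10.0, np.arange(21) - 10.0)
F = np.exp(-(x**2 + y**2) / 18.0)
A = attribute(F)
```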
Furthermore, the absolute value of λ'₂ is all the greater as the local contrast is high and as the curvature is accentuated, which typically corresponds to the properties of the characteristic dots usually sought.
Furthermore, the dots such that λ'₂ is positive are located in the concavity of the local curvature, and the presence of a corner dot of the contour is characterized, if its curvature is sufficiently accentuated, by the presence of a local maximum of λ'₂ at a position close to that of the center of the curvature, or the center of the region if the contour demarcates a sufficiently small-sized region.
It is thus seen that the characteristic dot determined through this attribute is close to the contour but external to it, which advantageously makes it possible to distinguish it from the contour dots proper. After processing, therefore, the same image may preserve both the contour dots and the characteristic dots, these two types of dots being separate.
This attribute is therefore perfectly suited to the characterization and to the pinpoint localization of corners and small-sized regions in an image.
Architecture for the Real-Time Implementation of the Processing Method

Referring to the block diagrams of the figures, a description shall now be given of a circuit architecture capable of carrying out the processing operations of the above-described method in real time. Indeed, the formulations of the processing operations explained further above have been chosen in order to enable this processing in real time, taking account of the very high rates, which may go up to 20 MHz for the pixel rate.
This architecture, which is shown schematically in its totality in figure 1, essentially has three blocks.
The first block, referenced 10, carries out the following three convolutions on the basis of the gray level function F(x,y), received at input at the video image rate:

L(x,y) = (F * G)(x,y),
T1(x,y) = (F * T1G)(x,y), and
T2(x,y) = (F * T2G)(x,y).

The second block, referenced 20, is a block enabling the computation, from two values x and y at input, of the quadratic mean (x² + y²)^(1/2) of these two values; these two values x and y shall herein be the convolution results T1 and T2 delivered by the convolution circuit 10.
The third function block, referenced 30, is a block enabling the computation, from two values x and y, of the term |x| - a·y, a being a constant. These two terms x and y shall herein be the convolution result L delivered by the circuit 10 and the quadratic mean (T1² + T2²)^(1/2) delivered by the first function circuit 20.
The result delivered by this circuit 30 is therefore:

|L(x,y)| - a·[T1(x,y)² + T2(x,y)²]^(1/2),

namely the attribute λ'₂ explained further above. We shall now describe each of the blocks in detail.
The block 10 carrying out the convolution has been shown in figures 2 and 3 in two different forms.
This circuit, in either of its forms, is made from two universal VLSI circuits developed within the framework of the European program EUREKA ("MIP" project; No. EU34: Modular Image Processing), namely the Video Memory Circuit (memory function) and the Linear Filter Circuit (linear filter function).
The Video Memory MIP circuit is a memory circuit designed for the organization of the data for the Linear Filter MIP Circuit. It enables the memorizing of four video lines of 1024 pixels each (maximum size), each pixel being capable of being coded on eight gray level bits. Thus, at the video cadence, it can deliver a column of five pixels (the four pixels stored plus the current pixel) to a linear filter placed downline.
The length of the lines can be programmed by external command, with a maximum size of 1024 pixels.
The Linear Filter MIP Circuit, for its part, is a dedicated circuit that can be used to carry out the convolution of an image E with a mask K according to the relationship:

C(n,m) = Σ(i,j) E(n+i, m+j) · K(i,j).

This circuit has the following functional characteristics:
- processing neighborhood: 5 x 10,
- two possible modes of operation: "real" mode (single convolution) and "complex" mode (two simultaneous convolutions with two different masks),
- programmable video format (line return, frame return),
- maximum video rate: 20 MHz,
- input: five 8-bit pixels,
- output: 16 bits (in real mode) or 24 bits (2 x 12 bits in complex mode),
- possibility of integrated post-processing operations:
  * the adjusting of the outputs by a transformation of the following type: S(n,m) = a · C(n,m) · 2^b + c, with a, b and c programmable,
  * thresholding: the values below a given threshold may be forced to zero, the values above this threshold being kept in their state or forced to 1, depending on the thresholding mode,
  * computation of a histogram, and
  * search for the minimum and for the maximum of the result values.
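The filter relationship and its integrated output adjustment can be sketched behaviourally (this is an illustration of the arithmetic, not a model of the actual VLSI circuit; the test image and mask are arbitrary):

```python
import numpy as np

def linear_filter(E, K, a=1, b=0, c=0):
    """Compute C(n,m) = sum_{i,j} E(n+i, m+j) * K(i,j) over all positions
    where the mask fits, then apply the adjustment S = a * C * 2**b + c."""
    kh, kw = K.shape
    h, w = E.shape
    C = np.empty((h - kh + 1, w - kw + 1), dtype=np.int64)
    for n in range(C.shape[0]):
        for m in range(C.shape[1]):
            C[n, m] = (E[n:n + kh, m:m + kw] * K).sum()
    return a * C * (2 ** b) + c

E = np.arange(16).reshape(4, 4)
K = np.ones((2, 2), dtype=np.int64)   # 2x2 box mask, illustrative
S = linear_filter(E, K)               # no adjustment: a=1, b=0, c=0
```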
In a first architecture envisaged for the implementation of the invention, illustrated in figure 2, the convolution circuit 10 has a video memory 11 supplying two linear filters 12 and 13 in parallel.
The first linear filter 12 works in real mode and enables the computation of the Laplacian L. The linear filter 13 works in complex mode (two simultaneous convolutions with different masks) and delivers T1 and T2 at output in parallel. The coefficients of the masks are applied to the linear filters by an external command (not shown) at the same time as the other commands for the parametrization of this circuit.
This architecture is relatively simple from the viewpoint of the number of circuits used, but it may be noted that its use is highly restricted owing to the reduced size (5x5) of the convolution cores that may be used.
In the case of figure 3, two video memories 11 and 11' are associated in cascade so as to have a 9 x 9 neighborhood available (the current line plus the eight previous lines, loaded in the memories 11 and 11') thus procuring a bigger convolution core.
In the same way, two groups of two linear filters 12, 12' (computation of the Laplacian) and 13, 13' (computation of T1 and T2) are placed in cascade.

The two function blocks 20 and 30, illustrated separately in figures 4 and 5, have a similar architecture.
The non-linear operations which they imply may be carried out entirely by two Function Module type MIP circuits 21, 31, associated with respective RAMs 22, 32. Indeed, the MIP Function Module enables the approximation of any two-variable continuous function at the video rate.
To this effect, the RAM that is associated with it contains the values of the function on a sampling of dots (Xi, Yj), with 0 ≤ i < I and 0 ≤ j < J. The function module determines the value of the function for (X, Y) by a bilinear or linear interpolation.
The following are its characteristics:
- storage of the values of the function on a 128 x 128 grid,
- maximum video rate: 20 MHz,
- inputs: 2 x 12 bits,
- output: 12 bits.
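The table look-up with bilinear interpolation described above can be sketched as follows (a hedged illustration, not the circuit: a 4 x 4 grid stands in for the 128 x 128 one, and the helper names are invented for the example):

```python
import numpy as np

def make_table(f, xs, ys):
    """Store f on the grid of sample dots (xs[i], ys[j])."""
    return np.array([[f(x, y) for y in ys] for x in xs])

def lookup(table, xs, ys, x, y):
    """Approximate f(x, y) by bilinear interpolation between the four
    stored samples surrounding (x, y)."""
    i = min(max(np.searchsorted(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(np.searchsorted(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i, j]
            + tx * (1 - ty) * table[i + 1, j]
            + (1 - tx) * ty * table[i, j + 1]
            + tx * ty * table[i + 1, j + 1])

# Bilinear interpolation reproduces a linear function exactly.
xs = ys = np.linspace(0.0, 3.0, 4)
table = make_table(lambda x, y: x + 2 * y, xs, ys)
val = lookup(table, xs, ys, 1.5, 0.25)
```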
External command signals (not shown) enable the loading, into the respective RAMs 22, 32, of the values of the non-linear function to be carried out (the function (x² + y²)^(1/2) in the case of the circuit 20, the function |x| - a·y in the case of the circuit 30).

Claims (8)

WHAT IS CLAIMED IS:
1. A method to characterize and localize characteristic dots of a digitalized image, notably for the recognition of shapes in a scene analysis processing operation, these characteristic dots being dots of contours with high curvature such as corners or small-sized regions, said image being formed by a two-dimensional frame of pixels, each having a determined gray level, wherein said method comprises the steps consisting in:
(a) the approximating, for each pixel to be analyzed, of the second order partial derivatives of the gray level function of the image (F(x,y)),
(b) the determining, from these derivatives, of an attribute (λ₂) representing the characteristic sought and the assigning of this attribute to the pixel thus analyzed, and
(c) the searching, from among the pixels of the image thus analyzed, for the dots maximizing said attribute.
2. The method of claim 1, wherein said approximation is done by convoluting the gray level function of the image (F(x,y)) with functions corresponding to the second order partial derivatives of a smoothing function, this smoothing function enabling the noise present in the image to be attenuated.
3. The method of claim 1, wherein said attribute (λ₂) is the second inherent value of the matrix of said second order derivatives:

λ₂(x,y) = |Fxx + Fyy| - [(Fxx - Fyy)² + 4·Fxy²]^(1/2),

with Fxx(x,y) = ∂²F/∂x², Fyy(x,y) = ∂²F/∂y², and Fxy(x,y) = ∂²F/∂x∂y.
4. The method of claim 2, wherein said smoothing function is a Gaussian function.
5. The method of claim 2, wherein said smoothing function is a centered Gaussian function:

G(x,y) = (2π·|Σ|^(1/2))⁻¹ · exp[-1/2 · (x,y) · Σ⁻¹ · (x,y)ᵗ],

Σ being the covariance matrix of the Gaussian function.
6. A device to characterize and localize characteristic dots of a digitalized image, notably for the recognition of shapes in a scene analysis processing operation, these characteristic dots being dots of contours with high curvature such as corners or small-sized regions, said image being formed by a two-dimensional frame of pixels, each having a determined gray level, wherein said device comprises:
- derivation means carrying out the approximation, for each pixel to be analyzed, of the second order partial derivatives of the gray level function of the image (F(x,y)),
- characterizing means carrying out the determination, from these derivatives, of an attribute (λ₂) representing the characteristic sought and the assigning of this attribute to the pixel thus analyzed, and
- localizing means carrying out a search, from among the pixels of the image thus analyzed, for the dots maximizing said attribute.
7. The device of claim 6, wherein the derivation means include convolution means carrying out a convolution of the gray level function of the image (F(x,y)) with functions corresponding to the second order partial derivatives of a smoothing function, this smoothing function enabling the noise present in the image to be attenuated.
8. The device of either of the claims 6 or 7, wherein the characterization and localization of characteristic dots are done in real time at the video rate.

For the Applicants, DR. REINHOLD COHN AND PARTNERS
IL9884391A 1990-07-31 1991-07-15 Method and device for the characterization and localization in real time of singular features in a digitalized image IL98843A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
FR9009743A FR2665601A1 (en) 1990-07-31 1990-07-31 METHOD AND DEVICE FOR REAL-TIME CHARACTERIZATION AND LOCALIZATION OF SINGULARITES OF A DIGITIZED IMAGE, IN PARTICULAR FOR THE RECOGNITION OF FORMS IN SCENE ANALYSIS PROCESSING

Publications (2)

Publication Number Publication Date
IL98843A0 IL98843A0 (en) 1992-07-15
IL98843A true IL98843A (en) 1994-01-25

Family

ID=9399260

Family Applications (1)

Application Number Title Priority Date Filing Date
IL9884391A IL98843A (en) 1990-07-31 1991-07-15 Method and device for the characterization and localization in real time of singular features in a digitalized image

Country Status (5)

Country Link
EP (1) EP0469986A1 (en)
AU (1) AU641794B2 (en)
CA (1) CA2047809A1 (en)
FR (1) FR2665601A1 (en)
IL (1) IL98843A (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2049273A1 (en) * 1990-10-25 1992-04-26 Cindy E. Daniell Self adaptive hierarchical target identification and recognition neural network
ES2322120B1 (en) * 2007-10-26 2010-03-24 Consejo Superior De Investigaciones Cientificas METHOD AND SYSTEM FOR ANALYSIS OF SINGULARITIES IN DIGITAL SIGNS.
CN109359560A (en) * 2018-09-28 2019-02-19 武汉优品楚鼎科技有限公司 Chart recognition method, device and equipment based on deep learning neural network

Also Published As

Publication number Publication date
CA2047809A1 (en) 1992-02-01
EP0469986A1 (en) 1992-02-05
IL98843A0 (en) 1992-07-15
FR2665601A1 (en) 1992-02-07
AU641794B2 (en) 1993-09-30
AU8150091A (en) 1992-02-06
FR2665601B1 (en) 1997-02-28

Similar Documents

Publication Publication Date Title
Steger Extracting curvilinear structures: A differential geometric approach
Toet et al. Merging thermal and visual images by a contrast pyramid
Rodehorst et al. Comparison and evaluation of feature point detectors
Koschan A comparative study on color edge detection
Ziou et al. Edge detection techniques-an overview
Canny A Variational Approach to Edge Detection.
US5233670A (en) Method and device for the real-time localization of rectilinear contours in a digitized image, notably for shape recognition in scene analysis processing
CN108876723B (en) Method for constructing color background of gray target image
US20080166016A1 (en) Fast Method of Object Detection by Statistical Template Matching
O'Gorman et al. Matched filter design for fingerprint image enhancement.
Lacroix et al. Feature extraction using the constrained gradient
CN111914596B (en) Lane line detection method, device, system and storage medium
Zhu et al. Super-resolving commercial satellite imagery using realistic training data
KR101921608B1 (en) Apparatus and method for generating depth information
Cheon et al. A modified steering kernel filter for AWGN removal based on kernel similarity
AU641794B2 (en) Method and device for the characterization and localization in real time of singular features in a digitalized image, notably for the recognition of shapes in a scene analysis processing operation
CN115035281B (en) Rapid infrared panoramic image stitching method
Cumani et al. Image description of dynamic scenes
Bai Overview of image mosaic technology by computer vision and digital image processing
JPH11506847A (en) Visual identification method
Hu et al. Feature extraction and matching as signal detection
Nair et al. Single Image Dehazing Using Multi-Scale DCP-BCP Fusion
Ardö et al. Height Normalizing Image Transform for Efficient Scene Specific Pedestrian Detection
Subramanyam Feature based image mosaic using steerable filters and harris corner detector
Abadpour et al. Fast registration of remotely sensed images for earthquake damage estimation

Legal Events

Date Code Title Description
RH Patent void