AU641794B2 - Method and device for the characterization and localization in real time of singular features in a digitalized image, notably for the recognition of shapes in a scene analysis processing operation


Info

Publication number
AU641794B2
AU641794B2 (application number AU81500/91A; related publication AU8150091A)
Authority
AU
Australia
Prior art keywords
image
function
dots
attribute
gray level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU81500/91A
Other versions
AU8150091A (en)
Inventor
Jean-Yves Dufour
Hugues Waldburger
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Thomson TRT Defense
Original Assignee
Thomson TRT Defense
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson TRT Defense filed Critical Thomson TRT Defense
Publication of AU8150091A publication Critical patent/AU8150091A/en
Application granted granted Critical
Publication of AU641794B2 publication Critical patent/AU641794B2/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20164Salient point detection; Corner detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation
    • G06T2207/30184Infrastructure

Description

S F Ref: 186754
AUSTRALIA
PATENTS ACT 1990 COMPLETE SPECIFICATION FOR A STANDARD PATENT
ORIGINAL
Name and Address of Applicant: Thomson TRT Defense, rue Guynemer, 78280 Guyancourt, FRANCE
Actual Inventor(s): Jean-Yves Dufour and Hugues Waldburger
Address for Service: Spruson & Ferguson, Patent Attorneys, Level 33 St Martins Tower, 31 Market Street, Sydney, New South Wales, 2000, Australia
Invention Title: Method and Device for the Characterization and Localization in Real Time of Singular Features in a Digitalized Image, Notably for the Recognition of Shapes in a Scene Analysis Processing Operation
The following statement is a full description of this invention, including the best method of performing it known to me/us:-
METHOD AND DEVICE FOR THE CHARACTERIZATION AND LOCALIZATION IN REAL TIME OF SINGULAR FEATURES IN A DIGITALIZED IMAGE, NOTABLY FOR THE RECOGNITION OF SHAPES IN A SCENE ANALYSIS PROCESSING OPERATION BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a method and a device to characterize and localize certain singular features of a digitalized image in real time.
The invention shall be described chiefly in the context of an application to the analysis of a scene in which it is sought to recognize characteristic shapes corresponding to contours with high curvature, for example the corners of a polygonal contour or small-sized regions.
Once these singular features have been characterized (namely distinguished from the rest of the image, notably from the neighboring contour dots) and localized (namely referenced by their coordinates in the image), they will constitute "characteristic dots" corresponding to "primitives" of the processed image which could lend themselves to a certain number of processing operations such as, for example, image segmentation operations (wherein a region is enlarged from the "seed dots" formed by the characteristic dots recognized beforehand) or for applications making use of techniques for the placing of characteristic dots in correspondence with one another.
It is thus, for example, that when it is sought to localize man-made objects (roads, bridges, railways, canals etc.) in a natural environment, in infrared images given by a camera placed on board an aircraft, such objects when observed generally appear in a polygonal shape. As a typical application, we might cite path-correction operations in aircraft navigation.
This application to shape recognition does not, however, restrict the scope of the present invention, which can also be used for other applications necessitating the extraction of characteristic dots, for example stereovision applications, motion analysis etc.
2. Description of the Prior Art In general, if the image is considered to be a restriction to N² of a mathematical function F(x,y) with two variables and with real values, hereinafter called a "gray level function", the principle of the method consists in the preparation, from this function F (unprocessed image function), of another function enabling the reliable characterization and localization of the characteristic dots of the image. This other function shall be called the "attribute".
Indeed, it is necessary to use an attribute such as this for, if we considered only dots such that the value of the gray level function F(x,y) were to be extreme in a neighborhood of varying size, then the characteristic dots sought would have to correspond to simple peaks in the gray level of the image, and this would entail a particularly restrictive hypothesis.
Furthermore, such a test would be extremely sensitive to the noise spikes in the image.
Hence, when an attribute has been defined and when its value for all the pixels of the image has been computed (in the characterization step), the characteristic dots are found (in the localization step) by the application of a simple criterion to this attribute, for example a test of maximality in the entire image, in a sub-image (local test) or else in a neighborhood of a given central pixel.
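The two-step scheme just described (compute an attribute value for every pixel, then apply a maximality test) can be sketched as follows. This is an illustrative sketch only: the 3 x 3 neighborhood, the threshold and the toy attribute array are assumptions, not the patent's circuitry.

```python
import numpy as np

def local_maxima(attribute, threshold=0.0):
    """Localization step: keep pixels whose attribute value is a strict
    maximum over their 3x3 neighborhood and exceeds a threshold."""
    a = np.asarray(attribute, dtype=float)
    # Pad with -inf so border pixels only compete with real neighbors.
    p = np.pad(a, 1, mode="constant", constant_values=-np.inf)
    shifted = [p[di:di + a.shape[0], dj:dj + a.shape[1]]
               for di in range(3) for dj in range(3)
               if (di, dj) != (1, 1)]
    return (a > np.max(shifted, axis=0)) & (a > threshold)

# Toy attribute image with a single sharp response at (2, 3).
attr = np.zeros((5, 6))
attr[2, 3] = 5.0
mask = local_maxima(attr, threshold=1.0)
```

The same routine covers all three variants mentioned above: run it on the whole image, on a sub-image, or on the neighborhood of a given central pixel.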
One of the first types of attribute proposed has been a statistical attribute, namely an attribute constituted by an operator carrying out, at each dot of the image, an estimation of the local variance in the oriented neighborhoods centered on the dot processed.
In this respect, reference may be made to the work by H.P. Moravec, developed in "Towards Automatic Visual Obstacle Avoidance", in Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 1977, and by M.J. Hannah in "Bootstrap Stereo", in Proceedings of the Image Understanding Workshop, DARPA Conference, 1980. These techniques are characterized by a volume of computations that is very costly in terms of physical implementation.
Another approach consists in the use of an attribute that is no longer statistical but differential: first of all, an approximation is made of the gradient of the gray level function of the image (i.e. the vector formed at each dot by the partial derivatives of the function F is computed), then the vector function thus obtained is analyzed.
Thus, for each pixel, a complex value (in other terms, a vector) will be defined, said vector containing, for each dot, two information elements, namely: a measurement of the local transition of the gray levels in the vicinity of this dot, represented by the norm G(x,y) of the gradient vector, and an estimation of the direction in which this transition is made, represented by the argument of the gradient vector. Should a contour be effectively present in the neighborhood of this dot, this gradient direction will be perpendicular to the direction of the contour.
It is then possible, from the measurement of the local transition of the gray level, to extract only the contour dots, i.e. to keep only the dots corresponding to local maxima of this function (maxima in a given neighborhood V, for example a 3 x 3 neighborhood), thus keeping only the "peak lines" of the gray level function of the image.
This condition may be formulated as follows: a given dot M(x,y) is a contour dot if and only if the following relationship is verified: G(x,y) ≥ G(x',y') for every dot (x',y') belonging to V(x,y) ∩ D(x,y), D(x,y) designating the straight line passing through M(x,y) and oriented along the direction of the gradient, and V(x,y) designating the given neighborhood of M(x,y). The gradient information cannot, however, be used directly for the search for the characteristic dots.
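The condition just stated — keep a dot only when its gradient norm is maximal among the neighbors lying on the line D(x,y) oriented along the gradient — is in essence a non-maximum suppression. A minimal sketch under simplifying assumptions (3 x 3 neighborhood, gradient direction quantized to the four principal orientations; none of these discretization choices comes from the patent):

```python
import numpy as np

def nonmax_suppress(norm, angle):
    """Keep only gradient-norm maxima along the gradient direction.

    norm  : gradient magnitude G(x, y)
    angle : gradient argument in radians
    The direction is quantized to 0, 45, 90 or 135 degrees, so the two
    neighbors compared lie on the discrete line D(x, y)."""
    h, w = norm.shape
    out = np.zeros_like(norm)
    # Offsets of the two neighbors along each quantized direction.
    offsets = {0: ((0, 1), (0, -1)), 45: ((-1, 1), (1, -1)),
               90: ((-1, 0), (1, 0)), 135: ((-1, -1), (1, 1))}
    q = (np.round(np.degrees(angle) / 45.0).astype(int) * 45) % 180
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            (di1, dj1), (di2, dj2) = offsets[q[i, j]]
            if norm[i, j] >= norm[i + di1, j + dj1] and \
               norm[i, j] >= norm[i + di2, j + dj2]:
                out[i, j] = norm[i, j]
    return out

# Vertical edge: gradient points along x (angle 0), ridge at column 2.
g = np.array([[0, 1, 3, 1, 0]] * 5, dtype=float)
theta = np.zeros_like(g)
thin = nonmax_suppress(g, theta)
```

As the next paragraph notes, this test only thins the contour; every dot it keeps is still a contour dot, which is exactly why it cannot by itself isolate the characteristic dots.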
Indeed, a dot for which the gradient amplitude is locally the maximum is necessarily recognized as being a contour dot, and it is therefore difficult to distinguish it from its neighbors located on the same contour.
Certain techniques have been proposed to make a search, after the contour dots have been thus detected, for those dots for which the variation in the direction of the gradient is locally the maximum (angle, turning-back point etc.). Such techniques are notably described by P.R. Beaudet in "Rotationally Invariant Image Operators", in Proceedings of the International Joint Conference on Pattern Recognition (IJCPR), 1978; L. Kitchen and A. Rosenfeld in "Gray Level Corner Detection", in Pattern Recognition Letters, Volume 1, 1982; D.A. Zuniga and R. Haralick in "Corner Detection Using the Facet Model", in Proceedings of the IEEE Conference on Vision and Pattern Recognition (CVPR), 1983; and R.L. Dreschler and H.H. Nagel in "On the Selection of Critical Points and Local Curvature Extrema of Region Boundaries for Interframe Matching", in Image Sequence Processing and Dynamic Scene Analysis, T.S. Huang ed., NATO ASI Series, Volume F2, Springer Verlag, 1983.
These techniques are, however, relatively unwieldy to implement for they are all based on the use and combination of the first order and second order partial derivatives of the gray level function of the image; it is therefore necessary to know (and hence to compute) these derivatives for all the pixels of the image.
Another technique of analysis has been proposed by J.Y. Dufour and H. Waldburger in "Recalage d'images par association de primitives, construction et analyses d'histogrammes multidimensionnels" (Resetting of Images by Association of Primitives, Construction and Analyses of Multidimensional Histograms), in Actes du Congrès GRETSI, Antibes, 1989.
In this technique, a search is made for dots maximizing a criterion of local radiality of the gradient vector field (which amounts more or less to searching for the centers of curvature of the contour at the positions where this curvature is the greatest). It will be noted that these dots that are searched for, which are at the intersection of the supports of the vectors of their neighborhoods, are dots close to a contour but not located on it.
This latter technique, like the preceding ones, is characterized however by a costly layout in terms of the number of circuits to be used.
SUMMARY OF THE INVENTION One of the aims of the invention is to propose a particular processing operation, with an adapted simplified architecture, enabling the real-time performance of this processing operation for the characterization and localization of the singular features of a digitalized image.
It will be seen, in particular, that the processing operations done all have a local character, namely that the analysis of a given pixel is done exclusively as a function of the pixels located around it in a limited neighborhood. This makes it possible to achieve a relative simplicity of the processing to be done, unlike in prior art methods which generally make it necessary to consider the totality of the image, thus making it necessary to provide for an architecture that is relatively complex and costly in terms of circuits (large-capacity frame memories, large volume of computations etc.).
The detailed description of the invention will also highlight the adaptive character of the processing operations with respect to the content of the image: this feature will notably enable the quality of these processing operations to be improved.
To this effect, the present invention proposes a method to characterize and localize the characteristic dots of a digitalized image, notably for the recognition of shapes in a scene analysis processing operation, these characteristic dots being dots of contours with high curvature such as corners or small-sized regions, said image being formed by a two-dimensional frame of pixels, each having a determined gray level, wherein said method comprises the steps of: the approximating, for each pixel to be analyzed, of the second order partial derivatives of the gray level function of the image; the determining, from these derivatives, of an attribute (λ₂) representing the characteristic sought and the assigning of this attribute to the pixel thus analyzed; and the searching, from among the pixels of the image thus analyzed, for the dots maximizing said attribute.
Very advantageously, said approximation is done by convoluting the gray level function of the image with functions corresponding to the second order partial derivatives of a smoothing function, this smoothing function enabling the noise present in the image to be attenuated.
Another object of the invention is a device that is constituted by means enabling these functions to be implemented, and in which the characterization and localization of the characteristic dots are then advantageously done in real time at the video rate.
Advantageously, said attribute is the second inherent value of the matrix of said partial derivatives:
λ₂(x,y) = (1/2)·[Fxx + Fyy − ((Fxx − Fyy)² + 4·Fxy²)^(1/2)]
with Fxx(x,y) = ∂²F/∂x², Fyy(x,y) = ∂²F/∂y², and Fxy(x,y) = ∂²F/∂x∂y.
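The second inherent value above is simply the smaller eigenvalue of the symmetric 2 x 2 matrix of second derivatives. As a sanity check (my own illustration, not part of the patent), the closed form can be compared against a general eigensolver on arbitrary test values:

```python
import numpy as np

def lambda2(fxx, fyy, fxy):
    """Smaller eigenvalue of the symmetric matrix [[Fxx, Fxy], [Fxy, Fyy]],
    computed with the closed form for a 2x2 symmetric matrix."""
    return 0.5 * (fxx + fyy - np.sqrt((fxx - fyy) ** 2 + 4.0 * fxy ** 2))

# Arbitrary test values for the three second derivatives.
fxx, fyy, fxy = 2.0, -1.0, 0.5
closed_form = lambda2(fxx, fyy, fxy)
# eigvalsh returns eigenvalues in ascending order; [0] is the smaller one.
general = np.linalg.eigvalsh(np.array([[fxx, fxy], [fxy, fyy]]))[0]
```

The closed form matters for the hardware: it needs only three convolution results per pixel and no iterative eigensolver.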
Also advantageously, said smoothing function is a Gaussian function, notably a centered Gaussian function:
G(x,y) = (2π·|Σ|^(1/2))^(−1) · exp[−(1/2)·(x,y)·Σ^(−1)·(x,y)^t]
where Σ is the matrix of covariance of the Gaussian function.
BRIEF DESCRIPTION OF THE DRAWINGS An embodiment of the invention shall now be described with reference to the appended drawings.
Figure 1 is a block diagram illustrating a preferred architecture enabling the real-time implementation of the processing method of the invention; Figure 2 gives a schematic illustration of a first embodiment of the convolution circuit of figure 1.
Figure 3 gives a schematic illustration of a second, more elaborate embodiment of the convolution circuit of figure 1.
Figure 4 gives a schematic illustration of an embodiment of the first function circuit of figure 1.
Figure 5 gives a schematic illustration of an embodiment of the second function circuit of figure 1.
DESCRIPTION OF THE PREFERRED EMBODIMENT General Presentation of the Processing Method The method consists in processing an unprocessed image formed by a two-dimensional frame of pixels each having a determined gray level, for example the image delivered by a video camera such as an air-ground infrared camera on board an aircraft.
From this unprocessed image, first of all, for each pixel, the second order derivatives of the gray level function F(x,y) of the analyzed image (or an approximation of these derivatives) shall be determined. It is assumed, naturally, that the gray level function F(x,y) can be twice differentiated.
These second derivatives may be presented in matrix form (Hessian transform of the function):
H(x,y) = ( Fxx(x,y)  Fxy(x,y) )
         ( Fxy(x,y)  Fyy(x,y) )
the notations used being the following: Fxx(x,y) = ∂²F/∂x², Fyy(x,y) = ∂²F/∂y², Fxy(x,y) = ∂²F/∂x∂y.
The idea that forms the starting point of this invention relates to the use of an attribute based on the approximation, at each point, of the function constituted by the second inherent value of the Hessian transform H(x,y) of the function F(x,y), which is:
λ₂(x,y) = (1/2)·[Fxx + Fyy − ((Fxx − Fyy)² + 4·Fxy²)^(1/2)]
The choice of the second inherent value corresponds to a placing in a direction orthogonal to that where Fxx is the maximum, i.e. in a relative reference position where the variation in x is the most marked.
The direct approximation of the partial derivatives of the image function proves, however, to be most usually insufficient to obtain satisfactory results in the applications usually envisaged.
Indeed, apart from the high sensitivity to noise of such operators, the neighborhood used for the approximation is highly limited, and the information extracted therefore has an excessively marked local character.
To overcome this drawback, the present invention proposes operating not on the original image (function F) but on a modified image (function F') obtained by convoluting the original image with an appropriate "smoothing" function G(x,y).
There will thus be: F'(x,y) = (F * G)(x,y). The smoothing function should naturally be positive, capable of being integrated and at least twice continuously differentiable.
As a general rule it is possible to adopt, for example, for this function G, a centered Gaussian function:
G(x,y) = (2π·|Σ|^(1/2))^(−1) · exp[−(1/2)·(x,y)·Σ^(−1)·(x,y)^t]
Σ being the matrix of covariance of this function (which may or may not be diagonal).
It will be noted that this type of function does not restrict the scope of the invention and that it is also possible, for the same purpose, to use other smoothing functions which may or may not be Gaussian functions, and isotropic or non-isotropic Gaussian functions.
If the function G is appropriately chosen (as is the case with the above Gaussian function), the convolution operation brings two advantages.
First of all, it "smoothens" the image, i.e. it attenuates the noise and rounds out the angular contours.
Secondly, it facilitates the derivation operations (which are very cumbersome from the viewpoint of the volume of computations, hence of the complexity of the circuits to be used); indeed, it may be observed that: ∂ⁿ/∂xⁱ∂yⁿ⁻ⁱ (F * G) = F * (∂ⁿ/∂xⁱ∂yⁿ⁻ⁱ G).
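This exchange of derivation and convolution is what lets all the derivation be folded into fixed convolution masks. In the discrete setting the analogous identity holds exactly, because discrete convolution is associative: differencing the smoothed signal equals convolving with the differenced kernel. A small 1-D numerical check (the sampled Gaussian support and the two-point difference kernel are my own stand-ins, not the patent's masks):

```python
import numpy as np

# Sampled 1-D Gaussian kernel (assumed unit sigma and 9-sample support).
x = np.arange(-4, 5, dtype=float)
g = np.exp(-0.5 * x ** 2)
g /= g.sum()

d = np.array([1.0, -1.0])               # discrete derivative kernel
f = np.sin(np.linspace(0.0, 3.0, 50))   # stand-in image line

# Derivative of the smoothed signal...
lhs = np.convolve(np.convolve(f, g), d)
# ...equals the signal convolved with the derivative of the kernel.
rhs = np.convolve(f, np.convolve(g, d))
```

In hardware terms, the right-hand side is the cheap option: the derivative-of-Gaussian kernel is computed once, offline, and the image is convolved a single time per mask.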
The function λ'₂ that will be used as an attribute will then be defined by:
λ'₂(x,y) = (1/2)·[|F * LG| − ((F * T1G)² + (F * T2G)²)^(1/2)]
with the following notations: LG = Gxx + Gyy (namely the Laplace operator of the function G), T1G = Gxx − Gyy, T2G = 2·Gxy. With an attribute such as this, the dots for which the absolute value of λ'₂ is high are dots located in the neighborhood of a contour of the image F' having a curvature.
Furthermore, the absolute value of λ'₂ is all the greater as the local contrast is high and as the curvature is accentuated, which typically corresponds to the properties of the characteristic dots usually sought.
Furthermore, the dots such that λ'₂ is positive are located in the concavity of the local curvature, and the presence of a corner dot of the contour is characterized, if its curvature is sufficiently accentuated, by the presence of a local maximum of λ'₂ at a position close to that of the center of the curvature, or the center of the region if the contour demarcates a sufficiently small-sized region.
It is thus seen that the characteristic dot determined through this attribute is close to the contour but external to it, which advantageously makes it possible to distinguish it from the contour dots proper. After processing, therefore, the same image may preserve both the contour dots and the characteristic dots, these two types of dots being separate. This attribute is therefore perfectly suited to the characterization and to the pinpoint localization of corners and small-sized regions in an image.
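Putting the pieces together, the attribute requires only three convolutions of the image against the masks LG, T1G and T2G. The sketch below derives those masks analytically from a sampled Gaussian (sigma = 1.5 and a 9 x 9 support are my own assumptions — the patent does not give numerical masks) and evaluates the attribute on a synthetic bright square:

```python
import numpy as np

def conv2_same(img, ker):
    """Direct 2-D convolution, 'same' output size, zero padding."""
    kh, kw = ker.shape
    fk = ker[::-1, ::-1]                 # flip the kernel: true convolution
    p = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros(img.shape)
    for i in range(kh):
        for j in range(kw):
            out += fk[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

# Sampled centered Gaussian and its exact second derivatives.
sigma = 1.5
r = np.arange(-4, 5, dtype=float)
X, Y = np.meshgrid(r, r, indexing="ij")
G = np.exp(-(X ** 2 + Y ** 2) / (2 * sigma ** 2))
G /= G.sum()
Gxx = ((X ** 2 - sigma ** 2) / sigma ** 4) * G
Gyy = ((Y ** 2 - sigma ** 2) / sigma ** 4) * G
Gxy = (X * Y / sigma ** 4) * G
LG, T1G, T2G = Gxx + Gyy, Gxx - Gyy, 2.0 * Gxy

# Synthetic image: a small bright square on a dark background.
img = np.zeros((21, 21))
img[7:14, 7:14] = 1.0

L = conv2_same(img, LG)
T1 = conv2_same(img, T1G)
T2 = conv2_same(img, T2G)
attr = 0.5 * (np.abs(L) - np.sqrt(T1 ** 2 + T2 ** 2))   # lambda'_2
```

The resulting attribute field inherits the symmetries of the square, and feeding it to a local maximality test yields the characteristic dots.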
Architecture for the Real-Time Implementation of the Processing Method Referring to the block diagrams of the figures, a description shall now be given of a circuit architecture capable of carrying out the processing operations of the above-described method in real time.
Indeed, the formulations of the processing operations explained further above have been chosen in order to enable this processing in real time, in taking account of the very high rates, which may go up to 20 MHz for the pixel rate.
This architecture, which is shown schematically in its totality in figure 1, essentially has three blocks.
The first block, referenced 10, carries out the following three convolutions on the basis of the gray level function received at input at the video image rate: L(x,y) = (F * LG)(x,y), T1(x,y) = (F * T1G)(x,y) and T2(x,y) = (F * T2G)(x,y). The second block, referenced 20, is a block enabling the computation, from two values x and y at input, of the quadratic mean (x² + y²)^(1/2) of these two values; these two values x and y shall herein be the convolution results T1 and T2 delivered by the convolution circuit 10. The third function block, referenced 30, is a block enabling the computation, from two values x and y, of the term |x| − a·y, a being a constant. These two terms x and y shall herein be the convolution result L delivered by the circuit 10 and the quadratic mean (T1² + T2²)^(1/2) delivered by the first function circuit 20. The result delivered by this circuit 30 is therefore: |L| − a·[(T1² + T2²)^(1/2)], namely the attribute λ'₂ explained further above.
We shall now describe each of the blocks in detail.
The block 10 carrying out the convolution has been shown in figures 2 and 3 in two different forms.
This circuit, in either of its forms, is made from two universal VLSI circuits developed within the framework of the European program EUREKA ("MIP" project; No. EU34: Modular Image Processing), namely the Video Memory Circuit (memory function) and the Linear Filter Circuit (linear filter function).
The Video Memory MIP circuit is a memory circuit designed for the organization of the data for the Linear Filter MIP Circuit. It enables the memorizing of four video lines of 1024 pixels each (maximum size), each pixel being capable of being coded on eight gray level bits. Thus, at the video cadence, it can deliver a column of five pixels (the four pixels stored plus the current pixel) to a linear filter placed downline.
The length of the lines can be programmed by external command, with a maximum size of 1024 pixels.
The Linear Filter MIP Circuit, for its part, is a dedicated circuit that can be used to carry out the convolution of an image E with a mask K according to the relationship: C(n,m) = Σ E(n+i, m+j)·K(i,j). This circuit has the following functional characteristics: processing neighborhood: 5 x 5; two possible modes of operation: "real" mode (single convolution) and "complex" mode (two simultaneous convolutions with two different masks); programmable video format (line return, frame return); maximum video rate: 20 MHz; input: five 8-bit pixels; output: 16 bits (in real mode) or 24 bits (2 x 12 bits in complex mode); possibility of integrated post-processing operations: the adjusting of the outputs by a transformation of the type S(n,m) = a·C(n,m)·2^b + c, with a, b and c programmable; thresholding: the values below a given threshold may be forced to zero, the values above this threshold being kept in their state, or forced to 1, depending on the thresholding mode; computation of histogram; and search for the minimum and for the maximum on the result values.
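The relationship C(n,m) = Σ E(n+i, m+j)·K(i,j) can be sketched in software as below. Note the index convention and border handling are assumptions: the text does not state whether (i,j) run over a centered range or how the circuit handles pixels outside the image, so the sketch takes a centered 5 x 5 mask and zero padding.

```python
import numpy as np

def linear_filter_5x5(e, k):
    """C(n, m) = sum over (i, j) of E(n+i, m+j) * K(i, j), with the 5x5
    mask taken as centered (offsets -2..2) and zero padding assumed."""
    assert k.shape == (5, 5)
    p = np.pad(np.asarray(e, dtype=float), 2)
    out = np.zeros(e.shape)
    for i in range(5):
        for j in range(5):
            # k[i, j] multiplies the neighbor at offset (i-2, j-2).
            out += k[i, j] * p[i:i + e.shape[0], j:j + e.shape[1]]
    return out

# A centered identity mask leaves the image unchanged.
ident = np.zeros((5, 5))
ident[2, 2] = 1.0
img = np.arange(12, dtype=float).reshape(3, 4)
filtered = linear_filter_5x5(img, ident)
```

The "complex" mode amounts to running two such passes with different masks on the same 5-pixel columns — which is exactly how the filter 13 below produces T1 and T2 simultaneously.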
In a first architecture envisaged for the implementation of the invention, illustrated in figure 2, the convolution circuit 10 has a video memory 11 supplying two linear filters 12 and 13 in parallel.
The first linear filter 12 works in real mode and enables the computation of the Laplacian L. The linear filter 13 works in complex mode (two simultaneous convolutions with different masks) and delivers T1 and T2 at output in parallel. The coefficients of the masks are applied to the linear filters by an external command (not shown) at the same time as the other commands for the parametrization of this circuit.
This architecture is relatively simple from the viewpoint of the number of circuits used, but it may be noted that its use is highly restricted owing to the reduced size (5 x 5) of the convolution cores that may be used.
In the same way, two groups of two linear filters 12, 12' are placed in cascade (computation of the 10 Laplacian) and 13, 13' (computation of T and T 1 2 The two function blocks 20 and 30, illustrated separately in figures 4 and 5, have a similar architecture.
The non-linear operations which they imply may be carried out entirely by two Function Module type MIP circuits 21, 31, associated with respective RAMs 22, 32. For, the MIP function module enables the approximation of any two-variable continuous function to the video rate.
To this effect, the RAM that is associated with it contains the values of the function on a s'mpling of o* dots (X Y withO i r I and 0 S j S J. The i j function module determines the value of the function for Y) by a bilinear or linear interpolation.
The following are its characteristics: the storage of the values of the function on a 128 x 128 grid, maximum video rate: 20 MHz, 4 4 19 inputs: 2 x 12 bits, output: 12 bits.
External command signals (not shown) enable the loading, into the respective RAMs 22, 32, of the values of the non-linear function to be carried out 2 2 1/2 (x +y in the case of the circuit 20, function !xl-a.y in the case of the circuit @Ge

Claims (9)

1. A method to characterize and localize in real time characteristic dots of a digitalized image, notably for the recognition of shapes in a scene analysis processing operation, these characteristic dots being dots of contours with high curvature such as corners or small-sized regions, said image being formed by a two-dimensional frame of pixels, each having a determined gray level, wherein said method comprises the steps of: the approximating, for each pixel to be analyzed, of second order partial derivatives of a gray level function of the image; the determining, from these derivatives, of an attribute (λ₂) representing a characteristic sought and the assigning of this attribute to the pixel thus analyzed; and the searching, from among the pixels of the image thus analyzed, for the dots maximizing said attribute.
2. The method of claim 1, wherein said approximation is done by convoluting the gray level function of the image with functions corresponding to the second order partial derivatives of a smoothing function, this smoothing function enabling noise present in the image to be attenuated.
3. The method of claim 1, wherein said attribute (λ₂) is the second inherent value of the matrix of said second order derivatives:
λ₂(x,y) = (1/2)·[Fxx + Fyy − ((Fxx − Fyy)² + 4·Fxy²)^(1/2)]
with Fxx(x,y) = ∂²F/∂x², Fyy(x,y) = ∂²F/∂y², and Fxy(x,y) = ∂²F/∂x∂y.
4. The method of claim 2, wherein said smoothing function is a Gaussian function.
5. The method of claim 2, wherein said smoothing function is a centered Gaussian function:
G(x,y) = (2π·|Σ|^(1/2))^(−1) · exp[−(1/2)·(x,y)·Σ^(−1)·(x,y)^t]
Σ being the matrix of covariance of the Gaussian function.
6. A device to characterize and localize characteristic dots of a digitalized image, notably for the recognition of shapes in a scene analysis processing operation, these characteristic dots being dots of contours with high curvature such as corners or small-sized regions, said image being formed by a two-dimensional frame of pixels, each having a determined gray level, wherein said device comprises: derivation means carrying out the approximation, for each pixel to be analyzed, of the second order partial derivatives of a gray level function of the image; characterizing means carrying out the determination, from these derivatives, of an attribute (λ₂) representing a characteristic sought and the assigning of this attribute to the pixel thus analyzed; and localizing means carrying out a search, from among the pixels of the image thus analyzed, for the dots maximizing said attribute.
7. The device of claim 6, wherein the derivation means include convolution means carrying out a convolution of the gray level function of the image with functions corresponding to the second order partial derivatives of a smoothing function, this smoothing function enabling noise present in the image to be attenuated.
8. The device of either of the claims 6 or 7, wherein the characterization and localization of characteristic dots are done in real time at the video rate.
9. A method substantially as herein described and as shown in the accompanying drawings.
10. A device substantially as herein described and as shown in the accompanying drawings.
DATED this TWENTY-FIRST day of JULY 1993
Thomson TRT Defense
Patent Attorneys for the Applicant
SPRUSON & FERGUSON
Method and Device for the Characterization and Localization in Real Time of Singular Features in a Digitalized Image, Notably for the Recognition of Shapes in a Scene Analysis Processing Operation
Abstract of the Disclosure
The singularities constituting the characteristic dots of the image are dots of contours with high curvature such as corners or small-sized regions, said image being formed by a two-dimensional frame of pixels, each having a determined gray level. The device comprises: derivation means (10) carrying out the approximation, for each pixel to be analyzed, of the second order partial derivatives of the gray level function of the image; characterizing means (20) carrying out the determination, from these derivatives, of an attribute representing the characteristic sought and the assigning of this attribute to the pixel thus analyzed; and localizing means (30) carrying out a search, from among the pixels of the image thus analyzed, for the dots maximizing said attribute. Very advantageously, the derivation means (10) include convolution means (11, 12, 13) carrying out a convolution of the gray level function of the image with functions corresponding to the second order partial derivatives of a smoothing function, this smoothing function enabling the noise present in the image to be attenuated. Preferably, the attribute chosen is the second inherent value of the matrix of second order derivatives and the smoothing function is a Gaussian function, notably a centered Gaussian function.
Figure 1
AU81500/91A 1990-07-31 1991-07-30 Method and device for the characterization and localization in real time of singular features in a digitalized image, notably for the recognition of shapes in a scene analysis processing operation Ceased AU641794B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR9009743 1990-07-31
FR9009743A FR2665601A1 (en) 1990-07-31 1990-07-31 METHOD AND DEVICE FOR REAL-TIME CHARACTERIZATION AND LOCALIZATION OF SINGULARITES OF A DIGITIZED IMAGE, IN PARTICULAR FOR THE RECOGNITION OF FORMS IN SCENE ANALYSIS PROCESSING

Publications (2)

Publication Number Publication Date
AU8150091A AU8150091A (en) 1992-02-06
AU641794B2 true AU641794B2 (en) 1993-09-30

Family

ID=9399260

Family Applications (1)

Application Number Title Priority Date Filing Date
AU81500/91A Ceased AU641794B2 (en) 1990-07-31 1991-07-30 Method and device for the characterization and localization in real time of singular features in a digitalized image, notably for the recognition of shapes in a scene analysis processing operation

Country Status (5)

Country Link
EP (1) EP0469986A1 (en)
AU (1) AU641794B2 (en)
CA (1) CA2047809A1 (en)
FR (1) FR2665601A1 (en)
IL (1) IL98843A (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2049273A1 (en) * 1990-10-25 1992-04-26 Cindy E. Daniell Self adaptive hierarchical target identification and recognition neural network
ES2322120B1 (en) * 2007-10-26 2010-03-24 Consejo Superior De Investigaciones Cientificas METHOD AND SYSTEM FOR ANALYSIS OF SINGULARITIES IN DIGITAL SIGNS.
CN109359560A (en) * 2018-09-28 2019-02-19 武汉优品楚鼎科技有限公司 Chart recognition method, device and equipment based on deep learning neural network

Also Published As

Publication number Publication date
CA2047809A1 (en) 1992-02-01
EP0469986A1 (en) 1992-02-05
IL98843A0 (en) 1992-07-15
IL98843A (en) 1994-01-25
FR2665601A1 (en) 1992-02-07
AU8150091A (en) 1992-02-06
FR2665601B1 (en) 1997-02-28

Similar Documents

Publication Publication Date Title
CN111063021B (en) Method and device for establishing three-dimensional reconstruction model of space moving target
CN109615611B (en) Inspection image-based insulator self-explosion defect detection method
Rodehorst et al. Comparison and evaluation of feature point detectors
Canny A Variational Approach to Edge Detection.
US5247583A (en) Image segmentation method and apparatus therefor
US5233670A (en) Method and device for the real-time localization of rectilinear contours in a digitized image, notably for shape recognition in scene analysis processing
CN111738995A (en) RGBD image-based target detection method and device and computer equipment
CN111553869B (en) Method for complementing generated confrontation network image under space-based view angle
US5173946A (en) Corner-based image matching
AU641794B2 (en) Method and device for the characterization and localization in real time of singular features in a digitalized image, notably for the recognition of shapes in a scene analysis processing operation
CN108109125A (en) Information extracting method and device based on remote sensing images
Treible et al. Learning dense stereo matching for digital surface models from satellite imagery
CN112116561B (en) Power grid transmission line detection method and device based on image processing fusion network weight
Tripodi et al. Automated chain for large-scale 3d reconstruction of urban scenes from satellite images
CN113724273A (en) Edge light and shadow fusion method based on neural network regional target segmentation
Bai Overview of image mosaic technology by computer vision and digital image processing
Gapon et al. Defect detection and removal for depth map quality enhancement in manufacturing with deep learning
Onmek et al. Evaluation of underwater 3D reconstruction methods for Archaeological Objects: Case study of Anchor at Mediterranean Sea
Schenker et al. Fast Adaptive Algorithms For Low-Level Scene Analysis: Applications Of Polar Exponential Grid (PEG) Representation To High-Speed, Scale-And-Rotation Invariant Target Segmentation
CN112884664B (en) Image processing method, device, electronic equipment and storage medium
Ardö et al. Height Normalizing Image Transform for Efficient Scene Specific Pedestrian Detection
Nair et al. Single Image Dehazing Using Multi-Scale DCP-BCP Fusion
Subramanyam Feature based image mosaic using steerable filters and harris corner detector
Abidi et al. Cloud motion measurement from radar image sequences
Griffiths et al. Adapting CNNs for Fisheye Cameras without Retraining