WO2001069286A2 - System and method for data analysis of x-ray images - Google Patents

System and method for data analysis of x-ray images Download PDF

Info

Publication number
WO2001069286A2
WO2001069286A2 PCT/US2001/008692
Authority
WO
WIPO (PCT)
Prior art keywords
wavelet
image
dimensional
scale
circular symmetric
Prior art date
Application number
PCT/US2001/008692
Other languages
French (fr)
Other versions
WO2001069286A9 (en)
WO2001069286A3 (en)
Inventor
Paul Patti
Jinquan Li
Raghuveer M. Rao
Original Assignee
Analysis & Simulation, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Analysis & Simulation, Inc. filed Critical Analysis & Simulation, Inc.
Priority to AU2001269679A priority Critical patent/AU2001269679A1/en
Priority to US10/221,879 priority patent/US20040022436A1/en
Publication of WO2001069286A2 publication Critical patent/WO2001069286A2/en
Publication of WO2001069286A3 publication Critical patent/WO2001069286A3/en
Publication of WO2001069286A9 publication Critical patent/WO2001069286A9/en

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/52 Scale-space analysis, e.g. wavelet analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V10/7515 Shifting the patterns to accommodate for positional errors

Definitions

  • This invention relates to computerized image processing, and more specifically to the matching of computerized images
  • Wavelet transforms use various high pass and low pass filters to filter out either high frequency or low frequency portions of the signal
  • the signal is a row or column of pixels
  • the transform of the image is the combined transform of the rows and columns. This procedure is repeated, and each time some portion of the signal, corresponding to certain frequencies, is removed from the signal
  • a signal has frequencies up to 1000 Hz
  • in the first stage it is split into two parts by passing the signal through a high pass and a low pass filter, which results in two different versions of the same signal: the portion corresponding to 0-500 Hz (the low pass portion) and the portion corresponding to 500-1000 Hz (the high pass portion)
  • the process may continue like this until the signal is decomposed to a predefined level. That provides a collection of signals which all represent the same signal, but correspond to different frequency bands.
  • a complete transform is usually not necessary.
  • the image signals are bandlimited, and therefore, computation of the transform for a limited interval of scales is usually adequate.
  • the one-dimensional wavelet transform above described is easily extended to the two-dimensional wavelet transform.
  • the two-dimensional wavelet transform is useful for image analysis.
  • Conventional 2-dimensional wavelet transforms apply a one- dimensional wavelet transform to each row of an image. Then the same transform is applied to each column. However, the one-dimensional wavelet transform is performed sequentially level by level.
  • a wavelet matching method fits a mother wavelet to a signal of interest, such that: (1) the wavelet generates an orthogonal multi-resolution analysis (MRA) and (2) the wavelet is as close as possible to the given signal in the sense of minimizing the mean squared errors between their squared amplitude spectra and their group delay spectra respectively. This generates a wavelet that is close to the given signal in shape.
  • MRA orthogonal multi-resolution analysis
  • the invention provides a two dimensional smooth circular symmetric wavelet that is rotatable.
  • the wavelet is a modification of the Mexican hat wavelet.
  • the wavelet is used in computer imaging systems and programs to process images.
  • Reference images of a target, for example x-ray images of an apple, are processed to provide one or more filter wavelets that are matched to the reference image(s).
  • Each filter wavelet may be chosen for its desired characteristic.
  • the wavelet that has the highest coefficient of convolution provides the most likely filter wavelet for use in detecting images of the object in target images.
  • the target images may be photographs or x-ray images.
  • the wavelet operates on the image to recognize portions of the image that correlate to the filter wavelet. As such, the target image may include a number of images of the target object and other objects.
  • the invention is implemented in a computer that matches a target image signal to one or more known or reference image signals.
  • the computer has a processor and a main memory connected to the processor.
  • the memory holds programs that the processor executes and stores data relevant to the operation of the computer and the computer programs.
  • the computer has an output display subsystem connected to the processor and data entry subsystem connected to the processor.
  • An input device such as a keyboard, mouse or scanner enters commands and data.
  • a software program stored in the memory operates the computer to perform a number of steps for creating a matched wavelet of an object image and for processing a target image to detect the target object in the target image.
  • the program begins by matching a plurality of two dimensional circular symmetric daughter wavelets to an original pixel image to produce a filter matched wavelet that operates on target images.
  • Fig. 4 illustrates a wavelet matching procedure.
  • Fig. 5. is a flowchart of a general algorithm for wavelet matching.
  • Fig. 7 is a flowchart of a shrink-convolution-restore algorithm.
  • Fig. 14a is a matching wavelet filter for an apple.
  • This 2-D circular symmetric wavelet function is defined as:
  • the central aim of each loop is to find the maximum coefficient and its location.
  • the flowchart of Fig. 6 shows the steps in finding the maximum coefficient M and the coordinates (x, y) where the value M is taken.
  • the basic approach to calculating the 2D wavelet function and performing the wavelet transform requires substantial computing time and computer memory. It was desired to have one or more methods for reducing the time and computing capacity for a wavelet transform without substantially sacrificing accuracy.
  • the invention provides several methods for quickly calculating the maximum coefficient M.
  • the invention's matching wavelet possesses two important special properties: its behavior under dilation and rotation. These properties make the matching method more useful and efficient in the optimal detection of a signal when in the presence of signal dilations and noise.
  • Fig. 9 shows a sample result of matching an object (a "pear").
  • the result of matching is a sequence of coefficients, including the coordinates, the values, and their dilations. Once this information is calculated, the pattern image is expressed as the sum of circular wavelet units.
  • the invention varies the matched result by simply making changes to the coefficients. One such change is dilation.
  • Fig. 10 shows the above matching result being dilated in 2 scales, and the enlarged (shrunk) images are placed below each dilation for comparison.
Behavior Under Rotation
  • the result of matching has the property of rotation invariance which makes it possible that only one angular position per object is needed to be calculated through the matching process, and the matched wavelet at other angles can be quickly obtained by simply rotating the one matched result to the right angle.
  • Fig. 11 shows the procedure of matching the pear at varying rotation angles.
  • the invention's matching algorithm has been applied to many different images to extract features about edge, shape, and texture: x-ray images of fruit, luggage, mechanical parts, and CT slices, as well as samples of ultrasonic images and thermal images.
  • Fig. 12 shows part of a mechanical fuse, with a spring cut out as the pattern to be matched. The matching result was then applied to sample images to detect the possible defects caused by improper positioning of the spring.
  • Figs. 13a-d show two sample images and the results after applying the above matching filter.
  • the relative position of the spring 10 with respect to the block 20 can be detected by analyzing the values along the lines 10a-10d in the figures on the right, which are the results of applying the matching result of Fig. 12.
  • the image in row one-left has the spring touching the block closely and corresponds to the transform result at right, which has less of the blue line above the red line.
  • the spring doesn't touch the block, and correspondingly its transformation has one solid blue line above the red line.
  • Fig. 14 shows one filter created using the matching wavelet method for an apple image and the result of applying that filter to the entire image of the apple. The filter was calculated using a small patch on the pulpy area of the apple.
  • Fig. 15 shows original images of a cherry (upper row) and a piece of candy (lower row) and demonstrate the use of the method to distinguish between the two by applying a matching wavelet for texture pattern recognition.
  • a feature extraction method that uses the class labels of the data may miss important structure that is not exhibited in the class labels, and therefore be more biased to the training data than a feature extractor that relies on the high-dimensional structure. This suggests that an unsupervised feature extraction method may have better generalization properties in high-dimensional problems.
  • the Bienenstock, Cooper and Munro (BCM) neuron performs exploratory projection pursuit using a projection index that measures multi-modality. Sets of these neurons are organized in a lateral inhibition architecture which forces different neurons in the network to find different projections (i.e., features).
  • a network implementation which can find several projections in parallel while retaining its computational efficiency, can be used for extracting features from very high dimensional vector space.
  • the invention includes an Unsupervised Feature Extraction Network (UFENET) based on the BCM Neuron.
  • UFENET Unsupervised Feature Extraction Network
  • the BCM neurons are constructed such that their properties are determined by a nonlinear function θ_m which is called the modification threshold.
  • the inhibited activity of neuron k is defined as
  • Fig. 17 presents the flowchart for the UFENET adaptation algorithm.
  • the inhibition matrix is M = I - η(E - I), where I is the unit matrix and E is the matrix with every entry equal to one
  • G′(x) is the derivative of G(x).
  • the invention uses the following method to judge the status of a BCM network.
  • the invention's goal here is only to find a set of good features such that the critical information for discriminating among different clusters is kept. Therefore, for the sake of efficiency, precise convergence of the network is not really necessary, and a looser standard is used to judge the network results by observing the clustering of the features extracted using the neurons obtained.
  • the single neuron network is used in two ways: it is simple, making it usable in analyzing and adjusting the network, and it has a single output when applied to the data, allowing its use as a simple classifier.
  • the 6 pictures in Fig. 21a have small inner products (-0.72 to 0.14), corresponding to the six images with no gap (i.e., touching), while the 6 in Fig. 21b have large inner products (7.9 to 11.0), corresponding to the other six images with a wider gap (i.e., not touching).
  • in Figs. 22a,b, samples of cherries and hard candy have a matching wavelet applied, and the results are sent to the network.
  • Fig. 22b shows that the cherry feature vectors (represented by the dotted lines) and the candy features (represented by the plain lines) are separated (almost) after training.
  • the invention is most useful in doing a variety of different types of image analysis.
  • the matched wavelet of a sample can be applied to a given image and the locations of high similarity (a good match) will show strong distinguishable peaks.
  • the invention can thus examine images individually and manually, or it can automate the analysis of images through the use of the UFENET and possibly a back-propagation network.
  • the analysis can be done in a spatial context or in a textural context. See Fig. 23.
  • the invention creates a matched wavelet of a shape, object or arrangement of objects being examined.
  • the invention applies this filter to samples that may or may not contain the desired shape.
  • the filter returns a strong signal on samples that match well.
  • the invention reduces the dimensionality of the output to allow better examination of the data.
  • the invention may then pass this data to a back-propagation network if fully automatic grouping is desired. This allows each sample processed to be labeled as group A/group B or good/bad etc. Processing the texture of samples is another application. Often the differences between a "good" sample and a "bad" sample are hidden in the texture. In this case the invention subtracts the background from each of the samples (Fig.
  • the invention creates a matched wavelet of one of the classes to be differentiated by applying the matching algorithm (described below) on a sample.
  • the matched wavelet returned by the matching procedure can be applied as a whole, or as is the case with texture, it is sometimes helpful to use only a small piece of the matched wavelet as the filter.
  • the basic flow of the matching procedure is to apply a number of different-scale circular wavelets to the input image (using convolution) on each pass of the main loop and to choose the one coefficient, from the pool of all the transforms, that has the greatest absolute value. The location and value of this coefficient are determined and stored along with the scale of the wavelet that produced it. Next, a circular wavelet (of the stored scale) multiplied by the determined value is subtracted from the original image and added to an accumulator image (the matched wavelet). The next time through the loop the process is repeated using the remainder of the previous loop. This process continues until one of two conditions is met: the maximum number of coefficients is reached, or the correlation between the original image and the matched wavelet is above a desired level.
  • Step 2 (Fig. 24 Reference 2): Main Loop
  • Step 4: If the current scale wavelet is smaller than a determined scale (baseScale), calculate coefficients in a straightforward manner (go to Step 5); otherwise shrink the current remainder and the wavelet to speed up the process and get an approximate location of the maximum, then return to the current remainder and calculate more exact values in the area where the true maximum lies (go to Step 6). Step 5 (Fig. 24 Reference 12): Convolve the circular wavelet at the current scale with the current remainder and store it in tempImg (go to Step 13). Step 6 (Fig. 24 Reference 6):
  • Step 7 (Fig. 24 Reference 9): Convolve the new shrunken remainder with the wavelet at baseScale.
  • Step 14 (Fig. 24 Reference 18): Increment curScale and goto Step 3
  • Step 15 (Fig. 24 Reference 7):
  • Step 16 (Fig. 24 Reference 10) :
  • Step 17 (Fig. 24 Reference 13): Subtract the subtraction image from remainder to create a new remainder.
  • Step 19 (Fig. 24 Reference 21): return matched wavelet (original - remainder)
  • the matched wavelet can now be applied to individual images for analysis or to groups of images for processing by the UFENET.
  • Each wavelet transform returned by this process is then reshaped to a one-dimensional vector, and they are all grouped together into a matrix for use with the UFENET.
  • Each row in the matrix is the one-dimensional version of the wavelet transform for a single sample.
  • the resulting matrix is 40 × N².
  • UFENET Training and/or Feature Extraction: Once we have our matrix (we will call it d) we can use the UFENET (described below) to reduce the dimensionality of the data. Instead of dealing with N² values per sample, a UFENET with b neurons will reduce the number of values per sample to b. We can then use a smaller number of features to classify our samples.
  • the output of training is an array of weights that, when applied to our matrix d, will give us a new matrix of fewer dimensions.
  • Step 1
  • Step 14 Set the current inhibited activity ckbar as the sel-th column ckbars(:,sel) of the whole inhibited activities matrix ckbars.
  • Step 21
  • Input Parameters a -the scale of the desired 2D circularly symmetric wavelet to return
  • a suitcase may be quickly transformed to locate one or more contraband items, such as weapons or illegal substances.
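The main matching loop outlined in the steps above (convolve circular wavelets at several scales with the current remainder, keep the single coefficient of greatest absolute value, subtract its wavelet unit from the remainder, accumulate it into the matched wavelet, and stop on a coefficient budget or correlation threshold) can be sketched as follows. This is only an illustration: the `circular_wavelet` kernel shape, the scale set, and the stopping thresholds are assumptions, not the patent's exact formulas, and the shrink-convolve-restore speedup is omitted.

```python
import numpy as np
from scipy.signal import fftconvolve

def circular_wavelet(scale, size=None):
    """2-D circularly symmetric Mexican-hat-style kernel (illustrative)."""
    if size is None:
        size = int(8 * scale) | 1          # odd support
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    rho2 = (x * x + y * y) / scale ** 2
    psi = (2.0 - rho2) * np.exp(-rho2 / 2.0)
    return psi / np.linalg.norm(psi)

def match_wavelet(image, scales=(1.0, 2.0, 4.0), max_coeffs=50, corr_stop=0.95):
    remainder = image.astype(float).copy()
    matched = np.zeros_like(remainder)
    for _ in range(max_coeffs):
        # convolve at every scale and pick the single largest coefficient
        best = None
        for s in scales:
            w = fftconvolve(remainder, circular_wavelet(s), mode="same")
            idx = np.unravel_index(np.argmax(np.abs(w)), w.shape)
            if best is None or abs(w[idx]) > abs(best[0]):
                best = (w[idx], idx, s)
        coeff, (yy, xx), s = best
        # subtract the chosen wavelet unit from the remainder and
        # add it to the accumulated matched wavelet
        k = circular_wavelet(s)
        r = k.shape[0] // 2
        unit = np.zeros_like(remainder)
        y0, y1 = max(yy - r, 0), min(yy + r + 1, unit.shape[0])
        x0, x1 = max(xx - r, 0), min(xx + r + 1, unit.shape[1])
        unit[y0:y1, x0:x1] = coeff * k[r - (yy - y0): r + (y1 - yy),
                                       r - (xx - x0): r + (x1 - xx)]
        remainder -= unit
        matched += unit
        if np.corrcoef(matched.ravel(), image.ravel())[0, 1] >= corr_stop:
            break
    return matched
```

The greedy selection mirrors matching pursuit: each pass explains the largest remaining feature of the image with one dilated, shifted circular wavelet.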

Abstract

This invention relates to an improved matching algorithm for analysing X-ray images, including an adaptive wavelet function formed as a linear combination of basic circular wavelet functions, that is, an optimal wavelet function using dilations and shifts of circular wavelets as building blocks. The resulting algorithm sequentially selects the most significant coefficient as one term of the linear combination that approximates the object. The matching results satisfy rotation invariance, enabling matching using only one angular view. The matched wavelet for other views can be quickly obtained by simply rotating the one matched result to the desired angle.

Description

SYSTEM AND METHOD FOR DATA ANALYSIS OF X-RAY IMAGES
Cross-Reference to Related Applications
This patent claims the benefit of United States Patent Application Serial No. 60/189,736, filed March 16, 2000.
Field of Invention
This invention relates to computerized image processing, and more specifically to the matching of computerized images.
Background
Wavelet transforms of images provide a representation of space and frequency. Very often a particular frequency component occurring at any location in an image can be of particular interest. For example, frequency analysis can be used to provide edge detection (high frequency) as well as information about texture (apple vs. orange). In such cases it is beneficial to know the spatial intervals in which these particular spectral components occur. Wavelet transforms are capable of providing the space and frequency information simultaneously, hence giving a space-frequency representation of the signal.
Wavelet transforms use various high pass and low pass filters to filter out either high frequency or low frequency portions of the signal. For an image, the signal is a row or column of pixels. The transform of the image is the combined transform of the rows and columns. This procedure is repeated, and each time some portion of the signal, corresponding to certain frequencies, is removed from the signal.
For example, suppose a signal has frequencies up to 1000 Hz. In the first stage it is split into two parts by passing the signal through a high pass and a low pass filter, which results in two different versions of the same signal: the portion corresponding to 0-500 Hz (the low pass portion) and the portion corresponding to 500-1000 Hz (the high pass portion).
The process is repeated on either portion (usually the low pass portion) or both. This operation is called "decomposition." Assuming that one uses the lowpass portion, the result is three sets of data, each corresponding to the same signal at frequencies 0-250 Hz, 250-500 Hz, 500-1000 Hz.
If one takes the lowpass portion again and passes it through low and high pass filters, the results are four sets of signals corresponding to 0-125 Hz, 125-250 Hz, 250-500 Hz, and 500-1000 Hz. The process may continue like this until the signal is decomposed to a predefined level. That provides a collection of signals which all represent the same signal, but correspond to different frequency bands.
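This repeated low-pass/high-pass splitting can be sketched numerically. The Haar filter pair below is only a stand-in for whichever filters an implementation actually chooses; the structure of the result (one low band plus one high band per level) is what matters:

```python
import numpy as np

def haar_split(signal):
    """One decomposition stage: low-pass (pairwise sums) and high-pass
    (pairwise differences), each downsampled by 2, so each output holds
    half of the input's frequency band."""
    s = np.asarray(signal, dtype=float)
    low = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    high = (s[0::2] - s[1::2]) / np.sqrt(2.0)
    return low, high

def decompose(signal, levels):
    """Repeatedly split the low-pass portion, as in the text: e.g. for a
    0-1000 Hz signal and 3 levels, the bands are 500-1000, 250-500,
    125-250 and 0-125 Hz."""
    bands = []
    low = signal
    for _ in range(levels):
        low, high = haar_split(low)
        bands.append(high)
    bands.append(low)   # final low-pass residue
    return bands

sig = np.sin(2 * np.pi * np.arange(64) / 8.0)   # illustrative input
bands = decompose(sig, 3)
```

Because the Haar pair is orthonormal, the bands together carry exactly the energy of the original signal, which is one way to check that the collection still "represents the same signal."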
Higher frequencies are better resolved in space and lower frequencies are better resolved in frequency. This means that a certain high frequency component (edge detection) can be located better in space (with less relative error) than a low frequency component. Conversely, a low frequency component (texture) can be located better in frequency than a high frequency component.
The term "wavelet" means a small wave. Its smallness refers to its (window) function being of finite length. The wave refers to the condition that this function is oscillatory. The term "mother" implies that the functions with different regions of support that are used in the transformation process are derived from one main function, or the mother wavelet. In other words, the mother wavelet is a prototype for generating the other window functions. The term "translation" is used in the sense that the window is shifted through the signal. This term, obviously, corresponds to spatial information in the transform domain. The parameter "scale" in the wavelet analysis is similar to the scale used in maps. As in the case of maps, high scales correspond to a non-detailed global view (of the signal), and low scales correspond to a detailed view. Similarly, in terms of frequency, low frequencies (high scales) correspond to global information about a signal (that usually spans the entire signal), whereas high frequencies (low scales) correspond to detailed information about a hidden pattern in the signal (that usually lasts a relatively short time).
Scaling, as a mathematical operation, either dilates or compresses a signal. Larger scales correspond to dilated (or stretched out) signals and small scales correspond to compressed signals. Fortunately in practical applications, low scales (high frequencies or edges on objects in an image) do not last for the entire area of an image.
The mother wavelet is chosen to serve as a prototype for all windows in the process. All the windows that are used are the dilated (or compressed) and shifted versions of the mother wavelet. There are a number of functions that are used for this purpose. The Morlet wavelet and the Mexican hat function are conventional mother wavelets.
Once the mother wavelet is chosen, the computation starts with s=1 and the continuous wavelet transform is computed for all values of s, smaller and larger than 1. However, depending on the image signal, a complete transform is usually not necessary. For all practical purposes, the image signals are bandlimited, and therefore, computation of the transform for a limited interval of scales is usually adequate.
For convenience, the procedure will be started from scale s=1 and will continue for increasing values of s, i.e., the analysis will start from high frequencies and proceed towards low frequencies. This first value of s will correspond to the most compressed wavelet. As the value of s is increased, the wavelet will dilate. The wavelet is placed at the beginning of the signal at the point which corresponds to time = 0. The wavelet function at scale 1 is multiplied by the signal and then integrated over all times. The result of the integration is then multiplied by the constant 1/sqrt(s). This multiplication is for energy normalization purposes so that the transformed signal will have the same energy at every scale. The final result is the value of the transformation, i.e., the value of the continuous wavelet transform at time zero and scale s=1. In other words, it is the value that corresponds to the point tau = 0, s = 1 in the time-scale plane. The wavelet at scale s=1 is then shifted towards the right by tau amount to the location t = tau, and the above equation is computed to get the transform value at t = tau, s = 1 in the time-frequency plane.
This procedure is repeated until the wavelet reaches the end of the signal. One row of pixels on the space-scale plane for the scale s=1 is now completed.
Then s is increased by a small value. This is a continuous transform, and therefore, both tau and s must be incremented continuously. However, if this transform needs to be computed by a computer, then both parameters are increased by a sufficiently small step size. This corresponds to sampling the time-scale plane. The above procedure is repeated for every value of s. Every computation for a given value of s fills the corresponding single row of the time-scale plane. When the process is completed for all desired values of s, the CWT of the signal has been calculated.
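The sampled CWT loop just described (dilate the mother wavelet to scale s, shift it to tau, multiply by the signal, integrate, normalize by 1/sqrt(s), then step tau and s) can be written directly. The Mexican hat used here is one of the conventional mother wavelets mentioned above; the scale set and step sizes are illustrative:

```python
import numpy as np

def mexican_hat(t):
    # conventional Mexican hat mother wavelet (second derivative of a Gaussian,
    # up to a constant factor)
    return (1.0 - t ** 2) * np.exp(-t ** 2 / 2.0)

def cwt(signal, scales, dt=1.0):
    """Sampled continuous wavelet transform: one row of the time-scale
    plane per scale, one column per shift tau."""
    n = len(signal)
    t = np.arange(n) * dt
    out = np.zeros((len(scales), n))
    for i, s in enumerate(scales):
        for j, tau in enumerate(t):
            # dilate to scale s, shift to tau, multiply by the signal,
            # integrate over time, and normalize by 1/sqrt(s)
            w = mexican_hat((t - tau) / s)
            out[i, j] = np.sum(signal * w) * dt / np.sqrt(s)
    return out

sig = np.sin(2 * np.pi * np.arange(128) / 16.0)
W = cwt(sig, scales=[1.0, 2.0, 4.0, 8.0])
```

Each row of `W` is one completed pass over the signal at a fixed scale, exactly as in the row-by-row filling of the time-scale plane described above.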
Lower scales (higher frequencies) have better scale resolution (narrower support in scale, which means it is less ambiguous what the exact value of the scale is), which corresponds to poorer frequency resolution. Similarly, higher scales have poorer scale resolution (wider support in scale, which means it is more ambiguous what the exact value of the scale is), which corresponds to better frequency resolution at lower frequencies. Computers are used to do most computations.
At higher scales (lower frequencies), the sampling rate can be decreased, according to Nyquist's rule. In other words, if the time-scale plane needs to be sampled with a sampling rate of N_1 at scale s_1, the same plane can be sampled with a sampling rate of N_2 at scale s_2, where s_1 < s_2 (corresponding to frequencies f_1 > f_2) and N_2 < N_1. In other words, at lower frequencies the sampling rate can be decreased, which will save a considerable amount of computation time.
The one-dimensional wavelet transform described above is easily extended to the two-dimensional wavelet transform. The two-dimensional wavelet transform is useful for image analysis. Conventional 2-dimensional wavelet transforms apply a one-dimensional wavelet transform to each row of an image. Then the same transform is applied to each column. However, the one-dimensional wavelet transform is performed sequentially level by level. A wavelet matching method fits a mother wavelet to a signal of interest, such that: (1) the wavelet generates an orthogonal multi-resolution analysis (MRA) and (2) the wavelet is as close as possible to the given signal in the sense of minimizing the mean squared errors between their squared amplitude spectra and their group delay spectra, respectively. This generates a wavelet that is close to the given signal in shape. The application of such a wavelet would be in the optimal detection of the signal in the presence of signal dilations and noise. See J.O. Chapa and M.R. Raghuveer (now known as R. M. Rao), "Optimal matched wavelet construction and its application to image pattern recognition," Proc. SPIE Vol. 2491, Wavelet Applications II, Harold H. Szu, Ed., April 1995.
Approach to a Two-dimensional Wavelet Matching Solution
As expected, the approach of extending the matching method to the 2-D situation is not straightforward. The first one-dimensional sequence (or the horizontal sequence) is formed by taking the first (top) row of the image. Scanning along this top horizontal row of the image creates a one-dimensional sequence. That procedure is repeated for the next rows of the image to create a series of one-dimensional sequences. These one-dimensional sequences are then wavelet-transformed to compute the (horizontal) wavelet coefficients. The horizontal wavelet coefficients are placed back into two-dimensional images (sub-images). These images are then scanned to create the vertical sequences. These vertical sequences are then wavelet-transformed, and the resulting sequences are rearranged to the two-dimensional image format.
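The row-then-column procedure just described amounts to a separable 2-D transform. A single-level sketch, again using a Haar stage as an illustrative stand-in for the 1-D wavelet actually chosen:

```python
import numpy as np

def haar_1d(x):
    # single-level 1-D split into low-pass and high-pass halves
    low = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    high = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return np.concatenate([low, high])

def wavelet_2d(image):
    """Separable row-then-column 2-D transform: transform every horizontal
    sequence (row), place the coefficients back into a sub-image, then
    transform every vertical sequence (column)."""
    rows = np.apply_along_axis(haar_1d, 1, image)   # horizontal coefficients
    return np.apply_along_axis(haar_1d, 0, rows)    # vertical coefficients

img = np.arange(64, dtype=float).reshape(8, 8)
coeffs = wavelet_2d(img)   # quadrants: LL, LH, HL, HH sub-images
```

This separability is also the source of the rotation problem discussed next: the row and column filters are tied to the Cartesian axes, so the result changes when the object is rotated.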
For more background, the reader is referred to THE ENGINEER'S ULTIMATE GUIDE TO WAVELET ANALYSIS: THE WAVELET TUTORIAL by Robi Polikar at http://www.public.iastate.edu/~rpolikar/WAVELETS/WTtutorial.html. Further background is found in U.S. Patent No. 5,598,481, the entire disclosure of which is incorporated by reference. Another approach is to attempt matching a separable 2-D wavelet to the object. But separable wavelets are not robust with respect to rotation. A wavelet matched to an object at one orientation will not work when the object is rotated because the correlation or convolution is carried out in Cartesian coordinates. Although the basic non-separable, circularly symmetric wavelet function works well in extracting, separating, and enhancing features of an image, most of the objects of interest do not necessarily have a circular shape or structure.
The main problem in two dimensions is the new degree of freedom related to orientation. A wavelet matched to an object at one orientation will not work when the object is rotated. Thus an isotropic wavelet is required.
Summary
The invention provides a two dimensional smooth circular symmetric wavelet that is rotatable. The wavelet is a modification of the Mexican hat wavelet. The wavelet is used in computer imaging systems and programs to process images. Reference images of a target, for example, x-ray images of an apple, are processed to provide one or more filter wavelets that are matched to the reference image(s). Each filter wavelet may be chosen for its desired characteristic. However, the wavelet that has the highest coefficient of convolution provides the most likely filter wavelet for use in detecting images of the object in target images. The target images may be photographs or x-ray images. The wavelet operates on the image to recognize portions of the image that correlate to the filter wavelet. As such, the target image may include a number of images of the target object and other objects.
The invention is implemented in a computer that matches a target image signal to one or more known or reference image signals. The computer has a processor and a main memory connected to the processor. The memory holds programs that the processor executes and stores data relevant to the operation of the computer and the computer programs. The computer has an output display subsystem connected to the processor and a data entry subsystem connected to the processor. An input device such as a keyboard, mouse or scanner enters commands and data. A software program stored in the memory operates the computer to perform a number of steps for creating a matched wavelet of an object image and for processing a target image to detect the target object in the target image. The program begins by matching a plurality of two dimensional circular symmetric daughter wavelets to an original pixel image to produce a filter matched wavelet that operates on target images. The filter wavelet is applied to the target image to generate a number of scaled wavelet transforms of the target image. The program then constructs a lower-dimensional projection of the combined wavelet function with a maximized projection index and then finds a plurality of lower-dimensional projections of the combined wavelet function. The next step is comparing the lower-dimensional projections of the combined wavelet function to one or more lower-dimensional projections of combined wavelet functions for known images, to find one or more known images matching the original unknown image.
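The specification defines the 2-D circular symmetric wavelet by a formula not reproduced in this excerpt; as an illustrative stand-in, the standard 2-D Mexican hat below (which the text says the invention's wavelet modifies) shows why such a kernel is rotatable: it depends only on the radius r = sqrt(x² + y²), so rotating the sampling grid leaves it unchanged.

```python
import numpy as np

def circular_mexican_hat(scale, half_width=None):
    """2-D circularly symmetric Mexican hat sampled on a square grid.
    The value depends only on the radius, so the kernel is isotropic:
    rotating the grid leaves it unchanged."""
    if half_width is None:
        half_width = int(4 * scale)
    y, x = np.mgrid[-half_width:half_width + 1, -half_width:half_width + 1]
    r2 = (x ** 2 + y ** 2) / scale ** 2
    return (2.0 - r2) * np.exp(-r2 / 2.0)

k = circular_mexican_hat(2.0)
# a 90-degree rotation is exact on the sampling grid and leaves k unchanged
assert np.allclose(k, np.rot90(k))
```

This isotropy is what lets one matched result serve for every orientation of the object: rotating the set of matched coefficients is equivalent to rotating the object.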
Description of Drawings
Fig. 1 shows the circular wavelet at scales 0.5, 1 and 2.
Figs. 2a-2d show the Barbara photo (2a) and how stripes are extracted (2c-2d).
Figs. 3a-3g compare edge extraction using a circular wavelet, a Haar wavelet, and a Daubechies 4 wavelet.
Fig. 4 illustrates a wavelet matching procedure.
Fig. 5 is a flowchart of a general algorithm for wavelet matching.
Fig. 6 is a flowchart for selecting coefficients.
Fig. 7 is a flowchart of a shrink-convolution-restore algorithm.
Fig. 8 is a flowchart of an estimation-maximization algorithm.
Fig. 9 is a set of illustrations regarding a pear image. Row 1 shows the original image, the final approximation, the values of the coefficients as a function of the loops, and the correlation of the approximation to the original image; rows 2-4 illustrate coefficient selection through loops 1-12. Each circle represents a coefficient and its size represents its dilation.
Fig. 10 compares, in row 1, the matched wavelet results at the original scale, ½ dilation, and scale 2 with, in row 2, the original image, a ½-shrunk image, and a 2x enlarged image.
Fig. 11 compares row 1 of the matched wavelet at 0, 30 and 60 degrees rotation with the actual image at 0, 30 and 60 degrees.
Fig. 12a shows an image of a spring. Fig. 12b is an enlarged portion of Fig. 12a.
Figs. 13a,b and 13c,d are pairs of sample images showing the wire touching the spring and not touching it, respectively.
Fig. 14a is a matching wavelet filter for an apple.
Fig. 14b is an image of an apple filtered with the wavelet of Fig. 14a.
Figs. 15a,b and 15c,d are, respectively, a cherry and its wavelet transform, and a candy and its wavelet transform.
Fig. 16 is a 3-D representation of the projection of the images showing how one projection clusters images and a rotated projection does not cluster.
Fig. 17 is a flowchart of the BCM neural network.
Figs. 18a,b show the values of the neuron weights used to determine convergence: the left (18a) converges and the right (18b) diverges.
Figs. 19a,b show how the neuron weights determine convergence: the left (19a) is convergent and the right (19b) is divergent.
Fig. 20 shows the angle between data vectors and the neuron vector to determine the convergence.
Figs. 21a,b show one-neuron test results for a touching wire (21a) and for a gap between the wire and the spring (21b).
Fig. 22a shows images of cherries and hard candies.
Fig. 22b shows the training results: dotted lines are cherries; plain lines are candies.
Detailed Description of Invention
Because an isotropic wavelet is required, a circularly symmetric wavelet must be employed. Therefore, the invention's approach is based on the circular wavelet and reflects the features of the object of interest: it extracts portions of the pattern using the circular symmetric wavelet and combines the portions to form the desired matching wavelet function.
Smooth Circularly Symmetric Two-Dimensional Wavelet Functions
The invention incorporates a 2-D filter which can pick up features uniformly along all directions, that is, a rotation of a 1-D function of the form F(r, θ) = f(r), where f is a properly selected 1-dimensional function.
This 2-D circular symmetric wavelet function is defined as

$$h_a(x, y) = \frac{1}{a}\left(1 - \frac{x^2 + y^2}{a^2}\right) e^{-\frac{x^2 + y^2}{2a^2}}$$

which is generated by rotating a 1-D function, modified from the double derivative of the famous Gaussian probability density function (a is the scale factor). Shapes of this wavelet at three different dilations (scales equal to 0.5, 1, and 2, respectively) are shown in Fig. 1. This is a modification of the well-known Mexican Hat wavelet.
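A minimal sketch of how such a wavelet might be sampled on a discrete grid (the normalization and the truncation radius below are illustrative assumptions, not the patent's exact values):

```python
import numpy as np

def circular_wavelet(scale, radius=None):
    """Sample a 2-D circularly symmetric Mexican-hat-style wavelet.

    The kernel depends on x and y only through r^2 = x^2 + y^2,
    so it is rotation-invariant by construction.
    """
    if radius is None:
        radius = int(np.ceil(4 * scale))  # truncate where the tail is tiny (assumed cutoff)
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1].astype(float)
    r2 = (x**2 + y**2) / scale**2
    return (1.0 / scale) * (1.0 - r2) * np.exp(-r2 / 2.0)

# Kernels at scales 0.5, 1 and 2, as in Fig. 1.
kernels = [circular_wavelet(a) for a in (0.5, 1.0, 2.0)]
```

Because the kernel is a pure function of r², any rotation of the sampling grid leaves it unchanged, which is the isotropy property the text relies on.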
Features of the 2-D Circular Wavelet
To illustrate the orientation-invariance of circular wavelets, consider the image Barbara in Fig. 2. In this original image, similar textures appear in different orientations: stripes on the trousers, stripes on the headdress, grids on the tablecloth, grids on the chair. The circular wavelet with a = 0.08 is able to extract them with high accuracy, and equally in all directions. In Fig. 2, the upper-left picture shows the original image and the upper right the result after applying the circular wavelet; the lower left and right are two enlargements of the wavelet result that show how well the stripes are extracted. Fig. 3a shows the original x-ray image of the M549 Fuze. Figs. 3b-3g show the comparison of the results using the circular and two separable wavelet methods. The first column shows the results for the circular wavelet, the second column is for the Haar wavelet, and the third column is for the Daubechies 4 wavelet. The results in the second row are the edges of the object obtained from thresholding the images above them. It is evident that the circular wavelet can extract the edge and shape features from the image better: more complete curved lines, smoother edges, and more precise details.
However, the circular wavelet concept alone is not enough to work as a matching wavelet method. The object to be detected does not necessarily have a circular contour. The invention builds up a matching wavelet on the circular wavelet as described in the next section.

The 2-D Wavelet Matching Concept
A linear combination of wavelets is also a wavelet. The invention uses a smooth, circularly symmetric, 2-D wavelet for each object as its basic building block.
The invention builds the adaptive wavelet function as a best-fitting linear combination using dilations and shifts of these basic circular wavelet functions, producing an optimal wavelet function. The algorithm is designed to recursively select the most significant coefficients as terms in the linear combination to approximate the object. This approach yields a wavelet which allows object detection at different orientations. A simple example illustrates this concept. In Fig. 4 the four images in row one show: (1) the sample image (original), (2) the final approximation after 20 loops, (3) the plot of the correlation of every approximation with the original image, to show the process of convergence, and (4) the final remainder after 20 coefficients have been extracted. The next two rows show the results of the process of each loop and the individual coefficients selected in each loop. Images in the second row are the results of approximations when selecting the first 5, 10, 15, and 20 coefficients, respectively; images in the third row are the coefficient maps of the current 5 selections during that loop. A circle represents a coefficient, with the disk indicating the non-zero area of the daughter wavelet, and the color of the circle indicating the value.
The Matching Algorithm

General Algorithm for Selecting Coefficients
This algorithm recursively detects the most significant coefficient and takes a multiple of a daughter wavelet out of the image. The weighted summation of the daughter wavelets serves as an approximation of the original image. The main steps are shown in the graph of Fig. 5.
Mathematically, this method is expressed more precisely as follows. Suppose φ is the mother wavelet and I is the input image. Then the CWT of I with respect to φ is

$$W(a, b_1, b_2) = \langle I(t_1, t_2),\ \varphi_{a,0,0}(t_1 - b_1, t_2 - b_2) \rangle$$

where ⟨·,·⟩ denotes the dot product and

$$\varphi_{a,b_1,b_2}(t_1, t_2) = \frac{1}{a}\,\varphi\!\left(\frac{t_1 - b_1}{a}, \frac{t_2 - b_2}{a}\right)$$

is the daughter wavelet at dilation a and shift (b_1, b_2). What follows computes a sequence of remainder functions {f_n} to estimate new coefficients and build up the matching wavelet. Initially assign f_0(t_1, t_2) = I(t_1, t_2) for (t_1, t_2) ∈ domain(I). Assume that the (n−1)-th remainder f_{n−1} has been obtained, and compute M_n as follows:

$$M_n = \max_{a,\,b_1,\,b_2} \left| \langle f_{n-1}(t_1, t_2),\ \varphi_{a,0,0}(t_1 - b_1, t_2 - b_2) \rangle \right|$$

Now suppose a_n, b_1^n, b_2^n are the scale and shift values at which this maximum is attained, and let c_n denote the corresponding signed coefficient, so that M_n = |c_n|. Then the n-th remainder is

$$f_n = f_{n-1} - c_n\,\varphi_{a_n, b_1^n, b_2^n}$$

The final approximation after N steps will be

$$\sum_{n=1}^{N} c_n\,\varphi_{a_n, b_1^n, b_2^n} = \sum_{n=1}^{N} \left(f_{n-1} - f_n\right) = f_0 - f_N$$

and is the adapted wavelet function matching the image I.
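The recursion above is a greedy, matching-pursuit-style loop. A simplified brute-force sketch follows; the kernel form, the unit-norm convention, and the scale list are illustrative assumptions (the patent's implementation adds the SCR and EMC speedups described in later sections):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def mexican_hat(scale, radius):
    """Circularly symmetric Mexican-hat-style kernel (assumed form)."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1].astype(float)
    r2 = (x**2 + y**2) / scale**2
    return (1.0 - r2) * np.exp(-r2 / 2.0)

def correlate_same(img, ker):
    """'Same'-size 2-D correlation with zero padding."""
    r = ker.shape[0] // 2
    windows = sliding_window_view(np.pad(img, r), ker.shape)
    return np.einsum('ijkl,kl->ij', windows, ker)

def match_wavelet(image, scales, n_coef):
    """Greedy loop from the derivation: repeatedly take the largest
    coefficient over all scales/shifts and subtract that daughter wavelet."""
    remainder = image.astype(float)
    matched = np.zeros_like(remainder)
    for _ in range(n_coef):
        best = None
        for a in scales:
            w = mexican_hat(a, radius=int(np.ceil(3 * a)))
            w /= np.linalg.norm(w)                 # unit-norm daughter wavelet
            resp = correlate_same(remainder, w)
            y, x = np.unravel_index(np.abs(resp).argmax(), resp.shape)
            if best is None or abs(resp[y, x]) > abs(best[0]):
                best = (resp[y, x], y, x, w)
        c, y, x, w = best
        # subtract c * daughter wavelet centered at (y, x), clipped at borders
        r = w.shape[0] // 2
        piece = np.zeros_like(remainder)
        y0, y1 = max(y - r, 0), min(y + r + 1, piece.shape[0])
        x0, x1 = max(x - r, 0), min(x + r + 1, piece.shape[1])
        piece[y0:y1, x0:x1] = c * w[r-(y-y0):r+(y1-y), r-(x-x0):r+(x1-x)]
        remainder -= piece
        matched += piece                # accumulator f0 - fN
    return matched, remainder
```

Applied to an image that is itself a single daughter wavelet, one pass of the loop recovers it almost exactly, mirroring the f_0 − f_N telescoping above.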
Implementation of the General Algorithm
The central aim of each loop is to find the maximum coefficient and its location. The flowchart of Fig. 6 shows the steps in finding the maximum coefficient M and the coordinates (x, y) where the value M is taken. However, the basic approach to calculating the 2D wavelet function and performing the wavelet transform requires substantial computing time and computer memory. It was desired to have one or more methods for reducing the time and computing capacity for a wavelet transform without substantially sacrificing accuracy. The invention provides several methods for quickly calculating the maximum coefficient M.
The invention's implementation of this algorithm uses three techniques to facilitate obtaining a match with acceptable speed and accuracy. Specifically, the "SCR (Shrink-Convolution-Restore) method" speeds up the computation, the "Estimation-Maximization-Computation method" provides accuracy, and the "Complete Subtraction of Background method" permits the matching of textures. The resulting algorithms analyze the various types of images of interest and the behavior under rotation. They may be combined into a single efficient algorithm.
SCR (Shrink-Convolution-Restore) Method
Lack of speed is a salient factor preventing most image-matching algorithms from being practical. To choose each coefficient, the convolutions must be computed at all scales. The problem is that at each scale the computation task is the same as a complete process with the basic wavelet transformation alone. Furthermore, the time to find this one set of multiple convolutions is only a single step in choosing one coefficient. That is to say, if the average computation time for one convolution is A, then the new task will need at least time A · S · L, where S is the number of scale values and L is the number of coefficients to be collected. Since the average computation time increases for coefficients with larger scale values, A must be reduced.
The invention's algorithm, represented in Fig. 7 and described below, mitigates this problem. The invention's method is based on the following mathematical computation. From the original definition, written in polar coordinates,

$$h_a(r, \theta) = \frac{1}{a}\left(1 - \frac{r^2}{a^2}\right) e^{-\frac{r^2}{2a^2}}$$

it is easy to check that h satisfies the following equation by computing the h function values at scale ka:

$$h_{ka}(r, \theta) = \frac{1}{k}\, h_a\!\left(\frac{r}{k}, \theta\right)$$

Therefore, for the convolution, we have

$$\begin{aligned}
\mathrm{conv}[f(r,\theta),\, h_{ka}(r,\theta)] &= \iint f(r,\theta)\,\frac{1}{k}\, h_a\!\left(\frac{r}{k},\theta\right) r\, dr\, d\theta \\
&= \iint f(ks,\theta)\,\frac{1}{k}\, h_a(s,\theta)\,(ks)\, d(ks)\, d\theta \qquad \left(\text{use } s = \frac{r}{k}\right) \\
&= k \iint f(ks,\theta)\, h_a(s,\theta)\, s\, ds\, d\theta \\
&= k\, \mathrm{conv}[f(kr,\theta),\, h_a(r,\theta)]
\end{aligned}$$

This proves the following formula about the relation between the dilation of the image and the dilation of the wavelet:

$$\mathrm{conv}[f(r,\theta),\, h_{ka}(r,\theta)] = k\, \mathrm{conv}[f(kr,\theta),\, h_a(r,\theta)].$$

Intuitively, it says that a factor can be moved from the subscript of h to the first argument of f by multiplying the result by that factor. This formula makes it possible to compute a convolution at a large scale ka by instead computing another convolution at a smaller scale a.
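In discrete form, the identity suggests estimating a large-scale response by shrinking the image by k, convolving at the base scale, multiplying by k, and restoring the result to the original grid. A rough sketch (block-average shrinking and nearest-neighbour restoration are simplifying assumptions, and `base_wavelet_response` is a hypothetical callable):

```python
import numpy as np

def shrink(img, k):
    """Crude 1/k downsampling by block averaging (k assumed integer)."""
    h, w = (img.shape[0] // k) * k, (img.shape[1] // k) * k
    return img[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def scr_response(img, base_wavelet_response, k):
    """Shrink-Convolution-Restore: estimate the scale-(k*a) response.

    base_wavelet_response maps an image to its correlation with the
    base-scale wavelet h_a.  Per the identity above, the scale-(k*a)
    response is approximately k times the base-scale response of the
    shrunken image, restored to the original sampling grid.
    """
    small = shrink(img, k)
    resp_small = k * base_wavelet_response(small)
    # restore: nearest-neighbour upsampling back to the original grid
    return np.repeat(np.repeat(resp_small, k, axis=0), k, axis=1)
```

The approximation degrades as k grows (the shrunken image distorts), which is exactly the weakness the EMC method in the next section corrects.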
The Estimation-Maximization-Computation Method
The algorithm described above (SCR, Shrink-Convolution-Restore) is fast but not accurate in some situations. This is especially true when k is large, which makes the shrunk image distorted. The algorithm shown in Fig. 8 achieves both speed and accuracy as follows:
1) Use SCR for estimation of the coefficient matrices at all scales;
2) Find the scale value and the shift position of the maximum coefficient (note that the maximum value is discarded after selecting it);
3) Then use CONVONE as shown in Fig. 8 to compute the accurate value of the inner product. This may not be the real maximum value, but it is considered sufficiently close for the current stage of processing. The real maximum coefficient will be selected in a later stage. The inner product computed is only one value, instead of all the values in the whole matrix. This process requires only a tiny portion of the time A of computing a single convolution: A divided by the size of the image.
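The single-point inner product (CONVONE in Fig. 8) might be sketched as follows; the boundary clipping and names here are assumptions:

```python
import numpy as np

def convone(image, wavelet, y, x):
    """Inner product of `wavelet` centered at (y, x) with `image`.

    Costs one patch-sized dot product instead of a full convolution,
    i.e. roughly A / (image size) of the time of a whole transform.
    """
    r = wavelet.shape[0] // 2
    y0, y1 = max(y - r, 0), min(y + r + 1, image.shape[0])
    x0, x1 = max(x - r, 0), min(x + r + 1, image.shape[1])
    patch = image[y0:y1, x0:x1]
    wpart = wavelet[r - (y - y0): r + (y1 - y), r - (x - x0): r + (x1 - x)]
    return float(np.sum(patch * wpart))
```

Near a border only the overlapping parts of the image and wavelet enter the sum, which is also the situation the Complete Subtraction Method below has to correct for.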
The Complete Subtraction Method
The invention addresses the following problem: when the selection reaches the edge of the image I, the convolution of the remainder might not be zero at the position of the coefficient just subtracted. This might cause the selection to become stuck at a certain position, or an inaccurate coefficient to be calculated.
Mathematical analysis of the problem:

a) Normally, if $v_1 \circ v_2 = m$ and $v_2 \circ v_2 = 1$, then

$$(v_1 - m\, v_2) \circ v_2 = v_1 \circ v_2 - m\,(v_2 \circ v_2) = m - m = 0$$

b) In the step of computing the wavelet at each scale, the wavelet coefficient at position (x, y) is COEF(I, W, x, y) = J(x, y), where J(x, y) is computed as the dot product of I with $\tilde{W}$ (the parts of I and W intersecting at the shift position (x, y)). Since $\tilde{W}$ is not the whole W, its norm might not be 1. Suppose that $I \circ \tilde{W} = m$ and $\tilde{W} \circ \tilde{W} = p$, where p does not equal 1; then

$$(I - m\,\tilde{W}) \circ \tilde{W} = I \circ \tilde{W} - m\,(\tilde{W} \circ \tilde{W}) = m - m\,p \neq 0$$

c) The invention solves this problem as follows. Define a new number

$$\tilde{m} = \frac{m}{p}$$

as the modified coefficient at (x, y), which will make the difference be 0. Then

$$(I - \tilde{m}\,\tilde{W}) \circ \tilde{W} = I \circ \tilde{W} - \tilde{m}\,(\tilde{W} \circ \tilde{W}) = m - \frac{m}{p}\,p = 0$$

This modification is compatible with the regular case, where $\tilde{W} \circ \tilde{W} = 1$. This forces each selection to be totally subtracted, solving the problem.
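A sketch of the corrected border coefficient m̃ = m/p (function and argument names are hypothetical):

```python
import numpy as np

def corrected_coefficient(patch, wavelet_part):
    """Coefficient m~ = m / p ensuring complete subtraction at borders.

    m = <I, W~> is the raw dot product with the truncated wavelet,
    p = <W~, W~> its (possibly != 1) squared norm.  Subtracting
    m~ * W~ from the patch leaves a remainder orthogonal to W~.
    """
    m = float(np.sum(patch * wavelet_part))
    p = float(np.sum(wavelet_part * wavelet_part))
    return m / p
```

After subtracting m̃·W̃, the dot product of the remainder with W̃ is exactly zero, so the selection cannot get stuck at the same border position.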
Features of The Matching Wavelet
The invention's matching wavelet possesses two important special properties: its behavior under dilation and rotation. These properties make the matching method more useful and efficient in the optimal detection of a signal when in the presence of signal dilations and noise. First see Fig. 9, which shows a sample result of matching an object (a "pear").
Behavior Under Dilation
The result of matching is a sequence of coefficients, including the coordinates, the values, and their dilations. Once this information is calculated, the pattern image is expressed as the sum of circular wavelet units. The invention varies the matched result by simply making changes to the coefficients. One such change is dilation. Fig. 10 shows the above matching result being dilated at 2 scales, with the enlarged (shrunk) images placed below each dilation for comparison.

Behavior Under Rotation
The result of matching has the property of rotation invariance, which means that only one angular position per object needs to be calculated through the matching process; the matched wavelet at other angles can be quickly obtained by simply rotating the one matched result to the desired angle. Fig. 11 shows the procedure of matching the pear at varying rotation angles.
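Because every building block is circularly symmetric, rotating the matched result amounts to rotating the coefficient centers about the pattern center; the daughter wavelets themselves are unchanged. A sketch, with an assumed (value, y, x, scale) coefficient tuple:

```python
import numpy as np

def rotate_coefficients(coeffs, angle_deg, center):
    """Rotate matched-wavelet coefficients (value, y, x, scale) by an angle.

    Each circular daughter wavelet is rotation-invariant, so only its
    center moves; value and scale are preserved.
    """
    t = np.deg2rad(angle_deg)
    cy, cx = center
    out = []
    for value, y, x, scale in coeffs:
        dy, dx = y - cy, x - cx
        ry = cy + dy * np.cos(t) - dx * np.sin(t)
        rx = cx + dy * np.sin(t) + dx * np.cos(t)
        out.append((value, ry, rx, scale))
    return out
```

This is why a single matching pass per object suffices: the matched wavelet at any other orientation is obtained by this cheap coordinate rotation rather than by re-running the matching loop.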
Applications
The invention's matching algorithm has been applied to many different images to extract features about edge, shape, and texture: x-ray images of fruit, luggage, mechanical parts, and CT slices, as well as samples of ultrasonic images and thermal images.
Edge Analysis
As one example of using matching wavelet in edge analysis, Fig. 12 shows part of a mechanical fuse, with a spring cut out as the pattern to be matched. The matching result was then applied to sample images to detect the possible defects caused by improper positioning of the spring.
Figs. 13a-d show two sample images and the results after applying the above matching filter. The relative position of the spring 10 with respect to the block 20 (in the figures on the left) can be detected by analyzing the values along the lines 10a-10d in the figures on the right, which are the results of applying the matching result of Fig. 12.
Note that the image in row one, left, has the spring touching the block closely and corresponds to the transform result at right having less blue line above the red line. In the example in row two, the spring doesn't touch the block, and correspondingly its transformation has one solid blue line above the red line.
Texture Analysis
If a part of the image being matched includes a certain pattern of texture, we can arrange the matching process to let the result match the texture. We then use the result as a filter to extract information on similar patterns of texture in other images. Fig. 14 shows one filter created using the matching wavelet method for an apple image and the result of applying that filter to the entire image of the apple. The filter was calculated using a small patch on the pulpy area of the apple.
The same technique was used on different fruits/objects to extract different features that distinguish them. Fig. 15 shows original images of a cherry (upper row) and a piece of candy (lower row) and demonstrates the use of the method to distinguish between the two by applying a matching wavelet for texture pattern recognition.

Unsupervised Feature Extraction Network and Clustering Tool
Background
Although the matching wavelet and other techniques can enhance and reveal various features such as edges and textures, it is not feasible to feed these results directly into a classifier or to make judgments from them, because of their high dimensionality. The curse of dimensionality means that recognition cannot be based on the high-dimensional vectors directly, because the number of training patterns needed to train such a classifier increases exponentially with the dimensionality. Mathematically speaking, feature extraction can be viewed as a dimensionality reduction from the original high-dimensional vector space to a lower-dimensional vector space in which enough information to discriminate the different classes is kept. That is, if the important structure (for classification) can be represented in a low-dimensional space, dimensionality reduction should take place before attempting the classification. Furthermore, due to the large number of parameters involved, a feature extraction method that uses the class labels of the data may miss important structure that is not exhibited in the class labels, and may therefore be more biased toward the training data than a feature extractor that relies on the high-dimensional structure. This suggests that an unsupervised feature extraction method may have better generalization properties in high-dimensional problems.
Many feature extraction theories for object recognition are based on the assumption that objects are represented by clusters of points in a high-dimensional feature space. Projection pursuit picks interesting low-dimensional projections of a high-dimensional point cloud by maximizing an objective function called the projection index. As a simple illustration, Fig. 16 shows that when a good direction is selected, one projection can reveal better cluster information than a bad one can.
The Bienenstock, Cooper and Munro (BCM) neuron performs exploratory projection pursuit using a projection index that measures multi-modality. Sets of these neurons are organized in a lateral inhibition architecture which forces different neurons in the network to find different projections (i.e., features). A network implementation, which can find several projections in parallel while retaining its computational efficiency, can be used for extracting features from a very high dimensional vector space. The invention includes an Unsupervised Feature Extraction Network (UFENET) based on the BCM neuron. Bienenstock, E. L., Cooper, L. N., and Munro, P. W. (1982). Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. Journal of Neuroscience, 2:32-48.
Build UFENET
The BCM neurons are constructed such that their properties are determined by a nonlinear function θ_m, which is called the modification threshold. The activity of neuron k is given by c_k = m_k · d, where m_k is the synaptic weight vector of neuron k, and d is the input vector. The inhibited activity of neuron k is defined as

$$\bar{c}_k = c_k - \eta \sum_{j \neq k} c_j$$

and the modification threshold is

$$\theta_m^k = E\left[\sigma(\bar{c}_k)^2\right]$$

where σ is a monotone saturating function, a smooth sigmoidal function. The synaptic modification equations are given by

$$\dot{m}_k = \mu\,\sigma'(\bar{c}_k)\,\bar{c}_k\left(\bar{c}_k - \theta_m^k\right)d$$

where σ′ represents the derivative of the function σ, and μ is the learning rate.
Fig. 17 presents the flowchart for the UFENET adaptation algorithm. In the flowchart, M = I − η(E − I), where I is the unit matrix and E is the matrix with every element equal to 1,

$$\sigma(x) = \frac{1}{1 + e^{-\alpha x}}$$

and σ′(x) is the derivative of σ(x).
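The quantities in the flowchart can be sketched directly from these definitions (a minimal sketch; the logistic form of σ follows the formula above):

```python
import numpy as np

def inhibition_matrix(num_neurons, eta):
    """M = I - eta*(E - I): 1 on the diagonal, -eta elsewhere."""
    I = np.eye(num_neurons)
    E = np.ones((num_neurons, num_neurons))
    return I - eta * (E - I)

def sigmoid(x, alpha):
    """Smooth saturating sigma(x) = 1 / (1 + exp(-alpha * x))."""
    return 1.0 / (1.0 + np.exp(-alpha * x))

def sigmoid_deriv(x, alpha):
    """Derivative sigma'(x) = alpha * sigma(x) * (1 - sigma(x))."""
    s = sigmoid(x, alpha)
    return alpha * s * (1.0 - s)
```

Multiplying a column of neuron activities by M subtracts η times every other neuron's activity from each one, which is the lateral inhibition that drives the neurons toward different projections.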
Determining the Stopping Point of UFENET
In order to configure UFENET and to terminate it properly, the invention uses the following method to judge the status of a BCM network.

1. Check the convergence of the iterations: a) check the tendency of the iterations by plotting some neuron coordinates (Fig. 18); b) check the increment of the neuron coordinates (Fig. 19); c) check the angles (generalized to hyperspace) between the neuron and the sample vectors: if the computation has converged, the resulting neuron vector should be perpendicular (or as perpendicular as possible) to all but one cluster of the samples in the space.
2. Observe the clustering situation of the resulting feature distribution
The invention's goal here is only to find a set of good features such that the critical information for discriminating among different clusters is kept. For efficiency, therefore, precise convergence of the network is not really necessary, and a looser standard is used to judge the network results: observe the clustering of the features extracted using the neurons obtained.
3. Judge the performance of the network by the final classification results. Because the ultimate goal is to obtain accurate classification results, the eventual judgment of the result should be the final classification result. It is suggested that a test classification can be performed to judge the convergence.

Single Neuron Network
The single neuron network is used in two ways: it is simple, making it usable in analyzing and adjusting the network, and it has a single output when applied to the data, allowing its use as a simple classifier.
A simple test using the neuron vector resulting from the previous training showed the following: the inner product of each of 12 testing samples (other than the 5 training samples) was calculated with the neuron. The larger values corresponded to the samples with the wider gap, as shown in Fig. 21b.
The 6 pictures in Fig. 21a have small inner products (−0.72 to 0.14), corresponding to the six images with no gap (i.e., touching), while the 6 in Fig. 21b have large inner products (7.9 to 11.0), corresponding to the other six images with a wider gap (i.e., not touching).
Multiple Neuron UFENET
With reference to Figs. 22a,b, samples of cherries and hard candy have a matching wavelet applied and the results are sent to the network. Fig. 22b shows that the cherry feature vectors (represented by the dotted lines) and the candy features (represented by the plain lines) are separated (almost) after training.
Operation of The Invention
The invention is useful in a variety of types of image analysis. The matched wavelet of a sample can be applied to a given image, and the locations of high similarity (a good match) will show strong, distinguishable peaks. The invention can thus examine images individually and manually, or it can automate the analysis of images through the use of the UFENET and possibly a back-propagation network. The analysis can be done in a spatial context or in a textural context. See Fig. 23.
In examining individual images, the invention creates a matched wavelet of a shape, object or arrangement of objects being examined. The invention then applies this filter to samples that may or may not contain the desired shape. The filter returns a strong signal on samples that match well. By feeding a number of these samples to the UFENET, the invention reduces the dimensionality of the output to allow better examination of the data. The invention may then pass this data to a back-propagation network if fully automatic grouping is desired. This allows each sample processed to be labeled as group A/group B, good/bad, etc.

Processing the texture of samples is another application. Often the differences between a "good" sample and a "bad" sample are hidden in the texture. In this case the invention subtracts the background from each of the samples (Fig. 23, Reference 2) in order to isolate the texture. The invention then proceeds in a manner similar to object detection, except that it may use only a piece of the matched wavelet as its filter. This helps keep the processing localized, as is desired in texture analysis. Again, the invention can use the UFENET and possibly the back-propagation network to better analyze the results.

Collecting and Pre-processing the Data Samples

In order to use the matching procedure to differentiate two classes of images, the invention first selects a number of samples of each class of image, e.g., in comparing samples of apple textures vs. samples of orange textures, the invention uses a representative number of small images of each. In order to isolate the texture, some preprocessing is done, such as subtracting the local mean or median from the samples.

Wavelet matching
The invention creates a matched wavelet of one of the classes to be differentiated by applying the matching algorithm (described below) on a sample. The matched wavelet returned by the matching procedure can be applied as a whole, or as is the case with texture, it is sometimes helpful to use only a small piece of the matched wavelet as the filter.
See Fig. 24. The basic flow of the matching procedure is to apply a number of different-scale circular wavelets to the input image (using convolution) on each pass of the main loop and to choose, from the pool of all the transforms, the one coefficient that has the greatest absolute value. The location and value of this coefficient are determined and stored along with the scale of the wavelet that produced it. Next, the determined value multiplied by a circular wavelet of the stored scale is subtracted from the original image and added to an accumulator image (the matched wavelet). On the next pass through the loop the process is repeated using the remainder of the previous loop. This process continues until one of two conditions is met: the maximum number of coefficients is reached, or the correlation between the original image and the matched wavelet is above a desired level. If the scales in use get too large, the convolution on each pass may take too long. So when the scale of the wavelet goes above some threshold (baseScale), the invention shrinks the image, applies a smaller wavelet, gets an approximate maximum, then, after choosing which scale will give the maximum, performs the proper convolution in the desired location only (convoneAc). This method speeds up the process greatly and produces results very similar to those achieved without the shrink method.
Matching Algorithm Step by Step
See Fig. 24. Input Parameters :
Original - the image to match. It is assumed in this implementation that the image input to the algorithm is pre-padded with r pixels of real image data on all four sides, where r is equal to the radius of a wavelet of MAXSCALE.
MINSCALE - the smallest wavelet scale to use
STEP - the step amount between wavelet scales to use
MAXSCALE -the largest wavelet scale to use
MAXCOEF -the maximum number of coefficients to calculate
MINCORR -the minimum correlation required to quit before MAXCOEF is reached
Output: Mwavelet -the matched wavelet calculated
Step 1 (Fig. 24 Reference 1 ): General Initialization
Step 2 (Fig. 24 Reference 2): Main Loop
We will loop until the desired number of coefficients have been determined or until the desired correlation between the matched wavelet and the image to be matched has been reached. At the end of each loop we will have found the location, value and wavelet scale of the single coefficient we have determined to be the most dominant.
We will then subtract a multiple of the circular wavelet of this scale, value and location from the image and add it to the wavelet transform. The remainder of this image is used as the starting image for the next loop, while the wavelet transform acts as an accumulator, adding more and more small circular wavelets together to create a larger matched wavelet. If the correlation of the original and matched wavelet reaches the maximum desired percentage (Fig. 24 Reference 22), go to Step 19.
Step 3 (Fig. 24 Reference 5): Inner Loop
For each coefficient in the outer loop we must loop through all the possible wavelet scales (specified through MINSCALE, MAXSCALE and STEP) to determine the scale that has the greatest effect. If we have looped through all the scales for the current coefficient, go to Step 15.
Step 4 (Fig. 24 Reference 8): Determine method of choosing coefficients
If the current scale wavelet is smaller than a determined scale (baseScale), calculate coefficients in a straightforward manner (go to Step 5); otherwise we will shrink the current remainder and the wavelet to speed up the process and get an approximate location of the maximum. We then return to the current remainder and calculate more exact values in the area where we have determined the true max lies (go to Step 6).
Step 5 (Fig. 24 Reference 12): Convolve the circular wavelet at the current scale with the current remainder and store in tempImg. (goto Step 13)
Step 6 (Fig. 24 Reference 6):
Shrink the remainder by the ratio of baseScale over the current scale.
Step 7 (Fig. 24 Reference 9): Convolve the new shrunken remainder with the wavelet at baseScale.
Step 8 (Fig. 24 Reference 11):
Find the location of the maximum coefficient in the shrunken transform.
Step 9 (Fig. 24 Reference 14):
Scale the location of the max to its location in the large remainder image.
Step 10 (Fig. 24 Reference 15):
Calculate a window of interest around this point based on the resize ratio that was used to find the location. This is to account for location error caused by the scaling back up. This window should contain all the points that may contain the true maximum associated with the maximum found in the shrunken image.
Step 11 (Fig. 24 Reference 17):
Convolve an area around that window with the true wavelet at the current scale such that the valid return of that convolution is an area the size of the determined window.
Step 12 (Fig. 24 Reference 20):
Find the location and value of the max in this window and adjust our estimated values to the new corrected values.
Step 13 (Fig. 24 Reference 19):
Store the location and value of the maximum determined from the process for the current scale.
Step 14 (Fig. 24 Reference 18): Increment curScale and goto Step 3
Step 15 (Fig. 24 Reference 7):
After the maximum value for each scale has been determined for the current coefficient, we choose the maximum of all of these, and this will be the coefficient we will use.
Step 16 (Fig. 24 Reference 10):
Call Convone2 with the information for the chosen coefficient. This will create an image to subtract from the original, which consists of a wavelet at the determined scale and location multiplied by the determined value.
Step 17 (Fig. 24 Reference 13): Subtract the subtraction image from the remainder to create a new remainder.
Step 18 (Fig. 24 Reference 16):
Calculate the correlation percentage between the original image and the matched wavelet (orig-rem). Go to step 2
Step 19 (Fig. 24 Reference 21): return matched wavelet (original - remainder)
Apply filter to samples
The matched wavelet can now be applied to individual images for analysis or to groups of images for processing by the UFENET.
Once we have our samples and our filter we apply the filter to each sample by rotating the filter 180 degrees and convolving it (2D correlation) with each image.
Each wavelet transform returned by this process is then reshaped to a one-dimensional vector, and they are all grouped together into a matrix for use with the UFENET. Each row in the matrix is the one-dimensional version of the wavelet transform for a single sample. Thus if there are 40 samples and each wavelet transform is N² pixels, then the resulting matrix is 40 × N². Sometimes it is desirable to apply multiple matched filters to the samples in parallel. In such a situation, if there are 40 samples, the first wavelet transform is N² pixels, and the second is M² pixels, then the resulting matrix is 40 × (N² + M²).
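The stacking step can be sketched as follows (names are illustrative; each sample may carry one transform or a list of per-filter transforms):

```python
import numpy as np

def build_feature_matrix(transforms):
    """Stack 2-D wavelet transforms into a (num_samples x N^2) matrix.

    Each row is one sample's transform reshaped to a 1-D vector; with
    several filters applied in parallel, the per-filter vectors are
    concatenated, giving rows of length N^2 + M^2 + ...
    """
    rows = []
    for per_sample in transforms:          # one entry per sample
        if isinstance(per_sample, np.ndarray):
            per_sample = [per_sample]      # single-filter case
        rows.append(np.concatenate([t.ravel() for t in per_sample]))
    return np.vstack(rows)
```

For 40 samples with 8×8 transforms this yields a 40 × 64 matrix, matching the 40 × N² shape described above.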
UFENET Training and/or Feature Extraction
Once we have our matrix (call it d), we can use the UFENET (described below) to reduce the dimensionality of the data. Instead of dealing with N² values per sample, a UFENET with b neurons reduces the number of values per sample to b. We can then use this smaller number of features to classify our samples. The output of training is an array of weights that, when applied to our matrix d, gives us a new matrix of fewer dimensions.
Function TrainUFENET
Input and Configuration Parameters:
Seqv - the input vectors
LOOPNUM - maximum number of iterations
Alpha - the alpha parameter to the sigmoid function
NumNeuron - the number of neurons to build
LnRate - the learning rate
Eta - a constant for lateral inhibition
NormalizeNeurons - a flag: 1 if the neurons should be normalized before starting
Output Parameters: m -the vectors of the BCM neurons
Step 1 :
Prepare the input matrix (d) in which each row represents an input vector.
Step 2:
Set the sample number (SAMPnum) as the height of d.
Step 3:
Set the vector length (VECTlen) as the width of d.
Step 4:
Build the lateral inhibitory matrix MAT as the matrix of size NumNeuron x NumNeuron where the diagonal elements are 1 and the other elements are -eta
Step 5:
Set m to be a random matrix of size NumNeuron x VECTlen where Max(m) <= 1 and Min(m) >= -1.
Step 6:
If the flag NormalizeNeurons is 0, go to Step 8.
Step 7:
Normalize each row of m.
Step 8:
Initialize the index ii = 1.
Step 9:
If ii > LOOPNUM, go to Step 21.
Step 10:
Randomly select an integer sel ranging from 1 to SAMPnum.
Step 11:
Set dt as the sel-th row of d.
Step 12:
Compute the activity matrix cks of each input vector paired with each neuron: cks = m*d', the matrix product of the neuron matrix m with the transpose of the input matrix d.
Step 13:
Compute the inhibited activities matrix ckbars of all the inputs and all the neurons: ckbars = sigmoid(Mat*cks, alpha), the sigmoidal transform of the product of the lateral inhibitory matrix with the activity matrix cks.
Step 14:
Set the current inhibited activity ckbar as the sel-th column ckbars(:,sel) of the whole inhibited activities matrix ckbars.
Step 15:
Compute the threshold theta as the mean of the inhibited activities matrix ckbars.
Step 16:
Compute phi as ckbar pointwise multiplied by the difference of ckbar minus theta.
Step 17:
Compute the current increment factor Inc as the product of three factors: the learning rate LnRate, the function phi from Step 16, and a term for lateral inhibition obtained from Mat times dsigmoid(ckbar, alpha), where dsigmoid is the derivative of the sigmoid function in Step 13.
Step 18:
Obtain the current neurons by adding the increment Inc*dt to the old m.
Step 19:
Increment ii to ii + 1.
Step 20:
Go to Step 9.
Step 21:
Return m, the trained neurons.
Classification
Lastly, we can directly analyze the output of the UFENET or, if desired, feed this data to a classification system such as a back-propagation neural network to classify the samples.
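The TrainUFENET steps above translate almost line-for-line into NumPy. Two points the text leaves open are filled in as assumptions here: the sigmoid is taken to be tanh, theta is the per-neuron mean of ckbars over samples, and the three factors of Step 17 are combined elementwise:

```python
import numpy as np

def sigmoid(x, alpha):
    return np.tanh(alpha * x)                        # assumed form of the sigmoid

def dsigmoid(x, alpha):
    return alpha * (1.0 - np.tanh(alpha * x) ** 2)   # its derivative

def train_ufenet(d, loopnum=1000, alpha=1.0, num_neuron=4, lnrate=0.01,
                 eta=0.1, normalize_neurons=True, seed=0):
    """Sketch of the BCM / lateral-inhibition training loop of Steps 1-21."""
    rng = np.random.default_rng(seed)
    samp_num, vect_len = d.shape                     # Steps 2-3
    mat = np.full((num_neuron, num_neuron), -eta)    # Step 4: lateral inhibition
    np.fill_diagonal(mat, 1.0)
    m = rng.uniform(-1.0, 1.0, (num_neuron, vect_len))  # Step 5
    if normalize_neurons:                            # Steps 6-7
        m /= np.linalg.norm(m, axis=1, keepdims=True)
    for _ in range(loopnum):                         # Steps 8-20
        sel = rng.integers(samp_num)                 # Step 10
        dt = d[sel]                                  # Step 11
        cks = m @ d.T                                # Step 12: activities
        ckbars = sigmoid(mat @ cks, alpha)           # Step 13: inhibited activities
        ckbar = ckbars[:, sel]                       # Step 14
        theta = ckbars.mean(axis=1)                  # Step 15: BCM thresholds
        phi = ckbar * (ckbar - theta)                # Step 16
        inc = lnrate * phi * (mat @ dsigmoid(ckbar, alpha))  # Step 17
        m += np.outer(inc, dt)                       # Step 18
    return m                                         # Step 21
```

Projecting with the trained neurons, `feats = d @ m.T`, reduces each N²-pixel sample to num_neuron features, as described in the preceding paragraph.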
Additional Functions Used
function convoneAc
Input Parameters:
img - the image
r - the row at which the convolution is evaluated
c - the column at which the convolution is evaluated
rsize - floor of the radius of the wavelet
xx - the wavelet to convolve with
Output:
vl - the convolution of the wavelet with the (2*rsize + 1) area of img centered at (r, c); returns only those parts of the convolution that are computed without padding
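Reading convoneAc as returning the single inner-product value of the wavelet with the fully overlapping window (so no zero padding enters the result), a sketch under that assumption:

```python
import numpy as np

def convone_at(img, r, c, rsize, xx):
    """Inner product of the wavelet xx with the (2*rsize+1)-square window of
    img centred at (r, c); only the fully overlapping region is used, so no
    zero padding is introduced."""
    patch = img[r - rsize:r + rsize + 1, c - rsize:c + rsize + 1]
    return float(np.sum(patch * xx))
```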
function invone2
Input Parameters:
s1 - the height of the desired output
s2 - the width of the desired output
r - the row the coefficient should be centered on
c - the column the coefficient should be centered on
m - the value of the coefficient at (r, c)
wy - floor of the radius of the wavelet, y direction
wx - floor of the radius of the wavelet, x direction
xx - the wavelet to convolve with
Output: invimg - the inverse of a single coefficient
Step 1:
Set invimg = zeros(s1, s2)
Step 2:
Set the area of invimg centered at (r, c) and of size (2*wy+1, 2*wx+1) to m*xx.
Step 3:
Return invimg
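invone2 simply places one scaled wavelet into an otherwise empty image. A direct sketch, assuming (as the steps imply) that the wavelet fits entirely inside the output:

```python
import numpy as np

def invone2(s1, s2, r, c, m, wy, wx, xx):
    """Inverse of a single coefficient: an s1 x s2 zero image with the scaled
    wavelet m*xx placed so that it is centred at (r, c)."""
    invimg = np.zeros((s1, s2))
    invimg[r - wy:r + wy + 1, c - wx:c + wx + 1] = m * xx
    return invimg
```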
function ccmex
Input Parameters:
a - the scale of the desired 2D circularly symmetric wavelet to return
Output Parameters:
x - the 2D circularly symmetric wavelet of scale a
Step 1: General Initialization
a) Set err = 1.0e-10
b) Set w = 64
c) Set dt = 64
d) Set step = 1
e) Set tx = 0
f) Set ty = 0
Step 2:
If step > 6, go to Step 11.
Step 3:
Set dt = dt/2.
Step 4:
Set t = w/10.
Step 5:
If abs((1-(t/a)^2)*exp(-(t/a)^2)/abs(a)) >= err, go to Step 8.
Step 6:
Set w = w - dt.
Step 7:
Go to Step 9.
Step 8:
Set w = w + dt.
Step 9:
Set step = step + 1.
Step 10:
Go to Step 2.
Step 11:
Set x = zeros(2*w+1, 2*w+1).
Step 12:
If tx > w, go to Step 24.
Step 13:
If ty > w, go to Step 22.
Step 14:
Set t = sqrt(tx^2 + ty^2)/10.
Step 15:
Set tem = (1-(t/a)^2)*exp(-(t/a)^2)/sqrt(25*pi)/a.
Step 16:
Set x(w+1+tx, w+1+ty) = tem.
Step 17:
Set x(w+1-tx, w+1+ty) = tem.
Step 18:
Set x(w+1-tx, w+1-ty) = tem.
Step 19:
Set x(w+1+tx, w+1-ty) = tem.
Step 20:
Set ty = ty+1.
Step 21:
Go to Step 13.
Step 22:
Set tx = tx+1.
Step 23:
Go to Step 12.
Step 24:
Return x
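A vectorized sketch of ccmex. The support radius w is found here by a simple outward scan rather than the patent's six-step bisection from w = 64, the garbled initial threshold is assumed to be err = 1.0e-10, and the quadrant-by-quadrant fill of Steps 12-23 is replaced by a single grid evaluation; the wavelet values themselves follow Steps 14-15:

```python
import numpy as np

def ccmex(a, err=1.0e-10):
    """2D circularly symmetric wavelet of scale a.

    h(r) = (1 - (t/a)^2) * exp(-(t/a)^2) / (sqrt(25*pi) * a), with t = r/10,
    as in Steps 14-15. w is the smallest radius at which the profile magnitude
    stays below err."""
    w = 1
    while True:
        u = (w / 10.0) / a
        # skip the zero crossing at u = 1 and the side lobe before testing the tail
        if u > 2 and abs((1 - u**2) * np.exp(-u**2)) / abs(a) < err:
            break
        w += 1
    y, x = np.mgrid[-w:w + 1, -w:w + 1]
    t = np.sqrt(x**2 + y**2) / 10.0                                # Step 14
    return (1 - (t / a)**2) * np.exp(-(t / a)**2) / (np.sqrt(25 * np.pi) * a)  # Step 15
```

With the assumed threshold, a = 1 gives a support radius on the order of 50, comfortably inside the patent's starting bracket of w = 64; the normalization by sqrt(25*pi)*a makes the wavelet approximately unit L2 norm.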
Conclusion, Ramifications, and Scope of Invention
From the above descriptions, figures and narratives, the invention's advantages in matching images to known images should be clear.
Although the description, operation and illustrative material above contain many specificities, these should not be construed as limiting the scope of the invention, but merely as providing illustrations and examples of some of the preferred embodiments of this invention. For example, the invention may be used to analyze photographs to extract key information for satellite surveillance. A wavelet of a desired object, such as a missile emplacement or a ship, could be used to detect missiles or ships in a photo. The wavelet could also be applied to locate tumors or other bodily anomalies in x-rays. It is further useful as a tool for non-destructive inspection. As shown above, ordnance such as an artillery shell can be x-rayed and evaluated without disassembly. Another use is in airport inspection stations. An x-ray image of a suitcase may be quickly transformed to locate one or more contraband items, such as weapons or illegal substances.
Thus the scope of the invention should be determined by the appended claims and their legal equivalents, rather than by the examples given above.

Claims

1. A method for applying one or more daughter wavelets to an image signal, wherein each daughter wavelet is derived from a mother wavelet characterized by two dimensional circular symmetry and constructed in accordance with the following formula:
Figure imgf000033_0001
2. A method for matching a target image to one or more reference images, comprising the steps of: selecting a plurality of samples of the reference images; creating a filter wavelet corresponding to a matched circular symmetric two dimensional wavelet of the sample images; varying the scale of the filter wavelet and applying a plurality of the scaled filter wavelets to a target image; generating a plurality of convolution coefficients from the target image and the scaled filter wavelets; selecting a coefficient with the largest absolute value and storing it with the scale of the filter wavelet that produced it; subtracting from the image the largest coefficient and adding the subtracted image portion to an accumulator; repeating the above steps until a maximum number of coefficients is reached or correlation between the image and the matched wavelet exceeds a threshold.
3. The method of Claim 2 wherein the two dimensional circular symmetric daughter wavelets are derived from a mother wavelet constructed in accordance with the following formula:
Figure imgf000034_0001
4. A method of filtering a target image with a plurality of scaled matching two dimensional circular symmetric filter wavelets to determine the correlation between the filter wavelets and the target image, comprising the steps of: applying a number of different-scale two dimensional circular symmetric daughter wavelets to a reference image to produce a pool of coefficients, choosing a coefficient from the pool that has the greatest absolute value, storing the chosen coefficient and the scale of the wavelet that produced it, subtracting from the original image a scaled two dimensional circular symmetric daughter wavelet equal to the chosen coefficient multiplied by a two dimensional circular symmetric daughter wavelet (of the stored scale), adding the scaled two dimensional circular symmetric daughter wavelet to an accumulator image (the matched wavelet), calculating the correlation between the original image and the accumulator image, repeating all said steps until either the maximum number of coefficients is reached or the correlation between the original image and the matched wavelet is above a desired level.
5. The method of claim 4, wherein the step of applying a number of different-scale wavelets to the base image further comprises the steps of: testing the scale of the two dimensional circular symmetric daughter wavelet to determine whether it is above a base threshold, and if the scale of the wavelet is above the base threshold, shrinking the image, applying a smaller two dimensional circular symmetric daughter wavelet, getting an approximate maximum, choosing which scale will produce the maximum, performing convolution in the desired location only.
6. The method of claim 4, wherein the step of calculating the correlation between the original image and the accumulator image further comprises the step of producing a dot product of wavelet-predicted pixels versus image pixels.
7. The method of claim 4, wherein the original pixel image comprises a textured portion of a second pixel image.
8. The method of claim 7, wherein the textured portion of the original pixel image is subtracted from the whole original pixel image to produce a new original pixel image.
9. A method for generating a matching wavelet of a reference pixel image, comprising the steps of: applying a number of different-scale two dimensional circular symmetric daughter wavelets to the reference image to produce a pool of convolution coefficients, and from the pool of coefficients selecting the coefficient of greatest absolute value.
10. The method of claim 9 wherein the two dimensional circular symmetric daughter wavelets are derived from a mother wavelet constructed in accordance with the following formula:
Figure imgf000035_0001
11. The method of claim 9 comprising further steps to estimate a maximum coefficient: shrinking the image by a first factor, r, to derive a shrunken image; convolving the shrunken image with one scale factor, a, to derive a coefficient matrix; restoring the shrunken image to its original size and then convolving the original image with the derived coefficient matrix.
12. The method of claim 9 comprising further steps to estimate a maximum coefficient with speed and accuracy: shrinking the image by a first factor, r, to derive a shrunken image; convolving the shrunken image for all scale factors to derive estimated coefficient matrices at all scales; selecting the scale value and the shift position of the maximum coefficient; computing the value of the inner product to determine one estimated maximum coefficient.
13. The method of claim 14 wherein the step of calculating the correlation between the original image and the accumulator image further comprises the step of producing a dot product of wavelet-predicted pixels versus image pixels.
14. The method of claim 9, wherein the step of applying a number of different-scale wavelets to the reference image further comprises the steps of: testing the scale of the two dimensional circular symmetric daughter wavelet to determine whether it is above a base threshold, and if the scale of the wavelet is above the base threshold, shrinking the image, applying a smaller two dimensional circular symmetric daughter wavelet, getting an approximate maximum, choosing which scale will produce the maximum, performing convolution in the desired location only.
15. A method for matching a target image to one or more reference images, comprising the steps of: generating a filter wavelet from the reference image(s); wavelet transforming the target image with the filter wavelet to obtain a wavelet transform for each known image; reshaping each wavelet transform for a known image to a one-dimensional vector, grouping all one-dimensional vectors into a matrix wherein each row in the matrix is the one-dimensional version of the wavelet transform for a single known image, selecting a lower-dimensional projection of the combined wavelet function with a maximized projection index, finding a plurality of lower-dimensional projections of the combined wavelet functions, comparing the lower-dimensional projections of the combined wavelet function for the new image to one or more lower-dimensional projections of combined wavelet functions for known images, to find one or more known images matching the original unknown image.
16. The method of claim 15 wherein the filter wavelet is a two dimensional circular symmetric daughter wavelet derived from a mother wavelet constructed in accordance with the following formula:
Figure imgf000037_0001
16. The method of claim 15 wherein the step of applying a filter wavelet to each known image further comprises the steps of: rotating the filter 180 degrees, convolving the filter with the known image.
17. The method of claim 15 wherein the step of selecting a lower-dimensional projection of the combined wavelet function with a maximized projection index further comprises the step of performing exploratory projection pursuit using Bienenstock, Cooper and Munro (BCM) neurons.
18. The method of claim 15 wherein the step of finding a plurality of lower- dimensional projections of the combined wavelet function further comprises the step of applying a network of parallel BCM neurons concunently to multiple combined wavelet functions to find a plurality of lower-dimensional projections of the combined wavelet functions.
19. The method of claim 15 further comprising the step of using a back-propagation network of neurons to find lower-dimensional projections of the combined wavelet function automatically.
20. A computer program stored on a machine readable medium for operating a computer to carry out steps comprising: generating a two dimensional circular symmetric mother wavelet and a plurality of daughter wavelets matched to a reference image; applying the two dimensional circular symmetric daughter wavelets to a target image to produce a pool of coefficients, choosing a coefficient from the pool that has the greatest absolute value, storing the chosen coefficient and the scale of the wavelet that produced it, subtracting from the target image a scaled two dimensional circular symmetric daughter wavelet equal to the chosen coefficient multiplied by a two dimensional circular symmetric daughter wavelet (of the stored scale), and adding the scaled two dimensional circular symmetric daughter wavelet to an accumulator image (the matched wavelet), correlating the original image and the accumulator image until either the maximum number of coefficients is reached or the correlation between the original image and the matched wavelet is above a desired level.
21. The computer program of claim 20 wherein the reference image comprises a textured portion of the image.
22. A computer for matching a new image signal to one or more known image signals, comprising: a processor; a main memory connected to the processor, for the execution of software programs; a mass storage subsystem connected to the processor, for the storage of software programs and data; a display subsystem connected to the processor; a data entry subsystem connected to the processor; an input device for entering commands and data, an output device for displaying results, one or more processors for executing commands; one or more memory units for holding programs and data; a software program stored in one of the memory units for operating the computer to perform the following steps: matching a plurality of two dimensional circular symmetric daughter wavelets to an original pixel image to produce a single combined matched wavelet, applying a filter wavelet generated from the new image to each known image to obtain a wavelet transform for each known image, constructing a lower-dimensional projection of the combined wavelet function with a maximized projection index, finding a plurality of lower-dimensional projections of the combined wavelet function, comparing the lower-dimensional projections of the combined wavelet function to one or more lower-dimensional projections of combined wavelet functions for known images, to find one or more known images matching the original unknown image.
23. The computer of claim 18 wherein the two dimensional circular symmetric daughter wavelets are derived from a mother wavelet constructed in accordance with the following formula:
Figure imgf000040_0001
24. The computer of claim 22, wherein the software program further comprises steps for: reshaping each wavelet transform for a known image to a one-dimensional vector, grouping all one-dimensional vectors into a matrix wherein each row in the matrix is the one-dimensional version of the wavelet transform for a single known image, and for selecting a lower-dimensional projection of the combined wavelet function with a maximized projection index.
25. The computer of claim 22 wherein the software program for finding a plurality of lower-dimensional projections of the combined wavelet function further comprises: a software program stored in a machine-readable medium for applying concurrently a network of parallel BCM neurons to find a plurality of lower-dimensional projections of the combined wavelet function, a software program stored in a machine-readable medium for using a back- propagation network (BPN) of neurons to find lower-dimensional projections of the combined wavelet function automatically.
26. The computer of claim 22 wherein the software program for matching a plurality of two dimensional circular symmetric daughter wavelets to an original pixel image to produce a single combined matched wavelet further comprises instructions for performing the following steps: applying a number of different-scale two dimensional circular symmetric daughter wavelets to the base image to produce a pool of coefficients, selecting the one coefficient from the pool that has the greatest absolute value, storing the chosen coefficient and the scale of the wavelet that produced it, subtracting from the base image a scaled two dimensional circular symmetric daughter wavelet equal to the chosen coefficient multiplied by a two dimensional circular symmetric daughter wavelet (of the stored scale), and adding the scaled two dimensional circular symmetric daughter wavelet to an accumulator image (the matched wavelet), calculating the correlation between the original image and the accumulator image, until either the maximum number of coefficients is reached or the correlation between the original image and the matched wavelet is above a desired level.
27. The computer of claim 22 wherein the original pixel image comprises a textured portion of a second pixel image.
28. An image processing apparatus for matching a target image to one or more reference images, comprising: means for deriving a two dimensional circular symmetric mother wavelet and a family of two dimensional circular symmetric daughter wavelets from one or more reference images of an object to provide a filter wavelet; means for applying the filter wavelet to a target image to obtain a wavelet transform for the target image; means for constructing a lower-dimensional projection of the combined wavelet function with a maximized projection index, means for finding a plurality of lower-dimensional projections of the combined wavelet function, means for comparing the lower-dimensional projections of the combined wavelet function to one or more lower-dimensional projections of combined wavelet functions for known images, to find one or more known images matching the original unknown image.
29. The image processing apparatus of claim 28 wherein the two dimensional circular symmetric daughter wavelets are derived from a mother wavelet constructed in accordance with the following formula:
h_a(x, y) = [1 - (x^2 + y^2)/(100a^2)] * exp(-(x^2 + y^2)/(100a^2)) / (5 * sqrt(pi) * a)
30. The apparatus of claim 28, wherein the means for constructing a lower-dimensional projection of the combined wavelet function (with a maximized projection index) further comprises: means for reshaping each wavelet transform for a known image to a one-dimensional vector, means for grouping all one-dimensional vectors into a matrix wherein each row in the matrix is the one-dimensional version of the wavelet transform for a single known image, means for selecting a lower-dimensional projection of the combined wavelet function with a maximized projection index.
31. The apparatus of claim 28 wherein the software program for finding a plurality of lower-dimensional projections of the combined wavelet function further comprises: means for applying concurrently a network of parallel BCM neurons to find a plurality of lower-dimensional projections of the combined wavelet function, means for using a back-propagation network (BPN) of neurons to find lower- dimensional projections of the combined wavelet function automatically.
32. The apparatus of claim 28, wherein the means for generating a matching two dimensional circular symmetric daughter wavelet from reference pixel images further comprises: means for applying a number of different-scale two dimensional circular symmetric daughter wavelets to the reference image(s) to produce a pool of coefficients, means for choosing the one coefficient from the pool that has the greatest absolute value, storing the chosen coefficient and the scale of the wavelet that produced it.
33. The apparatus of claim 32 further comprising: means for subtracting from the target image scaled two dimensional circular symmetric daughter filter wavelets equal to the chosen coefficient multiplied by a two dimensional circular symmetric daughter wavelet (of the stored scale), and adding the scaled two dimensional circular symmetric daughter wavelet to an accumulator image (the matched wavelet), means for calculating the correlation between the reference target image and the accumulator image, until either the maximum number of coefficients is reached or the correlation between the original image and the matched wavelet is above a desired level.
34. The apparatus of claim 28, wherein the reference pixel image comprises a textured portion of a second pixel image.
PCT/US2001/008692 2000-03-16 2001-03-16 System and method for data analysis of x-ray images WO2001069286A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU2001269679A AU2001269679A1 (en) 2000-03-16 2001-03-16 System and method for data analysis of x-ray images
US10/221,879 US20040022436A1 (en) 2001-03-16 2001-03-16 System and method for data analysis of x-ray images

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US18973600P 2000-03-16 2000-03-16
US60/189,736 2000-03-16

Publications (3)

Publication Number Publication Date
WO2001069286A2 true WO2001069286A2 (en) 2001-09-20
WO2001069286A3 WO2001069286A3 (en) 2002-03-14
WO2001069286A9 WO2001069286A9 (en) 2002-12-19

Family

ID=22698561

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/008692 WO2001069286A2 (en) 2000-03-16 2001-03-16 System and method for data analysis of x-ray images

Country Status (2)

Country Link
AU (1) AU2001269679A1 (en)
WO (1) WO2001069286A2 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006074571A1 (en) * 2005-01-17 2006-07-20 Pixartis Sa Temperature mapping on structural data
US7412103B2 (en) 2003-10-20 2008-08-12 Lawrence Livermore National Security, Llc 3D wavelet-based filter and method
CN104933724A (en) * 2015-06-25 2015-09-23 中国计量学院 Automatic image segmentation method of trypetid magnetic resonance image
US9322807B2 (en) 2014-04-16 2016-04-26 Halliburton Energy Services, Inc. Ultrasonic signal time-frequency decomposition for borehole evaluation or pipeline inspection
CN113160080A (en) * 2021-04-16 2021-07-23 桂林市啄木鸟医疗器械有限公司 CR image noise reduction method, device, equipment and medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5598481A (en) * 1994-04-29 1997-01-28 Arch Development Corporation Computer-aided method for image feature analysis and diagnosis in mammography
US5619998A (en) * 1994-09-23 1997-04-15 General Electric Company Enhanced method for reducing ultrasound speckle noise using wavelet transform



Also Published As

Publication number Publication date
WO2001069286A9 (en) 2002-12-19
WO2001069286A3 (en) 2002-03-14
AU2001269679A1 (en) 2001-09-24


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)

COP Corrected version of pamphlet

Free format text: PAGES 36-42, CLAIMS, REPLACED BY NEW PAGES 36-42; AFTER RECTIFICATION OF OBVIOUS ERRORS AS AUTHORIZED BY THE INTERNATIONAL SEARCHING AUTHORITY; PAGES 1/21-21/21, DRAWINGS, REPLACED BY NEW PAGES 1/21-21/21

WWE Wipo information: entry into national phase

Ref document number: 10221879

Country of ref document: US

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: RULE 69(1) EPC

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP