GB2496834A - A method of object location in a Hough space using weighted voting - Google Patents

A method of object location in a Hough space using weighted voting

Info

Publication number
GB2496834A
GB2496834A GB1114617.2A GB201114617A GB2496834A GB 2496834 A GB2496834 A GB 2496834A GB 201114617 A GB201114617 A GB 201114617A GB 2496834 A GB2496834 A GB 2496834A
Authority
GB
United Kingdom
Prior art keywords
votes
data
features
feature
objects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1114617.2A
Other versions
GB2496834B (en
GB201114617D0 (en
Inventor
Oliver Woodford
Minh-Tri Pham
Atsuto Maki
Frank Perbet
Bjorn Stenger
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Europe Ltd
Original Assignee
Toshiba Research Europe Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Research Europe Ltd filed Critical Toshiba Research Europe Ltd
Priority to GB1114617.2A priority Critical patent/GB2496834B/en
Publication of GB201114617D0 publication Critical patent/GB201114617D0/en
Priority to US13/408,479 priority patent/US8761472B2/en
Priority to JP2012184227A priority patent/JP5509278B2/en
Publication of GB2496834A publication Critical patent/GB2496834A/en
Application granted granted Critical
Publication of GB2496834B publication Critical patent/GB2496834B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/753Transform-based matching, e.g. Hough transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/7715Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2134Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on separation criteria, e.g. independent component analysis
    • G06F18/21345Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on separation criteria, e.g. independent component analysis enforcing sparsity or involving a domain transformation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/48Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation

Abstract

An object location method comprises: analyzing data comprising a plurality of objects wherein each object comprises a plurality of features, and extracting the features S411 from the data; matching features S413 stored in a database with those extracted from the data, and deriving a prediction of the object, wherein each feature extracted from the data provides a vote for at least one prediction; expressing the prediction to be analyzed in a Hough space, wherein the objects to be analyzed are described by n parameters and each parameter defines a dimension of the Hough space, where n is an integer of at least one; providing a constraint implemented by applying a higher weighting to votes S415 which agree with votes from other features than those votes which do not agree with votes from other features; finding local maxima S419 in the Hough space using the weighted votes; and identifying the predictions associated with the local maxima to locate the objects provided in the data. The constraint may be provided by minimizing an entropy function.

Description

Object Location Method and System
FIELD
Embodiments of the present invention as described herein are generally concerned with
the field of object registration.
BACKGROUND
The well known Hough transform was originally used as a method for detecting lines in images. The Hough transform has since been generalized to detecting, as well as recognizing, many other objects: parameterized curves, arbitrary 2D shapes, cars, pedestrians, hands and 3D shapes, to name but a few. This popularity stems from the simplicity and generality of the first step of the Hough transform: the conversion of features, found in the data space, into sets of votes in a Hough space, parameterized by the pose of the object(s) to be found. Various different approaches to learning this feature-to-vote conversion function have been proposed.
The second stage of the Hough transform sums the likelihoods of the votes at each location in Hough space, then computes the modes (i.e. the local maxima) in the Hough space.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1(a) is a point cloud generated from a captured 3-D image and figure 1(b) demonstrates the objects recognised from the point cloud of figure 1(a);
Figure 2 is a schematic of an apparatus used for capturing a 3-D image;
Figure 3 is a schematic to aid explanation of the system of figure 2;
Figures 4(a) to (d) are data showing the operation of the system of figure 2;
Figure 5 is a flow diagram demonstrating how to capture features which can be used in a method in accordance with an embodiment of the present invention;
Figure 6 is a photograph demonstrating a feature;
Figure 7(a) is a point cloud generated from a captured 3-D image of an object and figure 7(b) shows the image of figure 7(a) with the extracted features;
Figure 8 is a flow chart of a method in accordance with an embodiment of the present invention;
Figure 9(a) is an object to be imaged and the image processed using a method in accordance with an embodiment of the present invention; figure 9(b) is a point cloud of the object of figure 9(a); figure 9(c) is the point cloud of figure 9(b) with detected features superimposed; figure 9(d) is a depiction of the predicted poses generated after comparing the detected features with those of the database; and figure 9(e) is the registered CAD model returned by the system for the object in figure 9(a);
Figures 10(a) to 10(j) show industrial parts which are recognised and registered as an example using a method in accordance with an embodiment of the present invention;
Figure 11 is a confusion matrix for the objects of figure 10;
Figure 12 is a plot showing precision against recall for object recognition and registration for a method in accordance with an embodiment of the present invention and a standard method;
Figure 13(a) shows the posterior distributions over 10 object classes for a standard Hough transform and figure 13(b) shows the corresponding results using a method in accordance with an embodiment of the present invention; and
Figure 14(a) shows the results of inference measurements for a standard Hough transform and figure 14(b) shows the corresponding results using a method in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION OF THE DRAWINGS
According to one embodiment, a method of locating an object is provided, the method, comprising: analysing data comprising a plurality of objects wherein each object exhibits a plurality of features, and extracting said features from said data; matching features stored in a database with those extracted from said data, and deriving a prediction of the object, wherein each feature extracted from the data provides a vote for at least one prediction; expressing the prediction to be analysed in a Hough space, wherein the objects to be analysed are described by n parameters and each parameter defines a dimension of the Hough space, where n is an integer of at least one; providing a constraint implemented by applying a higher weighting to votes which agree with votes from other features than those votes which do not agree with votes from other features; finding local maxima in the Hough space using the weighted votes; and identifying the predictions associated with the local maxima to locate the objects provided in the data.
The constraint may be provided by minimising the information entropy of p(y|X,ω,θ) with respect to θ, where y is the prediction of the object in the Hough space H, which is the space of all object predictions, X = {x_ij} are the votes cast in H by N features, where i indexes a feature and j indexes a vote from the i-th feature, ω = {ω_i} are weights attributed to the features and θ = {θ_ij} are weights attributed to the votes.
In an embodiment, θ may be given by:

θ = argmin_θ − Σ_i [ p(y_i|X,ω,θ) / q(y_i) ] ln p(y_i|X,ω,θ)

where p(A|B) is the posterior probability that A is observed given B, q(·) represents the sampling distribution from which the votes are drawn, and the Hough space is sampled at the locations Y = {y_i}.
In a further embodiment, θ is minimised conditioned on the current weights of all other votes and the process is repeated until convergence, wherein the vote weights for a feature f are updated by:

θ_fk' = 1,  θ_fj = 0  ∀ j ≠ k'

where

k' = argmin_{k=1..I_f} − Σ_i [ p_fk(y_i|X,ω,θ) / q(y_i) ] ln p_fk(y_i|X,ω,θ)

and:

p_fk(y|X,ω,θ) = ω_f K(x_fk, y) + Σ_{i≠f} ω_i Σ_{j=1}^{I_i} θ_ij K(x_ij, y)

In a yet further embodiment, k' is simplified by substitution using an iterated conditional modes (ICM) proxy to:

k' = argmax_{k=1..I_f} p_fk(x_fk|X,ω,θ)

In the above embodiments, the Hough space is sampled at the locations Y = {y_i}. These may be regularly spaced intervals. In a further embodiment, the Hough space is sampled only at the locations of the votes. In this embodiment, the above equations may be written such that:

θ = argmin_θ − Σ_{i=1}^{N} Σ_{j=1}^{I_i} [ p(x_ij|X,ω,θ) / q(x_ij) ] ln p(x_ij|X,ω,θ)

where p(A|B) is the posterior probability that A is observed given B and q(·) represents the sampling distribution from which the votes are drawn.
Here, again, the above equation is minimised conditioned on the current weights of all other votes and the process is repeated until convergence. In an embodiment, this may be achieved by updating the vote weights for a feature f by:

p_fk(y|X,ω,θ) = ω_f K(x_fk, y) + Σ_{i≠f} ω_i Σ_{j=1}^{I_i} θ_ij K(x_ij, y),

k' = argmin_{k=1..I_f} − Σ_{i=1}^{N} Σ_{j=1}^{I_i} [ p_fk(x_ij|X,ω,θ) / q(x_ij) ] ln p_fk(x_ij|X,ω,θ),

θ_fk' = 1,  θ_fj = 0  ∀ j ≠ k'.
In a further embodiment, k' is simplified by substitution to:

k' = argmax_{k=1..I_f} p_fk(x_fk|X,ω,θ)

In one embodiment, the weights are initially updated softly, i.e. the weights are not initially fixed to 0 or 1. This approach also helps to avoid ordering bias and in this way helps to avoid falling into a poor local minimum early on, thus improving the quality of the solution found.
To set initial vote weights for using the above method, {θ_ij} can be set to an initial set of values, for example those defined by a uniform distribution:

θ_ij = 1 / I_i

In one embodiment, an update rule can then be applied to each vote weight either synchronously or asynchronously, such as:

θ_ik = p(x_ik|X,ω,θ) / Σ_{j=1}^{I_i} p(x_ij|X,ω,θ)

or:

θ_fk = p_fk(x_fk|X,ω,θ) / Σ_{j=1}^{I_f} p_fj(x_fj|X,ω,θ)

where, for the feature f:

p_fk(y|X,ω,θ) = ω_f K(x_fk, y) + Σ_{i≠f} ω_i Σ_{j=1}^{I_i} θ_ij K(x_ij, y)

Successive updates may be performed using either of the above rules to obtain an initial estimate of θ. For example, 4 to 6 iterations may be performed.
In a yet further embodiment, the obtained values of θ are used directly in the Hough transform equation.
In one embodiment, the local maxima may be located by sampling the Hough space at predefined intervals. In a further embodiment, the local maxima are located by sampling the Hough space at the points where votes are cast.
In one embodiment, the above method is applied to identifying objects in an image or set of images, wherein the data to be analysed is image data and wherein the object is a physical object captured in the image.
In such an arrangement, the Hough space may be defined by at least 7 dimensions, wherein one dimension represents the ID of the object, 3 represent the translation of the object with respect to a common coordinate system and 3 represent the rotation of the object with respect to the common coordinate system. In a further embodiment, the Hough space is defined by 8 dimensions, where a dimension representing scale is added to the above 7 dimensions.
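By way of illustration only (the patent itself contains no code), a single vote in such a Hough space might be represented as follows. This Python sketch is an assumption made for clarity: the class and field names are invented, and axis-angle is just one of several possible rotation parameterisations.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class PoseVote:
    """One vote in an 8-dimensional Hough space: object identity, 3-D
    translation, 3-D rotation and, optionally, scale."""
    object_id: int             # discrete dimension: which object is predicted
    translation: np.ndarray    # shape (3,): position in the common coordinate system
    rotation: np.ndarray       # shape (3,): e.g. an axis-angle parameterisation
    scale: float = 1.0         # optional eighth dimension
    weight: float = 1.0        # the vote weight theta_ij discussed below

    def as_point(self) -> np.ndarray:
        """Continuous part of the vote as a 7-vector, for use with a kernel."""
        return np.concatenate([self.translation, self.rotation, [self.scale]])
```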
The Hough space may be defined by:

p(y|X,ω,θ) = Σ_{i=1}^{N} ω_i Σ_{j=1}^{I_i} θ_ij K(x_ij, y)

In one embodiment, θ is optimised by sampling the Hough space only at the location of the votes.
In addition to image processing, the method of the present invention can also be used in an optimised search strategy where it is configured to return a list of search results from a plurality of search criteria, wherein the objects to be located are the search results and the features which vote for the objects are the search criteria.
One example of this is where the search results relate to diseases from which a patient may suffer and the search criteria are the symptoms presented by the patient.
According to one embodiment, an apparatus for locating an object is provided, said apparatus comprising a processor, said processor being configured to: analyse data comprising a plurality of objects wherein each object comprises a plurality of features, and extract said features from said data; match features stored in a database with those extracted from said data, and derive a prediction of the object, wherein each feature extracted from the data provides a vote for at least one prediction; express the prediction to be analysed in a Hough space, wherein the objects to be analysed are described by n parameters and each parameter defines a dimension of the Hough space, where n is an integer of at least one; provide a constraint implemented by applying a higher weighting to votes which agree with votes from other features than those votes which do not agree with votes from other features; find local maxima in the Hough space using the weighted votes; and identify the predictions associated with the local maxima to locate the objects provided in the data.
Embodiments of the present invention can be implemented either in hardware or in software in a general purpose computer. Further embodiments of the present invention can be implemented in a combination of hardware and software. Embodiments of the present invention can also be implemented by a single processing apparatus or a distributed network of processing apparatus.
Since the embodiments of the present invention can be implemented by software, embodiments of the present invention encompass computer code provided to a general purpose computer on any suitable carrier medium. The carrier medium can comprise any storage medium such as a floppy disk, a CD ROM, a magnetic device or a programmable memory device, or any transient medium such as any signal e.g. an electrical, optical or microwave signal.
A system and method in accordance with a first embodiment will now be described.
Figure 1(a) is a point cloud of a scene comprising four objects 1, 3, 5 and 7. The point cloud is obtained using the apparatus described with reference to any of figures 2 to 4.
The point cloud comprises predicted points on a surface obtained by a 3-D imaging technique.
Methods in accordance with embodiments of the present invention allow recognition and registration of the objects shown in figure 1(a) as shown in Figure 1(b).
Figure 2 shows a possible system which can be used to capture the 3-D data. The system basically comprises a camera 35, an analysis unit 21 and a display 37.
The camera 35 is a standard video camera and can be moved by a user. In operation, the camera 35 is freely moved around an object which is to be imaged. The camera may be simply handheld. However, in further embodiments, the camera is mounted on a tripod or other mechanical support device.
The analysis unit 21 comprises a section for receiving camera data from camera 35.
The analysis unit 21 comprises a processor 23 which executes a program 25. Analysis unit 21 further comprises storage 27. The storage 27 stores data which is used by program 25 to analyse the data received from the camera 35. The analysis unit 21 further comprises an input module 31 and an output module 33. The input module 31 is connected to camera 35. The input module 31 may simply receive data directly from the camera 35 or alternatively, the input module 31 may receive camera data from an external storage medium or a network.
Connected to the output module 33 is a display 37. The display 37 is used for displaying captured 3D data generated from the camera data received by the camera 35.
Instead of a display 37, the output module 33 may output to a file or over the internet etc. In use, the analysis unit 21 receives camera data through input module 31. The program executed on processor 23 analyses the camera data using data stored in the storage 27 to produce 3D data and recognise the objects and their poses. The data is output via the output module 33 to display 37.
The display shows the 3-D data as it is gradually built up. The system will determine the depth of many points at once.
As the camera is moved around an object, more and more data is acquired. In this embodiment, as the data is acquired, it is continually processed in real-time and builds up the figure of an object on the screen.
Figure 3 is a schematic which is used to explain how the depth map is constructed using the system of figure 2. A camera 1 is moved between a first position 41, which will be referred to as the first image position, and a second position 43, which will be referred to as the further image position. At the first image position, in image I, a pixel p is shown. At pixel p, point x(Z) is shown on the object. The point x(Z) lies at a distance Z from a reference point. In this specific example, the reference point is the camera in the first position 41.
However, the reference point could be any point. The point x which is shown at pixel p lies along epi-polar line 45. From the data in 2D image I, it is impossible to judge the depth Z. However, the position of the line along which Z lies can be determined.
When camera 1 is moved to the second position 43, the image I' is captured. As it is known that point x lies along line 45, it is possible to project this line onto image space I' and therefore one skilled in the art will know that the point x on the object (not shown) will lie somewhere along the projected line 47 in image space I'.
The position of the projected line 47 can be determined once the positions of the camera at the first position 41 and the second position 43 are known. Further, as the images are captured by a continually moving video camera, the distance between position 41 and position 43 is very small. In order to provide a clear diagram, in figure 3, the difference between these two positions has been exaggerated. In reality, this difference is very small and therefore the pixel p at which point x is shown in the reference image will only move within a small area w from the image taken in the first position I to the image of the second position I'.
This area w, when projected onto the second image I' as w', means that only pixels which fall along the projection of epi-polar line 47 within the projected area w' need to be processed to look for similarity with the pixel p. A known matching algorithm is then performed to see if the pixels along line 47 match with pixel p. Correspondence scores can be evaluated using measures such as normalised cross correlation (NCC), sum of absolute differences (SAD) or another metric on w and w'.
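As a minimal sketch of this matching step (not the patent's implementation), the correspondence score along the projected epi-polar line could be computed with normalised cross-correlation as follows; the patch size, function names and image handling are assumptions made for this example.

```python
import numpy as np

def ncc(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Normalised cross-correlation between two equally sized image patches."""
    a = patch_a.astype(float).ravel()
    b = patch_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def score_along_epipolar_line(ref_patch, second_image, line_pixels, half=3):
    """Evaluate the matching score at each candidate pixel (u, v) lying on the
    projected epi-polar line; each candidate corresponds to a depth hypothesis."""
    scores = []
    for u, v in line_pixels:
        cand = second_image[v - half:v + half + 1, u - half:u + half + 1]
        scores.append(ncc(ref_patch, cand) if cand.shape == ref_patch.shape else -1.0)
    return np.array(scores)
```

SAD or another metric could be substituted for the ncc function without changing the surrounding logic.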
A plot of the matching score or similarity score is shown in figure 4a for distances between Zmin and Zmax. It has been found that in well textured scenes, the correct depth is typically very close to a local maximum of the matching score. Therefore, just the local maxima, which will be denoted as x1, ..., xN, are considered from hereon.
The distance Z can be projected onto the second image I'. The first approximation of the distance Z will be based on some information concerning the general size of the object.
As the system is operating, the camera will then move to a third position (not shown in figure 3). In the third position, the same analysis can be performed and a similarity score can be computed in the same manner as described with reference to figure 3 for position 43.
The two similarity scores can then be added together. The scores for both further images are represented in terms of Z along the epi-polar line 45. In figure 4b, the similarity scores from six frames are added together. When there are only very few images taken, as is the case of figure 4b, the summed score is heavily multimodal. This is due to various pathologies in the problem domain such as occlusion, time warping, repetitive texture etc. In figure 4c, the similarity scores from 15 frames are added and in figure 4d, the similarity scores from 60 frames are added. As more and more images are added, the multimodal histogram which is initially shown in figure 4b is seen to become more unimodal in character as shown in figure 4d. Here, the data converges to a clearly defined peak with a significant percentage of uncorrelated outlier data points. The matching score maxima for each incoming video frame reinforce each other, gradually removing the ambiguities in pixel depth.
The above has assumed that the object is stationary and that the camera is moving.
However, it is possible for the camera to be fixed and for the object to be moving, for example on an assembly line or the like.
Other systems may be used to capture 3D image data, for example, systems built on photometric stereo principles where an object is illuminated from three different directions. The system is configured such that the image data captured for the illumination from the three different directions can be isolated. This may be done by either temporally separating the illumination by the three light sources or by using light sources which are capable of emitting radiation of three different colours. For example, the colours red, green and blue may be selected as it is possible to obtain video cameras which can distinguish between these three colours. However, it is possible to use any three lights which emit colours which can be distinguished by a video camera. It is also possible to use lights which emit radiation in the non-optical radiation bands. The exact shade of colour or frequency of radiation chosen is dependent on the video camera. In one embodiment, the lights are projectors and filters are provided so that the scene is illuminated with radiation of a particular colour from each projector. In a further embodiment, LEDs are used to illuminate the object.
The above has suggested a technique of capturing 3D object data using multi-view stereo or photometric stereo techniques. However, other methods are possible such as LIDAR sensors, time of flight sensors and active lighting depth sensors, as well as CAT scanners and MRI scanners.
Next, a method for detection of the objects and their poses in the captured 3D data of the scene will be described.
Before object recognition can be performed, the system needs to be trained in order to store information concerning likely objects to be recognised. This will be described with reference to figure 5.
First, in step S401, an object or objects will be imaged using an apparatus similar to those described with reference to figures 2 and 3 or other system suitable for capturing 3D data.
In this embodiment, a coordinate system is assigned for each object. In one embodiment, the origin of the system is at the center of the object, the directions of the axes of the system correspond to the orientation of the object, and one unit length in this system is equal to the scale of the object. The system is specified by a single 4x4 similarity transformation matrix, which transforms a point from the global coordinate system to the local coordinate system.
Features are extracted from the object. The features are spherical regions which are easily identified. An example of a feature is shown in figure 6.
How to identify features is known and will not be discussed further here. In this embodiment, a local coordinate system will be set for each feature. The origin of the system is at the feature's centre, the directions of the axes correspond to the feature's canonical orientation, and one unit length in the system is equal to the feature's radius.
Again, the system is specified by a 4x4 transformation matrix, which transforms a point from the global coordinate system to the coordinate system of the feature. Within the feature's coordinate system, 31 points at prefixed locations close to the origin are sampled, creating a 31-dimensional descriptor vector. The tuple of (region center, region radius, orientation, descriptor) forms a feature and this is stored in step S405.
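A minimal sketch of building such a 31-dimensional descriptor, assuming NumPy and a caller-supplied volume_sampler function. The layout of the 31 prefixed sample points, and all names here, are illustrative assumptions rather than details taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
# 31 fixed sample offsets near the origin of the feature's coordinate system.
# The actual layout of these prefixed locations is not specified here; a fixed
# random pattern is used purely as a stand-in.
SAMPLE_OFFSETS = rng.uniform(-1.0, 1.0, size=(31, 3))

def feature_descriptor(volume_sampler, global_to_feature: np.ndarray) -> np.ndarray:
    """Build a 31-dimensional descriptor by sampling the scene at 31 fixed
    locations expressed in the feature's local coordinate system.

    global_to_feature : 4x4 matrix mapping global coordinates into the
                        feature's frame (as described above)
    volume_sampler    : callable returning one scalar per (x, y, z) row"""
    feature_to_global = np.linalg.inv(global_to_feature)
    homog = np.hstack([SAMPLE_OFFSETS, np.ones((31, 1))])        # (31, 4)
    global_pts = (feature_to_global @ homog.T).T[:, :3]          # (31, 3)
    return np.asarray(volume_sampler(global_pts), dtype=float)
```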
Thus, for each feature in the database both the transformation matrix of the feature's local coordinate system and that of the local coordinate system of the object associated with it are known. If the transform matrix for the feature is F1 and the transform matrix for the object is M1, then multiplying M1 with the inverse of F1, i.e. computing T = M1 · (F1)^(-1), gives the transformation matrix T which transforms a point from the feature's local coordinate system to the associated object's local coordinate system.
The matrix T is unchanged when the object is transformed by scaling, translation, and rotation. The above process is repeated for all objects specified in the scene. For example, figure 7(b) shows the features 63 which have been assigned to the object 61 of figure 7(a).
During operation, which will be described with reference to figure 8, an image will be captured as explained with reference to figures 2 and 3. Features will then be extracted from this image in step S411. The features will be described as explained above with reference to figure 5. If there is a match between a descriptor of a feature in the database and that of a feature extracted from the image, then a prediction is generated.
In an embodiment, there is a match between two descriptors if their Euclidean distance is below a threshold. Once there is a match between a feature extracted from the image and a feature in the database, a prediction is generated in step S415. The prediction is a hypothesis of what object is being recognised and where it is located.
In an embodiment, when a feature in the scene is matched, only the transformation matrix of the feature's local coordinate system is known. When two features are matched, it is assumed that the transformation matrix that transforms a point from the local coordinate system of the feature from the test scene to the local coordinate system of the predicted object is the same as T. Therefore, if the transformation matrix for the matched feature from the global coordinate system is F2, the transformation matrix representing the predicted object's local coordinate system is then given by multiplying T with F2, i.e. M2 = T · F2. M2 then gives the scale, the centre point, and the orientation of the predicted object pose.
In summary, by matching two descriptors, two corresponding regions are deemed to have the same shape. As the object's identity, location, scale, and orientation in the feature from the database are known, the object can be transformed (by scaling, translating, and rotating) so that the feature from the database is moved, scaled and rotated to the same place as the feature from the scene. This is then used to predict that this object, after being transformed, is present in the scene.
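A minimal sketch of the two matrix products described above, assuming all poses are stored as 4x4 matrices that map global coordinates into the respective local frames; the function names are illustrative.

```python
import numpy as np

def feature_to_object_transform(M1: np.ndarray, F1: np.ndarray) -> np.ndarray:
    """T = M1 . inv(F1): maps a point from the database feature's local frame
    into the associated object's local frame.  T is unchanged when the object
    undergoes any similarity transform (scaling, translation, rotation)."""
    return M1 @ np.linalg.inv(F1)

def predict_object_pose(T: np.ndarray, F2: np.ndarray) -> np.ndarray:
    """M2 = T . F2: the predicted pose (global frame to object local frame)
    implied by matching a scene feature with pose matrix F2 against the
    database feature that produced T."""
    return T @ F2
```

The scale, centre point and orientation of the predicted object can then be read off M2, exactly as described above.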
The above method results in many predictions. The above method is just one way of enabling a feature-to-vote conversion process, i.e. the first stage of the process.
However, many other feature-to-vote conversion processes could be used.
The second stage of the Hough transform may be considered to be a discriminative model of the posterior distribution of an object's location, y, in a Hough space, H, which is the space of all object poses (usually real) and, in the case of object recognition tasks, object classes (discrete).
The model is a non-parametric kernel density estimate based on the votes, X = {x_ij}, cast in H by N features, thus:

p(y|X,ω,θ) = Σ_{i=1}^{N} ω_i Σ_{j=1}^{I_i} θ_ij K(x_ij, y),   (1)

where I_i is the number of votes generated by the i-th feature, K(·,·) is a density kernel in Hough space which allows a blob to be formed centred around the point corresponding to a prediction in Hough space, and ω = {ω_i}_{i=1}^{N} and θ = {θ_ij} are feature and vote weights respectively, such that ω_i, θ_ij ≥ 0, Σ_{i=1}^{N} ω_i = 1 and:

Σ_{j=1}^{I_i} θ_ij = 1,  ∀ i ∈ {1,..,N}.   (2)

For example, in the original Hough transform used for line detection, the features are edgels and votes are generated for a discrete set of lines (parameterized by angle) passing through each edgel; the kernel, K(·,·), returns 1 for the nearest point in the discretized Hough space to the input vote and 0 otherwise, and the weights ω and θ are set to uniform distributions in the standard Hough transform.
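As an illustrative sketch of equation (1) (not the patent's implementation), the density can be evaluated at any point y in Hough space as follows; the isotropic Gaussian kernel and the bandwidth parameter are assumptions made for this example.

```python
import numpy as np

def hough_density(y, votes, omega, theta, bandwidth=1.0):
    """Evaluate the kernel density of equation (1) at a point y in Hough space.

    votes[i]  : (I_i, d) array of the votes x_ij cast by feature i
    omega[i]  : scalar feature weight omega_i
    theta[i]  : (I_i,) array of vote weights theta_ij
    An isotropic Gaussian kernel is used purely for illustration."""
    y = np.asarray(y, dtype=float)
    p = 0.0
    for x_i, w_i, th_i in zip(votes, omega, theta):
        sq_dist = np.sum((np.asarray(x_i, dtype=float) - y) ** 2, axis=1)
        kernel = np.exp(-0.5 * sq_dist / bandwidth ** 2)
        p += w_i * float(np.dot(th_i, kernel))
    return p
```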
The final stage of the Hough transform involves finding, using non-maxima suppression, the modes of this distribution whose probabilities are above a certain threshold value, t.
Finding the modes in H involves sampling that space, the volume of which increases exponentially with its dimensionality, d.
The summing of votes in the above Hough Transform can enable incorrect votes to generate significant modes in H. In a method in accordance with an embodiment, an assumption is made that only one vote per feature is correct. Further, in this method, a vote that is believed to be correct should explain away the other votes from that feature in step S417.
Here, rather than being given θ a priori, it is optimized over its possible values, giving those votes which agree with votes from other features more weight than those which do not.
In one embodiment this is achieved by minimizing the information entropy of p(y|X,ω,θ) with respect to θ. A lower entropy distribution contains less information, making it more peaky and hence having more votes in agreement. Since information in Hough space is the location of objects, minimizing entropy constrains features to be generated by as few objects as possible. This can be viewed as enforcing Occam's razor.
In this particular embodiment, the Shannon entropy, H, is minimised:

H = E[ −ln p(x) ] = −∫ p(x) ln p(x) dx.
Since computing entropy involves an integration over Hough space (here, very large), importance sampling is used to make this integration tractable.
In an embodiment, entropy is minimized whilst only sampling at the location of votes.
In this case the value of θ is given by:

θ = argmin_θ − Σ_{i=1}^{N} Σ_{j=1}^{I_i} [ p(x_ij|X,ω,θ) / q(x_ij) ] ln p(x_ij|X,ω,θ)   (4)

When determining θ according to equation (4), in an embodiment an optimization framework is used. Here, since p(y|X,ω,θ) is a linear function of θ, and −x ln x is concave, as is a sum of concave functions, the cost function of equation (4) is concave.
Its minimum therefore lies at an extremum of the parameter space, which is constrained by equation (2), such that the optimal value of θ_i (i.e. the vector of feature i's vote weights) must be an all-0 vector, except for one 1.
The search space for each θ_i is therefore a discrete set of I_i possible vectors, making the total number of possible solutions ∏_{i=1}^{N} I_i. It should be noted that this search space is not uni-modal: for example, if there are only two features and they each identically generate two votes, one for location y and one for location z, then both y and z will be modes. Furthermore, as the search space is exponential in the number of features, an exhaustive search is infeasible for all but the smallest problems.
In a further embodiment, a local approach, iterated conditional modes (ICM), is used to quickly find a local minimum of this optimization problem. This involves updating the vote weights of each feature in turn, by minimizing equation (4) conditioned on the current weights of all other votes, and repeating this process until convergence. The update equations for the vote weights of a feature f are as follows:

p_fk(y|X,ω,θ) = ω_f K(x_fk, y) + Σ_{i≠f} ω_i Σ_{j=1}^{I_i} θ_ij K(x_ij, y),   (5)

k' = argmin_{k=1..I_f} − Σ_{i=1}^{N} Σ_{j=1}^{I_i} [ p_fk(x_ij|X,ω,θ) / q(x_ij) ] ln p_fk(x_ij|X,ω,θ),   (6)

θ_fk' = 1,  θ_fj = 0  ∀ j ≠ k'.   (7)

However, since this update not only involves q(·), which is unknown, but is also relatively costly to compute, in an embodiment it is replaced with a simpler proxy which in practice performs a similar job:

k' = argmax_{k=1..I_f} p_fk(x_fk|X,ω,θ).   (8)

In the above embodiment, the entropy is minimised while only sampling at the location of the votes. However, in a further embodiment, the Hough space is sampled at the locations Y = {y_i}. The value of θ is therefore given as:

θ = argmin_θ − Σ_i [ p(y_i|X,ω,θ) / q(y_i) ] ln p(y_i|X,ω,θ)   (9)

where q(·) is the sampling distribution from which the votes are drawn. Once this optimization (described below) is done, the estimated θ is applied to equation (1) in step S419, and inference continues as per the standard Hough transform.
The cost function above is minimized by updating the vote weights of each feature in turn, minimizing the equation conditioned on the current weights of all other votes, and repeating this process a number of times, possibly until convergence. The correct update for the vote weights of a feature f is as follows:

θ_fk' = 1,  θ_fj = 0  ∀ j ≠ k'   (10)

where

k' = argmin_{k=1..I_f} − Σ_i [ p_fk(y_i|X,ω,θ) / q(y_i) ] ln p_fk(y_i|X,ω,θ)   (11)

The ICM proxy update equation, which can be used in place of the above equation, is:

k' = argmax_{k=1..I_f} p_fk(x_fk|X,ω,θ)   (12)

Using the above methods, which will be referred to as "minimum-entropy Hough transforms", detection precision may be increased.
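A minimal NumPy sketch of this ICM optimisation using the proxy of equation (8) is given below; the isotropic Gaussian kernel, the fixed number of sweeps and all names are assumptions, and the feature weights ω are taken as given rather than learned.

```python
import numpy as np

def gaussian_kernel(a, b, bandwidth=1.0):
    """Isotropic Gaussian kernel, used here as a stand-in for K(., .)."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return float(np.exp(-0.5 * np.dot(d, d) / bandwidth ** 2))

def icm_min_entropy_weights(votes, omega, bandwidth=1.0, n_sweeps=5):
    """ICM with the argmax proxy of equation (8): in turn, each feature f has
    its weight vector theta_f set to select the single vote that best agrees
    with the current weighted votes of all the other features.

    votes[i] is a list/array of feature i's votes x_ij; omega[i] is its weight."""
    theta = [np.full(len(x_f), 1.0 / len(x_f)) for x_f in votes]   # uniform start
    for _ in range(n_sweeps):
        for f, x_f in enumerate(votes):
            scores = []
            for x_fk in x_f:
                # density at x_fk contributed by all other features; this equals
                # p_fk(x_fk|X, omega, theta) of equation (5) up to the constant
                # self term omega_f * K(x_fk, x_fk), which does not affect argmax
                s = sum(omega[i] * float(np.dot(theta[i],
                        [gaussian_kernel(x_ij, x_fk, bandwidth) for x_ij in x_i]))
                        for i, x_i in enumerate(votes) if i != f)
                scores.append(s)
            k_best = int(np.argmax(scores))
            theta[f] = np.zeros(len(x_f))
            theta[f][k_best] = 1.0       # equation (7): one vote per feature
    return theta
```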
In one embodiment, the weights are initially updated softly, i.e. the weights are not initially fixed to 0 or 1. This approach helps to avoid ordering bias and in this way helps to avoid falling into a poor local minimum early on, thus improving the quality of solution found.
Since the optimization is local, a good initialization of θ is helpful to reach a good minimum.
There are various methods which can be used for initializing vote weights in accordance with embodiments of the present invention.
In one embodiment, θ can be set to an initial set of values, for example, those defined by a uniform distribution:
θ_ij = 1 / I_i   (13)
Next, an update rule can be applied to each vote weight either synchronously or asynchronously. Such an update rule may be applied a number of times, for example, 5 times.
In one embodiment, the update rule:

θ_ik = p(x_ik|X,ω,θ) / Σ_{j=1}^{I_i} p(x_ij|X,ω,θ)   (14)

was applied.
In another embodiment, the value of θ used in the standard Hough transform is used initially, then the following update was applied to each vote weight simultaneously:

θ_fk = p_fk(x_fk|X,ω,θ) / Σ_{j=1}^{I_f} p_fj(x_fj|X,ω,θ)   (15)

where p_fk for the feature f is:

p_fk(y|X,ω,θ) = ω_f K(x_fk, y) + Σ_{i≠f} ω_i Σ_{j=1}^{I_i} θ_ij K(x_ij, y)   (16)

Successive updates may be performed using either of the above rules to obtain an initial estimate of θ. In a further embodiment, these initial values of θ are then used before optimisation takes place using the ICM method described with reference to equations (6), (7), (8) or (10), (11) and (12). In a yet further embodiment, the obtained values of θ are used directly in the Hough transform of equation (1) and the above ICM method is not used.
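A sketch of this soft initialisation, reusing the hough_density helper from the sketch after equation (1); the synchronous update schedule and the number of iterations are assumptions in line with the ranges mentioned above.

```python
import numpy as np

def soft_initialise_theta(votes, omega, bandwidth=1.0, n_updates=5):
    """Soft initialisation of the vote weights: start from the uniform weights
    of equation (13) and repeatedly renormalise each feature's weights in
    proportion to the density at its own votes (in the spirit of equation (14)),
    without committing to a hard 0/1 assignment."""
    theta = [np.full(len(x_f), 1.0 / len(x_f)) for x_f in votes]
    for _ in range(n_updates):
        new_theta = []
        for x_f in votes:
            dens = np.array([hough_density(x_fk, votes, omega, theta, bandwidth)
                             for x_fk in x_f])
            total = dens.sum()
            new_theta.append(dens / total if total > 0
                             else np.full(len(x_f), 1.0 / len(x_f)))
        theta = new_theta                 # synchronous update of every feature
    return theta
```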
Although the Hough space increases exponentially with its dimensionality, the number of votes generated in applications using the Hough transform generally does not, implying that higher dimensional Hough spaces are often sparser. This sparsity is exploited by sampling the Hough space only at locations where the probability (given by equation (1)) is likely to be non-zero: at the locations of the votes themselves. By sampling only at the known locations of the votes (a technique which will be referred to as "the intrinsic Hough transform", since the votes define the distribution), the memory requirements of the Hough transform are changed from O(k^d), (k > 1), to O(n), where n is the number of votes, making it feasible for high-dimensional Hough spaces such as those used for a 3D object registration application.
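An illustrative sketch of this "intrinsic" sampling combined with a simple greedy non-maxima suppression, again reusing the hough_density helper; the suppression radius of one kernel bandwidth is an assumption made for this example, not a rule stated in the text.

```python
import numpy as np

def intrinsic_hough_modes(votes, omega, theta, bandwidth=1.0, threshold=0.0):
    """Evaluate the density of equation (1) only at the vote locations and pick
    out local maxima with a greedy non-maxima suppression, so memory grows with
    the number of votes rather than with the volume of Hough space."""
    points = np.vstack([np.asarray(x_f, dtype=float) for x_f in votes])
    scores = np.array([hough_density(p, votes, omega, theta, bandwidth)
                       for p in points])
    kept = []
    for idx in np.argsort(-scores):
        if scores[idx] < threshold:
            break
        # keep this point only if no stronger kept point lies within one bandwidth
        if all(np.linalg.norm(points[idx] - points[j]) > bandwidth for j in kept):
            kept.append(idx)
    return points[kept], scores[kept]
```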
The minimum-entropy Hough transform explains away incorrect votes, substantially reducing the number of modes in the posterior distribution of class and pose, and improving precision. The following experiments demonstrate that these contributions make the Hough transform not only tractable but also highly accurate for the example application.
To demonstrate the above, an experiment was performed using experimental data consisting of 12 shape classes, for which there was both a physical object and matching CAD model.
The geometry of each object as shown in figure 9(a) was captured in the form of point clouds as shown in figure 9(b) 20 times from a variety of angles. Along with the class label, every shape instance has an associated ground truth pose, computed by first approximately registering the relevant CAD model to the point cloud manually, then using the Iterative Closest Point algorithm to refine the registration.
Given a test point cloud and a set of training point clouds (with known class and pose), the computation of the input pose votes X is a two stage process. In the first stage, local shape features, consisting of a descriptor and a scale, translation and rotation relative to the object, are computed on all the point clouds as shown in figure 9(c). This is done by first converting a point cloud to a 128³ voxel volume using a Gaussian on the distance of each voxel centre to the nearest point. Then interest points are localized in the volume across 3D location and scale using the Difference of Gaussians operator, and a canonical orientation for each interest point computed, to generate a local feature pose. Finally a basic, 31-dimensional descriptor is computed by simply sampling the volume (at the correct scale) at 31 regularly distributed locations around the interest point.
In the second stage each test feature is matched to the 20 nearest training features, in terms of Euclidean distance between descriptors. Each of these matches generates a vote as shown in figure 9(d), and figure 9(e) shows the registered CAD model.
12 classes were used in the evaluation as shown in figure 10; these include a bearing, a block, a bracket, a car, a cog, a flange, a knob, a pipe and two pistons.
Quantitative results are given in tables 1 & 2 and figure 11.
Table 1:
               Mean Shift   Minimum Entropy Hough
Recognition    64.9%        98.5%
Registration   68.3%        74.6%
Time           1.62         1.59
Table 2:
               Bearing  Block  Bracket  Car  Cog  Flange  Knob  Pipe  Piston 1  Piston 2
Min Ent Hough  83       20     98       91   100  36      91    89    54        84
Mean Shift     77       13     95       75   100  41      88    86    44        64

There is an increase in performance in both registration and recognition moving from the established mean shift technique to the above described technique, which will be referred to as the minimum-entropy Hough transform. It shows a significantly improved registration rate, and a hugely improved recognition rate over mean shift (a 96% reduction in misclassifications); only 1.5% of objects are left unrecognized, the majority of those in the car class.
However, because these results only reflect the best detection per test, they do not tell the whole story. It is not possible to tell from the above results how many other (incorrect) detections had competitive weights. To see this, the precision-recall curves shown in figure 12 are generated by varying the detection threshold, t. A correct detection in this test required the class and pose to be correct simultaneously, and allowed only one correct detection per test. The curves show that precision remains high as recall increases for the minimum entropy Hough transform which is able to explain away incorrect votes.
In terms of computation time (table 1), the two methods tested were of a similar speed.
The benefit of explaining away incorrect votes is demonstrated in figures 13(a) and 13(b). While the standard Hough transform shows a great deal of ambiguity as to where and how many objects there are, the minimum entropy Hough transform is able to clear away the "mist" of incorrect votes, leaving six distinct modes corresponding to the objects present; there are some other modes, but these are much less significant, corroborating the results seen in figure 12.
The benefit of having correct and clearly defined modes is demonstrated in figures 14(a) and 14(b), using the same point cloud as in figures 13(a) and 13(b), a challenging dataset containing three pairs of touching objects. The minimum-entropy Hough transform finds all six objects in the top six detections (though it mis-registers the piston lying at a shallow angle), whereas the other method finds not only incorrect objects, but also multiple instances of correct objects (particularly the piston on the cog).
The above explanation has concentrated on the use of the method for image processing and specifically the recognition and/or registration of physical objects in an image.
However, methods in accordance with embodiments of the present invention can also be used to recognise and/or register data objects in order to provide an efficient method of searching a database.
For example, suppose there are two database lists X = {x_i} and Y = {y_j}, and a data structure Z = {z_ij}, where z_ij = 1 indicates that x_i can vote for y_j, and z_ij = 0 otherwise.
Given a list of observed x_i's, it is possible to use the above method, using the minimum entropy Hough transform, to estimate the minimal list of y_j's present. To do this, each x_i can be handled as a feature in the above method and each y_j as an object. A feature can vote for an object in the same way as described above for image processing. A vote weight can then be applied to each vote and an assumption can be made that each feature can only have one correct vote. This condition is then imposed by calculating the minimum entropy with respect to the applied vote weights and using these vote weights in a Hough transform.
As a practical example of this: the list X is a list of all the possible symptoms of disease that a person can have; the list Y is a list of all the possible diseases a person could have; and Z indicates which disease causes which symptoms.
Then, given a list of symptoms (x_i's) from a real patient (the features), Z is used to generate a list of votes for the elements in Y (the Hough space). The minimum entropy Hough transform is used to generate the smallest list of y_j's (diseases) that could plausibly have caused those x_i's (symptoms).
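A toy illustration of this search use case follows; the symptom and disease names are invented purely for the example, and the final minimum-entropy weighting step is only described in a comment rather than implemented.

```python
import numpy as np

# Toy instance of the database-search use case; names are invented.
symptoms = ["fever", "cough", "rash", "headache"]          # the list X
diseases = ["flu", "measles", "migraine"]                  # the list Y
Z = np.array([[1, 1, 0],    # fever    can vote for flu, measles
              [1, 0, 0],    # cough    can vote for flu
              [0, 1, 0],    # rash     can vote for measles
              [1, 0, 1]])   # headache can vote for flu, migraine

observed = ["fever", "cough", "headache"]                  # the patient's symptoms

# Each observed symptom (a feature) casts one vote per compatible disease.
votes = {s: [diseases[j] for j in np.flatnonzero(Z[symptoms.index(s)])]
         for s in observed}
print(votes)   # {'fever': ['flu', 'measles'], 'cough': ['flu'], 'headache': ['flu', 'migraine']}

# The minimum-entropy weighting would then keep, for each symptom, the single
# vote that agrees best with the other symptoms' votes: here the weight is
# concentrated on 'flu', and 'measles' and 'migraine' are explained away.
```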
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of methods and systems described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms of modifications as would fall within the scope and spirit of the inventions.
GB1114617.2A 2011-08-23 2011-08-23 Object location method and system Active GB2496834B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB1114617.2A GB2496834B (en) 2011-08-23 2011-08-23 Object location method and system
US13/408,479 US8761472B2 (en) 2011-08-23 2012-02-29 Object location method and system
JP2012184227A JP5509278B2 (en) 2011-08-23 2012-08-23 Subject position determination method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1114617.2A GB2496834B (en) 2011-08-23 2011-08-23 Object location method and system

Publications (3)

Publication Number Publication Date
GB201114617D0 GB201114617D0 (en) 2011-10-05
GB2496834A true GB2496834A (en) 2013-05-29
GB2496834B GB2496834B (en) 2015-07-22

Family

ID=44800813

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1114617.2A Active GB2496834B (en) 2011-08-23 2011-08-23 Object location method and system

Country Status (3)

Country Link
US (1) US8761472B2 (en)
JP (1) JP5509278B2 (en)
GB (1) GB2496834B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9002719B2 (en) 2012-10-08 2015-04-07 State Farm Mutual Automobile Insurance Company Device and method for building claim assessment
US8818572B1 (en) 2013-03-15 2014-08-26 State Farm Mutual Automobile Insurance Company System and method for controlling a remote aerial device for up-close inspection
US9082015B2 (en) 2013-03-15 2015-07-14 State Farm Mutual Automobile Insurance Company Automatic building assessment
US8872818B2 (en) 2013-03-15 2014-10-28 State Farm Mutual Automobile Insurance Company Methods and systems for capturing the condition of a physical structure
US8756085B1 (en) * 2013-03-15 2014-06-17 State Farm Mutual Automobile Insurance Company Systems and methods for assessing property damage
GB2523776B (en) * 2014-03-04 2018-08-01 Toshiba Res Europe Limited Methods for 3D object recognition and pose determination
US10176527B1 (en) 2016-04-27 2019-01-08 State Farm Mutual Automobile Insurance Company Providing shade for optical detection of structural features
CN113065546B (en) * 2021-02-25 2022-08-12 湖南大学 Target pose estimation method and system based on attention mechanism and Hough voting

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5638465A (en) * 1994-06-14 1997-06-10 Nippon Telegraph And Telephone Corporation Image inspection/recognition method, method of generating reference data for use therein, and apparatuses therefor
WO2007072391A2 (en) * 2005-12-22 2007-06-28 Koninklijke Philips Electronics N.V. Automatic 3-d object detection

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8724865B2 (en) * 2001-11-07 2014-05-13 Medical Metrics, Inc. Method, computer software, and system for tracking, stabilizing, and reporting motion between vertebrae
JP3914864B2 (en) 2001-12-13 2007-05-16 株式会社東芝 Pattern recognition apparatus and method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5638465A (en) * 1994-06-14 1997-06-10 Nippon Telegraph And Telephone Corporation Image inspection/recognition method, method of generating reference data for use therein, and apparatuses therefor
WO2007072391A2 (en) * 2005-12-22 2007-06-28 Koninklijke Philips Electronics N.V. Automatic 3-d object detection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Pattern Recognition Vol. 13 No. 2, 1981, pp 111-122, Pergamon Press. "Generalizing the Hough transform to detect arbitrary shapes", D H Ballard *

Also Published As

Publication number Publication date
JP2013045468A (en) 2013-03-04
JP5509278B2 (en) 2014-06-04
GB2496834B (en) 2015-07-22
GB201114617D0 (en) 2011-10-05
US20130051639A1 (en) 2013-02-28
US8761472B2 (en) 2014-06-24

Similar Documents

Publication Publication Date Title
US8761472B2 (en) Object location method and system
US9008439B2 (en) Image processing method and system
Woodford et al. Demisting the Hough transform for 3D shape recognition and registration
JP5705147B2 (en) Representing 3D objects or objects using descriptors
US10554957B2 (en) Learning-based matching for active stereo systems
EP2385483B1 (en) Recognition and pose determination of 3D objects in 3D scenes using geometric point pair descriptors and the generalized Hough Transform
CN108229347B (en) Method and apparatus for deep replacement of quasi-Gibbs structure sampling for human recognition
CN112036339B (en) Face detection method and device and electronic equipment
Meger et al. Explicit Occlusion Reasoning for 3D Object Detection.
Li et al. Hierarchical semantic parsing for object pose estimation in densely cluttered scenes
Vaskevicius et al. The jacobs robotics approach to object recognition and localization in the context of the icra'11 solutions in perception challenge
Marton et al. Probabilistic categorization of kitchen objects in table settings with a composite sensor
Wang et al. Textured/textureless object recognition and pose estimation using RGB-D image
Jiang et al. Triangulate geometric constraint combined with visual-flow fusion network for accurate 6DoF pose estimation
Guo et al. A hybrid framework based on warped hierarchical tree for pose estimation of texture-less objects
Zhang et al. A New Inlier Identification Scheme for Robust Estimation Problems.
Alhwarin Fast and robust image feature matching methods for computer vision applications
Seib et al. Object class and instance recognition on rgb-d data
Lin et al. 6D object pose estimation with pairwise compatible geometric features
CN116704587B (en) Multi-person head pose estimation method and system integrating texture information and depth information
Yang et al. SIFT saliency analysis for matching repetitive structures
Goldmann et al. Robust face detection based on components and their topology
Zhifeng Human Body Tracking Method Based on Deep Learning Object Detection
Danciu Method proposal for blob separation in segmented images
Yousif et al. ROS2D: Image feature detector using rank order statistics