GB2203877A - Shape parametrisation - Google Patents

Shape parametrisation

Info

Publication number
GB2203877A
Authority
GB
United Kingdom
Prior art keywords
data
sinogram
value
point
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB08622497A
Other versions
GB8622497D0 (en)
Inventor
Violet Frances Leavers
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to GB08622497A priority Critical patent/GB2203877A/en
Publication of GB8622497D0 publication Critical patent/GB8622497D0/en
Priority to EP87906194A priority patent/EP0293397A1/en
Priority to PCT/GB1987/000649 priority patent/WO1988002158A1/en
Publication of GB2203877A publication Critical patent/GB2203877A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/48Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Digital image data is transformed from an image space into a parametric transform space. Each point Pi is transformed to a sine curve Ci representing the angles (θ) and radii (r) of the normals to all possible lines passing through the point Pi. Straight line segments (P1 to P3) in the image space are detected by convolving the transform space with a mask which detects butterfly-shaped distributions of data around maxima (I) in the transform space. Curved segments in the image space are detected by convolving with two masks which detect pairs of ridges at the edges of belts of data in the transform space. The basic parameters of the segments can be determined from the locations of the maxima or belt edges in the transform space and can be efficiently stored or can be compared with a library of stored data.

Description

Shape Parametrisation

This invention relates to a method of and apparatus for shape parametrisation of digital data.
US patent specification No 3069654 describes a method of finding the parameters of straight lines in an image. Each point in the image space is mapped into a gradient, intercept (m, c) parametric transform space to produce lines representing all possible gradients and intercepts of lines passing through that point. Thus, a point (xi, yi) in the image space is mapped into a line satisfying the equation c = yi - m xi in the parametric transform space. A maximum, or intersection, in the transform space is detected and determined to represent a line in the image space, since an intersection at (m1, c1) of a plurality of lines in the transform space denotes a corresponding plurality of colinear points in the image space all lying on the line having the equation y = m1 x + c1. However, when the image space contains a plurality of lines, additional spurious maxima or intersections are produced in the transform space and are erroneously determined to represent lines in the image space. Furthermore, using the method described above, it is not possible to determine whether a maximum in the transform space represents a single line segment or a plurality of disconnected colinear points in the image space. A further practical problem with the method described above is that the m and c parameters are unbounded, and therefore some groups of points in the image space cannot be represented in a bounded transform space.
In view of the last-mentioned disadvantage of the method described above, it is known to map points in the image space into an angle, radius (θ, r) normal parametrisation transform space or "sinogram", rather than an (m, c) transform space. Each point in the image space is mapped into a sine curve in the sinogram representing r and θ for all possible lines in the image space passing through the point, where r is the algebraic distance from the origin to the line along a normal to the line and θ is the angle of the normal to the x-axis. Typically, the limits of θ are -π < θ ≤ π, in which case for a bounded square image space of size L x L the limits of r are -√2 L ≤ r ≤ √2 L. However, this development of the first-mentioned method still suffers from the problems of spurious maxima and disconnected colinearities.
The prior art may be classified as providing a method of shape parametrisation of digital image data comprising the steps of: transforming the image data into a parametric transform space; and extracting shape characterising parameters from the transform space indicative of a shape in the image represented by the image data.
The present invention seeks to overcome the problems of spurious maxima and disconnected colinearities associated with the prior art.
The method provided by a first aspect of the present invention is characterised in that the extraction step includes the step of detecting at least one particular shape indicative distribution of data in the transform space.
Due to the problems associated with the prior art, it was virtually impossible to provide automatic shape parametrisation of the image data even when the data represented very simple shapes, and prior knowledge of the shape was needed, together with human intervention to select those of the shape characterising parameters representing a "proper" shape. However, the first aspect of the present invention provides a more reliable method which can be performed fully automatically for more complex shapes than has been possible using the prior art methods.
Preferred features of the method and other aspects of the invention are set out in the claims.
There follows a description by way of example of specific embodiments of the present invention, reference being made to the accompanying drawings, in which:

Figure 1 is a schematic diagram of an apparatus of one embodiment of the invention;
Figure 2 is a perspective view of an object, the processing of an image of which is described below;
Figures 3 to 5 are representations of the image of the object after various processing operations;
Figures 6 and 7 illustrate the mapping of a single point and three colinear points, respectively, from an image space to a sinogram;
Figures 8 and 9 are graphical representations of distributions of curves in the sinogram;
Figure 10 is a matrix of mask values used in detecting distributions of data in the sinogram indicative of a line segment in the image;
Figures 11 and 12 are digital representations corresponding to Figures 8 and 9, respectively;
Figures 13 and 14 illustrate the mapping of a circle from an image space to a sinogram;
Figure 15 is a graphical representation of data intensity across a belt produced in the sinogram at the location indicated by the lines XV - XV in Figure 14;
Figure 16 is a digital representation of the data shown in Figure 15;
Figures 17 and 19 are mask values for use in detecting data in the sinogram representative of a circle;
Figures 18 and 20 show the data of Figure 16 after convolution using the masks of Figures 17 and 19, respectively; and
Figures 21 and 22 show the data of Figures 18 and 20, respectively, after further processing.
Referring to Figure 1 of the drawings, a camera 10 outputs a digitised video signal representing the object illustrated in Figure 2, which is stored as a frame in a frame store 12.
Typically, the frame size is 256 pixels by 256 pixels, and each pixel has eight bits and so can store values in the range 0 to 255. A parallel processor 16, such as a linear array processor as described in U.K. patent specification No. 2129545B, then performs an edge detection operation on the stored frame using a Sobel-type operator in a known manner to produce an image as represented in Figure 3 which is stored as a frame in the frame store 12. The image is then subjected to a thresholding operation by the parallel processor 16 to produce a binarised image in which each pixel either has a predetermined low value, for example zero, or a predetermined high value, for example 255. The binarised image, which is represented in Figure 4, is stored in the frame store 12. The edges of the image are then thinned, and isolated points are removed by the parallel processor 16, and the resulting binarised image, as represented in Figure 5, is stored in the frame store 12.
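By way of illustration only, this preprocessing stage might be sketched as follows in Python; the use of scipy's Sobel operator, the threshold value of 64 and the function name are assumptions made for the sketch rather than details taken from the patent, and the thinning step is omitted:

```python
import numpy as np
from scipy import ndimage

def binarise_edges(frame, threshold=64):
    """Sobel edge detection followed by thresholding to a binary image.

    A minimal sketch of the preprocessing described above; the threshold
    value is an assumption, and edge thinning is not shown.
    """
    gx = ndimage.sobel(frame.astype(float), axis=1)   # horizontal gradient
    gy = ndimage.sobel(frame.astype(float), axis=0)   # vertical gradient
    magnitude = np.hypot(gx, gy)                      # edge strength at each pixel
    # Binarise: edge pixels take the high value 255, all others zero.
    return np.where(magnitude > threshold, 255, 0).astype(np.uint8)
```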
Once the binarised image has been formed, the edge points are mapped from the image space into a sinogram or angle, radius normal parametrisation space (θ, r) by a host computer 20, and the sinogram is stored as a frame in the frame store 12. Referring to Figure 6, each edge point Pi at coordinates (xi, yi) in the image space is transformed to a sine curve Ci representing the angles and radii (θ, r) of the normals to all possible lines passing through the point (xi, yi) in the image space. Thus, the sine curve Ci satisfies the equation r = xi cos θ + yi sin θ.
By way of example, lines l1 and l2 are shown in the image space of Figure 6 which produce points at (θ1, r1) and (θ2, r2) in the sinogram.
Figure 7 shows how three points P1, P2, P3 in the image space are transformed into three sine curves C1, C2, C3 in the sinogram. Since the three points P1, P2, P3 are colinear, the sine curves C1, C2, C3 intersect at a single point I in the sinogram. The coordinates (θL, rL) of the intersection I in the sinogram give the angle and length of the normal in the image space which define the line L on which the three points P1, P2, P3 lie. Rearranging r = x cos θ + y sin θ for y, the line satisfies the equation y = -x cot θL + rL cosec θL.
All of the points in the image space are transformed into the sinogram in the manner described above, and the sinogram is stored digitally as a frame in the frame store 12. In the case of the three points shown in Figure 7, the pixel in the sinogram corresponding to the intersection I would have a value of 3. Obviously, in practice, many more points can be processed, producing many curves in the sinogram and higher pixel values than occur in the simple example described above.
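A minimal sketch of this transformation step follows, assuming a 256 x 256 accumulator with θ spanning [0, π) and r varying down each column; the function and parameter names are illustrative only:

```python
import numpy as np

def sinogram(binary_image, n_theta=256, n_r=256):
    """Accumulate the (theta, r) normal-parametrisation sinogram.

    A sketch under stated assumptions: theta spans [0, pi) and each
    column of the accumulator holds one theta value, with r varying
    down the column. The patent only requires bounded parameters.
    """
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    r_max = np.hypot(*binary_image.shape)        # |r| never exceeds the image diagonal
    accumulator = np.zeros((n_r, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(binary_image)            # edge-point coordinates
    for x, y in zip(xs, ys):
        # Each edge point contributes one sine curve r = x cos(t) + y sin(t).
        r = x * np.cos(thetas) + y * np.sin(thetas)
        r_idx = np.round((r + r_max) * (n_r - 1) / (2 * r_max)).astype(int)
        accumulator[r_idx, np.arange(n_theta)] += 1
    return accumulator, thetas, r_max
```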
In the prior art, mere maxima in the sinogram are detected. However, in accordance with this embodiment of the invention, more specific distributions of data in the sinogram are sought. It has been noted that curves in the sinogram representing a continuous line in the image space resemble, at and around the intersection or maximum in the sinogram, a butterfly shape with the wings of the butterfly extending in the θ direction, as represented graphically in Figure 8 and digitally in Figure 11, whereas a discontinuous line, although producing a maximum in the sinogram, has a less dense packing of the curves which form the wings of the butterfly, as represented graphically in Figure 9 and digitally in Figure 12.
Thus, by applying an appropriate mask to groups of pixel data in the sinogram, it is possible to discriminate between a maximum having a butterfly shaped distribution surrounding it and other maxima, and therefore it is possible to detect points which, in the binarised image, represent a continuous line.
Figure 10 shows a 3 pixel x 3 pixel mask for detecting butterfly-shaped distributions in the sinogram. The sinogram in the frame store 12 is convolved with the mask using the parallel processor 16, and the results of the convolution operation are stored as a frame in the frame store 12. More specifically, considering the upper three rows of pixels in a frame, with the pixel values I0,0 to I0,255, I1,0 to I1,255 and I2,0 to I2,255, the mask is applied to each 3 x 3 group of pixels, that is I0,i-1 to I0,i+1, I1,i-1 to I1,i+1 and I2,i-1 to I2,i+1, where 1 ≤ i ≤ 254, and the pixel values are multiplied by the corresponding mask values; the products are summed. Preferably, the sum is then divided by (1 + the value of the middle pixel of the group), and the result is used as the pixel value J1,i at a corresponding location in a further frame section 18 of the frame store 12. Thus, for the mask shown in Figure 10, the pixel value J1,i = {(I0,i x -2) + (I1,i-1 x 1) + (I1,i x 2) + (I1,i+1 x 1) + (I2,i x -2)}/(I1,i + 1). The mask is applied to 254 3 x 3 groups of pixels in the first three rows of the frame section 18 to produce 254 results. The operation is then repeated for the second to fourth rows of the frame section, for the third to fifth rows, and so on, finishing with the 254th to 256th rows.
Referring to the examples of pixel data shown in Figures 11 and 12, the Figure 11 group gives a result after convolution of {(1 x 0) + (0 x -2) + (1 x 0) + (7 x 1) + (7 x 2) + (7 x 1) + (0 x 0) + (0 x -2) + (2 x 0)}/(7 + 1) = 3, whereas the Figure 12 group gives a result of {(3 x 0) + (1 x -2) + (3 x 0) + (4 x 1) + (7 x 2) + (3 x 1) + (4 x 0) + (2 x -2) + (3 x 0)}/(7 + 1) = 1. It will be noted that a 3 x 3 group of pixels each having the same value will produce a result of zero after convolution.
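A sketch of this convolution and normalisation step follows; the mask values are inferred from the worked examples above (Figure 10 itself is not reproduced in this text), and integer division reproduces the truncated results of 3 and 1:

```python
import numpy as np
from scipy import ndimage

# Butterfly mask inferred from the worked examples of Figures 11 and 12:
# +1 to either side in the theta direction, -2 above and below in the
# r direction, +2 at the centre.
BUTTERFLY_MASK = np.array([[0, -2, 0],
                           [1,  2, 1],
                           [0, -2, 0]])

def detect_butterflies(sino):
    """Convolve the sinogram with the butterfly mask and normalise.

    Each result is divided by (1 + the original centre-pixel value),
    as the description prefers; division here is integer division.
    """
    summed = ndimage.correlate(sino.astype(int), BUTTERFLY_MASK, mode="constant")
    return summed // (sino.astype(int) + 1)
```

Applied to the pixel groups of Figures 11 and 12, this gives 28 // 8 = 3 and 15 // 8 = 1 respectively, matching the worked examples; values above a small threshold (the description suggests 2) are then taken to indicate continuous lines.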
After the convolution operation, the host computer 20 selects those of the convolution results above a predetermined threshold value, say 2, as representative of a continuous line and, from the location of the pixel in the frame section 18, determines the parameters of the line represented by that pixel. The convolution result of 3 for the group of pixel data shown in Figure 11 produces an indication of a continuous line, as compared with the result of 1 for the group of Figure 12, which is not treated as indicating a continuous line, despite the centre pixel of that group being a maximum.
Starting with the detected line having the highest convolution result, the computer 20 then compares the detected lines with the original image stored in the frame store 12 and determines the locations of the end points of the detected lines.
Thus, the r, θ or m, c and end point parameters of all the continuous lines in the image are determined.
Whilst the above description has been confined to the parametrisation of straight lines, it is also possible to determine the parameters of circles, part-circles and other conic sections in the image.
Referring to Figure 13, the points on a circle C in the image space are mapped in the same way as described above and produce in the sinogram shown in Figure 14 a belt 26 of sine curves. For an even distribution of points lying on the circle in the image space, the distribution of sine curves across the belt in the sinogram is not even, but rather the curves have maximum intensities at the edges 28, 30 of the belt and the intensity exhibits an inverse square root dependence near the edges. Figure 15 is a plot, by way of example, of the intensity I across the belt 26, and Figure 16 is a digital representation of the intensity.
The parameters of the circle in the image space can be determined from the size and phase of the belt in the sinogram. Specifically, the radius ρ of the circle is equal to one half of the width of the belt in the r direction, and the coordinates (x0, y0) of the centre of the circle are equal to the values of r at the centre of the belt at θ = 0 and θ = π/2, respectively. In Figures 13 and 14, the scale of the r axis is one half of that of the x and y axes, and so 2ρ in Figure 14 appears to be the same distance as ρ in Figure 13.
In the case of an arc of a circle in the image space, the centre (x0, y0) and radius ρ of curvature can be determined from the values of r and θ at three locations on the edges of the belt by forming three simultaneous equations from the following two equations:

r = x0 cos θ + y0 sin θ + ρ (1)

r = x0 cos θ + y0 sin θ - ρ (2)

where equations 1 and 2 are used for values of r and θ on the upper and lower edges, respectively, of the belt.
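Since both equations are linear in x0, y0 and ρ, the three simultaneous equations can be solved directly; a sketch, with hypothetical sample values, where edge = +1 selects equation (1) and edge = -1 selects equation (2):

```python
import numpy as np

def arc_from_belt_edges(samples):
    """Recover (x0, y0, rho) from three (theta, r, edge) belt-edge samples.

    edge is +1 for a point on the upper edge (equation 1) and -1 for the
    lower edge (equation 2); the system is linear in (x0, y0, rho).
    """
    A = np.array([[np.cos(t), np.sin(t), s] for t, r, s in samples])
    b = np.array([r for t, r, s in samples])
    x0, y0, rho = np.linalg.solve(A, b)
    return x0, y0, rho

# Hypothetical check: a circle centred at (30, 40) with radius 10.
x0, y0, rho = 30.0, 40.0, 10.0
samples = [(0.1, x0 * np.cos(0.1) + y0 * np.sin(0.1) + rho, +1),
           (0.8, x0 * np.cos(0.8) + y0 * np.sin(0.8) + rho, +1),
           (1.4, x0 * np.cos(1.4) + y0 * np.sin(1.4) - rho, -1)]
print(arc_from_belt_edges(samples))   # approximately (30.0, 40.0, 10.0)
```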
Two 1 x 3 masks are used to detect the edges of belts, the mask of Figure 17 having values (2, -1, -1) being used to detect lower edges, and the mask of Figure 19 having values (-1, -1, 2) being used to detect upper edges. In the convolution process the upper and lower edge masks are traversed in the -r direction of the sinogram along each column of pixels with a step of one pixel, and at each stage the group of three pixel values in the respective column of the sinogram are multiplied by the corresponding mask values, summed, and placed in a location corresponding to the middle pixel of the group in a further frame. The part-column of pixel values shown in Figure 16, after convolution using the lower edge mask, is changed to the values shown in Figure 18, and after convolution using the upper edge mask takes the values shown in Figure 20. By way of further explanation, referring to Figures 16, 19 and 20, the first three pixel values (0, 0, 0) on the left in Figure 16 are multiplied by the corresponding values (-1, -1, 2) of the mask and summed to produce a result of (0 x -1) + (0 x -1) + (0 x 2) = 0, which is stored as the value of the second pixel, as shown in Figure 20. The process is repeated for the second to fourth pixel values of Figure 16, and the result of zero is stored as the value of the third pixel, as shown in Figure 20. For the third to fifth pixel values (0, 0, 16) in Figure 16, the result is (0 x -1) + (0 x -1) + (16 x 2) = 32, which is stored as the fourth pixel value, as shown in Figure 20. The process is repeated all the way down the pixel column, and similar processes are carried out by the parallel processor 16 simultaneously for all the pixel columns.
It will be noted that negative results of the convolution process, such as those obtained when the fifth to seventh pixel values of Figure 16 are operated on by the upper edge mask of Figure 19, are stored as zero.
Detection of belts in the sinogram may be carried out by comparing the convolution results obtained by using the masks of Figures 17 and 19. Comparing the example pixel values in Figures 18 and 20, it can be seen that an upper edge of the belt is indicated by a moderately high value (18) in Figure 18 and a higher value (32) at a generally corresponding location in Figure 20. Similarly, a lower edge of the belt is indicated by a high value (32) in Figure 18 and a moderately high value (18) at a generally corresponding location in Figure 20.
Alternatively, detection can be carried out after further processing of the values in Figures 18 and 20 by dividing each value by (1 + the corresponding value in Figure 16) to produce the values shown in Figures 21 and 22. It will be seen that in Figure 21, the lower edge of the belt is indicated by a high value, and the other values are all low, whereas in Figure 22, the upper edge of the belt is indicated by a high value amongst low values.
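A sketch of this belt-edge convolution, assuming the sinogram array stores r down each column (axis 0); clipping negative results to zero and the normalisation by (1 + the original value) follow the description above:

```python
import numpy as np
from scipy import ndimage

LOWER_EDGE_MASK = np.array([2, -1, -1])   # values of Figure 17
UPPER_EDGE_MASK = np.array([-1, -1, 2])   # values of Figure 19

def detect_belt_edges(sino):
    """Apply the 1x3 edge masks down every column of the sinogram."""
    s = sino.astype(int)
    lower = ndimage.correlate1d(s, LOWER_EDGE_MASK, axis=0, mode="constant")
    upper = ndimage.correlate1d(s, UPPER_EDGE_MASK, axis=0, mode="constant")
    lower = np.maximum(lower, 0)          # negative results are stored as zero
    upper = np.maximum(upper, 0)
    # Normalisation producing Figure 21/22 style output from Figure 18/20 style values.
    return lower // (s + 1), upper // (s + 1)
```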
The locations of the pixels indicating the edges of the belt are used by the computer 20 in determining the sizes and phases of any belts in the sinogram indicative of circles, the parameters of which can then be determined in the manner described above with reference to Figures 13 and 14.
Ellipses in the image space can be detected in a similar manner to circles. However, in the case of an ellipse, the belt produced in the sinogram has a varying width. The length and width of the ellipse along the major and minor axes are equal to the maximum and minimum widths, respectively, of the belt in the r direction. The coordinates (x0, y0) of the centre of the ellipse are equal to the values of r at the centre of the belt at θ = 0 and θ = π/2, respectively. The tilt of the ellipse, that is the angle of the major axis to the x-axis, is equal to the value of θ at the maximum width of the belt.
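A sketch of reading off these ellipse parameters, assuming the belt's centre line and width in the r direction have already been measured at each sampled angle; the array names are illustrative:

```python
import numpy as np

def ellipse_from_belt(thetas, belt_centre_r, belt_width_r):
    """Read ellipse parameters from a belt's centre line and width profile.

    thetas is an increasing array of sampled angles in [0, pi);
    belt_centre_r and belt_width_r give the belt's centre and width in
    the r direction at each angle.
    """
    x0 = np.interp(0.0, thetas, belt_centre_r)          # r at belt centre, theta = 0
    y0 = np.interp(np.pi / 2, thetas, belt_centre_r)    # r at belt centre, theta = pi/2
    major = belt_width_r.max()                          # length along the major axis
    minor = belt_width_r.min()                          # width along the minor axis
    tilt = thetas[np.argmax(belt_width_r)]              # angle of major axis to the x-axis
    return x0, y0, major, minor, tilt
```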
Other conic sections produce belts or distinctive distributions of curves in the sinogram which can be detected using an appropriate mask and convolution process.
Once the parameters of lines and curves in the image space have been determined, they can be stored away in a library using much less memory than would be required to store a whole frame of image data. Thus the method described above can be used to read, encode and store automatically and efficiently 2-D images such as technical or architectural drawings or electrical circuit diagrams. The method can also be used for example in interferometry to detect and provide the parameters of interference fringes. In robotics, the two dimensional spatial relationships between lines and curves can be determined by the computer and then compared with stored 2-D data.
In a development of the method described above, two cameras are situated to provide a pair of stereoscopic images which are both processed as described above in parallel with each other. The two sets of image data are then matched by the computer so that a three dimensional representation of an object viewed by the cameras can be determined. It is then possible to process the 3-D representation and compare it with stored 3-D data to determine the identity and location of the object, thus providing automatic robotic vision.
By using a parallel processor as described above, it is possible to process the image data one line at a time and thus quickly. If high-speed processing is not required, however, serial processing of the image data may be performed.

Claims (42)

Claims
1. A method of parametrisation of shapes in images represented by image data, comprising the steps of: transforming the image data into a parametric transform space; and extracting parameters from the transform space indicative of a shape in the image represented by the image data; characterised in that: the extraction step includes the step of detecting at least one particular shape indicative distribution of data in the transform space.
2. A method as claimed in claim 1, wherein the transformation step produces bounded parameters in the transform space.
3. A method as claimed in claim 1 or 2, wherein the transform space into which the image data is transformed is a sinogram.
4. A method as claimed in any preceding claim, wherein the extraction step includes the step of convolving the transform space with at least one mask.
5. A method as claimed in any preceding claim, wherein the or one of the particular distributions of data which is detected is a distribution of data around a point in the transform space indicative of a line in the image represented by the image data.
6. A method as claimed in claim 5, when appendant directly or indirectly to claim 3, wherein the line indicative distribution of data which is detected is a distribution resembling a butterfly shape substantially of the type described in the description.
7. A method as claimed in claim 6 when appendant directly or indirectly to claim 4, wherein the mask operates to add to the value of the data at each point in the sinogram at least a portion of the values of the data to either side of that point in the angle direction of the sinogram and to subtract from that value at least a portion of the values of the data to either side of that point in the radius direction of the sinogram.
8. A method as claimed in claim 7, further comprising the step of dividing the value of the data at each point after operation of the mask with a value related to the value of the data at that point before operation of the mask.
9. A method as claimed in any of claims 6 to 8, wherein the extracting step includes the step of extracting the position in the sinogram of the centre of the butterfly shape.
10. A method as claimed in claim 9, wherein the extracting step further includes the step of determining parameters of a line from the extracted position in the sinogram.
11. A method as claimed in any preceding claim, wherein the or one of the particular distributions of data which is detected is a distribution of data extending across the transform space indicative of a curve in the image represented by the image data.
12. A method as claimed in claim 11, when appendant directly or indirectly to claim 3, wherein the curve indicative distribution of data which is detected is a distribution resembling a ridge at an edge of a belt.
13. A method as claimed in claim 12, wherein the curve indicative distribution of data which is detected is a distribution resembling a pair of ridges at opposite edges of a belt.
14. A method as claimed in claim 13, when appendant directly or indirectly to claim 4, wherein different such masks are used for detecting the two edges of the belt.
15. A method as claimed in claim 14, wherein one of the masks operates to subtract from the value of the data at each point in the sinogram at least a portion of the value of the data to one side of that point in the radius direction of the sinogram and the other mask operates to subtract from the value of the data at each point in the sinogram at least a portion of the value of the data to the other side of that point in the radius direction of the sinogram.
16. A method as claimed in claim 15, further comprising the step of dividing the value of the data at each point in the sinogram after operation of the masks with a value related to the value of the data at that point before operation of the masks.
17. A method as claimed in claim 15 or 16, and further comprising the step of comparing with each other the results obtained by operation of the two masks.
18. A method as claimed in any of the claims 12 to 17, wherein the extracting step includes the step of extracting the positions in the sinogram of at least three points on the edge or edges of the belt.
19. A method as claimed in claim 18, wherein the extracting step further includes the step of determining parameters of a circle or arc of a circle from the extracted positions in the sinogram.
20. A method as claimed in claim 18 or 19, further comprising the step of determining whether the width of the belt in the radius direction of the sinogram is substantially constant.
21. A method as claimed in any of claims 12 to 17, wherein the extracting step includes the step of extracting the positions in the sinogram of at least five points on the edge or edges of the belt.
22. A method as claimed in claim 21, wherein the extracting step includes the step of determining parameters of a conic section or part-conic section from the extracted positions in the sinogram.
23. A method as claimed in claim 22, further comprising the step of determining the radius parameters of the centres of the belt in the sinogram at angles of nπ and (n + 1/2)π (where n is an integer), the maximum and minimum widths of the belt in the radius direction of the sinogram, and the phase of variations of the width of the belt.
24. A method as claimed in any of claims 7 to 10 and any of claims 14 to 17, wherein butterfly-shaped distributions are not detectable by operation of the masks for detecting edges of belts, and the belt-like distributions are not detectable by operation of the mask for detecting butterfly-shaped distributions.
25. A method as claimed in any preceding claim, and further comprising the step of comparing data representing the shape characterised by the extracted parameters with the image data to determine further parameters of the shape.
26. A method as claimed in any preceding claim, wherein the method is performed on two stereoscopically related sets of image data, and further comprising the step of matching the shape indicative parameters extracted for each set of image data.
27. A method as claimed in any preceding claim, and further comprising the step or steps of performing an edge detection operation and/or a binarising operation and/or a thinning operation on the image data prior to the transformation step.
28. A method as claimed in any preceding claim, wherein the image data is provided by a digital image signal.
29. A method of shape parametrisation of shapes in images represented by image data, substantially as described in the description with reference to the drawings.
30. An apparatus specially adapted to perform the method of any preceding claim.
31. An apparatus for parametrisation of shapes in images represented by image data, comprising: means to receive image data; means to transform the image data to parametric data; means to detect at least one particular shape indicative distribution of data in the parametric data; and means to output parameters related to such a detected distribution.
32. An apparatus as claimed in claim 31, and including means to store the parametric data in the form of a sinogram.
33. An apparatus as claimed in claim 32, wherein the detection means includes means defining at least one mask and means to convolve the sinogram with the mask.
34. An apparatus as claimed in claim 33, wherein the mask is operable to add to the value of data at each point in the sinogram at least a portion of the values of the data to either side of that point in the angle direction of the sinogram and to subtract from that value at least a portion of the values of the data to either side of that point in the radius direction of the sinogram.
35. An apparatus as claimed in claim 34, wherein the detection means is operable to divide the value of the data at each point after operation of the mask with a value related to the value of the data at that point before operation of the mask.
36. Apparatus as claimed in any of claims 33 to 35, wherein one such mask is operable to subtract from the value of the data at each point in the sinogram at least a portion of the value of the data to one side of that point in the radius direction of the sinogram and another such mask is operable to subtract from the value of the data at each point in the sinogram at least a portion of the value of the data to the other side of that point in the radius direction of the sinogram.
37. An apparatus as claimed in claim 36, wherein the detecting means is operable to divide the value of the data at each point in the sinogram after operation of the masks with a value related to the value of the data at that point before operation of the masks.
38. An apparatus as claimed in claim 36 or 37, and wherein the detecting means is operable to compare with each other the results obtained by operation of the two masks.
39. An apparatus as claimed in any of claims 31 to 38, and further comprising means to store the image data, the detecting means being operable to compare data representing a shape characterised by the parameters related to the detected distribution with the stored image data to determine further parameters of the shape.
40. An apparatus as claimed in any of claims 31 to 39, comprising a further such data receiving means, transforming means, detection means and output means so that the apparatus can parametrise two stereoscopically related sets of image data, and further comprising means for matching the output parameters for each set of image data.
41. An apparatus as claimed in claim 40, further comprising a pair of stereoscopically arranged cameras for feeding image data to the data receiving means.
42. An apparatus for shape parametrisation of shapes in images represented by image data, substantially as described in the description with reference to the drawings.
GB08622497A 1986-09-18 1986-09-18 Shape parametrisation Withdrawn GB2203877A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB08622497A GB2203877A (en) 1986-09-18 1986-09-18 Shape parametrisation
EP87906194A EP0293397A1 (en) 1986-09-18 1987-09-17 Shape detection
PCT/GB1987/000649 WO1988002158A1 (en) 1986-09-18 1987-09-17 Shape detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB08622497A GB2203877A (en) 1986-09-18 1986-09-18 Shape parametrisation

Publications (2)

Publication Number Publication Date
GB8622497D0 GB8622497D0 (en) 1986-10-22
GB2203877A (en) 1988-10-26

Family

ID=10604392

Family Applications (1)

Application Number Title Priority Date Filing Date
GB08622497A Withdrawn GB2203877A (en) 1986-09-18 1986-09-18 Shape parametrisation

Country Status (1)

Country Link
GB (1) GB2203877A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB885545A (en) * 1956-10-26 1961-12-28 Gen Electric Improvements in form recognition method and system therefor
US3069654A (en) * 1960-03-25 1962-12-18 Paul V C Hough Method and means for recognizing complex patterns
US3982227A (en) * 1975-06-02 1976-09-21 General Electric Company Pattern recognition machine for analyzing line orientation
GB1566433A (en) * 1975-10-10 1980-04-30 Sangamo Weston Industrial system for inspecting and identifying workpieces
EP0165086A2 (en) * 1984-04-13 1985-12-18 Fujitsu Limited Information extraction by mapping
EP0205628A1 (en) * 1985-06-19 1986-12-30 International Business Machines Corporation Method for identifying three-dimensional objects using two-dimensional images

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2391374B (en) * 1999-07-05 2004-06-16 Mitsubishi Electric Inf Tech Method and apparatus for representing and searching for an object in an image
US7542626B2 (en) 1999-07-05 2009-06-02 Mitsubishi Denki Kabushiki Kaisha Method, apparatus, computer program, computer system, and computer-readable storage medium for representing and searching for an object in an image
GB2351826A (en) * 1999-07-05 2001-01-10 Mitsubishi Electric Inf Tech Representing and searching for an object in an image
GB2391099A (en) * 1999-07-05 2004-01-28 Mitsubishi Electric Inf Tech Method for representing and searching for an object in a image database
GB2391374A (en) * 1999-07-05 2004-02-04 Mitsubishi Electric Inf Tech Method for representing and searching for an object in an image database
GB2391678A (en) * 1999-07-05 2004-02-11 Mitsubishi Electric Inf Tech Method for representing and searching for an object in an image
GB2393839A (en) * 1999-07-05 2004-04-07 Mitsubishi Electric Inf Tech Method for searching for an object in an image
GB2394350A (en) * 1999-07-05 2004-04-21 Mitsubishi Electric Inf Tech Method for representing and searching for an object in an ima ge
GB2394349A (en) * 1999-07-05 2004-04-21 Mitsubishi Electric Inf Tech Method for representing and searching for an object in an image
GB2391678B (en) * 1999-07-05 2004-05-05 Mitsubishi Electric Inf Tech Method and apparatus for representing and searching for an object in an image
KR100431677B1 (en) * 1999-07-05 2004-05-17 미쓰비시덴키 가부시키가이샤 Method and system for displaying for object in image and computer-readable storage medium
GB2351826B (en) * 1999-07-05 2004-05-19 Mitsubishi Electric Inf Tech Method of representing an object in an image
GB2352075B (en) * 1999-07-05 2004-06-16 Mitsubishi Electric Inf Tech Method and Apparatur for Representing and Searching for an Object in an Image
GB2391099B (en) * 1999-07-05 2004-06-16 Mitsubishi Electric Inf Tech Method and apparatus for representing and searching for an object in an image
GB2393839B (en) * 1999-07-05 2004-06-16 Mitsubishi Electric Inf Tech Method and apparatus for representing and searching for an object in an image
GB2394350B (en) * 1999-07-05 2004-06-16 Mitsubishi Electric Inf Tech Method and apparatus for representing and searching for an object in an image
GB2352075A (en) * 1999-07-05 2001-01-17 Mitsubishi Electric Inf Tech Representing and searching for an object in an image
GB2394349B (en) * 1999-07-05 2004-06-16 Mitsubishi Electric Inf Tech Method and apparatus for representing and searching for an object in an image
US6882756B1 (en) 1999-07-05 2005-04-19 Mitsubishi Denki Kabushiki Kaisha Method and device for displaying or searching for object in image and computer-readable storage medium
US6931154B1 (en) 1999-07-05 2005-08-16 Mitsubishi Denki Kabushiki Kaisha Method and device for displaying or searching for object in image and computer-readable storage medium
US7532775B2 (en) 1999-07-05 2009-05-12 Mitsubishi Denki Kabushiki Kaisha Method and device for processing and for searching for an object by signals corresponding to images
US7257277B2 (en) 1999-07-05 2007-08-14 Mitsubishi Electric Information Technology Centre Europe B.V. Method, apparatus, computer program, computer system and computer-readable storage for representing and searching for an object in an image
US7356203B2 (en) 1999-07-05 2008-04-08 Mitsubishi Denki Kabushiki Kaisha Method, apparatus, computer program, computer system, and computer-readable storage medium for representing and searching for an object in an image
US7430338B2 (en) 1999-07-05 2008-09-30 Mitsubishi Denki Kabushiki Kaisha Method and device for processing and for searching for an object by signals corresponding to images
US7483594B2 (en) 1999-07-05 2009-01-27 Mitsubishi Denki Kabushiki Kaisha Method, apparatus, computer program, computer system, and computer-readable storage medium for representing and searching for an object in an image
US7492972B2 (en) 1999-07-05 2009-02-17 Mitsubishi Denki Kabushiki Kaisha Method, apparatus, computer program, computer system, and computer-readable storage medium for representing and searching for an object in an image
US7505628B2 (en) 1999-07-05 2009-03-17 Mitsubishi Denki Kabushiki Kaisha Method and device for processing and for searching for an object by signals corresponding to images
US7505637B2 (en) 1999-07-05 2009-03-17 Mitsubishi Denki Kabushiki Kaisha Method, apparatus, computer program, computer system, and computer-readable storage medium for representing and searching for an object in an image
US7505638B2 (en) 1999-07-05 2009-03-17 Mitsubishi Denki Kabushiki Kaisha Method and device for processing and for searching for an object by signals corresponding to images
WO2002050770A1 (en) * 2000-12-21 2002-06-27 Robert Bosch Gmbh Method and device for compensating for the maladjustment of an image producing device
US7013047B2 (en) * 2001-06-28 2006-03-14 National Instruments Corporation System and method for performing edge detection in an image

Also Published As

Publication number Publication date
GB8622497D0 (en) 1986-10-22


Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)