GB2371856A - Image processing to obtain data about a predominant feature in a image - Google Patents


Info

Publication number
GB2371856A
GB2371856A GB0102722A
Authority
GB
United Kingdom
Prior art keywords
peak
image
data
peaks
amplitude
Prior art date
Legal status
Granted
Application number
GB0102722A
Other versions
GB2371856B (en)
GB0102722D0 (en)
Inventor
Andrew Douglas Bankhead
Paul James Scott
Current Assignee
Taylor Hobson Ltd
Original Assignee
Taylor Hobson Ltd
Priority date
Filing date
Publication date
Application filed by Taylor Hobson Ltd filed Critical Taylor Hobson Ltd
Priority to GB0102722A priority Critical patent/GB2371856B/en
Publication of GB0102722D0 publication Critical patent/GB0102722D0/en
Priority to AU2002228199A priority patent/AU2002228199A1/en
Priority to PCT/GB2002/000446 priority patent/WO2002063566A2/en
Publication of GB2371856A publication Critical patent/GB2371856A/en
Application granted granted Critical
Publication of GB2371856B publication Critical patent/GB2371856B/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes

Abstract

A measurement probe (2) obtains image data representing a part of a surface such as a cylinder bore surface (6) and supplies it to a control apparatus (20) which is arranged to determine information about a predominant feature in the image. The control apparatus (20) has a frequency transformer (21) for obtaining a frequency transform of the image data, a processor (21) for processing the frequency transform to provide amplitude data representing amplitude values at different angle values, an identifier (21) for identifying the main peak or main peaks in the amplitude data, a position determiner (21) for determining position data for the or each main peak in the amplitude data; and a display (24) for supplying information derived from position data about the predominant feature to an operator. The feature may comprise the angles of two sets of cross-hatched lines in the bore and be indicated as two crossed lines at the determined angles superimposed on a display of the cross-hatching.

Description

IMAGE PROCESSING APPARATUS
This invention relates to image processing apparatus for determining predominant features in an image. In particular, but not exclusively, this invention relates to image processing apparatus for determining the predominant angles in an image of a surface such as the internal surface of a cylinder bore of an internal combustion engine cylinder block.
In order to ensure that a piston moves freely within its cylinder bore, lubrication of the cylinder bore surface is required. As is well known in the lubrication art, a surface needs a certain degree of roughness to enable retention of the lubricant to allow the lubricant to perform its function. This is achieved by scoring the cylinder surface with a cross-hatched groove pattern. As is well known in the art, it is particularly important to ensure that the angles of the cross-hatched grooves (the "honing angles") are correct so as to ensure satisfactory lubrication of the cylinder bore while avoiding introduction of the lubricant oil into the combustion chamber which would have adverse effects on the performance of the internal combustion engine.
Conventional inspection systems use a measurement probe to obtain an image of part of the cylinder bore surface and then display that image to an operator. The operator then determines the angles of cross-hatching by visually inspecting the displayed image. This inspection process can, especially for an inexperienced operator, be a time consuming and not particularly accurate way of determining the cross-hatching angles.
It is an aim of the present invention to provide an apparatus that enables automatic determination of information relating to predominant features in an image.
In one aspect, the present invention provides apparatus for determining honing angles from an image of a cylinder bore surface.
In one aspect, the present invention provides image processing apparatus for determining information about a predominant feature in an image, which apparatus comprises: position determining means for determining position data for the or each main peak in amplitude data representing the image; and supplying means for supplying information derived from position data about the predominant feature to an operator.
In one aspect, the present invention provides processing apparatus having transform means for obtaining a frequency transform of image data to enable data regarding a predominant feature to be extracted from the image data.
In one aspect, the present invention provides processing apparatus having transform means for obtaining a frequency transform of image data and processing means for converting the frequency transform to polar coordinate data to provide a two dimensional data array with one dimension representing angle and the other representing frequency to enable data regarding a predominant feature to be extracted from the image data.
In one aspect, the present invention provides processing apparatus having transform means for obtaining a frequency transform of image data and summing means for summing amplitude values over frequencies associated with an angle value to generate the amplitude data representing amplitude values at different angles to enable data regarding a predominant feature to be extracted from the image data.
In one aspect, the present invention provides apparatus for finding peaks in amplitude data representing an image, having identifying means arranged to identify peaks and valleys in the amplitude data, to remove all peaks that meet a certain criterion, and then to identify the main peak or main peaks.
In one aspect, the present invention provides apparatus for finding peaks in amplitude data representing an image, having identifying means arranged to identify peak-valley pairs and to remove all peak-valley pairs for which the difference in height between the peak and valley is less than a predetermined proportion of a maximum peak height in the amplitude data.
In one aspect, the present invention provides a measurement instrument comprising apparatus according to any one of the preceding aspects and a measurement probe adapted to capture an image of a surface. In an embodiment, the measurement probe is movable within a hollow body such as a cylinder bore and the apparatus comprises measurement probe controlling means for controlling movement of the measurement probe.
In one aspect, the present invention provides a cylinder bore angle measurement instrument comprising apparatus according to any one of the preceding aspects and a measurement probe adapted to obtain an image of part of a cylinder bore surface.
Embodiments of the present invention will now be described, by way of example, with reference to the accompanying drawings, in which:
Figure 1 shows a schematic part-sectional view of a measurement apparatus comprising a measurement instrument, a motor control unit and control apparatus, with a measurement probe of the measurement instrument positioned within a cylinder bore of a cylinder block shown partly and in cross-section;
Figure 2 shows a very diagrammatic cross-sectional view of the measurement probe shown in Figure 1;
Figure 3 shows a block diagram of the control apparatus of the measurement apparatus;
Figure 4 shows a flowchart for illustrating steps carried out by the control apparatus to determine angles of cross-hatching on the cylinder bore surface from an image acquired by the measurement probe;
Figures 5a, 5b and 5c show examples of screens displayed by a display of the control apparatus to the operator in an angle determination mode of operation;
Figure 6 shows an example of an image obtained by the measurement probe of part of a cross-hatched cylinder bore surface;
Figure 7 shows a flowchart for illustrating in greater detail steps carried out by the control apparatus to process an image to determine the cross-hatching angles;
Figure 8 shows a flowchart illustrating in greater detail a Fourier analysis procedure step shown in Figure 7;
Figure 9 shows a frequency transform image representing the results of Fourier analysis on the image shown in Figure 6;
Figure 10 shows a flowchart illustrating in greater detail a peak finding procedure step shown in Figure 7;
Figure 11 shows an image resulting from conversion to polar co-ordinates of the frequency transform image shown in Figure 9;
Figure 12 shows a flowchart illustrating in greater detail a step of identifying the two main peaks in the polar co-ordinate data shown in Figure 11;
Figure 13 shows a flowchart illustrating in greater detail a peak fitting procedure step shown in Figure 10;
Figure 14 shows a plot of the natural logarithm of amplitude against angle generated by the control apparatus and the results of the peak fitting procedure; and
Figure 15 shows an image displayed to an operator to show the determined cross-hatching angles.
Figure 1 shows a measurement apparatus 100 comprising a measurement instrument 1 coupled by a cable 17b to a motor control unit MCU itself coupled by a cable 17a to a control apparatus 20. The motor control unit MCU and control apparatus 20 are shown seated on a workbench. The measurement instrument 1 is shown having its main housing 1a seated on a cylinder block 5 so that a measurement probe 2 of the measurement instrument is received within a cylinder bore 6 of the cylinder block.
It will of course be appreciated that, in the interests of clarity in Figure 1, the various components are not shown to scale.
As shown by Figures 1 and 2, the measurement probe 2 has an image viewing window 3. A number of light sources 4, in this example LEDs (light emitting diodes), are distributed around and inside of the periphery of the window 3 to illuminate the area of the cylinder bore surface 6a opposed to the window 3.
A prism 7 is mounted within the measurement probe 2 to reflect light transmitted through the window 3 via a zoom lens 8 to an image sensor 10 which is, in this embodiment, a two-dimensional CCD (charge-coupled device) sensor array or camera. Operation of the zoom lens 8 is controlled by a zoom lens motor 9 in conventional fashion.
The measurement probe 2 is mounted via a carriage 14 to a belt drive arrangement 11 that enables the measurement probe 2 to be moved up and down within the housing la so that the measurement probe 2 can be moved to different depths within the cylinder bore 6. In this example, the belt drive arrangement comprises an endless toothed belt
12 received around lower and upper pulleys 13a and 13b and the upper pulley 13b is coupled, via a conventional gearing arrangement (not shown), to the drive shaft (not shown) of a drive motor 15.
The motor control unit MCU supplies power from an AC mains supply (not shown) to the various components of the measurement instrument 1 and controls operation of the drive motor 15 and zoom motor 9 in accordance with control signals received from the control apparatus 20 in conventional manner.
The control apparatus 20 consists, in this example, of a personal computer and will be described in greater detail below with reference to Figure 3. Control signals from the control apparatus 20 are supplied to the motor control unit MCU and image data acquired by the CCD image sensor 10 are supplied from the measurement instrument 1 to the control apparatus 20 via coupling cables 17a and 17b with the coupling cable 17b being connected to an output connection 16 of the measurement instrument 1.
In the interests of simplicity, Figure 2 shows the connections of the zoom motor 9, CCD sensor 10 and drive motor 15 to the output connection 16 only very
schematically. It will, of course, be appreciated that these connections will be sufficiently flexible to enable movement of the measurement probe 2 relative to the main housing 1a throughout the entirety of the intended movement range of the measurement probe 2. In this embodiment, these connections also allow an operator to swivel or rotate the measurement probe 2 manually relative to the housing 1a to change the region of the circumference of the cylinder bore surface 6a viewed through the window 3.
In the interests of simplicity, no connection between the light sources 4 and the output connection 16 is shown in Figure 2.
Figure 3 shows a block diagram of the control apparatus 20 which consists of a conventional personal computer having a central processing unit (CPU) 21 with an associated memory 22 (ROM and/or RAM), a hard disk drive (HD) 23, a removable disk drive (RDD) 26 for receiving a removable disk RD such as, for example a floppy disk, CDROM or DVD disc, a display 24 such as a CRT or LCD display, a user interface 25 such as a keyboard 25a and a pointing device, for example a mouse, 25b and a communications interface 27 which may be a network
interface enabling the CPU 21 to communicate with other computers on a local or wide area network or may be a MODEM enabling the CPU 21 to communicate with other computers over the Internet, for example.
In addition, the control apparatus 20 comprises an interface board 28 for the measurement instrument 1 and frame capture circuitry 29 for capturing a frame of image data supplied by the CCD sensor 10. The interface board 28 provides motor control interfaces of conventional form for interfacing between the CPU 21 and the motor control unit MCU. The interface board 28 also enables communication with the CCD sensor 10 to enable capture of images and communication of the image data to the frame capture circuitry 29 provided within the control apparatus. The control apparatus is coupled to an output device in the form of a printer 101.
The control apparatus 20 is operable in an angle determination mode, in which the control apparatus 20 is configured by program instructions stored in the memory 22 or on the hard disk drive 23 to acquire an image of a part of the cylinder bore surface 6a and to process that image to determine the honing or cross-hatching angles on the surface 6a as will be described below with reference
to Figures 4 to 15.
Program instructions for configuring the control apparatus 20 to carry out the angle determination procedure may be downloaded by the CPU 21 from a removable disk storage medium RD received in the removable disk drive 26 and/or may be supplied as a signal S via the communications interface 27 from another computer over a network such as the Internet.
In order to acquire and process an image of part of the surface 6a of a cylinder bore 6 to determine the cross-hatching or honing angles, an operator first positions the measurement probe 2 within the cylinder bore 6 so that the main housing 1a rests on the surface of the cylinder block as shown in Figure 1 and switches the power supply on so that the motor control unit MCU activates the CCD sensor 10 and the light sources 4.
Figures 5a to 5c show a display screen 30 that may be displayed to the operator. The screen has a display window 31 for displaying an image and provides the operator with the facility to move between different pages by clicking on the tabs Tl, T2 or T3. Figure 5a shows an image page 31a selected, Figure 5b a depth page
31b selected and Figure 5c shows an angles page 31c selected. The image page 31a provides the operator with initial instructions (not shown) in a section 33′ telling the operator to fit the measurement probe 2 into the cylinder bore, rotate and drive depth to the required position.
The image page 31a also provides zoom functions 33 including buttons 33a and 33b for selecting minimum and maximum view, windows W1a and W1b for displaying the current field of view and a target field of view and a find button 33c for enabling an operator to instruct the control apparatus 20 to find the target field of view by adjusting the zoom ratio.
As shown the image page 31a also has a button 33d for enabling an operator to select continuous acquisition of images so that the operator can view the images and adjust the lighting conditions appropriately and a button 33e for stopping continuous acquisition. The image page also includes a frame section 34 with acquire, save, read and print buttons 34a, 34b, 34c and 34d for enabling the operator to instruct capture of an image by the frame capture circuitry, saving of the image to memory or hard
disk, reading of stored image data from the memory or hard disk, for example, and printing of the image, respectively.
The depth page 31b shown in Figure 5b also includes the initial instructions section 31′ containing instructions (not shown) to tell the operator to fit the measurement probe into the cylinder bore and rotate to the required orientation and has a depth section and a scanning section. The depth section 32a includes a current window W2 for displaying the current depth, a target window W3 for enabling input of a target depth and a find button 32b for enabling an operator to instruct the finding of a particular depth. The scanning section 32c has start, end and increment windows W4, W5 and W6 for enabling the operator to determine start, end and increments of a depth scanning operation, a scan button 32d for initiating a scan and a stop button 32e for stopping a scan. This facility acquires an image at each depth and displays it to the operator. The operator may stop the scan at any depth, in which case the control apparatus 20 will display the last acquired image and its depth.
The angles page 31c shown in Figure 5c also includes an instruction section 31". This instruction section
instructs (the instructions are not shown) the operator to obtain an image of cross-hatching with maximum magnification and edge lighting by the light sources 4.
The angles page has a tolerances section with windows W8 to W11 for setting or showing tolerances for the two angles and a show button 38 for enabling the operator to instruct the control apparatus 20 to show the actual tolerances.
The angles page also has new image and automatic analyse buttons 39 and 35 for enabling the operator to instruct the control apparatus 20 to obtain a new image (for example a previously stored image) and to analyse the displayed image. A results section has windows 36a to 36c for displaying the two angles angle 1 and angle 2 and the included angle and a print button 36d for enabling the operator to instruct the control apparatus 20 to print the results on the printer 101.
A manual section 37 in the angles page enables an operator to determine the angles manually by, in accordance with instructions (not shown) displayed in the manual section 37, clicking a start button 37a and then clicking twice (at respective ends of a visible line representing the first angle, angle 1, on the image) to
define the first angle and then clicking twice (at respective ends of a visible line representing the second angle, angle 2, on the image) to define the second angle.
Windows W12, W13 and W14 show current coordinates.
The control apparatus 20 causes, at step S1 in Figure 4, the display 24 to display one of the screens shown in Figures 5a to 5c.
The operator selects tab T2 to display the depth page 31b to enable entry of a target or required depth in the depth window W3 using the keyboard 25a of the user input device 25 and clicks the find button 32b using the pointing device 25b of the user input device. When the find button is selected, then the control apparatus 20 causes, at step S2 in Figure 4, the drive motor 15 to drive the measurement probe 2 to the required depth within the cylinder bore 6.
Control over the depth to which the measurement probe 2 is driven may be effected in a number of different ways, depending on the type of the drive motor 15. Thus, where the drive motor 15 is a stepper motor, then the control apparatus 20 may cause the motor control unit MCU to
provide the drive motor 15 with sufficient pulses to cause the measurement probe 2 to be moved the required distance. As another possibility, the drive motor 15 may be associated with a shaft encoder or the measurement probe 2 may be associated with a position encoder that enables information regarding the distance over which the measurement probe 2 has moved to be fed back to the motor control unit MCU, to enable the depth to which the measurement probe has been driven to be determined.
The CCD sensor 10 is continually sending back images to the control apparatus 20. When the measurement probe 2 has reached the required depth as displayed in the current depth window W2, then the operator can select to acquire an image by selecting tab T1 to display the image page 31a and clicking on the acquire button 34a (Figure 5a). The control apparatus 20 then causes, at step S3 in Figure 4, the frame capture circuitry 29 to capture a frame of the image data being supplied by the CCD sensor and the display 24 to display the captured image frame in the display window 31. Figure 6 shows an example image 40. In this example, the image is rotated by 90 degrees relative to the actual cylinder bore surface 6a so that the left and right edges of the image represent top and bottom edges of the corresponding region of the cylinder
bore surface 6a. Thus, although this image 40 shows the cross-hatching at about 60 degrees, the actual honing angles will be about 30 degrees.
The operator may, if desired, adjust the field of view or zoom using the zoom functions of the image page 31a shown in Figure 5a and then reacquire the image with the new zoom ratio by clicking on the acquire button 34a again. If the operator does this, then the control apparatus 20 will instruct the motor control unit MCU via the interface board 28 to control the zoom motor 9 to drive the zoom lens 8 to provide the required zoom ratio and the CPU 21 will then cause the zoomed image captured by the frame capture circuitry 29 to be displayed in the display window 31.
The operator may also manually swivel the measurement probe 2 to view a different part of the circumference of the bore 6.
When the operator is happy with both the region of the cylinder bore surface 6a that is being imaged and with the zoom ratio, then the operator may instruct the control apparatus 20 to determine the angles of cross-hatching (the honing angles) in the acquired image by
clicking on tab T3 to select the angles page 31c and then clicking on the analyse button 35 in the angles page shown in Figure 5c.
When the analyse button 35 is selected by the operator, then, at step S4 in Figure 4, the control apparatus 20 processes the captured image being displayed in the display window 31 to determine the honing or cross-hatching angles as will be described in greater detail below and, at step S5, displays the determined angles graphically on the image in the angles page display window 31 and also numerically in windows 36a to 36c in the results section of the angles page 31c.
Figure 15 shows an image 45 displayed to the operator in the display window 31 after determination of the angles by the control apparatus 20. This image corresponds to the image 40 shown in Figure 6 except that the image has been overlain by lines A1 and A2 to illustrate the cross-hatching angles. In this embodiment, the lines A1 and A2 are displayed by changing the colour of the pixels of the image along the path of the lines so that the lines show up as a different colour from the remainder of the image.
As mentioned above, the operator may manually determine
the cross-hatching angles using the manual section 37 of the angles page 31c shown in Figure 5c. When the operator does this, then the control apparatus 20 enables the operator to input lines corresponding to the lines A1 and A2 shown in Figure 15 by clicking at opposite ends of lines in the image using the pointing device in known manner to define the required lines. Where the operator selects this manual option, then the control apparatus 20 will display the angles defined by the manually input lines in the results section.
The steps carried out by the control apparatus 20 to process a captured image to determine the cross-hatching angles (step S4 in Figure 4) will now be described in greater detail with reference to Figures 7 to 14.
As shown in Figure 7, in order to carry out step S4 in Figure 4, the control apparatus 20 first carries out a Fourier analysis procedure at step S6a and then carries out a peak finding procedure at step S6b.
Figure 8 shows a flowchart illustrating the Fourier analysis procedure. Each pixel in the image frame captured by the frame capture circuitry 29 can have a value between 0 and 255. Accordingly, as a first step in the Fourier analysis procedure, the control apparatus 20 offsets the signal level so that the average is 0 (step S7), removing any DC offset.
In this embodiment, the control apparatus 20 then, at step S8, masks the acquired image data using a radial Gaussian mask window so that the image signal amplitude drops to 1/e^4 at the edges of the image. This avoids the discontinuity at the edges of the image introducing spurious horizontal and vertical frequency lines when the Fourier analysis is carried out. A Gaussian masking window is used in this embodiment because Gaussians are well-defined in Fourier terms. However, other known masking windows may be used.
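As a concrete illustration of steps S7 and S8, the masking stage might be sketched as follows in Python with NumPy. The function name and the circularly symmetric form of the mask are illustrative assumptions; the patent specifies only that the amplitude falls to 1/e^4 at the image edges.

```python
import numpy as np

def radial_gaussian_mask(shape):
    """Radial Gaussian window whose value falls to about 1/e**4 at the
    nearest image edge (sigma chosen so r_edge**2 / (2*sigma**2) == 4)."""
    h, w = shape
    y = np.arange(h) - (h - 1) / 2.0
    x = np.arange(w) - (w - 1) / 2.0
    yy, xx = np.meshgrid(y, x, indexing="ij")
    r2 = xx**2 + yy**2
    r_edge = min(h, w) / 2.0          # distance from centre to nearest edge
    sigma2 = r_edge**2 / 8.0          # exp(-r_edge**2 / (2*sigma2)) == e**-4
    return np.exp(-r2 / (2.0 * sigma2))

# Zero-mean the image first (step S7), then apply the mask (step S8).
image = np.random.rand(64, 64) * 255.0
image -= image.mean()
masked = image * radial_gaussian_mask(image.shape)
```

The mask is close to 1 at the centre of the image and decays smoothly towards the edges, so the windowed image has no sharp boundary for the FFT to pick up.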
Then, at step S9 the control apparatus 20 performs a 2D Fourier analysis using a conventional two-dimensional Fast Fourier Transform (FFT) algorithm and stores the frequency transform data as a 2D array in memory 22 or on the hard disk 23.
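Step S9 can be sketched as a standard 2-D FFT. The `fftshift` call, which centres the zero-frequency component as in the crossing-lines picture of Figure 9, is an assumption about how the stored array is laid out; the amplitude (modulus) is what the later steps operate on.

```python
import numpy as np

# 2-D FFT of the zero-mean, masked image (step S9).  fftshift moves the
# zero-frequency component to the centre of the array; the amplitude
# (modulus) of the complex spectrum is stored for the peak finding steps.
masked = np.random.rand(64, 64)           # stand-in for the masked image
spectrum = np.fft.fftshift(np.fft.fft2(masked))
amplitude = np.abs(spectrum)              # stored as a 2-D array
```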
Figure 9 shows an image 41 representing the frequency transform of the cross hatched image 40 shown in Figure 6. As can be seen from Figure 9, a predominant feature
of the frequency transform image 41 is two crossing lines 41a and 41b. These lines 41a and 41b are, effectively, frequency vectors corresponding to the cross-hatching angles and so are at 90° to the corresponding angles shown in Figure 6. The frequency transform image 41 thus clearly identifies the cross-hatching angles.
Having obtained the frequency transform, the control apparatus 20 then carries out the peak finding procedure step S6b in Figure 7. Figure 10 shows a flowchart illustrating this step in greater detail.
Thus, in this embodiment, at step S10 the control apparatus 20 converts the frequency transform data represented by the image 41 shown in Figure 9 to polar co-ordinates to generate a plot of frequency f against angle θ. Figure 11 shows an image 42 representing the data after this conversion where the angle θ varies from
−π/2 to π/2 and the central line CL represents the angle axis which is at frequency zero. As can be seen from Figure 11, the frequency lines 41a and 41b in the frequency transform image 41 have been converted to lines 42a and 42b extending perpendicular to the angle axis θ at angles A1 and A2, respectively.
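The polar conversion of step S10 might be sketched as follows; the nearest-neighbour ray sampling and the grid resolution are illustrative choices, not taken from the patent. Only angles from −π/2 to π/2 are needed because the amplitude spectrum of a real image is point-symmetric.

```python
import numpy as np

def to_polar(amplitude, n_angles=180, n_freqs=None):
    """Resample a centred 2-D amplitude spectrum onto a (frequency, angle)
    grid, the angle running from -pi/2 to pi/2 (the spectrum of a real
    image is point-symmetric, so a half-turn covers every direction)."""
    h, w = amplitude.shape
    cy, cx = h // 2, w // 2
    if n_freqs is None:
        n_freqs = min(cy, cx)
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_angles, endpoint=False)
    freqs = np.arange(n_freqs)
    polar = np.zeros((n_freqs, n_angles))
    for j, t in enumerate(thetas):
        # nearest-neighbour sampling along the ray at angle t
        ys = np.clip((cy + freqs * np.sin(t)).round().astype(int), 0, h - 1)
        xs = np.clip((cx + freqs * np.cos(t)).round().astype(int), 0, w - 1)
        polar[:, j] = amplitude[ys, xs]
    return polar, thetas
```

In this layout a straight line through the origin of the spectrum becomes a vertical ridge in `polar`, matching the lines 42a and 42b of Figure 11.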
The control apparatus 20 then, at step S11, sums, for each angle along the angle axis θ, the amplitude values at points lying within a range R in the direction of the frequency axis. The range R is chosen so as to be offset from the angle axis θ because the different amplitude points are very close together near the angle axis. Typically, the range will start at 1/16 of the maximum frequency (taken from the centre, f = 0, line CL to the top in Figure 11).
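Step S11 then reduces the polar array to one summed amplitude per angle. The band limits below are illustrative: the lower limit follows the 1/16-of-maximum-frequency offset mentioned above, while the upper limit of half the maximum frequency is an assumption.

```python
import numpy as np

def summed_amplitude(polar):
    """Step S11: for every angle, sum the amplitudes over a band of
    frequencies offset from the f = 0 axis."""
    n_freqs = polar.shape[0]
    lo = n_freqs // 16 if n_freqs >= 16 else 1   # skip points near f = 0
    hi = n_freqs // 2                            # illustrative upper limit
    return polar[lo:hi, :].sum(axis=0)           # one value per angle
```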
The control apparatus 20 provides a data array of summed amplitude against angle. Figure 14 shows a plot or graph 43 representing the summed amplitude against angle. For convenience, the summed amplitude is plotted as the natural logarithm (log r) of the summed amplitude.
As can be seen from Figure 14, the plot 43 has two main peaks 43a and 43b. These correspond to the lines 42a and 42b in Figure 11.
Then, at step S12, the control apparatus 20 identifies these two main peaks 43a and 43b and at step S13 carries out a peak fitting procedure to identify the positions of the peaks. Figure 14 shows a line 46 representing the results of the peak fitting procedure. As can be seen
from Figure 14, the peaks 43a and 43b in the plot 43 have been identified and fitted to peaks P1 and P2 by the peak fitting procedure.
The precise positions of the peaks P1 and P2 are then identified and the control apparatus 20 then causes the display 24 to display the peak positions as the angle lines A1 and A2 as shown in Figure 15 by, as explained above, changing the colours of the pixels along the lines A1 and A2 to a colour different from the colours of the cross-hatched image.
The step S12 of identifying the two main peaks 43a and 43b in the plot 43 will now be described in greater detail with reference to the flowchart shown in Figure 12. Thus, the control apparatus 20 first of all, at step S15, determines the maximum and minimum values from the plot 43 and sets a threshold as a proportion of the difference between the minimum and maximum values.
Typically this threshold will be 20% of the difference between the maximum and minimum summed amplitude values represented in the plot 43.
The use of the threshold takes advantage of the fact that the two main peaks 43a and 43b are expected to be much
higher than the remaining peaks or noise and so removes the need to consider every summed amplitude point within the plot 43.
The control apparatus 20 thus now has an array of the above-threshold summed amplitude points against angle.
Then, at step S16, the control apparatus 20 compares the value of each summed amplitude point above the threshold with the values of the two neighbouring points on either side of that point. If the value of the point under consideration is higher than the values of the points on either side, then the control apparatus 20 identifies that point as a peak whereas if the values of the points on either side of the point under consideration are higher than the value of the point under consideration then the control apparatus 20 identifies that point as a valley. The control apparatus 20 thus now has a set of peak-valley pairs.
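Steps S15 and S16 together might look like this in outline; the helper name and the synthetic data are illustrative.

```python
import numpy as np

def find_peaks_and_valleys(values, threshold):
    """Step S16: among points above `threshold`, mark a point as a peak
    if it is higher than both neighbours, or as a valley if lower."""
    peaks, valleys = [], []
    for i in range(1, len(values) - 1):
        if values[i] <= threshold:
            continue
        if values[i] > values[i - 1] and values[i] > values[i + 1]:
            peaks.append(i)
        elif values[i] < values[i - 1] and values[i] < values[i + 1]:
            valleys.append(i)
    return peaks, valleys

# Step S15: threshold at 20% of the max-min span of the summed plot.
values = np.array([0.0, 1.0, 5.0, 1.5, 4.0, 1.0, 0.5])
threshold = values.min() + 0.2 * (values.max() - values.min())
peaks, valleys = find_peaks_and_valleys(values, threshold)
```

On the synthetic data the two local maxima at indices 2 and 4 are labelled peaks and the dip between them at index 3 a valley.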
Then, at step S17, the control apparatus 20 filters the peak-valley pairs, removes any pairs for which the difference in height between the peak and valley is less than a threshold (again typically 20% of the maximum-minimum difference), then determines from the remaining
peak-valley pairs which two peaks have the greatest height and identifies these as the two main peaks 43a and 43b. The peak-valley pair removal process reduces the number of peaks that have to be considered in order to identify the two main peaks 43a and 43b.
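Step S17 might be sketched as follows. Pairing each peak with the valley closest to it in height is an illustrative reading of the peak-valley pair description; the patent does not spell out the pairing rule.

```python
import numpy as np

def two_main_peaks(values, peaks, valleys, prune_frac=0.2):
    """Step S17: discard peaks whose smallest height difference to any
    valley is below prune_frac * (max - min), then return the indices of
    the two tallest surviving peaks."""
    span = values.max() - values.min()
    survivors = []
    for p in peaks:
        # height differences to every valley (illustrative pairing rule)
        diffs = [abs(values[p] - values[v]) for v in valleys] or [span]
        if min(diffs) >= prune_frac * span:
            survivors.append(p)
    survivors.sort(key=lambda i: values[i], reverse=True)
    return survivors[:2]

values = np.array([0.0, 1.0, 5.0, 1.5, 4.0, 1.0, 0.5])
main = two_main_peaks(values, peaks=[2, 4], valleys=[3])
```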
Figure 13 shows in greater detail the peak fitting procedure step S13 shown in Figure 10. Thus, at step S18, the control apparatus 20 fits a Gaussian to the points on the two main peaks 43a and 43b having a value above a threshold value. Generally, this threshold value will be half the difference between the maximum and minimum values in the plot 43. Any conventional Gaussian fitting procedure may be used.
As shown by line 46 in Figure 14, the Gaussian fitting procedure fits Gaussian peaks P1 and P2 to the main peaks 43a and 43b.
At step S19, the control apparatus 20 takes the natural logarithm of the points in the Gaussian fit to provide log values and at step S20 fits a quadratic (Ax² + Bx + C) to the log values.
Then, at step S21, the control apparatus 20 determines
and stores the height, width and position of the two peaks using the following formulae:

Height = e^(C - B²/4A)    Width = 1/√(-2A)    Position = -B/2A
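Because the natural logarithm of a Gaussian is a quadratic, steps S19 to S21 reduce to an ordinary least-squares quadratic fit. A minimal sketch, assuming numpy's `polyfit` in place of whatever fitting routine the apparatus actually uses; the function name is illustrative:

```python
import numpy as np

def gaussian_params_from_log_fit(x, y):
    """Steps S19-S21: fit ln(y) = A*x^2 + B*x + C and recover the
    height, width (sigma) and position of the Gaussian peak.
    Assumes all y values are positive (above-threshold points only)."""
    A, B, C = np.polyfit(x, np.log(y), 2)
    position = -B / (2 * A)              # vertex of the quadratic
    height = np.exp(C - B ** 2 / (4 * A))  # Gaussian value at the vertex
    width = 1.0 / np.sqrt(-2.0 * A)      # standard deviation sigma
    return height, width, position

# Round-trip check against a known Gaussian
x = np.linspace(30.0, 50.0, 81)
y = 2.5 * np.exp(-(x - 41.0) ** 2 / (2 * 3.0 ** 2))
h, w, p = gaussian_params_from_log_fit(x, y)
```

For the synthetic peak above the fit recovers height 2.5, width 3 and position 41 to within numerical precision.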
At step S22 the control apparatus 20 returns the determined positions as the angles for the two main peaks. The control apparatus 20 then displays the peak positions as angles A1 and A2 on the cross-hatched image displayed in display window 31 as shown in Figure 15 and as described above with reference to step S14 in Figure 10. The control apparatus 20 also causes the actual numeric values of the two angles A1 and A2 to be displayed in the corresponding angle windows 36a and 36b of the display screen 30 shown in Figure 5.
The above described procedure thus enables the cross-hatching angles A1 and A2 of the cross-hatched pattern on the surface 6a of the cylinder bore 6 to be determined automatically. This results in a quicker determination of the angles than could be achieved if the angles were determined by an operator visually inspecting the image 40 (Figure 6) and manually drawing the angle lines A1 and A2 on the image using the pointing device of the user interface 25. Also, the above described automatic procedure should be more accurate than a manual determination of the angles.
It will be appreciated that the images shown in Figures 9 and 11 and the plot 43 shown in Figure 14 will not generally be displayed to the operator. Rather, when the operator selects the analyse button 35 in the angles page 31c shown in Figure 5c, the processing described above with reference to Figures 7 to 14 will be carried out in a matter of a few seconds and, at the end of the processing, the control apparatus 20 will draw the determined angle lines A1 and A2 on the cross-hatched image to produce the image 45 shown in Figure 15 without displaying any intervening images to the operator. Optionally, the plot 43, which may provide information helpful to the operator, may be displayed.
In the above described embodiments, the light sources 4
are LEDs. However, any other suitable form of light source may be used. Also, the prism 7 shown in Figure 3 may be replaced by another form of optical reflector such as a mirror. In addition, the movement of the measurement probe 2 up and down within the main housing 1a may be achieved by a different form of drive arrangement, for example a rack-and-pinion or ball-and-screw drive arrangement.
In the angle determination procedure described above, where the data permits, the averaging procedure set out in step S7 in Figure 8 may be omitted. Similarly, it may be possible to omit the Gaussian or other windowing step S8 in Figure 8 when the angles are away from the horizontal and vertical. Other windowing functions that may be used include Bartlett, Hann and Welch windowing functions.
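For illustration, the Bartlett and Hann windows mentioned above are available directly in numpy, while the Welch (parabolic) window is simple to write out by hand; a 2-D radial version for masking an image would evaluate the same 1-D profile on the distance from the image centre. A sketch, with the window length N chosen arbitrarily:

```python
import numpy as np

N = 256
bartlett = np.bartlett(N)   # triangular window, zero at both ends
hann = np.hanning(N)        # raised-cosine window, zero at both ends

# Welch (parabolic) window, written out since numpy has no built-in for it
n = np.arange(N)
welch = 1.0 - ((n - (N - 1) / 2.0) / ((N - 1) / 2.0)) ** 2
```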
In the above described embodiment, a Fourier transform is used to transform the data into frequency space. Other frequency transforms, for example a discrete cosine transform (DCT), may be used.
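As an illustrative sketch of the frequency-space step (using numpy's FFT; a DCT could be substituted as noted above), the amplitude of the two-dimensional transform can be obtained as follows. The helper name and the test image are assumptions, not taken from the patent:

```python
import numpy as np

def amplitude_spectrum(image):
    """Amplitude of the 2-D frequency transform with the DC offset
    removed first and the zero frequency shifted to the centre."""
    f = np.fft.fft2(image - image.mean())
    return np.abs(np.fft.fftshift(f))

# Vertical stripes of period 8 pixels give a pair of strong peaks on the
# horizontal frequency axis, symmetric about the centre of the spectrum.
img = np.cos(2 * np.pi * np.arange(64) / 8)[None, :] * np.ones((64, 1))
amp = amplitude_spectrum(img)
```

An angled cross-hatch pattern would instead place such peak pairs on lines through the centre at the cross-hatch angles, which is what the angular summation stage then detects.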
Also, the display screens shown in Figures 5a to 5c are only examples and different ways of displaying the information to the operator may be used. Similarly, other peak finding routines, such as known smoothing techniques which smooth out smaller peaks, may be used.
Also, although in the above described embodiments the CCD sensor is continually imaging the surface, the CCD sensor could be controlled so as to image the surface only when the operator instructs that an image is to be acquired. Also, a different form of image sensor may be used, such as a two-dimensional photodiode array.

Claims (38)

1. Image processing apparatus for determining information about a predominant feature in an image, which apparatus comprises: receiving means for receiving image data representing an image comprising a predominant feature; transform means for obtaining a frequency transform of the image data; processing means for processing the frequency transform to provide amplitude data representing amplitude values at different angle values; identifying means for identifying the main peak or main peaks in the amplitude data; position determining means for determining position data for the or each main peak in the amplitude data; and supplying means for supplying information derived from the position data about the predominant feature to an operator.
2. Apparatus according to Claim 1, wherein the transform means is arranged to obtain a Fourier transform of the image data.
3. Apparatus according to Claim 1 or 2, wherein the frequency transform means is arranged to mask the image data using a radial Gaussian mask function to define masked image data and to obtain the frequency transform from the masked image data.
4. Apparatus according to Claim 1, 2 or 3, wherein the frequency transform means is arranged to average a signal level of the image data and to remove any DC offset before obtaining the frequency transform.
5. Apparatus according to Claim 1, 2, 3 or 4, wherein the processing means is arranged to convert the frequency transform to polar coordinate data to provide a two dimensional data array with one dimension representing angle and the other representing frequency.
6. Apparatus according to Claim 5, wherein the processing means comprises summing means for, for each of at least some of the angle values, summing amplitude values over frequencies associated with the angle value to generate the amplitude data representing amplitude values at different angles.
7. Apparatus according to Claim 6, wherein the summing means is arranged, for each of at least some of the angle values, to sum amplitude values over a range of frequencies offset from an angle axis of the polar coordinate data.
8. Apparatus according to any one of the preceding Claims, wherein the identifying means is arranged to identify peaks and valleys in the amplitude data, and to remove all peaks that meet a certain criterion, and then to identify the main peak or main peaks.
9. Apparatus according to Claim 8, wherein the identifying means is arranged to identify peak valley pairs and to remove all peak valley pairs for which the difference in height between the peak and valley is less than a predetermined proportion of a maximum peak height in the amplitude data.
10. Apparatus according to any one of the preceding Claims, wherein the position determining means comprises peak fitting means for fitting a fitting function to the main peak or main peaks and parameter determining means for determining from the peak or peaks of the function
fitted to the main peak or peaks the position of the main peak or main peaks.
11. Apparatus according to Claim 10, wherein the peak fitting means is arranged to use a Gaussian as the fitting function.
12. Apparatus according to Claim 10 or 11, wherein the parameter determining means is arranged to determine the position by fitting a polynomial to the peak or peaks of a logarithm of the fitted function.
13. Apparatus according to Claim 12, wherein the parameter determining means is arranged to use a quadratic function as the polynomial.
14. Apparatus according to Claim 12, wherein the parameter determining means is arranged to use a quadratic function Ax² + Bx + C as the polynomial and to determine the position of the or a main peak in accordance with:

Position = -B/2A
15. Apparatus according to any one of the preceding Claims, further comprising display causing means for
causing the information about the predominant feature to be displayed to the operator on a display.
16. Apparatus according to any one of the preceding Claims, wherein the receiving means is operable to receive an image having features defining a predominant angle or angles and the supplying means is arranged to supply as said information data representing the angle or angles.
17. A measurement instrument comprising apparatus according to any one of Claims 1 to 16 and a measurement probe adapted to capture an image of a surface and to supply it to the receiving means.
18. A measurement instrument according to claim 17, wherein the measurement probe is movable within a hollow body such as a cylinder bore and the apparatus comprises measurement probe controlling means for controlling movement of the measurement probe.
19. A cylinder bore angle measurement instrument comprising apparatus according to any one of Claims 1 to 16 and a measurement probe adapted to obtain an image of
part of a cylinder bore surface and to supply it to the receiving means.
20. Apparatus for finding main peaks in data, which apparatus comprises: means for identifying peak and valley pairs in the data; means for determining whether the difference between the peak and valley of a pair is below a threshold; and means for removing peak valley pairs below the threshold.
21. An image processing method for determining information about a predominant feature in an image, which comprises the steps of: receiving image data representing an image comprising a predominant feature; obtaining a frequency transform of the image data; processing the frequency transform to provide amplitude data representing amplitude values at different angle values; identifying the main peak or main peaks in the amplitude data; determining position data for the or each main peak in the amplitude data; and
supplying information derived from position data about the predominant feature to an operator.
22. A method according to Claim 21, wherein the transform step obtains a Fourier transform of the image data.
23. A method according to Claim 21 or 22, wherein the frequency transform step masks the image data using a radial Gaussian mask function to define masked image data and obtains the frequency transform from the masked image data.
24. A method according to Claim 21, 22 or 23, wherein the frequency transform step averages a signal level of the image data and removes any DC offset before obtaining the frequency transform.
25. A method according to Claim 21, 22, 23 or 24, wherein the processing step converts the frequency transform to polar coordinate data to provide a two dimensional data array with one dimension representing angle and the other representing frequency.
26. A method according to Claim 25, wherein the processing step comprises, for each of at least some of the angle values, a summing step of summing amplitude values over frequencies associated with the angle value to generate the amplitude data representing amplitude values at different angles.
27. A method according to Claim 26, wherein the summing step sums, for each of at least some of the angle values, amplitude values over a range of frequencies offset from an angle axis of the polar coordinate data.
28. A method according to any one of Claims 21 to 27, wherein the identifying step identifies peaks and valleys in the amplitude data, and removes all peaks that meet a certain criterion, and then identifies the main peak or main peaks.
29. A method according to Claim 28, wherein the identifying step identifies peak valley pairs and removes all peak valley pairs for which the difference in height between the peak and valley is less than a predetermined proportion of a maximum peak height in the amplitude data.
30. A method according to any one of Claims 21 to 29, wherein the position determining step comprises a peak fitting step fitting a fitting function to the main peak or main peaks and a parameter determining step determining from the peak or peaks of the function fitted to the main peak or peaks the position of the main peak or main peaks.
31. A method according to Claim 30, wherein the peak fitting step uses a Gaussian as the fitting function.
32. A method according to Claim 30 or 31, wherein the parameter determining step determines the position by fitting a polynomial to the peak or peaks of a logarithm of the fitted function.
33. A method according to Claim 32 wherein the parameter determining step uses a quadratic function as the polynomial.
34. A method according to Claim 32, wherein the parameter determining step uses a quadratic function Ax² + Bx + C as the polynomial and determines the position of the or a main peak in accordance with:

Position = -B/2A
35. A method according to any one of Claims 21 to 34, further comprising a display causing step causing the information about the predominant feature to be displayed to the operator on a display.
36. A method according to any one of Claims 21 to 34, wherein the receiving step receives an image having features defining a predominant angle or angles and the supplying step supplies as said information data representing the angle or angles.
37. A signal carrying processor implementable instructions for causing a processor to carry out a method in accordance with any one of Claims 21 to 36.
38. A storage medium carrying processor implementable instructions for causing a processor to carry out a method in accordance with any one of Claims 21 to 36.
GB0102722A 2001-02-02 2001-02-02 Image processing apparatus Expired - Fee Related GB2371856B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB0102722A GB2371856B (en) 2001-02-02 2001-02-02 Image processing apparatus
AU2002228199A AU2002228199A1 (en) 2001-02-02 2002-02-01 Image processing apparatus
PCT/GB2002/000446 WO2002063566A2 (en) 2001-02-02 2002-02-01 Image processing apparatus


Publications (3)

Publication Number Publication Date
GB0102722D0 GB0102722D0 (en) 2001-03-21
GB2371856A true GB2371856A (en) 2002-08-07
GB2371856B GB2371856B (en) 2005-03-30




Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2076964A (en) * 1980-05-28 1981-12-09 Fiat Auto Spa Inspecting workpiece surface by diffraction patterns
WO1999046583A1 (en) * 1998-03-09 1999-09-16 Daimlerchrysler Ag Method and device for determining an angled structure in the surface of a precision machined cylindrical work piece

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0395481A3 (en) * 1989-04-25 1991-03-20 Spectra-Physics, Inc. Method and apparatus for estimation of parameters describing chromatographic peaks


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7440116B2 (en) * 2003-05-23 2008-10-21 Taylor Hobson Limited Surface profiling apparatus with reference calibrator and method of calibrating same
WO2014065966A1 (en) * 2012-10-23 2014-05-01 General Electric Company Ultrasonic measurement apparatus and method
US9032801B2 (en) 2012-10-23 2015-05-19 General Electric Company Ultrasonic measurement apparatus and method

Also Published As

Publication number Publication date
WO2002063566A3 (en) 2003-05-15
GB2371856B (en) 2005-03-30
WO2002063566A2 (en) 2002-08-15
GB0102722D0 (en) 2001-03-21
AU2002228199A1 (en) 2002-08-19

Similar Documents

Publication Publication Date Title
JP6101706B2 (en) Focusing operation using multiple lighting settings in machine vision system
US6519359B1 (en) Range camera controller for acquiring 3D models
US20030118245A1 (en) Automatic focusing of an imaging system
CA2538162C (en) High speed multiple line three-dimensional digitization
US20050089208A1 (en) System and method for generating digital images of a microscope slide
DE102007004122A1 (en) Method for measuring a three-dimensional shape
US20060088201A1 (en) Smear-limit based system and method for controlling vision systems for consistently accurate and high-speed inspection
EP1171996B1 (en) Fast focus assessment system and method for imaging
JP2007033216A (en) White interference measuring instrument, and white interference measuring method
US20220101511A1 (en) System and method utilizing multi-point autofocus to align an optical axis of an optical assembly portion to be normal to a workpiece surface
CA2231225C (en) Global mtf measurement system
GB2371856A (en) Image processing to obtain data about a predominant feature in a image
US5619031A (en) Variable magnification apparatus for reticle projection system
WO2013185936A1 (en) Apparatus and method for estimating a property of a surface using speckle imaging
CN110514589B (en) Method, electronic device, and computer-readable medium for focusing
Young Locating industrial parts with subpixel accuracies
JP2002516203A (en) Engraving system and method including improved imaging
JP2861723B2 (en) Confocal microscope
JP6991600B2 (en) Image measurement system, image measurement method, image measurement program and recording medium
León An automated system for the macroscopic acquisition of images of firearm bullets
KR101157791B1 (en) Optic microscope with automatic focusing system
CN110907470A (en) Optical filter detection device and optical filter detection method
JP2003255217A (en) Focusing method and focusing device
US11215449B2 (en) Three-dimensional shape measuring apparatus
CA2247288A1 (en) Determination of gloss quality

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20050630