GB2150384A - Locating a feature of an object - Google Patents

Locating a feature of an object

Info

Publication number
GB2150384A
GB2150384A (Application GB08423693A)
Authority
GB
United Kingdom
Prior art keywords
signal
video signal
feature
filtering
crosses
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB08423693A
Other versions
GB8423693D0 (en)
Inventor
Robert Albert Boie
Gabriel Lorimer Miller
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AT&T Corp
Original Assignee
American Telephone and Telegraph Co Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by American Telephone and Telegraph Co Inc
Publication of GB8423693D0
Publication of GB2150384A
Legal status: Withdrawn

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/14: Picture signal circuitry for video frequency region
    • H04N5/142: Edging; Contouring

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)
  • Image Processing (AREA)

Abstract

The video output of a camera 120 viewing an object 110 is connected to an edge finding circuit 130 and a centroid finding circuit 140. Each of these produces an output which crosses zero at a time corresponding to the edges of the object occurring in the video signal (130) or to the average of these (140). An edge is assumed to occur when the video signal crosses its median value; it is alleged that problems due to variations in illumination and gradual changes in signal level are overcome by this technique. <IMAGE>

Description

SPECIFICATION

Apparatus for and methods of locating a feature of an object

This invention relates to apparatus for and methods of locating a feature of an object.
Feature extraction refers to an early stage in machine vision systems in which parameters representative of objects within a scene of interest are determined. Features include, for example, the edge locations, areas, and centroids of the objects. In computerized image processing systems, features are the elemental data for object recognition. Object recognition has promising applications in many fields, for example, feedback control of robots, and the automatic sorting and classification of objects in factory assembly lines.
In a typical computer vision system a video camera produces a signal corresponding to the image of the workpiece. The amplitude of the video signal varies with time in proportion to the brightness of consecutive points in the scene. A common first step in feature extraction involves digitizing the video signal, typically into 256 grey levels of intensity, and storing the image of the scene in an array of 512 x 512 pixels, each of 8 bits representing the local brightness of the image. High speed arithmetic algorithms are then applied to this large body of digital data to extract object features such as edges, areas, and centroids.
The Marr-Hildreth algorithm is an example of an inherently digital process for the detection of abrupt intensity changes in a video signal. In an article by E.C. Hildreth entitled "Edge Detection for Computer Vision System", Mechanical Engineering, August 1982, pp.48-53, edges are detected by locating zero crossings in the output of the convolution of the image with a radially symmetric weighting function. This method provides excellent performance but is computationally intensive; each of the roughly 250,000 pixels in the processed image requires about one thousand multiply and add operations. The result is a system that involves a critical trade-off of speed for complexity and cost. Such trade-offs are inherent in systems that start by digitizing the entire scene.
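The zero-crossing principle behind the Marr-Hildreth method can be illustrated with a short sketch. The example below is a one-dimensional analogue (an illustration, not the published algorithm): a scan line is convolved with a sampled second derivative of a Gaussian, and edges are taken where the filtered signal changes sign.

```python
import numpy as np

# One-dimensional sketch of the zero-crossing idea. The full Marr-Hildreth
# scheme convolves the 2-D image with a radially symmetric operator; here we
# use its 1-D analogue, the second derivative of a Gaussian, on one scan line.
sigma = 2.0
x = np.arange(-8, 9)
kernel = (x**2 - sigma**2) / sigma**4 * np.exp(-x**2 / (2 * sigma**2))
kernel -= kernel.mean()          # zero net response to uniform regions

# A bright bar on a dark background: up edge near index 40, down edge near 70
line = np.concatenate([np.zeros(40), np.ones(30), np.zeros(40)])
y = np.convolve(line, kernel, mode="same")

# Edges are where the filtered signal changes sign (zero crossings)
edges = np.where(y[:-1] * y[1:] < -1e-9)[0]
print(edges)  # two crossings, near the two edges of the bar
```

The crossing locations track the edges even if `line` is scaled, which is the property the patent exploits in analog form.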
Analog feature extraction arrangements are also known. A typical method for edge detection utilizes a threshold circuit. In U.S. patent 4,017,721, for example, the output of the camera is applied to a circuit which detects the passage of the video signal through a particular voltage selected as representative of the light level intermediate between white and black. The output of the threshold circuit indicates a transition, corresponding to an edge of an object in the image. Such amplitude threshold techniques, however, are subject to inaccuracies because the slope of the signal corresponding to an edge changes according to the maximum amplitude of the signal. Thus, the detected edge location may vary or "walk" as the incident illumination, geometry or reflectance of the object changes. Such changes are commonplace in many practical applications, such as robotics, and the resulting inaccuracy may be unacceptable where high precision is required.
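The amplitude-dependent "walk" of a fixed-threshold detector is easy to demonstrate numerically. In the sketch below (an illustrative signal model and threshold value, not the circuit of the cited patent), the same edge shape is viewed at two illumination levels and the threshold crossing moves:

```python
import numpy as np

# A video line modeled as a smooth edge (ramp) whose peak amplitude varies
# with illumination. A fixed voltage threshold detects the edge where the
# signal first exceeds it, so the detected position "walks" with amplitude.
def edge_signal(amplitude, t):
    # linear ramp from 0 to `amplitude` between t=10 and t=20, flat elsewhere
    return amplitude * np.clip((t - 10) / 10.0, 0.0, 1.0)

t = np.arange(0, 30, 0.01)
threshold = 0.5                      # fixed threshold voltage

times = []
for amp in (1.0, 2.0):
    v = edge_signal(amp, t)
    idx = np.argmax(v > threshold)   # first sample above the threshold
    times.append(t[idx])
    print(f"amplitude={amp}: threshold crossed at t={t[idx]:.2f}")
```

Doubling the amplitude pulls the detected edge earlier, even though the physical edge has not moved; the filters described below avoid this dependence.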
The invention seeks to provide for feature location which is high speed, high precision and low cost.
According to one aspect of this invention apparatus for locating a feature of an object in a video scene includes means for providing a video signal representative of the scene, and feature detecting means responsive to the video signal for producing a signal crossing through a predetermined level at a time related to the location of the object feature substantially independent of the slope and magnitude of the video signal corresponding to the feature.
According to another aspect of this invention, a method of locating a feature of an object in a video scene includes generating a video signal representative of the scene, and deriving from the video signal a signal which crosses through a predetermined level at a time related to the location of the object feature substantially independent of the slope and magnitude of the video signal corresponding to the feature.
The video signal may be applied to an edge finding filter having a three lobe weighting function. The edge finding filter output signal crosses a predetermined level at a time correlated to a fixed fraction of the video signal magnitude.
The video signal may be applied to a centroid finding filter having a two lobe weighting function. The centroid finding filter output signal crosses a predetermined level at a time correlated to the location of the centroid of the video signal.
In either case the feature filter output signal may be applied to a crossing detector circuit whose output is a pulse correlated to the location of the feature in the scene. The pulse may then be applied to a utilization device, such as a television monitor or a computer.
The invention will now be described by way of example with reference to the accompanying drawings, in which: Figure 1 is a general block diagram of a feature detector embodying the invention; Figure 2 is a schematic diagram of an edge finding filter for use in the feature detector of Figure 1; Figure 3 is a schematic diagram of a bipolar crossing detector for use in the feature detector of Figure 1; and Figure 4 is a schematic diagram of a first moment filter for use in the feature detector of Figure 1.
Referring now to feature detector 100 of Figure 1, an object 110 is shown within the field of view of a camera 120. Camera 120 may be, for example, a vidicon camera in which the scene of interest is scanned in a raster pattern. Graph 125 represents the waveform of the video output signal from the camera for one scan line. The graph corresponds to a scan across object 110 following the path 126. Object 110 is assumed to be light coloured in relation to the background. Thus, at time t+, the video signal makes a positive transition from a lower voltage level to a higher voltage level, corresponding to the time at which the scan line 126 crosses the left edge of object 110. From time t+ to time t- the voltage remains at the higher level, corresponding to the time during which scan line 126 crosses object 110. At time t-, at the right edge of object 110, the video signal makes a negative transition back to a lower voltage level corresponding to the dark background.
The video signal is applied to feature extraction filters. In Figure 1 two exemplary filters are shown: an edge finding filter 130 and a centroid finding filter 140. The impulse response of the filters determines what kind of feature information will be extracted. The output of a feature extraction filter is a signal which crosses zero at a time accurately correlated with the location of the feature in the scene over a wide range of scene and object illumination.
An edge finding filter 130 which meets the preceding criteria may have an impulse response h(t) as follows:

    h(t) = -t         for 0 <= t < T1
         = t - 2T1    for T1 <= t < 4T1     (1)
         = -t + 4T1   for 4T1 <= t < 7T1
         = t - 8T1    for 7T1 <= t < 8T1

A filter with this impulse response produces a zero crossing time measure which corresponds to a 50 percent fixed fraction of the light intensity change at the edge of the object. In other words, the output of this filter will cross zero at a fixed time after the time at which the input signal attains 50 percent of its maximum amplitude. This type of filter may be implemented advantageously as shown in Figure 2.
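The fixed-fraction behaviour can be checked numerically. The sketch below (assumed sample spacing and T1 value; the piecewise h(t) is transcribed from the claims as printed) convolves the three lobe impulse response with a ramp edge at two different amplitudes and shows that the zero-crossing time does not move:

```python
import numpy as np

dt = 0.005
T1 = 1.0

# Three lobe weighting function h(t), transcribed from the patent's
# piecewise definition on [0, 8*T1]
t_h = np.arange(0, 8 * T1, dt)
h = np.select(
    [t_h < T1, t_h < 4 * T1, t_h < 7 * T1, t_h < 8 * T1],
    [-t_h, t_h - 2 * T1, -t_h + 4 * T1, t_h - 8 * T1],
)

def crossing_time(amplitude, rise_time):
    # Video edge: ramp from 0 to `amplitude` over `rise_time`, starting at t=2
    t = np.arange(0, 20, dt)
    s = amplitude * np.clip((t - 2) / rise_time, 0, 1)
    y = np.convolve(s, h) * dt
    neg = np.argmax(y < -1e-9)            # output first swings negative,
    idx = neg + np.argmax(y[neg:] >= 0)   # then crosses back through zero
    return idx * dt

# The zero-crossing time stays put when the edge amplitude changes
print(crossing_time(1.0, 0.5), crossing_time(3.0, 0.5))
```

By linearity, scaling the input scales the whole filter output, so the crossing instant cannot walk with amplitude the way a fixed threshold does.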
In Figure 2 operational amplifiers 210 and 220 are connected for negative feedback operation through networks comprising resistor 232-capacitor 263 and resistor 242-capacitor 264. The video signal, shown in graph 225, is supplied via capacitor 262, resistor 251, delay line 250 and resistor 252 to the negative input of amplifier 210. The video signal is also applied via inverting amplifier 230 and resistor 231 to the negative input of amplifier 210. The output of amplifier 210, shown in graph 215, is applied via resistor 217, delay line 260 and resistor 261 to the negative input of operational amplifier 220. The output of amplifier 210 is also applied via inverting amplifier 240 and resistor 241 to the negative input of amplifier 220. Amplifiers 210 and 220 may be type HA-5195 operational amplifiers made by Harris Semiconductor Products Division, Harris Corporation, Newport Beach, California. The above-described filter may be further understood with reference to the article by R. Boie et al. entitled "High Resolution Proportional Detectors with Delay Line Position Sensing for High Counting Rates", Nuclear Instruments and Methods, 201, 1982, pp. 93-115.
The output waveform from amplifier 220 is shown in graph 255.
The waveform crosses the zero voltage level at a time t+ corresponding to a fixed delay time after the time at which the video signal in graph 225 reaches 50 percent of its total amplitude excursion. Similarly, the waveform in graph 255 again crosses zero at a time t- corresponding to the same fixed delay time after the time at which the video signal drops back through 50 percent of its total amplitude excursion. This output signal is applied to bipolar crossing detector 150 in Figure 1.
The bipolar crossing detector is shown in detail in Figure 3. Detector 300 comprises four analog comparators 310, 320, 330, and 340, which may be type LM361 circuits made by National Semiconductor, Santa Clara, California. Comparators 310 and 320 are connected to detect a zero crossing from the positive direction, that is, a signal which starts out positive, crosses zero and ends with a negative lobe. Comparators 330 and 340 are connected to detect a negative zero crossing, that is, a signal which starts out negative and ends with a positive lobe. The input signal is applied via capacitor 322 to the minus input terminal of comparator 310 and via resistor 323 to ground potential. The capacitor-resistor network differentiates the input signal, as shown in graph 365.
Comparator 310 is used to test the slope of the input signal. The plus input of comparator 310 is connected to a voltage source at voltage -VB. -VB is a threshold set to define the minimum negative input signal slope.
When the minus input node is less than the -VB level at the plus input, the output of comparator 310 is true.
The true output signal, shown in waveform 395, is applied to one input of AND gate 311.
The input signal shown in graph 355 is also applied to the minus input of comparator 320. The plus input of the comparator is connected via multiplexer 343 to a voltage +VB when the output of comparator 320 is true, and to ground potential when the output is false. +VB is a threshold set to exclude firing on noise. When the input signal in graph 355 crosses zero at time t+, the minus input of comparator 320 becomes less than the plus input (then at ground potential). The output of the comparator therefore goes true at time t+, as shown in graph 385. The output from comparator 320 is applied to the input of positive edge-triggered "D" flip-flop 342, which responds to the leading edge of the signal from comparator 320 if the output of AND gate 311 is true. A true output from AND gate 311 indicates the proper signal slope and that the signals are outside the sync intervals. The output of flip-flop 342 triggers one shot 321 whose output is a binary pulse FIRST+ at time t+, as shown in graph 351.
When the input pulse shown in graph 355 crosses zero at time t-, comparators 330 and 340, multiplexer 346, AND gate 341, flip-flop 343 and one shot 331 are operative to produce a binary pulse FIRST- at time t- in a manner analogous to the above circuits. The outputs from one shots 321 and 331 may be combined at OR gate 377, which thereby provides a pulse signal at times t+ and t-. Referring to Figure 1, this OR gate signal is combined at summer 186 with the original video signal to form a signal, shown in graph 185, useful for driving a video monitor 187. The original signal is applied to summer 186 from camera 120 via a delay 121 which compensates for the delay inherent in the feature extraction process.
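The detector's qualifying logic (arm above a noise threshold, test the slope, then report the zero crossing) can be mimicked in software. The sketch below is an illustrative analogue of the comparator logic of Figure 3, not a description of the hardware; the names and threshold values are assumed:

```python
import numpy as np

# Software sketch of the bipolar crossing detector's logic. A crossing is
# reported only after the signal has exceeded a noise threshold v_b
# ("arming") and then passes through zero with at least a minimum slope.
def bipolar_crossings(y, dt, v_b=0.05, min_slope=0.1):
    slope = np.gradient(y, dt)
    events = []
    armed_pos = armed_neg = False
    for i in range(1, len(y)):
        if y[i] > v_b:
            armed_pos, armed_neg = True, False
        elif y[i] < -v_b:
            armed_neg, armed_pos = True, False
        if armed_pos and y[i - 1] > 0 >= y[i] and slope[i] < -min_slope:
            events.append(("FIRST+", i * dt))   # positive-to-negative crossing
            armed_pos = False
        elif armed_neg and y[i - 1] < 0 <= y[i] and slope[i] > min_slope:
            events.append(("FIRST-", i * dt))   # negative-to-positive crossing
            armed_neg = False
    return events

t = np.arange(0, 1.5, 0.001)
events = bipolar_crossings(np.sin(2 * np.pi * t), 0.001)
print(events)  # crossings near t = 0.5 and t = 1.0
```

Small oscillations about zero that never reach the arming threshold produce no output, which is the role played by +VB in the hardware.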
A centroid finding filter 140 is shown in greater detail in Figure 4. Centroid finding filter 400 has a ramp-shaped impulse response g(t) as follows:

    g(t) = 0          for t < 0
         = -t + T1    for 0 <= t <= 2T1     (2)
         = 0          for t > 2T1

The video signal, shown in graph 402, is applied to the base line restore circuit comprising capacitor 464, amplifier 462, analog multiplexer 461 and current source 460. The base line is restored during the sync intervals. The action of a base line restorer of this kind is described in the article by R. Boie et al. entitled "High Precision Readout for Large Area Neutron Detectors", IEEE Transactions on Nuclear Science, NS-27, No. 1, February 1980. The signal from inverting amplifier 463 is applied via resistor 405 and delay line 410 to the input of inverting amplifier 420. The video signal is also applied via resistor 415 to the negative input of operational amplifier 425. The video signal is further applied via resistor 417 to the negative input of operational amplifier 427. The output of inverting amplifier 420 is connected to the negative input of amplifier 425. Amplifier 425 is connected for negative feedback through capacitor 431 and transconductance amplifier 430. Amplifier 430, in conjunction with multiplexer 451 and current source 450, serves to reset the operational integrator 425 during the sync interval. The output of amplifier 425 and the input of inverting amplifier 420 are connected to the negative input of amplifier 427 via resistors 419 and 418, respectively.
Amplifier 427 is connected for negative feedback via capacitor 428 and transconductance amplifier 426.
Operational integrator 427 is reset in a manner similar to integrator 425 during the sync interval. The output of amplifier 427 is a signal which crosses zero at a time corresponding to the location of the centroid of the object in the camera image. Amplifiers 463, 420, 425 and 427 may be type HA-5195 circuits made by Harris Corporation. Amplifiers 462, 430 and 426 may be type CA-3080 circuits made by RCA, Incorporated, Somerville, New Jersey. Centroid finding filter arrangements are described in further detail in the article by V. Radiha et al. entitled "Centroid Finding Method for Position Sensitive Detector", IEEE Transactions on Nuclear Science, NS-27, No. 1, February 1980.
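The centroid-locating behaviour of the two lobe ramp weighting can be checked numerically. In this sketch (assumed values for T1 and the sample spacing; g(t) is taken from claim 5), a rectangular bright region is convolved with g(t) and the output's zero crossing falls a fixed delay after the region's centroid:

```python
import numpy as np

# Numerical sketch of the centroid finding filter: convolving one scan line
# with the two lobe ramp g(t) gives an output whose zero crossing sits a
# fixed delay T1 after the centroid of the bright region, independent of
# the region's brightness.
dt = 0.001
T1 = 1.0
t_g = np.arange(0, 2 * T1, dt)
g = -t_g + T1                            # g(t) = -t + T1 on [0, 2*T1], else 0

t = np.arange(0, 10, dt)
f = ((t >= 3) & (t < 5)).astype(float)   # bright object; centroid at t = 4

y = np.convolve(f, g) * dt
idx = np.argmax((y[:-1] > 0) & (y[1:] <= 0))  # positive-to-negative crossing
t_cross = idx * dt
print(round(t_cross, 3))  # ~5.0 = centroid (4.0) + fixed delay T1
```

Scaling `f` by any brightness factor scales `y` without moving the crossing, matching the claimed illumination independence.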
The bipolar crossing detector 150 in Figure 1 produces a binary output signal responsive to the zero crossing signal from the centroid finding filter in a manner similar to that described previously with respect to the edge finding filter. FIRST+ signals are centroid measures of bright objects. FIRST- signals are centroid measures of dark objects.
Various modifications and changes will occur to those skilled in the art. Alternative circuits may be used to implement the fixed fraction detection function of the edge finding filter. For example, a delayed and inverted version of an input signal may be subtracted from an attenuated form of the input signal itself. The resulting waveform crosses zero at a time corresponding to a fixed fraction of the original input signal amplitude. The crossing time is amplitude invariant.
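This alternative can be sketched numerically. The fragment below uses illustrative parameter values and one common realisation of the idea, forming the difference between an attenuated copy and a delayed copy of the edge; the crossing time stays put while the edge amplitude changes:

```python
import numpy as np

# Fixed-fraction detection by combining a delayed copy of the edge with an
# attenuated copy (parameter values are illustrative). The difference goes
# positive during the edge and crosses back through zero at a time set by
# the edge shape and the fraction, not by the edge amplitude.
def cfd_crossing(amplitude, frac=0.5, delay=0.5, rise=1.0, dt=0.001):
    t = np.arange(0, 10, dt)
    s = amplitude * np.clip((t - 1) / rise, 0, 1)  # edge starting at t = 1
    n = int(round(delay / dt))
    delayed = np.concatenate([np.zeros(n), s[:-n]])
    y = frac * s - delayed
    pos = np.argmax(y > 1e-12)                 # output first goes positive,
    idx = pos + np.argmax(y[pos:] <= 0)        # then crosses through zero
    return t[idx]

print(cfd_crossing(1.0), cfd_crossing(5.0))    # amplitude-invariant crossing
```

For a linear edge the crossing occurs at delay/(1 - frac) after the edge begins, regardless of amplitude, which is the constant-fraction property described in the text.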

Claims (13)

1. Apparatus for locating a feature of an object in a video scene, including means for providing a video signal representative of the scene, and feature detecting means responsive to the video signal for producing a signal crossing through a predetermined level at a time related to the location of the object feature substantially independent of the slope and magnitude of the video signal corresponding to the feature.
2. Apparatus as claimed in claim 1 wherein the feature is an edge of the object, and the feature detecting means includes edge filtering means for filtering the video signal to produce a signal which crosses through the predetermined level at a time corresponding to a fixed fraction of the amplitude excursion of the video signal corresponding to the edge.
3. Apparatus as claimed in claim 2 wherein the fixed fraction is substantially 50% of the amplitude excursion of the video signal, and the filtering means provides a three lobe weighting function characterised by an impulse response h(t) given by the following equations in which T1 is a predetermined constant:
h(t) = -t for 0 <= t < T1; t - 2T1 for T1 <= t < 4T1; -t + 4T1 for 4T1 <= t < 7T1; t - 8T1 for 7T1 <= t < 8T1.
4. Apparatus as claimed in any preceding claim wherein the feature is a centroid of the object, and the feature detecting means includes centroid filtering means for filtering the video signal to produce a signal which crosses through the predetermined level at a time corresponding to the portion of the video signal representative of the centroid.
5. Apparatus as claimed in claim 4 wherein the centroid filtering means provides a two lobe weighting function characterised by an impulse response g(t) given by the following equations in which T1 is a predetermined constant:
g(t) = 0 for t < 0; -t + T1 for 0 <= t <= 2T1; 0 for t > 2T1.
6. Apparatus as claimed in claim 2, 3, 4 or 5 wherein the feature detecting means includes means responsive to the signal produced by the filtering means for producing a signal indicative of the time the output signal crosses through the predetermined level.
7. Apparatus as claimed in claim 6 wherein the time-indicative signal producing means includes means for producing a positive leading lobe transition signal at the time the signal produced by the filtering means crosses through the predetermined level from a higher level to a lower level, and means for producing a negative leading lobe transition signal at the time the signal produced by the filtering means crosses through the predetermined level from a lower level to a higher level.
8. A method of locating a feature of an object in a video scene, including generating a video signal representative of the scene, and deriving from the video signal a signal which crosses through a predetermined level at a time related to the location of the object feature substantially independent of the slope and magnitude of the video signal corresponding to the feature.
9. A method as claimed in claim 8 wherein the feature is an edge of the object, and the signal is derived from the video signal by filtering the video signal to produce a signal which crosses through the predetermined level at a time corresponding to a fixed fraction of the amplitude excursion of the video signal corresponding to the edge.
10. A method as claimed in claim 9 wherein the fixed fraction is 50% of the amplitude excursion of the video signal, and the filtering provides a three lobe weighting function characterised by an impulse response h(t) given by the following equations in which T1 is a predetermined constant:
h(t) = -t for 0 <= t < T1; t - 2T1 for T1 <= t < 4T1; -t + 4T1 for 4T1 <= t < 7T1; t - 8T1 for 7T1 <= t < 8T1.
11. A method as claimed in claim 8, 9 or 10 wherein the feature is a centroid of the object, and the signal is derived from the video signal by filtering the video signal to produce a signal which crosses through the predetermined level at a time corresponding to the portion of the video signal representative of the centroid.
12. A method as claimed in claim 11 wherein the filtering provides a two lobe weighting function characterised by an impulse response g(t) given by the following equations in which T1 is a predetermined constant:
g(t) = 0 for t < 0; -t + T1 for 0 <= t <= 2T1; 0 for t > 2T1.
13. Apparatus for locating a feature of an object substantially as herein described with reference to Figure 1, or to Figure 1 with one or more of Figures 2 to 4 of the accompanying drawings.
GB08423693A 1983-09-23 1984-09-19 Locating a feature of an object Withdrawn GB2150384A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US53536283A 1983-09-23 1983-09-23

Publications (2)

Publication Number Publication Date
GB8423693D0 (en) 1984-10-24
GB2150384A (en) 1985-06-26

Family

ID=24133841

Family Applications (1)

Application Number Title Priority Date Filing Date
GB08423693A Withdrawn GB2150384A (en) 1983-09-23 1984-09-19 Locating a feature of an object

Country Status (7)

Country Link
JP (1) JPS6091485A (en)
DE (1) DE3434624A1 (en)
FR (1) FR2552608A1 (en)
GB (1) GB2150384A (en)
IT (1) IT1176771B (en)
NL (1) NL8402881A (en)
SE (1) SE8404576L (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2004435A (en) * 1977-09-13 1979-03-28 Secr Defence Improvements in or relating to Image Discriminators
GB2013065A (en) * 1978-01-09 1979-08-01 Hell R Gmbh Conversion of a video signal into a two-level signal
GB2020942A (en) * 1978-05-12 1979-11-21 Cit Alcatel Self-adaptive two valve converter of an analogue image analysis signal
GB2022956A (en) * 1978-04-06 1979-12-19 Hell R Gmbh Black-white facsimile reproduction
GB2030818A (en) * 1978-09-21 1980-04-10 Cit Alcatel Pulse forming circuit for on/off conversion of an image analysis signal
GB2030816A (en) * 1978-08-21 1980-04-10 Hell R Dr Ing Gmbh Arrangement for reducing information loss in a facsimile signal
GB2065299A (en) * 1979-12-13 1981-06-24 Philips Electronic Associated Object measuring arrangement
EP0055338A1 (en) * 1980-12-31 1982-07-07 International Business Machines Corporation Eye controlled user-machine communication
GB2103045A (en) * 1981-07-31 1983-02-09 Omer Mohamed Abdel Gadir Microprocessor-based eye motion analysis system


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0245871A2 (en) * 1986-05-16 1987-11-19 Asahi Denka Kogyo Kabushiki Kaisha Surfactant composition having improved functions
EP0245871A3 (en) * 1986-05-16 1988-08-03 Asahi Denka Kogyo Kabushiki Kaisha Surfactant composition having improved functions
GB2212905A (en) * 1987-11-30 1989-08-02 Atomic Energy Authority Uk Displacement monitoring
US4950898A (en) * 1987-11-30 1990-08-21 United Kingdom Atomic Energy Authority Method of position monitoring and apparatus therefor
GB2212905B (en) * 1987-11-30 1992-04-15 Atomic Energy Authority Uk Displacement monitoring
GB2216743A (en) * 1988-03-11 1989-10-11 Canon Kk Image sensing apparatus
GB2256988A (en) * 1988-03-11 1992-12-23 Canon Kk Image sensing apparatus
GB2216743B (en) * 1988-03-11 1993-03-03 Canon Kk Image sensing apparatus
GB2256988B (en) * 1988-03-11 1993-04-07 Canon Kk Image sensing apparatus

Also Published As

Publication number Publication date
NL8402881A (en) 1985-04-16
SE8404576L (en) 1985-03-24
JPS6091485A (en) 1985-05-22
IT8422786A0 (en) 1984-09-21
GB8423693D0 (en) 1984-10-24
IT8422786A1 (en) 1986-03-21
IT1176771B (en) 1987-08-18
FR2552608A1 (en) 1985-03-29
DE3434624A1 (en) 1985-04-04
SE8404576D0 (en) 1984-09-12


Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)