EP2431706B1 - Device for measuring displacement of pantograph and method for detecting hard spot of trolley wire - Google Patents
- Publication number
- EP2431706B1 (application EP10774953.3A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- pantograph
- marker
- image
- template
- input image
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60L—PROPULSION OF ELECTRICALLY-PROPELLED VEHICLES; SUPPLYING ELECTRIC POWER FOR AUXILIARY EQUIPMENT OF ELECTRICALLY-PROPELLED VEHICLES; ELECTRODYNAMIC BRAKE SYSTEMS FOR VEHICLES IN GENERAL; MAGNETIC SUSPENSION OR LEVITATION FOR VEHICLES; MONITORING OPERATING VARIABLES OF ELECTRICALLY-PROPELLED VEHICLES; ELECTRIC SAFETY DEVICES FOR ELECTRICALLY-PROPELLED VEHICLES
- B60L5/00—Current collectors for power supply lines of electrically-propelled vehicles
- B60L5/18—Current collectors for power supply lines of electrically-propelled vehicles using bow-type collectors in contact with trolley wire
- B60L5/22—Supporting means for the contact bow
- B60L5/26—Half pantographs, e.g. using counter rocking beams
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60M—POWER SUPPLY LINES, AND DEVICES ALONG RAILS, FOR ELECTRICALLY- PROPELLED VEHICLES
- B60M1/00—Power supply lines for contact with collector on vehicle
- B60M1/12—Trolley lines; Accessories therefor
- B60M1/28—Manufacturing or repairing trolley lines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/248—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60L—PROPULSION OF ELECTRICALLY-PROPELLED VEHICLES; SUPPLYING ELECTRIC POWER FOR AUXILIARY EQUIPMENT OF ELECTRICALLY-PROPELLED VEHICLES; ELECTRODYNAMIC BRAKE SYSTEMS FOR VEHICLES IN GENERAL; MAGNETIC SUSPENSION OR LEVITATION FOR VEHICLES; MONITORING OPERATING VARIABLES OF ELECTRICALLY-PROPELLED VEHICLES; ELECTRIC SAFETY DEVICES FOR ELECTRICALLY-PROPELLED VEHICLES
- B60L2200/00—Type of vehicles
- B60L2200/26—Rail vehicles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/245—Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
Definitions
- the present invention relates to a device for measuring displacement of a pantograph and a method for detecting a hard spot of a trolley wire, and particularly relates to a device for measuring displacement of a pantograph and a method for detecting a hard spot of a trolley wire in which a hard spot of a trolley wire is measured through pattern-matching processing using an image, captured by a line sensor camera, of a pantograph equipped with a marker designed for measuring a hard spot.
- a trolley wire is in a state of being suspended from a messenger wire via a hanger, for example.
- the weight of the trolley wire increases locally at a spot where the hanger is placed and at some spots such as where the trolley wire is connected to another trolley wire and where there is a pull-off, as compared to the other spots. These spots are called "hard spots of a trolley wire."
- a pantograph which is a current collector placed on the roof of a car and configured to slide on the trolley wire, sinks abruptly in some cases due to the weight of the trolley wire.
- the trolley wire loses its contact with the pantograph and causes a discharge phenomenon called arc discharge.
- the trolley wire is locally worn away due to the heat produced by the arc discharge.
- the trolley wire is thought to be worn away faster at hard spots than at the other spots.
- the pantograph accelerates greatly in the vertical direction at hard spots of the trolley wire. For this reason, the vertical acceleration of the pantograph whose displacement is equivalent to that of the trolley wire shall be monitored in order to detect the hard spots of the trolley wire.
- the acceleration of the pantograph can be found by measuring the displacement of the pantograph and performing second order differentiation on the displacement.
- The following methods have conventionally been known as pantograph displacement measuring methods.
- In the laser sensor method, the displacement of a pantograph is measured by: scanning the pantograph with a laser by using a mirror or the like; and studying the phase difference between reflected waves, the deformation of the shape of the reflected laser, or the like.
- In the light section sensor method, the displacement of a pantograph is measured by: projecting stripe-pattern light onto the pantograph; and receiving light with zigzag streaks which correspond to the shape of the pantograph.
- In the image processing method, the displacement of a pantograph is measured by: capturing an image of the pantograph by use of a line sensor camera placed on the roof of a car; and then performing processing such as model matching or pattern matching on the captured image by use of a processing computer (see Patent Documents 1 and 2, for example).
- the image processing method is such that: from the image of the pantograph captured by the line sensor camera, a pixel position on the image is extracted at which a beforehand-prepared model of the pantograph finds a match; then, the actual height of the pantograph is calculated from the pixel position on the image on the basis of the distance from the line sensor camera to the pantograph, the focal length of the lens of the image capturing unit, and the like.
- a pixel position at which the pre-acquired model of the pantograph finds a match is detected from the captured image of the pantograph as the position of the pantograph.
- a marker in a black and white stripe pattern is attached to the pantograph placed on the roof of a car, and pattern matching is performed to detect the position of the marker, i.e., the position of the pantograph, from an image captured by the line sensor camera.
- Patent Document 2 discloses a displacement measuring device and method for a pantograph having in common with the present invention the features in the pre-characterizing portions of the independent claims.
- a line sensor camera 2 is placed on the roof of a car 1 in such a posture as to face obliquely upward in order to capture a marker 4 attached to a pantograph 1a.
- a camera elevation angle θA of the line sensor camera 2 is small when the distance from the line sensor camera 2 to the pantograph 1a is long.
- a camera elevation angle θB of the line sensor camera 2 is large when the distance from the line sensor camera 2 to the pantograph 1a is short.
- the line sensor camera 2 illustrated with a solid line and the line sensor camera 2 illustrated with a broken line in Fig. 13 will be referred to as line sensor cameras 2A and 2B, respectively.
- Parts (a) and (b) of Fig. 14 show example input images of the marker 4 captured by the line sensor cameras 2A and 2B, respectively.
- the small camera elevation angle θA allows the width of a trace M of the marker to remain substantially the same in an input image 6A as shown in Part (a) of Fig. 14, showing that different heights of the pantograph 1a will cause almost no resolution difference.
- the large camera elevation angle θB makes the trace M of the marker appear differently depending on the height as shown in Part (b) of Fig. 14, showing that different heights of the pantograph 1a cause a resolution difference in an input image 6B.
- performing pattern matching processing on an image 6A captured by the line sensor camera 2A is highly likely to result in successful pattern matching as shown in Part (a) of Fig. 14 .
- performing pattern-matching processing on an image 6B captured by the line sensor camera 2B can possibly cause a problem of finding no match to the size of the template 7 as shown in Part (b) of Fig. 14, causing the pattern matching to fail.
- the present invention is characterized by providing a device for measuring displacement of a pantograph and a method for detecting a hard spot of a trolley wire which can improve the accuracy of pattern-matching processing.
- the device for measuring displacement of a pantograph and the method for detecting a hard spot of a trolley wire according to the present invention are defined in the independent claims. Further advantageous features are set out in the dependent claims.
- the "calibration means" refers to means using a calibration method in Japanese Patent Application No. 2009-011648, for example, to find the resolution of each pixel.
- the device for measuring displacement of a pantograph of the invention allows the pixel position of the marker to be detected highly accurately, and therefore a hard spot of a trolley wire can be found.
- the pattern-matching processing means could perform pattern-matching processing only on a predetermined range of the input image on the basis of a result of pattern-matching processing having been performed on an immediately previous line. Accordingly, it is possible to shorten the processing time and also to lower the probability of detecting noises and the like.
- according to the device for measuring displacement of a pantograph of the present invention, it is possible to efficiently perform the pattern-matching processing by setting the shifting pitch of the template lower in a case of a higher correlation value than in a case of a lower correlation value.
- the pattern-matching processing means may automatically correct positions of the sections such that the trace of the marker is included within one of the sections on the basis of the position of the marker detected in an immediately previous line. Accordingly, it is possible to efficiently perform the pattern-matching processing by preventing the occurrence of the switching of the template within the processing range of the pattern matching attributable to the presence of a partitioning position within the processing range.
- the pattern-matching processing means may detect a rough center position of a trace of the marker, calculate an average luminance value of a range starting from the center position and having half a width of the template, and extract the trace of the marker by using the average luminance value as a threshold. Accordingly, it is possible to avoid an error at a partitioning position in the image by preventing the offset between the pattern matching results by several pixels attributable to the changing of the template size at the partitioning position. It is also possible to perform stable edge extraction even when the luminance of the whole image changes.
- the method for detecting a hard spot of a trolley wire of the present invention allows the pixel position of the marker to be detected highly accurately, and therefore a hard spot of a trolley wire can be found.
- the fifth step could be performed only on a predetermined range of the input image on the basis of a result of pattern-matching processing having been performed on an immediately previous line. Accordingly, it is possible to shorten the processing time and also to lower the probability of detecting noises and the like.
- according to the method for detecting a hard spot of a trolley wire of the present invention, it is possible to efficiently perform the pattern-matching processing by setting the shifting pitch of the template lower in a case of a higher correlation value but higher in a case of a lower correlation value.
- the fifth step may be performed by automatically correcting positions of the sections such that the trace of the marker is included within one of the sections on the basis of a position of the marker detected in the immediately previous line. Accordingly, it is possible to efficiently perform the pattern-matching processing by preventing the occurrence of the switching of the template within the processing range of the pattern matching attributable to the presence of a partitioning position within the processing range.
- the fifth step may be performed by detecting a rough center position of a trace of the marker, calculating an average luminance value of a range starting from the center position and having half a width of the template, and using the average luminance value as a threshold. Accordingly, it is possible to avoid an error at a partitioning position in the image by preventing the offset between the pattern matching results by several pixels attributable to the changing of the template size at the partitioning position. It is also possible to perform stable edge extraction even when the luminance of the whole image changes.
- FIG. 1 is an explanatory diagram showing an example of the placement of a device for measuring displacement of a pantograph of Embodiment 1 of the present invention.
- Fig. 2 is a front view of a marker of the device for measuring displacement of a pantograph of Embodiment 1 of the present invention.
- Fig. 3 is a block diagram showing a schematic configuration of the device for measuring displacement of a pantograph of Embodiment 1 of the present invention.
- Fig. 4 is an explanatory diagram showing an example of an input image in Embodiment 1 of the present invention.
- FIG. 5 is an explanatory diagram showing an example of a template in Embodiment 1 of the present invention.
- Fig. 6 is an explanatory diagram showing an example of dividing an image in Embodiment 1 of the present invention.
- Fig. 7 is a flowchart showing the flow of pantograph measurement processing of Embodiment 1 of the present invention.
- the pantograph height measuring device in this embodiment includes a line sensor camera 2 as image capturing means fixed to the roof of a car 1, a lighting device 3, a marker 4, and a processing computer 5 placed inside the car 1.
- the line sensor camera 2 is placed on the roof of the car 1 in such a way as to capture images of a pantograph 1a. Specifically, the orientation of the line sensor camera 2 is set such that: the optical axis thereof can be directed obliquely upward; and the scanning-line direction thereof can be orthogonal to the longitudinal direction of the pantograph 1a. Image signals acquired by this line sensor camera 2 are inputted into the processing computer 5.
- the orientation and illuminating angle of the lighting device 3 are set such that a spot to be captured by the line sensor camera 2 can be illuminated with light.
- the marker 4 is formed of a light-reflective material and a non-light-reflective material, and may be placed at any position on the line sensor camera 2-side end surface of the pantograph 1a within a range within which the line sensor camera 2 can capture the marker 4.
- the marker 4 used in this embodiment is formed by alternately arranging two white portions 4w made of the light-reflective material and three black portions 4b made of the non-light-reflective material. Any size can be selected for the marker 4.
- the processing computer 5 detects the vertical displacement of the pantograph 1a by analyzing an image inputted from the line sensor camera 2, and includes an arithmetic processing unit 5A as arithmetic processing means and a monitor 5B.
- the arithmetic processing unit 5A includes an input image creating unit 5a, a template setting unit 5b, an image dividing unit 5c, a template enlarging/reducing unit 5d, a pattern matching unit 5e, a pantograph displacement calculating unit 5f, a filtering unit 5g, an acceleration output unit 5h, and memories m1 and m2.
- the input image creating unit 5a as input image creating means creates an input image 6 as shown in Fig. 4 in which image signals inputted from the line sensor camera 2 are arranged in chronological order. As shown in Fig. 4 , since the marker 4 reflects light of the lighting device 3, the traces of the white portions of the marker 4 are displayed in the input image 6 as strip-shaped white regions 6a in a black region (a portion indicated with dotted lines in the drawing) 6b.
- the input image 6 is sent to the template setting unit 5b or the image dividing unit 5c through the memories m1 and m2 as needed.
- the template setting unit 5b as template setting means acquires in advance a marker pattern as shown in Fig. 5 as a matching template (hereinafter, referred to as reference template) 7A from an input image 6 as shown in Fig. 4 .
- the template setting unit 5b acquires in advance a marker pattern as the reference template 7A to be used for the extraction of the marker 4 in the input image 6 in processing of the pattern matching unit 5e, and then registers the marker pattern to the memory m2.
- the reference template 7A is sent to the template enlarging/reducing unit 5d through the memory m2.
- the reference template 7A is one-dimensional luminance data of white regions 7a and black regions 7b obtained by extracting the marker portion from an image acquired in advance for the purpose of creating the reference template 7A. It is desirable to cut the image in such a way that the reference template 7A partially includes a black portion 4b of the marker 4 on the outer side of each white portion 4w as shown in Fig. 5 , rather than cutting the image at the boundary of the white portion 4w and the black portion 4b on the outer side. Doing so increases the feature amount of the reference template 7A and therefore reduces erroneous detections.
- the template setting unit 5b registers the reference template 7A and also an offset width W OS and a template size W T (see Fig. 4 ) at the same time.
- the image dividing unit 5c as image-division processing means provides partitioning positions 8 as shown in Fig. 6 in the input image 6 inputted from the input image creating unit 5a to thereby divide the input image 6 into a predetermined number of sections A 1, A 2, ..., A N (hereinafter, a given section is referred to as a section A i).
- Information on all the sections A i is sent to the template enlarging/reducing unit 5d through the memory m2.
- the number N of sections is automatically calculated based on the resolution of each pixel found in advance by use of a calibration method in Japanese Patent Application No. 2009-011648 , for example.
- the number N of sections is set such that the section with the lowest resolution will not have a resolution 1.1 or more times larger than that of the section with the highest resolution. In this way, the resolutions can be calculated accurately.
- the number N of sections is set such that the section with the lowest resolution will not have a resolution 1.1 or more times larger than that of the section with the highest resolution, for a reason that is based on the result of a verification test on the template size W T .
- the number N of sections is based on the result of a test in which the size W T of the reference template 7A acquired from an image capturing the marker 4 is varied to find out to what extent the reference template 7A is allowed to be scaled up and down before failing to get a successful match in pattern matching performed on the image from which the reference template 7A is acquired.
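- As a sketch of how the number N of sections could be determined automatically, the snippet below assumes that the calibration yields coefficients a, b and c of the approximate expression (4) (height = an² + bn + c), derives the per-pixel resolution from it, and increases N until, within every section, the coarsest resolution stays below 1.1 times the finest; this per-section reading of the 1.1 criterion and the helper names are assumptions, not the patented procedure itself.

```python
def height_mm(n, a, b, c):
    """Approximate height [mm] at pixel row n (expression (4): a*n^2 + b*n + c)."""
    return a * n * n + b * n + c

def resolution_mm_per_pix(n, a, b, c):
    """Resolution [mm/pix] at pixel row n: height difference between rows n+1 and n."""
    return abs(height_mm(n + 1, a, b, c) - height_mm(n, a, b, c))

def number_of_sections(num_rows, a, b, c, max_ratio=1.1):
    """Smallest N such that, within every section, the coarsest resolution is
    less than max_ratio times the finest (one reading of the 1.1 criterion)."""
    res = [resolution_mm_per_pix(n, a, b, c) for n in range(num_rows)]
    for n_sections in range(1, num_rows + 1):
        bounds = [round(i * num_rows / n_sections) for i in range(n_sections + 1)]
        if all(max(res[lo:hi]) / min(res[lo:hi]) < max_ratio
               for lo, hi in zip(bounds[:-1], bounds[1:]) if hi > lo):
            return n_sections
    return num_rows
```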
- the template enlarging/reducing unit 5d as template scaling processing means performs processing to scale up or down the reference template 7A to change its size W T for each section A i on the basis of the reference template 7A inputted from the template setting unit 5b and the information on the section A i inputted from the image dividing unit 5c.
- Data on each template 7B i with its size W T thus changed for the corresponding section A i (hereinafter, referred to as scaled template) is sent to the pattern matching unit 5e through the memory m2.
- the template enlarging/reducing unit 5d creates the scaled templates 7B i corresponding to the sections A i by: calculating a factor by which the reference template 7A is scaled (hereinafter, referred to as scale factor) for each section A i; and scaling up or down the reference template 7A through bilinear interpolation, which is a common technique for scaling an image. Since the size W T of the reference template 7A is registered at the time of registering the reference template 7A, a size W Ti of the scaled template 7B i can be found by multiplying the size W T of the reference template 7A by the corresponding scale factor.
- "scale factor" refers to the factor by which the reference template 7A is scaled.
- each scale factor is found from the following expressions (1) to (3).
- an expression obtained by a calibration method in Japanese Patent Application No. 2009-011648 is used as an approximate expression (4) for converting a pixel position in an image into an actual height.
- p n = {a(n + 1)² + b(n + 1) + c} - {an² + bn + c}
- "a,” "b,” and “c” are coefficients of the approximate expression (4) for finding actual displacement from a pixel position
- p n is a resolution [mm/pix] at a pixel position n to be scaled up or down
- p ori is a resolution [mm/pix] at a pixel position ori on the reference template 7A
- scale is the scale factor.
- the resolution can be found as a height [mm] per pixel.
- the resolution [mm/pix] can be found by finding a height [mm] at a pixel position n and a height [mm] at a pixel position n+1 next thereto and then subtracting the height at the pixel position n from the height at the pixel position n+1.
- the scale factor is set to 1 when the size W Ti of the scaled template 7B i is equal to the size W T of the reference template 7A.
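- The scale-factor expressions (1) to (3) are not reproduced above; the sketch below therefore assumes the factor is the ratio of the resolution at the reference pixel position ori to the resolution at pixel position n (a marker of fixed physical size spans fewer pixels where each pixel covers more millimetres), and uses linear interpolation as the one-dimensional counterpart of the bilinear interpolation mentioned for the template scaling. The function names are illustrative only.

```python
import numpy as np

def scale_factor(n, ori, a, b, c):
    """Assumed form of the scale factor for pixel position n, relative to the
    reference pixel position ori of the reference template 7A."""
    def height(k):              # expression (4): approximate height [mm] at pixel k
        return a * k * k + b * k + c
    def p(k):                   # resolution [mm/pix] at pixel k
        return height(k + 1) - height(k)
    return p(ori) / p(n)

def resize_template(reference_template, factor):
    """Scale the one-dimensional luminance template by `factor` using linear
    interpolation (the 1D counterpart of bilinear interpolation)."""
    ref = np.asarray(reference_template, dtype=float)
    new_size = max(2, int(round(len(ref) * factor)))
    old_x = np.linspace(0.0, 1.0, num=len(ref))
    new_x = np.linspace(0.0, 1.0, num=new_size)
    return np.interp(new_x, old_x, ref)
```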
- the pattern matching unit 5e as pattern-matching processing means detects the pixel position of the marker 4 in the input image 6 by performing pattern-matching processing for each section A i on the basis of the information on the section A i inputted from the image dividing unit 5c and the data on the corresponding scaled template 7B i inputted from the template enlarging/reducing unit 5d.
- the pixel position of the marker 4 obtained by the pattern matching unit 5e is sent to the pantograph displacement calculating unit 5f through the memory m2.
- the pantograph displacement calculating unit 5f as pantograph displacement calculating means converts the displacement of the marker 4 in the input image 6 into the actual displacement of the pantograph 1a on the basis of the pixel position of the marker 4 in the input image 6 inputted from the pattern matching unit 5e.
- an approximate expression obtainable for example from Japanese Patent Application No. 2009-011648 or the like is found in advance and used as a calculation expression for converting the displacement of the trace of the pantograph 1a in the input image 6 into the actual displacement of the pantograph 1a.
- Data on the actual displacement of the pantograph 1a obtained by the pantograph displacement calculating unit 5f is sent to the filtering unit 5g through the memory m2.
- the filtering unit 5g as filtering processing means performs smoothing processing on the displacement data inputted from the pantograph displacement calculating unit 5f.
- the actual displacement of the pantograph 1a is in a state of containing quantization errors of the image.
- the actual displacement data is subjected to filtering processing to smooth the displacement data.
- the displacement data after the smoothing (hereinafter, referred to as smoothed displacement data) is sent to the acceleration output unit 5h through the memory m2.
- the acceleration output unit 5h as acceleration outputting means performs second order differentiation on the smoothed displacement data inputted from the filtering unit 5g to calculate the acceleration of the marker 4, i.e., the pantograph 1a, in the vertical direction.
- the acceleration is found by performing second order differentiation on the displacement data smoothed by the filtering processing and then outputted to the monitor 5B.
- a point where the acceleration of the pantograph 1a is 20 G or greater, for example, is detected as a hard spot in this embodiment.
- the calculated acceleration data is outputted to the monitor 5B through the memory m2 and displayed on the monitor 5B.
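- The chain from measured displacement to hard-spot detection described above can be sketched as follows; the moving-average filter is an assumption (the text only specifies "smoothing processing"), the line period is taken as 1 ms, and points reaching 20 G are flagged.

```python
import numpy as np

G = 9.80665  # [m/s^2] per 1 G

def detect_hard_spots(displacement_mm, dt=1e-3, window=5, threshold_g=20.0):
    """displacement_mm: pantograph displacement [mm] per line, one sample every dt seconds.
    Returns the indices of lines where the vertical acceleration reaches threshold_g."""
    disp = np.asarray(displacement_mm, dtype=float) * 1e-3      # convert to metres
    kernel = np.ones(window) / window                           # smoothing (assumed moving average)
    smoothed = np.convolve(disp, kernel, mode="same")
    accel = np.gradient(np.gradient(smoothed, dt), dt)          # second order differentiation
    return np.where(np.abs(accel) >= threshold_g * G)[0]
```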
- the template setting unit 5b first performs the processing to register a reference template 7A (step P1). Then, the input image creating unit 5a performs the processing to create an input image 6 in which image signals outputted from the line sensor camera 2 are arranged in chronological order (step P2). Thereafter, as shown in Fig. 6 , the image dividing unit 5c performs the processing to divide the input image 6 into a predetermined number N of sections A 1 , A 2 , ..., A N (step P3).
- the template enlarging/reducing unit 5d performs the processing to scale up or down the reference template 7A registered in step P1 for a given section A i (step P4).
- the pattern matching unit 5e performs the pattern-matching processing to compare a scaled template 7B i, obtained by scaling up or down the reference template 7A for the section A i of the input image 6, with the input image 6 in an attempt to detect the position (pixel position) of the marker 4 in the input image 6 (step P5). Thereafter, it is judged whether or not the pattern matching for the section A i is completed (step P6).
- the processing proceeds to step P7 if the judgment result shows that the pattern-matching processing for the section A i is not yet completed (NO). On the other hand, the processing returns to step P4 if the pattern-matching processing for the section A i is completed (YES).
- step P7 it is judged whether or not the pattern-matching processing is completed for the entire data of the input image.
- the processing proceeds to step P8 if the judgment result shows that the pattern-matching processing is completed for the entire data of the input image (YES).
- the processing returns to step P5 if the pattern-matching processing is not yet completed for the entire data of the input image (NO).
- step P8 the pantograph displacement calculating unit 5f performs the processing to convert the pixel position of the marker 4 in the input image 6 into the actual displacement of the pantograph 1a on the basis of the detected marker position, for the entire input image 6.
- the filtering unit 5g performs the filtering processing (step P9).
- the acceleration output unit 5h performs the processing to output the acceleration of the pantograph (step P10).
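- The loop structure of steps P2 to P8 can be condensed as below; the callables passed in are hypothetical stand-ins for the per-section pattern matching and the pixel-to-height conversion, and the per-line bookkeeping of steps P6 and P7 is simplified.

```python
def measure_pantograph_displacement(lines, sections, scaled_templates, match, to_height):
    """Condensed flow of Fig. 7: for every captured line, try each section with its
    scaled template, keep the best match, and convert the pixel position to a height.
    match(line, template, section) -> (pixel_position, score); to_height(pixel) -> mm."""
    heights = []
    for line in lines:                                           # P2: image built line by line
        best_pos, best_score = None, float("-inf")
        for section, template in zip(sections, scaled_templates):  # P3/P4: per-section template
            pos, score = match(line, template, section)             # P5: pattern matching
            if score > best_score:
                best_pos, best_score = pos, score
        heights.append(to_height(best_pos))                      # P8: pixel position -> displacement
    return heights
```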
- an input image 6 captured at a large camera elevation angle θB as shown in Part (b) of Fig. 13 is divided into the predetermined number N of sections A 1, A 2, ..., A N as shown in Fig. 6, and then the pattern-matching processing is performed using the scaled templates 7B i obtained by scaling up or down the reference template 7A on the basis of the resolutions of the respective sections A i. Accordingly, highly accurate pattern-matching processing can be performed. Moreover, utilizing a calibration result allows accurate calculation of the resolution of each pixel of parts where the calibration is performed.
- a second embodiment of the device for measuring displacement of a pantograph of the present invention will be described.
- This embodiment differs from Embodiment 1 in the processing of the pattern matching unit 5e.
- the other configurations are substantially the same as those described in Embodiment 1.
- the processing units providing the same effects will be denoted by the same reference numerals, and overlapping descriptions will be omitted. The differences will be mainly described.
- in this embodiment, the following processing is performed as the processing in step P5 shown in Fig. 7.
- the same pattern-matching processing as Embodiment 1 is performed on the first line of an input image 6, and the detected marker position is stored in the memory m2. Thereafter, the pattern-matching processing is performed on the second and subsequent lines but only within a range of ±N P [pix] from the pixel position of the marker obtained as a result of the pattern-matching processing on the immediately previous line.
- the next line is subjected to the pattern-matching processing only within a predetermined range from the pixel position of the marker 4.
- the range ±N P [pix] to be subjected to the pattern-matching processing is determined by taking into account the distance of movement of the marker (the distance of vertical displacement of the pantograph) per unit time in the image capturing using the line sensor camera 2.
- the "Conventional Railway Structure Regulation," Section 62, states that in an area where a train travels at a speed faster than 50 km/h, the inclination of a trolley wire must be 5/1000 or smaller in a case where the trolley wire is suspended from a catenary or an overhead rigid conductor line, and be 15/1000 or smaller otherwise.
- An inclination of 5/1000 means a 5-m change in height over a distance of 1000 m.
- the sampling frequency of the line sensor camera 2 is set to 1000 Hz (an image containing 1000 lines is captured in 1 second (1-ms intervals)).
- when the car 1 travels at a speed of 50 km/h, for example, the car 1 advances a distance of approximately 13.888 m per second, which is approximately 0.013888 m per millisecond (1 ms).
- the height of the pantograph thus changes by approximately 0.21 mm (0.00021 m) per unit time (1 ms), even with the 15/1000 inclination.
- a reference acceleration for detecting a hard spot is set to 20 G. This corresponds to a displacement change of about 0.1 mm per unit time (1 ms). Given "Conventional Railway Structure Regulation, Section 62" mentioned above, a 10-mm change per unit time should be large enough. So, with reference to the pixel position of the marker 4 detected in the immediately previous line, a pixel width N P [pix] is calculated which assumes a 10-mm change per unit time (1 ms) based on the image resolution, and the corresponding range is set as the range to be subjected to the pattern matching.
- This embodiment shows an example assuming a 10-mm change per unit time as a condition for calculating the pixel width N P [pix]. Note, however, that the condition for calculating the pixel width N P [pix] is not limited thereto. Any condition may be set when necessary.
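- A worked version of the ±N P calculation under the stated assumptions (1000 Hz line rate, at most a 10-mm height change per line) might look as follows; the resolution value and function name are illustrative.

```python
def search_range_pix(resolution_mm_per_pix, max_change_mm=10.0):
    """Half-width N_P [pix] of the pattern-matching range around the marker position
    found on the previous line, assuming at most max_change_mm of vertical movement
    per line period (1 ms at a 1000 Hz line rate)."""
    return max(1, int(round(max_change_mm / resolution_mm_per_pix)))

# Example: with a resolution of 0.5 mm/pix the search window is +/-20 pixels.
print(search_range_pix(0.5))  # -> 20
```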
- the pattern-matching processing is performed only within a range of ±N P [pix] from the pixel position of the marker 4 in the immediately previous line detected by pattern matching. Accordingly, the processing time can be shortened as compared to Embodiment 1. Further, narrowing the range to be subjected to the pattern-matching processing can lower the possibility of detecting noises and the like.
- FIG. 8 is an explanatory diagram showing levels of a value of correlation to the template in an input image.
- This embodiment differs from Embodiment 1 in the processing of the pattern matching unit 5e.
- the other configurations are substantially the same as those described in Embodiment 1.
- the processing units providing the same effects as the foregoing configurations shown in Figs. 1 to 7 will be denoted by the same reference numerals, and overlapping descriptions will be omitted. The differences will be mainly described.
- correlation values R are first calculated as indexes each representing the degree of resemblance to the registered reference template 7A, and then the pattern-matching processing is performed while the pitch at which to shift the scaled template 7B i is changed in accordance with the corresponding correlation value R.
- in this embodiment, the following processing is performed as part of step P5 shown in Fig. 7, before the processing in step P5 described in Embodiment 1 is performed.
- the correlation value R to the reference template 7A is calculated for each pixel position i in the input image 6.
- the correlation value R can be found through calculation using the following expression (6). Note that the calculation targets one-dimensional correlation because the line sensor camera 2 is a camera that captures one-dimensional images.
- R is the correlation value
- L is the width of the template image (set to be smaller than the width of the search image)
- W i is a luminance value at the pixel position i in the search image
- T i is a luminance value at the pixel position i in the template image.
- the pitch at which to shift the scaled template 7B i during the pattern-matching processing is set in accordance with the correlation value R.
- the shifting pitch of the scaled template 7B i is set such that the scaled template 7B i is shifted by a smaller pitch during the pattern-matching processing on a part having a higher correlation value R than on a part having a lower correlation value R.
- the correlation value is 1 at the maximum and 0 at the minimum.
- the correlation value R may be about 0.8 in a part having a low correlation and about 0.99 in a part having a high correlation, for example.
- the shifting pitch of the scaled template 7B i during the pattern-matching processing can be set on the basis of the correlation value R in the following way.
- the shifting pitch of the scaled template 7B i is increased by 1 [pix] upon a decrease of the correlation value R by 0.05, so that the shifting pitch is 1 [pix] when the correlation value R is 0.95 or higher while the shifting pitch is 2 [pix] when the correlation value R is 0.85 to 0.9.
- step P5 described in Embodiment 1 is performed after the shifting pitch of the scaled template 7B i is set as mentioned above.
- the step width of the correlation value R for changing the shifting pitch (0.05 in this embodiment) is set manually.
- the shifting pitch of the scaled template 7B i is not limited to those mentioned above and may be set to any values as needed.
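- Expression (6) itself is not reproduced above; the sketch below assumes the common one-dimensional normalized cross-correlation built from the quantities defined there (luminance values W i of the search image and T i of the template over the template width L), and maps the correlation value to a shifting pitch in 0.05 steps. The exact breakpoints are example values only.

```python
import numpy as np

def correlation_1d(search_window, template):
    """Assumed 1D normalized correlation R in [0, 1] for non-negative luminance:
    sum(W_i * T_i) / sqrt(sum(W_i^2) * sum(T_i^2))."""
    w = np.asarray(search_window, dtype=float)
    t = np.asarray(template, dtype=float)
    denom = np.sqrt(np.sum(w * w) * np.sum(t * t))
    return float(np.sum(w * t) / denom) if denom > 0 else 0.0

def shifting_pitch(r, step=0.05, top=0.95, max_pitch=5):
    """Pitch is 1 pix when R >= top and grows by 1 pix for every `step` that R
    falls below top, capped at max_pitch."""
    if r >= top:
        return 1
    return min(max_pitch, 1 + int(np.ceil((top - r) / step)))
```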
- the shifting pitch is changed on the basis of the level of the correlation value R, and therefore the pattern-matching processing can be performed more efficiently than Embodiment 1 in which the pattern-matching processing is performed on a pixel basis.
- FIG. 9 is an explanatory diagram showing an example of a case where a partitioning position is present within a processing range of pattern matching.
- Fig. 10 is an explanatory diagram showing an example of re-setting the partitioning position shown in Fig. 9.
- This embodiment differs from Embodiments 1 and 2 in the method of the processing in step P5 shown in Fig. 7.
- the other configurations are substantially the same as those described in Embodiments 1 and 2.
- the processing units providing the same effects as the foregoing configurations described in Embodiments 1 and 2 will be denoted by the same reference numerals, and overlapping descriptions will be omitted. The differences will be mainly described.
- Embodiment 2 attempts to improve the processing efficiency by limiting the processing range of the pattern matching on the basis of the pixel position of the marker 4 detected by the pattern-matching processing on the immediately previous line.
- a partitioning position 8 may be present within a processing range B of the pattern matching in the input image 6 as shown in Fig. 9 (a range of ±N P [pix] from the pixel position of the marker 4 obtained through the pattern-matching processing on the previous line).
- the scale factor of the scaled template 7B i needs to be switched for the two sections (the sections A i and A i+1 in this embodiment).
- if a partitioning position 8 is included in the processing range B of the pattern matching as shown in Fig. 9 at the time of performing the pattern-matching processing, processing is performed so that: all the partitioning positions 8 are automatically re-set to exclude the partitioning position 8 from the processing range B; and the size of each scaled template 7B i is re-set to match its corresponding newly-set section A i.
- the following processing is performed as the processing in the foregoing step P5 shown in Fig. 7 .
- the same pattern-matching processing as Embodiment 1 is performed on the first line of an input image 6, and the pixel position of the detected marker 4 is stored in the memory m2.
- a range of ±N P [pix] from the pixel position of the marker 4 detected through the pattern-matching processing on the immediately previous line is set as the processing range B of the pattern matching.
- new scaled templates 7B i are set by: re-setting the sections A i on the basis of the pixel position of the marker 4 obtained from the immediately previous line; and re-calculating the scale factors of the scaled templates 7B i for the re-set sections A i .
- the pattern-matching processing described in Embodiment 1 is performed after the above processing is performed. Note that the scale factors of the scaled templates 7B i and the partitioning positions 8 are calculated by use of the methods described in Embodiment 1.
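- One possible reading of the automatic re-setting is sketched below: if any partitioning position falls inside the processing range B around the previous marker position, all partitions are shifted by the smallest offset that moves the offending position out of that range. This is an illustrative interpretation, not the exact procedure of the embodiment.

```python
def shift_partitions(partitions, marker_pos, n_p):
    """partitions: sorted partitioning positions [pix]; marker_pos: marker pixel
    position from the previous line; n_p: half-width of the processing range B."""
    lo, hi = marker_pos - n_p, marker_pos + n_p
    offset = 0
    for p in partitions:
        if lo <= p <= hi:
            # Move the offending boundary just past the nearer edge of the range B.
            offset = (lo - 1 - p) if (p - lo) < (hi - p) else (hi + 1 - p)
            break
    return [p + offset for p in partitions]
```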
- the reference template 7A is set manually, but the scaled templates 7B i are set automatically after this manual setting on the basis of the resolutions of the corresponding image positions.
- FIG. 11 is an explanatory diagram showing an example of template sizes set to their respective sections.
- Fig. 12 is an explanatory diagram showing an example of extracting an edge of one of white regions 6a representing traces of white portions 4w of the marker 4 in an input image.
- a trace M of the marker 4 crosses the partitioning position 8 between the sections A i and A i+1 as shown in Fig. 11
- performing the pattern matching by use of any of the methods of Embodiments 1 to 4 requires the single marker trace M to be subjected to the pattern-matching processing using two scaled templates 7B i and 7B i+1 for the respective two sections A i and A i+1 .
- the pattern matching results may possibly be offset from each other by several pixels at the partitioning position 8. An offset of several pixels leads to a large error in the calculation of the acceleration.
- the pattern-matching processing is performed in advance by using the scaled template 7B i to detect a rough pixel position of the marker trace M; and then an edge of the marker trace M is extracted on the basis of the detected pixel position of the marker trace M. This prevents the occurrence of an error attributable to the marker trace M crossing the partitioning position 8 between the sections A i and A i+1 .
- although any threshold can be used to extract the edge of the marker trace M, setting a constant as the threshold may prevent accurate edge extraction when the input image 6 appears dark or bright as a whole. For this reason, it is preferable to calculate the average value of luminance in the processing range and set the average value as the threshold. In this way, stable edge extraction can be performed even when the luminance values of the whole image change.
- in this embodiment, the following processing is performed as the processing in step P5 shown in Fig. 7.
- a rough marker center position P C is detected through the pattern-matching processing using the scaled template 7B i .
- an average value B A of luminance in a range starting from the marker center position P C and having half the current template size W T is calculated.
- the last pixel position (the highest edge) P E in a region with higher luminance values than the average luminance value B A is found, and that position is extracted as the pixel position of the marker trace M.
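- The edge extraction described above can be sketched as follows; the scan direction (increasing pixel index from the rough centre P C) and the strict ">" comparison are assumptions.

```python
import numpy as np

def extract_marker_edge(line, center_pos, template_size):
    """line: 1D luminance data of one captured line; center_pos: rough marker centre P_C
    from pattern matching; template_size: current template size W_T [pix]."""
    half = max(1, template_size // 2)
    window = np.asarray(line[center_pos:center_pos + half], dtype=float)
    b_a = window.mean()                       # average luminance B_A used as the threshold
    above = np.where(window > b_a)[0]         # pixels brighter than the threshold
    if above.size == 0:
        return center_pos
    return center_pos + int(above[-1])        # last (highest) such pixel = edge P_E
```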
- the term "rough" used in this embodiment refers to a range within which the pattern-matching processing results in the detection of a marker center position that will not cause an error in the matching results.
- the allowable range of error in the template size W T is about ⁇ 10% experimentally.
- a rough marker position is detected through the pattern matching, and then the edge of the marker 4 is extracted on the basis of the detected pixel position of the marker 4.
- the template size is not changed at the partitioning position set in the input image 6. This makes it possible to avoid the offset between the pattern matching results attributable to the changing of the template size W T at the partitioning position in the image, and hence improve the accuracy.
- the average luminance value of the range starting from the marker center position, detected through the pattern matching, and having half the current template size W T is calculated, and that value is used as the threshold for the edge extraction. Accordingly, stable edge extraction can be performed even when the luminance of the whole input image 6 changes.
- the present invention is applicable to devices for measuring displacement of a pantograph and methods for detecting a hard spot of a trolley wire, and is preferably applicable particularly to devices for measuring displacement of a pantograph and methods for detecting a hard spot of a trolley wire in which a hard spot of a trolley wire is measured through pattern-matching processing using an image, captured by a line sensor camera, of a pantograph equipped with a marker designed for measuring a hard spot.
Description
- The present invention relates to a device for measuring displacement of a pantograph and a method for detecting a hard spot of a trolley wire, and particularly relates to a device for measuring displacement of a pantograph and a method for detecting a hard spot of a trolley wire in which a hard spot of a trolley wire is measured through pattern-matching processing using an image, captured by a line sensor camera, of a pantograph equipped with a marker designed for measuring a hard spot.
- In electric railway facilities, measurement of hard spots of trolley wires can be cited as one of inspection items. A trolley wire is in a state of being suspended from a messenger wire via a hanger, for example. The weight of the trolley wire increases locally at a spot where the hanger is placed and at some spots such as where the trolley wire is connected to another trolley wire and where there is a pull-off, as compared to the other spots. These spots are called "hard spots of a trolley wire."
- When passing these hard spots of a trolley wire, a pantograph, which is a current collector placed on the roof of a car and configured to slide on the trolley wire, sinks abruptly in some cases due to the weight of the trolley wire. In this case, the trolley wire loses its contact with the pantograph and causes a discharge phenomenon called arc discharge. In this event, the trolley wire is locally worn away due to the heat produced by the arc discharge. Thus, the trolley wire is thought to be worn away faster at hard spots than at the other spots.
- In view of the above, detecting hard spots of trolley wires has been an important matter in maintaining, operating, and managing electric railway facilities.
- As mentioned above, the pantograph accelerates greatly in the vertical direction at hard spots of the trolley wire. For this reason, the vertical acceleration of the pantograph whose displacement is equivalent to that of the trolley wire shall be monitored in order to detect the hard spots of the trolley wire. The acceleration of the pantograph can be found by measuring the displacement of the pantograph and performing second order differentiation on the displacement.
- The following methods have conventionally been known as pantograph displacement measuring methods.
- This method is a method in which the displacement of a pantograph is measured by: scanning the pantograph with laser by using a mirror or the like; and studying the phase difference between reflected waves, the deformation of the shape of reflected laser, or the like.
- This method is a method in which the displacement of a pantograph is measured by: projecting stripe-pattern light onto the pantograph; and receiving light with zigzag streaks which correspond to the shape of the pantograph.
- This method is a method in which the displacement of a pantograph is measured by: capturing an image of the pantograph by use of a line sensor camera placed on the roof of a car; and then performing processing such as model matching or pattern matching on the captured image by use of a processing computer (see Patent Documents 1 and 2, for example).
- Among the methods listed above, the image processing method is such that: from the image of the pantograph captured by the line sensor camera, a pixel position on the image is extracted at which a beforehand-prepared model of the pantograph finds a match; then, the actual height of the pantograph is calculated from the pixel position on the image on the basis of the distance from the line sensor camera to the pantograph, the focal length of the lens of the image capturing unit, and the like.
- In this image processing method, a pixel position at which the pre-acquired model of the pantograph finds a match is detected from the captured image of the pantograph as the position of the pantograph. Alternatively, a marker in a black and white stripe pattern is attached to the pantograph placed on the roof of a car, and pattern matching is performed to detect the position of the marker, i.e., the position of the pantograph, from an image captured by the line sensor camera.
- Then, after the position of the pantograph in the image is detected, the pixel position in the image is converted into the actual displacement of the pantograph on the basis of the distance to the pantograph, the focal length of the lens, and the like. Then, by performing second order differentiation on the displacement of the pantograph thus found, the acceleration is calculated. Meanwhile, as described in Patent Document 2, using a line sensor camera can increase the spatial resolution and thus improve the accuracy. This method allows the device to be smaller than those in the laser sensor method and the light section sensor method, hence bringing about an advantage that the device can be mounted not only on inspection cars manufactured exclusively for the measurement but also on business cars.
- Patent Document 2 discloses a displacement measuring device and method for a pantograph having in common with the present invention the features in the pre-characterizing portions of the independent claims.
- PATENT DOCUMENT 1: Japanese Patent Application Publication No. 2006-250774
- PATENT DOCUMENT 2: Japanese Patent Application Publication No. 2008-104312
- Note that in the method using a line sensor camera, as shown in Fig. 13, a line sensor camera 2 is placed on the roof of a car 1 in such a posture as to face obliquely upward in order to capture a marker 4 attached to a pantograph 1a.
- Here, as shown in Fig. 13, a camera elevation angle θA of the line sensor camera 2 is small when the distance from the line sensor camera 2 to the pantograph 1a is long. On the other hand, a camera elevation angle θB of the line sensor camera 2 is large when the distance from the line sensor camera 2 to the pantograph 1a is short. In the following, the line sensor camera 2 illustrated with a solid line and the line sensor camera 2 illustrated with a broken line in Fig. 13 will be referred to as line sensor cameras 2A and 2B, respectively.
- Parts (a) and (b) of Fig. 14 show example input images of the marker 4 captured by the line sensor cameras 2A and 2B, respectively. In a case of using the line sensor camera 2A, the small camera elevation angle θA allows the width of a trace M of the marker to remain substantially the same in an input image 6A as shown in Part (a) of Fig. 14, showing that different heights of the pantograph 1a will cause almost no resolution difference. In contrast, in a case of using the line sensor camera 2B, the large camera elevation angle θB makes the trace M of the marker appear differently depending on the height as shown in Part (b) of Fig. 14, showing that different heights of the pantograph 1a cause a resolution difference in an input image 6B.
Fig. 14 , an image corresponding to a time range TI shows thepantograph 1a at a I position shown inFig. 13 ; an image corresponding to a time range TII shows thepantograph 1a at a II position shown inFig. 13 ; and an image corresponding to a time range TIII shows thepantograph 1a at a III position shown inFig. 13 . - Assume for example that an image of the
marker 4 as indicated by broken-line circles inFig. 14 is acquired as a pattern-matchingtemplate 7 when thepantograph 1a is at the position denoted by II (= time range TII). In this case, performing pattern matching processing on an image 6A captured by theline sensor camera 2A is highly likely to result in successful pattern matching as shown in Part (a) ofFig. 14 . On the other hand, performing pattern matching processing on an image 6B captured by theline sensor camera 2B can possibly cause a problem of finding no match to the size of thetemplate 7 as shown in Part (b) ofFig. 14 and failing the pattern matching. - In view of the above, the present invention is characterized by providing a device for measuring displacement of a pantograph and a method for detecting a hard spot of a trolley wire which can improve the accuracy of pattern-matching processing.
- The device for measuring displacement of a pantograph and the method for detecting a hard spot of a trolley wire according to the present invention are defined in the independent claims. Further advantageous features are set out in the dependent claims.
- Note that the "calibration means" mentioned in the independent claims refers to means using a calibration method in Japanese Patent Application No.
2009-011648 - Note that the term "rough" used in the claims refers to such an extent that the marker center position can be detected in the pattern-matching processing within a range within which no error occurs in the matching results.
- By automatically finding an appropriate number of divided image sections and performing appropriate pattern-matching processing, the device for measuring displacement of a pantograph of the invention allows the pixel position of the marker to be detected highly accurately, and therefore a hard spot of a trolley wire can be found.
- Moreover, according to a device for measuring displacement of a pantograph related to the present invention, the pattern-matching processing means could perform pattern-matching processing only on a predetermined range of the input image on the basis of a result of pattern-matching processing having been performed on an immediately previous line. Accordingly, it is possible to shorten the processing time and also to lower the probability of detecting noises and the like.
- Further, according to the device for measuring displacement of a pantograph of the present invention, it is possible to efficiently perform the pattern-matching processing by setting the shifting pitch of the template lower in a case of a higher correlation value than in a case of a lower correlation value.
- Furthermore, according to the device for measuring displacement of a pantograph of the present invention, if a trace of the marker in the input image lies over any two adjacent ones of the sections, the pattern-matching processing means may automatically correct positions of the sections such that the trace of the marker is included within one of the sections on the basis of the position of the marker detected in an immediately previous line. Accordingly, it is possible to efficiently perform the pattern-matching processing by preventing the occurrence of the switching of the template within the processing range of the pattern matching attributable to the presence of a partitioning position within the processing range.
- In addition, according to the device for measuring displacement of a pantograph of the present invention, the pattern-matching processing means may detect a rough center position of a trace of the marker, calculate an average luminance value of a range starting from the center position and having half a width of the template, and extract the trace of the marker by using the average luminance value as a threshold. Accordingly, it is possible to avoid an error at a partitioning position in the image by preventing the offset between the pattern matching results by several pixels attributable to the changing of the template size at the partitioning position. It is also possible to perform stable edge extraction even when the luminance of the whole image changes.
- By automatically finding an appropriate number of divided image sections and performing appropriate pattern-matching processing, the method for detecting a hard spot of a trolley wire of the present invention allows the pixel position of the marker to be detected highly accurately, and therefore a hard spot of a trolley wire can be found.
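To make the final step concrete, the following sketch turns a displacement series into an acceleration series and flags hard-spot candidates. The moving-average smoothing and the numerical second derivative are stand-ins for the (unspecified) filtering and second-order differentiation; the 20 G threshold comes from the embodiment, while the sampling interval and all names are assumptions.

```python
import numpy as np

def detect_hard_spots(displacement_mm, dt_s=1e-3, window=5, threshold_g=20.0):
    """Smooth the pantograph displacement, differentiate twice to obtain
    vertical acceleration, and flag samples at or above the threshold."""
    x = np.asarray(displacement_mm, dtype=float) / 1000.0        # mm -> m
    kernel = np.ones(window) / window
    smooth = np.convolve(x, kernel, mode="same")                  # smoothing step
    accel = np.gradient(np.gradient(smooth, dt_s), dt_s)          # m/s^2
    accel_g = accel / 9.80665
    hard_spots = np.where(np.abs(accel_g) >= threshold_g)[0]
    return hard_spots, accel_g
```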
- Besides, according to a method for detecting a hard spot of a trolley wire related to the present invention, the fifth step may be performed only on a predetermined range of the input image on the basis of a result of pattern-matching processing performed on the immediately previous line. Accordingly, it is possible to shorten the processing time and also to lower the probability of detecting noise and the like.
- Moreover, according to the method for detecting a hard spot of a trolley wire of the present invention, it is possible to perform the pattern-matching processing efficiently by setting the shifting pitch of the template smaller where the correlation value is high and larger where it is low.
- Further, according to the method for detecting a hard spot of a trolley wire of the present invention, if a trace of the marker in the input image lies over any two adjacent ones of the sections, the fifth step may be performed by automatically correcting positions of the sections such that the trace of the marker is included within one of the sections on the basis of a position of the marker detected in the immediately previous line. Accordingly, it is possible to efficiently perform the pattern-matching processing by preventing the occurrence of the switching of the template within the processing range of the pattern matching attributable to the presence of a partitioning position within the processing range.
- Furthermore, according to the method for detecting a hard spot of a trolley wire of the present invention, the fifth step may be performed by detecting a rough center position of a trace of the marker, calculating an average luminance value of a range starting from the center position and having half a width of the template, and using the average luminance value as a threshold. Accordingly, it is possible to avoid an error at a partitioning position in the image by preventing the pattern-matching results from being offset by several pixels owing to the change of the template size at the partitioning position. It is also possible to perform stable edge extraction even when the luminance of the whole image changes.
-
- [Fig. 1] Fig. 1 is an explanatory diagram showing an example of the placement of a device for measuring displacement of a pantograph of Embodiment 1 of the present invention.
- [Fig. 2] Fig. 2 is a front view of a marker of the device for measuring displacement of a pantograph of Embodiment 1 of the present invention.
- [Fig. 3] Fig. 3 is a block diagram showing a schematic configuration of the device for measuring displacement of a pantograph of Embodiment 1 of the present invention.
- [Fig. 4] Fig. 4 is an explanatory diagram showing an example of an input image in Embodiment 1 of the present invention.
- [Fig. 5] Fig. 5 is an explanatory diagram showing an example of a template in Embodiment 1 of the present invention.
- [Fig. 6] Fig. 6 is an explanatory diagram showing an example of dividing an image in Embodiment 1 of the present invention.
- [Fig. 7] Fig. 7 is a flowchart showing the flow of pantograph measurement processing of Embodiment 1 of the present invention.
- [Fig. 8] Fig. 8 is an explanatory diagram showing levels of a value of correlation to the template in an input image in Embodiment 3 of the present invention.
- [Fig. 9] Fig. 9 is an explanatory diagram showing an example of a case where a partitioning position is present within a processing range of pattern matching in Embodiment 4 of the present invention.
- [Fig. 10] Fig. 10 is an explanatory diagram showing an example of re-setting the partitioning position shown in Fig. 9.
- [Fig. 11] Fig. 11 is an explanatory diagram showing an example of template sizes set to their respective sections in Embodiment 5 of the present invention.
- [Fig. 12] Fig. 12 is an explanatory diagram showing an example of extracting an edge of one of white regions representing traces of white portions of the marker in an input image in Embodiment 5 of the present invention.
- [Fig. 13] Fig. 13 is an explanatory diagram showing an example of the placement of a device for measuring displacement of a pantograph.
- [Fig. 14] Part (a) of Fig. 14 is an explanatory diagram showing an example of an input image in a case where the elevation angle of a line sensor camera is small, and Part (b) of Fig. 14 is an explanatory diagram showing an example of an input image in a case where the elevation angle of the line sensor camera is large.
- Hereinbelow, by referring to the drawings, description will be given of details of a method using image processing of the present invention to improve the accuracy of pattern matching in measurement of a hard spot of a trolley wire.
- A first embodiment of a device for measuring displacement of a pantograph of the present invention will be described using
Figs. 1 to 7 .Fig. 1 is an explanatory diagram showing an example of the placement of a device for measuring displacement of a pantograph ofEmbodiment 1 of the present invention.Fig. 2 is a front view of a marker of the device for measuring displacement of a pantograph ofEmbodiment 1 of the present invention.Fig. 3 is a block diagram showing a schematic configuration of the device for measuring displacement of a pantograph ofEmbodiment 1 of the present invention.Fig. 4 is an explanatory diagram showing an example of an input image inEmbodiment 1 of the present invention.Fig. 5 is an explanatory diagram showing an example of a template inEmbodiment 1 of the present invention.Fig. 6 is an explanatory diagram showing an example of dividing an image inEmbodiment 1 of the present invention.Fig. 7 is a flowchart showing the flow of pantograph measurement processing ofEmbodiment 1 of the present invention. - As shown in
Fig. 1, the pantograph height measuring device in this embodiment includes a line sensor camera 2 as image capturing means fixed to the roof of a car 1, a lighting device 3, a marker 4, and a processing computer 5 placed inside the car 1. - The
line sensor camera 2 is placed on the roof of thecar 1 in such a way as to capture images of apantograph 1a. Specifically, the orientation of theline sensor camera 2 is set such that: the optical axis thereof can be directed obliquely upward; and the scanning-line direction thereof can be orthogonal to the longitudinal direction of thepantograph 1a. Image signals acquired by thisline sensor camera 2 are inputted into theprocessing computer 5. - The orientation and illuminating angle of the
lighting device 3 are set such that a spot to be captured by the line sensor camera 2 can be illuminated with light. - The
marker 4 is formed of a light-reflective material and a non-light-reflective material, and may be placed at any position on the line sensor camera 2-side end surface of thepantograph 1a within a range within which theline sensor camera 2 can capture themarker 4. As shown inFig. 2 , themarker 4 used in this embodiment is formed by alternately arranging twowhite portions 4w made of the light-reflective material and threeblack portions 4b made of the non-light-reflective material. Any size can be selected for themarker 4. - The
processing computer 5 detects the vertical displacement of the pantograph 1a by analyzing an image inputted from the line sensor camera 2, and includes an arithmetic processing unit 5A as arithmetic processing means and a monitor 5B. - As shown in
Fig. 3 , thearithmetic processing unit 5A includes an inputimage creating unit 5a, atemplate setting unit 5b, animage dividing unit 5c, a template enlarging/reducingunit 5d, apattern matching unit 5e, a pantographdisplacement calculating unit 5f, afiltering unit 5g, anacceleration output unit 5h, and memories m1 and m2. - The input
image creating unit 5a as input image creating means creates aninput image 6 as shown inFig. 4 in which image signals inputted from theline sensor camera 2 are arranged in chronological order. As shown inFig. 4 , since themarker 4 reflects light of thelighting device 3, the traces of the white portions of themarker 4 are displayed in theinput image 6 as strip-shapedwhite regions 6a in a black region (a portion indicated with dotted lines in the drawing) 6b. Theinput image 6 is sent to thetemplate setting unit 5b or theimage dividing unit 5c through the memories m1 and m2 as needed. - The
template setting unit 5b as template setting means acquires in advance a marker pattern as shown inFig. 5 as a matching template (hereinafter, referred to as reference template) 7A from aninput image 6 as shown inFig. 4 . To be specific, thetemplate setting unit 5b acquires in advance a marker pattern as the reference template 7A to be used for the extraction of themarker 4 in theinput image 6 in processing of thepattern matching unit 5e, and then registers the marker pattern to the memory m2. The reference template 7A is sent to the template enlarging/reducingunit 5d through the memory m2. - As shown in
Fig. 5 , the reference template 7A is one-dimensional luminance data ofwhite regions 7a andblack regions 7b obtained by extracting the marker portion from an image acquired in advance for the purpose of creating the reference template 7A. It is desirable to cut the image in such a way that the reference template 7A partially includes ablack portion 4b of themarker 4 on the outer side of eachwhite portion 4w as shown inFig. 5 , rather than cutting the image at the boundary of thewhite portion 4w and theblack portion 4b on the outer side. Doing so increases the feature amount of the reference template 7A and therefore reduces erroneous detections. Note that thetemplate setting unit 5b registers the reference template 7A and also an offset width WOS and a template size WT (seeFig. 4 ) at the same time. - The
image dividing unit 5c as image-division processing means providespartitioning positions 8 as shown inFig. 6 in theinput image 6 inputted from the inputimage creating unit 5a to thereby divide theinput image 6 into a predetermined number of sections A1, A2, ..., AN (hereinafter, a given section (s) will be referred to as a section(s) Ai). Information on all the sections Ai is sent to the template enlarging/reducingunit 5d through the memory m2. In this event, the number N of sections is automatically calculated based on the resolution of each pixel found in advance by use of a calibration method in Japanese Patent Application No.2009-011648 - Meanwhile, in this embodiment, the number N of sections is set such that the section with the lowest resolution will not have a resolution 1.1 or more times larger than that of the section with the highest resolution, for a reason that is based on the result of a verification test on the template size WT. Specifically, the number N of sections is based on the result of a test in which the size WT of the reference template 7A acquired from an image capturing the
marker 4 is varied to find out to what extent the reference template 7A is allowed to be scaled up and down before failing to get a successful match in pattern matching performed on the image from which the reference template 7A is acquired. - The template enlarging/reducing
unit 5d as template scaling processing means performs processing to scale up or down the reference template 7A to change its size WT for each section Ai on the basis of the reference template 7A inputted from thetemplate setting unit 5b and the information on the section Ai inputted from theimage dividing unit 5c. Data on each template 7Bi with its size WT thus changed for the corresponding section Ai (hereinafter, referred to as scaled template) is sent to thepattern matching unit 5e through the memory m2. - To be specific, the template enlarging/reducing
unit 5d creates the scaled temples 7Bi corresponding to the sections Ai by: calculating a factor by which the reference template 7A is scaled (hereinafter, referred to as scale factor) for each section Ai; and scaling up or down the reference template 7A through bilinear interpolation which is a common technique for scaling an image. Since the size WT of the reference template 7A is registered at the time of registering the reference template 7A, a size WTi of the scaled template 7Bi can be found by multiplying the size WT of the reference template 7A by the corresponding scale factor. - Here, each scale factor is found from the following expressions (1) to (3). Note that an expression obtained by a calibration method in Japanese Patent Application No.
2009-011648 - In this embodiment, the resolution can be found as a height [mm] per pixel. Specifically, the resolution [mm/pix] can be found by finding a height [mm] at a pixel position n and a height [mm] at a pixel position n+1 next thereto and then subtracting the height at the pixel position n from the height at the pixel position n+1.
- Note that, needless to say, the scale factor is set to 1 when the size WTi of the scaled template 7Bi is equal to the size WT of the reference template 7A.
- The
pattern matching unit 5e as pattern-matching processing means detects the pixel position of themarker 4 in theinput image 6 by performing pattern-matching processing for each section Ai on the basis of the information on the section Ai inputted from theimage dividing unit 5c and the data on the corresponding scaled template 7Bi inputted from the template enlarging/reducingunit 5d. The pixel position of themarker 4 obtained by thepattern matching unit 5e is sent to the pantographdisplacement calculating unit 5f through the memory m2. - The pantograph
displacement calculating unit 5f as pantograph displacement calculating means converts the displacement of the marker 4 in the input image 6 into the actual displacement of the pantograph 1a on the basis of the pixel position of the marker 4 in the input image 6 inputted from the pattern matching unit 5e. Note that an approximate expression obtainable for example from Japanese Patent Application No. 2009-011648 is used to convert the displacement of the pantograph 1a in the input image 6 into the actual displacement of the pantograph 1a. Data on the actual displacement of the pantograph 1a obtained by the pantograph displacement calculating unit 5f is sent to the filtering unit 5g through the memory m2. - The
filtering unit 5g as filtering processing means performs smoothing processing on the displacement data inputted from the pantographdisplacement calculating unit 5f. The actual displacement of thepantograph 1a is in a state of containing quantization errors of the image. Hence, the actual displacement data is subjected to filtering processing to smooth the displacement data. As a result, the quantization errors contained in the displacement data are reduced. The displacement data after the smoothing (hereinafter, referred to as smoothed displacement data) is sent to theacceleration output unit 5h through the memory m2. - The
acceleration output unit 5h as acceleration outputting means performs second order differentiation on the smoothed displacement data inputted from thefiltering unit 5g to calculate the acceleration of themarker 4, i.e., thepantograph 1a, in the vertical direction. To be specific, the acceleration is found by performing second order differentiation on the displacement data smoothed by the filtering processing and then outputted to themonitor 5B. In this event, a point where the acceleration of thepantograph 1a is 20 G or greater, for example, is detected as a hard spot in this embodiment. The calculated acceleration data is outputted to themonitor 5B through the memory m2 and displayed on themonitor 5B. - Hereinbelow, based on
Fig. 7 , a brief description will be given of the flow of trolley-wire hard-spot detection processing performed in theprocessing computer 5 of this embodiment. - As shown in
Fig. 7 , in theprocessing computer 5, thetemplate setting unit 5b first performs the processing to register a reference template 7A (step P1). Then, the inputimage creating unit 5a performs the processing to create aninput image 6 in which image signals outputted from theline sensor camera 2 are arranged in chronological order (step P2). Thereafter, as shown inFig. 6 , theimage dividing unit 5c performs the processing to divide theinput image 6 into a predetermined number N of sections A1, A2, ..., AN (step P3). - Then, the template enlarging/reducing
unit 5d performs the processing to scale up or down the reference template 7A registered in step P1 for a given section Ai (step P4). Then, the pattern matching unit 5e performs the pattern-matching processing to compare a scaled template 7Bi, obtained by scaling up or down the reference template 7A for the section Ai of the input image 6, with the input image 6 in an attempt to detect the position (pixel position) of the marker 4 in the input image 6 (step P5). Thereafter, it is judged whether or not the pattern matching for the section Ai is completed (step P6). The processing proceeds to step P7 if the judgment result shows that the pattern-matching processing for the section Ai is not yet completed (NO). On the other hand, the processing returns to step P4 if the pattern-matching processing for the section Ai is completed (YES). - In step P7, it is judged whether or not the pattern-matching processing is completed for the entire data of the input image. The processing proceeds to step P8 if the judgment result shows that the pattern-matching processing is completed for the entire data of the input image (YES). On the other hand, the processing returns to step P5 if the pattern-matching processing is not yet completed for the entire data of the input image (NO). - In step P8, the pantograph
displacement calculating unit 5f performs the processing to convert the pixel position of the marker 4 in the input image 6 into the actual displacement of the pantograph 1a on the basis of the detected marker position, for the entire input image 6. - Then, the
filtering unit 5g performs the filtering processing (step P9). Lastly, the acceleration output unit 5h performs the processing to output the acceleration of the pantograph (step P10). - In the device for measuring displacement of a pantograph of this embodiment configured as above, an
input image 6 captured at a large camera elevation angle θB as shown in Part (b) ofFig. 13 is divided into the predetermined number N of sections A1, A2, ..., AN as shown inFig. 6 , and then the pattern-matching processing is performed using the scaled templates 7Bi obtained by scaling up or down the reference template 7A on the basis of the resolutions of the respective sections Ai. Accordingly, highly accurate pattern-matching processing can be performed. Moreover, utilizing a calibration result allows accurate calculation of the resolution of each pixel of parts where the calibration is performed. - A second embodiment of the device for measuring displacement of a pantograph of the present invention will be described. This embodiment differs from
Embodiment 1 in the processing of thepattern matching unit 5e. The other configurations are substantially the same as those described inEmbodiment 1. In the following, the processing units providing the same effects will be denoted by the same reference numerals, and overlapping descriptions will be omitted. The differences will be mainly described. - In this embodiment, the following processing is performed as the processing in step P5 shown in
Fig. 7 . - First, the same pattern-matching processing as
Embodiment 1 is performed on the first line of aninput image 6, and the detected marker position is stored in the memory m2. Thereafter, the pattern-matching processing is performed on the second and subsequent lines but only within a range of ±NP[pix] from the pixel position of the marker obtained as a result of the pattern-matching processing on the immediately previous line. - In sum, once the trace of the
marker 4 is detected in theinput image 6, the next line is subjected to the pattern-matching processing only within a predetermined range from the pixel position of themarker 4. Here, the range ±NP[pix] to be subjected to the pattern-matching processing is determined by taking into account of the distance of movement of the marker (the distance of vertical displacement of the pantograph) per unit time in the image capturing using theline sensor camera 2. - Specifically, "Conventional Railway Structure Regulation Section 62" states that in an area where a train travels at a speed faster than 50 km/h, the inclination of a trolley wire must be 5/1000 or smaller in a case where the trolley wire is suspended from a catenary or an overhead rigid conductor line, and be 15/1000 or smaller otherwise. An inclination of 5/1000 means a 5-m change in height over a distance of 1000 m.
- Now, let us explain this while assuming that the sampling frequency of the
line sensor camera 2 is set to 1000 Hz (an image containing 1000 lines is captured in 1 second, i.e., at 1-ms intervals). When the car 1 travels at a speed of 50 km/h, for example, the car 1 advances a distance of approximately 13.888 m per second, which is approximately 0.013888 m per millisecond. Then, in a case where the pantograph 1a is displaced vertically with an inclination of 15/1000, the height of the pantograph changes by approximately 0.21 mm per unit time (1 ms). - In this embodiment, a reference acceleration for detecting a hard spot is set to 20 G. This is an acceleration for a case assuming a 0.1-mm change per unit time (1 ms). Given "Conventional Railway Structure Regulation Section 62" mentioned above, a 10-mm change per unit time should be large enough. So, with reference to the pixel position of the marker 4 detected in the immediately previous line, a pixel width NP[pix] is calculated which assumes a 10-mm change per unit time (1 ms) on the basis of the image resolution, and the corresponding range is set as the range to be subjected to the pattern matching. This embodiment shows an example assuming a 10-mm change per unit time as a condition for calculating the pixel width NP[pix]. Note, however, that the condition for calculating the pixel width NP[pix] is not limited thereto. Any condition may be set when necessary. - When the line sensor camera 2 captures images of the pantograph 1a, the distance of the vertical displacement of the pantograph 1a per unit time of the image capturing is small, and therefore the distance of the movement of the marker 4 is small as well. Thus, once the pixel position of the marker 4 is detected through the pattern-matching processing, the subsequent pattern-matching processing should be performed within a range of ±NP[pix] from that position. Given that the width of the input image 6 captured by the line sensor camera 2 is "WIDTH," a time t required for completing the pattern-matching processing in this embodiment can be expressed in the following expression (5) using a time t0 required for completing the pattern-matching processing in Embodiment 1.
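The published text does not reproduce expression (5) itself. As an assumption only: if the per-line search is narrowed from the full image width WIDTH to a window of roughly 2·NP pixels, the relation would take a form such as

$$ t \approx t_0 \times \frac{2\,N_P}{\mathrm{WIDTH}} $$

where t0 is the full-width matching time of Embodiment 1; the exact constant in the published expression may differ.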
[Formula 1] - As described, in the device for measuring displacement of a pantograph of this embodiment, the pattern-matching processing is performed only within a range of ±NP[pix] from the pixel position of the
marker 4 in the immediately previous line detected by pattern matching. Accordingly, the processing time can be shortened as compared toEmbodiment 1. Further, narrowing the range to be subjected to the pattern-matching processing can lower the possibility of detecting noises and the like. - A third embodiment of the device for measuring displacement of a pantograph of the present invention will be described by use of
Fig. 8. Fig. 8 is an explanatory diagram showing levels of a value of correlation to the template in an input image. - This embodiment differs from
Embodiment 1 in the processing of thepattern matching unit 5e. The other configurations are substantially the same as those described inEmbodiment 1. In the following, the processing units providing the same effects as the foregoing configurations shown inFigs. 1 to 7 will be denoted by the same reference numerals, and overlapping descriptions will be omitted. The differences will be mainly described. - As shown in
Fig. 8 , a value R of correlation between theinput image 6 and the scaled template 7Bi is highest (R = RH) in a part including the trace of the marker captured in theinput image 6 and is low (R = RL) in the other parts. Moreover, the part in which themarker 4 is captured dominates merely one part of theactual input image 6. - Thus, in this embodiment, in pattern matching on the
input image 6 shown inFig. 8 , correlation values R are first calculated as indexes each representing the degree of resemblance to the registered reference template 7A, and then the pattern-matching processing is performed while the pitch at which to shift the scaled template 7Bi is changed in accordance with the corresponding correlation value R. - Specifically, in this embodiment, the following processing is performed as the processing in step S5 shown in
Fig. 7 at timing before performing the processing in step P5 described inEmbodiment 1. - First, the correlation value R to the reference template 7A is calculated for each pixel position i in the
input image 6. The correlation value R can be found through calculation using the following expression (6). Note that the calculation is targeting one-dimensional correlation because theline sensor camera 2 is a camera to capture one-dimensional images.
[Formula 2] - Here, R is the correlation value; L is the width of the template image (set to be smaller than the width of the search image) ; Wi is a luminance value at the pixel position i in the search image; and Ti is a luminance value at the pixel position i in the template image.
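Expression (6) is likewise not reproduced in this text. A standard one-dimensional normalized cross-correlation consistent with the symbol definitions that follow — an assumed form, not necessarily the exact published one — is

$$ R = \frac{\sum_{k=0}^{L-1} W_{i+k}\, T_{k}}{\sqrt{\sum_{k=0}^{L-1} W_{i+k}^{2}}\ \sqrt{\sum_{k=0}^{L-1} T_{k}^{2}}} $$

which yields values between 0 and 1 for non-negative luminance data, matching the later statement that the correlation value is 1 at the maximum and 0 at the minimum.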
- Next, the pitch at which to shift the scaled template 7Bi during the pattern-matching processing is set in accordance with the correlation value R. To be specific, the shifting pitch of the scaled template 7Bi is set such that the scaled template 7Bi will shift shorter during the pattern-matching processing on a part having a higher correlation value R than during the pattern-matching processing on a part having a lower correlation value R.
- The correlation value is 1 at the maximum and 0 at the minimum. In the pattern-matching processing of this embodiment, the correlation value R may be about 0.8 in a part having a low correlation and about 0.99 in a part having a high correlation, for example. In this respect, the shifting pitch of the scaled template 7Bi during the pattern-matching processing can be set on the basis of the correlation value R in the following way. The shifting pitch of the scaled template 7Bi is increased by 1 [pix] upon increase of the correlation value R by 0.05 so that the shifting pitch is 1 [pix] when the correlation value R is 0.95 or higher while the shifting pitch is 2 [pix] when the correlation value R is 0.9 to 0.85.
- In this embodiment, the processing in step P5 described in
Embodiment 1 is performed after the shifting pitch of the scaled template 7Bi is set as mentioned above. - Note that a threshold of the correlation value R (0.05 in this embodiment) for changing the shifting pitch is set manually. In addition, the shifting pitch of the scaled template 7Bi is not limited to those mentioned above and may be set to any values as needed.
- In the device for measuring displacement of a pantograph of this embodiment configured as above, the shifting pitch is changed on the basis of the level of the correlation value R, and therefore the pattern-matching processing can be performed more efficiently than
Embodiment 1 in which the pattern-matching processing is performed on a pixel basis. - A fourth embodiment of the device for measuring displacement of a pantograph of the present invention will be described by use of
Figs. 9 and 10. Fig. 9 is an explanatory diagram showing an example of a case where a partitioning position is present within a processing range of pattern matching. Fig. 10 is an explanatory diagram showing an example of re-setting the partitioning position shown in Fig. 9. - This embodiment differs from
Embodiments 1 and 2 in the processing in step P5 shown in Fig. 7. The other configurations are substantially the same as those described in Embodiments 1 and 2; overlapping descriptions will be omitted, and the differences will be mainly described.
Embodiment 2 mentioned above attempts to improve the processing efficiency by limiting the processing range of the pattern matching on the basis of the pixel position of themarker 4 detected by the pattern-matching processing on the immediately previous line. However, inEmbodiment 2, apartitioning position 8 may be present within a processing range B of the pattern matching in theinput image 6 as shown inFig. 9 (a range of ±NP[pix] from the pixel position of themarker 4 obtained through the pattern-matching processing on the previous line). In this case, the scale factor of the scaled template 7Bi needs to be switched for the two sections (the sections Ai and Ai+1 in this embodiment). - To solve this, in this embodiment, if a
partitioning position 8 is included in the processing range B of the patter matching as shown inFig. 9 at the time of performing the pattern-matching processing, processing is performed so that: all thepartitioning positions 8 can be automatically re-set to exclude thepartitioning position 8 from the processing range B; and the size of each scaled template 7Bi can be re-set to be adjusted to its corresponding newly-set section Ai. - Specifically, in this embodiment, the following processing is performed as the processing in the foregoing step P5 shown in
Fig. 7 . First, the same pattern-matching processing asEmbodiment 1 is performed on the first line of aninput image 6, and the pixel position of the detectedmarker 4 is stored in the memory m2. Thereafter, for the second and subsequent lines, a range of ±NP[pix] from the pixel position of themarker 4 detected through the pattern-matching processing on the immediately previous line is set as the processing range B of the pattern matching. - Then, it is checked for a pixel position P [pix] of the
marker 4 obtained from the immediately previous line as to whether or not apartitioning position 8 is included in the processing range B of P±NP[pix]. If nopartitioning position 8 is included, the pattern matching is performed using the same scaled templates 7Bi as those of the immediately previous line. - On the other hand, if a
partitioning position 8 is included in the range of P±NP [pix], new scaled templates 7Bi are set by: re-setting the sections Ai on the basis of the pixel position of themarker 4 obtained from the immediately previous line; and re-calculating the scale factors of the scaled templates 7Bi for the re-set sections Ai. The pattern-matching processing described inEmbodiment 1 is performed after the above processing is performed. Note that the scale factors of the scaled templates 7Bi and thepartitioning positions 8 are calculated by use of the methods described inEmbodiment 1. - Note that in this embodiment, the reference template 7A is set manually, but the scaled templates 7Bi are set automatically after this manual setting on the basis of the resolutions of the corresponding image positions.
- In the device for measuring displacement of a pantograph of this embodiment mentioned above, if a partitioning position is included in the processing range of the pattern matching, new scaled templates 7Bi and
partitioning positions 8 are set automatically on the basis of the pixel position of themarker 4 obtained from the immediately previous line so that nopartitioning position 8 may be included in the processing range. Accordingly, in addition to the effect ofEmbodiment 2, this embodiment makes it possible to perform highly efficient pattern-matching processing. - A fifth embodiment of the device for measuring displacement of a pantograph of the present invention will be described by use of
Figs. 11 and12 .Fig. 11 is an explanatory diagram showing an example of template sizes set to their respective sections.Fig. 12 is an explanatory diagram showing an example of extracting an edge of one ofwhite regions 6a representing traces ofwhite portions 4w of themarker 4 in an input image. - If a trace M of the marker 4 (hereinafter, referred to as marker trace) crosses the
partitioning position 8 between the sections Ai and Ai+1 as shown inFig. 11 , performing the pattern matching by use of any of the methods ofEmbodiments 1 to 4 requires the single marker trace M to be subjected to the pattern-matching processing using two scaled templates 7Bi and 7Bi+1 for the respective two sections Ai and Ai+1. As a consequence, the pattern matching results may possibly be offset from each other by several pixels at thepartitioning position 8. Offset by several pixels leads to a large error in the calculation of the acceleration. - To solve this, in this embodiment, if the marker trace M crosses the
partitioning position 8 between the sections Ai and Ai+1, the pattern-matching processing is performed in advance by using the scaled template 7Bi to detect a rough pixel position of the marker trace M; and then an edge of the marker trace M is extracted on the basis of the detected pixel position of the marker trace M. This prevents the occurrence of an error attributable to the marker trace M crossing thepartitioning position 8 between the sections Ai and Ai+1. - Note that although any threshold can be used to extract the edge of the marker trace M, setting a constant as the threshold may prevent accurate edge extraction when the
input image 6 appears dark or bright in whole. For this reason, it is preferable to calculate the average value of luminance in the processing range and set the average value as the threshold. In this way, stable edge extraction can be performed even when the luminance values of the image change in whole. - Specifically, in this embodiment, the following processing is performed as the processing in step P5 shown in
Fig. 7 . - First, if the marker trace M crosses the
partitioning position 8 between the sections Ai and Ai+1, a rough marker center position PC is detected through the pattern-matching processing using the scaled template 7Bi. Thereafter, as shown inFig. 12 , an average value BA of luminance in a range starting from the marker center position PC and having half the current template size WT is calculated. Then, the last pixel position (the highest edge) PE in a region with higher luminance values than the average luminance value BA is found, and that position is extracted as the pixel position of the marker trace M. - Note that the term "rough" used in this embodiment refers to a range within which the pattern-matching processing results in the detection of a marker center position that will not cause an error in the matching results. The allowable range of error in the template size WT is about ±10% experimentally.
- In the device for measuring displacement of a pantograph of this embodiment configured as above, a rough marker position is detected through the pattern matching, and then the edge of the
marker 4 is extracted on the basis of the detected pixel position of themarker 4. Thus, the template size is not changed at the partitioning position set in theinput image 6. This makes it possible to avoid the offset between the pattern matching results attributable to the changing of the template size WT at the partitioning position in the image, and hence improve the accuracy. Moreover, the average luminance value of the range starting from the marker center position, detected through the pattern matching, and having half the current template size WT is calculated, and that value is used as the threshold for the edge extraction. Accordingly, stable edge extraction can be performed even when the luminance of thewhole input image 6 changes. - The present invention is applicable to devices for measuring displacement of a pantograph and methods for detecting a hard spot of a trolley wire, and is preferably applicable particularly to devices for measuring displacement of a pantograph and methods for detecting a hard spot of a trolley wire in which a hard spot of a trolley wire is measured through pattern-matching processing using an image, captured by a line sensor camera, of a pantograph equipped with a marker designed for measuring a hard spot.
-
- 1 car
- 2 line sensor camera
- 3 lighting device
- 4 marker
- 4w white portion
- 4b black portion
- 5 processing computer
- 5A arithmetic processing unit
- 5B monitor
- 5a input image creating unit
- 5b template setting unit
- 5c image dividing unit
- 5d template enlarging/reducing unit
- 5e pattern matching unit
- 5f pantograph displacement calculating unit
- 5g filtering unit
- 5h acceleration output unit
- 6 input image
- 6a white region
- 6b black region
- 7 template
- 7a white region
- 7b black region
- 8 partitioning position
- A1, A2, ..., Ai, ..., AN section
- WT template size
Claims (6)
- A device for measuring displacement of a pantograph including image capturing means (2) adapted to be placed on a roof of a car (1) for capturing an image of a pantograph (1a), and image processing means (5) for acquiring displacement of the pantograph by performing image processing on an input image captured by the image capturing means,
wherein the image processing means comprise:input image creating means (5a) for creating an input image by using an image signal inputted from the image capturing means;template setting means (5b) for acquiring, as a template, a marker pattern obtained by extracting a marker portion from an image acquired in advance;image-division processing means (5c) for dividing the input image into a predetermined number of sections on the basis of a resolution of each of pixels obtained by calibration means;template scaling processing means (5d) for scaling up or down the template for each of the sections of the input image on the basis of the resolution of the section;pattern-matching processing means (5e) for detecting a pixel position of a marker (4) in the input image by using the template and the input image; andpantograph displacement calculating means (5f) for calculating actual displacement of the pantograph on the basis of the pixel position of the marker;characterized in that the image processing means further comprise:filtering processing means (5g) for performing smoothing processing on data on the displacement of the pantograph; andacceleration outputting means (5h) for outputting acceleration of the pantograph calculated on the basis of the data on the displacement of the pantograph smoothed by the smoothing processing means;wherein the pattern-matching processing means are adapted to set a pitch at which to shift the template, in accordance with a value representing correlation to the template and detected from the input image. - The device for measuring displacement of a pantograph according to claim 1, wherein, if a trace of the marker (4) lies over any two adjacent ones of the sections, the pattern-matching processing means (5e) are adapted to automatically corrects positions of the sections such that the trace of the marker is included within one of the sections on the basis of the position of the marker detected in an immediately previous line.
- The device for measuring displacement of a pantograph according to claim 1, wherein the pattern-matching processing means (5e) are adapted to detect a rough center position of a trace of the marker (4), to calculate an average luminance value of a range starting from the center position and having half a width of the template, and to extract the trace of the marker by using the average luminance value as a threshold.
- A method for detecting a hard spot of a trolley wire by using a device for measuring displacement of a pantograph including image capturing means (2) placed on a roof of a car (1) for capturing an image of a pantograph (1a), and image processing means (5) for acquiring displacement of the pantograph by performing image processing on an input image captured by the image capturing means, the method comprising:a first step of acquiring, as a template, a marker pattern obtained by extracting a marker portion from an image acquired in advance;a second step of creating an input image by using an image signal inputted from the image capturing means;a third step of dividing the input image into a predetermined number of sections on the basis of a resolution of each of pixels obtained by calibration means;a fourth step of scaling up or down the template for each of the sections of the input image on the basis of the resolution of the section;a fifth step of detecting a pixel position of a marker (4) in the input image through pattern-matching processing of the template and the input image; anda sixth step of calculating actual displacement of the pantograph on the basis of the pixel position of the marker;the method characterized by:a seventh step of performing smoothing processing on data on the displacement of the pantograph; andan eighth step of outputting acceleration of the pantograph calculated on the basis of the data on the displacement of the pantograph smoothed by the smoothing processing means;wherein the fifth step is performed by setting a pitch at which to shift the template, on the basis of a value representing correlation to the template and detected from the input image for each pixel position.
- The method for detecting a hard spot of a trolley wire according to claim 4, wherein, if a trace of the marker (4) lies over any two adjacent ones of the sections, the fifth step is performed by automatically correcting positions of the sections such that the trace of the marker is included within one of the sections on the basis of a position of the marker detected in the immediately previous line.
- The method for detecting a hard spot of a trolley wire according to claim 4, wherein the fifth step is performed by detecting a rough center position of a trace of the marker (4), calculating an average luminance value of a range starting from the center position and having half a width of the template, and using the average luminance value as a threshold.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2009118285A JP5321235B2 (en) | 2009-05-15 | 2009-05-15 | Pantograph displacement measuring device and trolley wire hard spot detection method |
PCT/JP2010/058083 WO2010131696A1 (en) | 2009-05-15 | 2010-05-13 | Device for measuring displacement of pantograph and method for detecting hard spot of trolley wire |
Publications (3)
Publication Number | Publication Date |
---|---|
EP2431706A1 EP2431706A1 (en) | 2012-03-21 |
EP2431706A4 EP2431706A4 (en) | 2014-04-02 |
EP2431706B1 true EP2431706B1 (en) | 2017-08-16 |
Family
ID=43085069
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP10774953.3A Active EP2431706B1 (en) | 2009-05-15 | 2010-05-13 | Device for measuring displacement of pantograph and method for detecting hard spot of trolley wire |
Country Status (5)
Country | Link |
---|---|
EP (1) | EP2431706B1 (en) |
JP (1) | JP5321235B2 (en) |
KR (1) | KR101292897B1 (en) |
CN (1) | CN102428341B (en) |
WO (1) | WO2010131696A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11019548B2 (en) | 2017-11-24 | 2021-05-25 | Samsung Electronics Co., Ltd. | Electronic device and communication method thereof |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5861318B2 (en) * | 2011-08-26 | 2016-02-16 | 株式会社明電舎 | Trolley wire data comparison device |
JP5534058B1 (en) | 2013-02-19 | 2014-06-25 | 株式会社明電舎 | Wear measuring apparatus and method |
WO2016040997A1 (en) * | 2014-09-15 | 2016-03-24 | Dti Group Limited | Arcing filtering using multiple image capture devices |
CN104833904B (en) * | 2015-04-13 | 2017-09-29 | 东莞市诺丽电子科技有限公司 | Straddle-type monorail pantograph arcing detection method |
CN105158257B (en) * | 2015-05-21 | 2018-07-20 | 苏州华兴致远电子科技有限公司 | Slide plate measurement method and device |
SG11201803516QA (en) * | 2015-12-15 | 2018-06-28 | Mitsubishi Electric Corp | Trolley-wire measurement device and trolley-wire measurement method |
CN105539206B (en) * | 2015-12-24 | 2017-10-17 | 湖南华宏铁路高新科技开发有限公司 | A kind of acquisition methods of electrification railway contact net bar position information |
ITUA20162698A1 (en) * | 2016-04-19 | 2017-10-19 | Mer Mec S P A | OPTICAL SYSTEM FOR THE MEASUREMENT OF THE CONTACT FORCE BETWEEN THE PANTOGRAPH AND THE CATENARY |
CN111091525A (en) * | 2018-10-18 | 2020-05-01 | 株洲中车时代电气股份有限公司 | Contact net hard spot detection system and method thereof |
CN109186469B (en) * | 2018-10-18 | 2019-11-15 | 北京华开领航科技有限责任公司 | Bow net dynamic monitoring system |
JP6858742B2 (en) * | 2018-12-17 | 2021-04-14 | 株式会社フジタ | Displacement measuring device |
JP6669294B1 (en) * | 2019-03-07 | 2020-03-18 | 株式会社明電舎 | Pantograph displacement measuring device and trolley wire hard point detection method |
CN112837260A (en) * | 2019-11-22 | 2021-05-25 | 株洲中车时代电气股份有限公司 | Contact net hard spot detection method, electronic device and readable storage medium |
CN110849885B (en) * | 2019-11-27 | 2022-08-23 | 苏州华兴致远电子科技有限公司 | Hard spot monitoring method, device and system in bow net system |
CN113320445B (en) * | 2020-02-28 | 2022-12-30 | 中铁二院工程集团有限责任公司 | Online monitoring and intelligent hidden danger and fault distinguishing and early warning system for contact network |
JP2021181893A (en) * | 2020-05-18 | 2021-11-25 | シャープ株式会社 | Railway facility measurement device, control method of railway facility measurement device, railway facility measurement program and recording medium |
KR102276634B1 (en) | 2020-09-15 | 2021-07-13 | 엠아이엠테크 주식회사 | System for detecting abnormality of pantograph on electric train installed on vehicle and method for processing thereof |
CN112161577B (en) * | 2020-09-21 | 2021-05-25 | 北京运达华开科技有限公司 | Contact net hard spot detection method and system |
JP7505419B2 (en) | 2021-02-25 | 2024-06-25 | 株式会社明電舎 | Pantograph displacement measuring device and contact wire hard point detection method |
CN113256723B (en) * | 2021-06-29 | 2023-03-21 | 西南交通大学 | Automatic detection method for pantograph lifting time and pantograph head displacement curve |
DE102022208846A1 (en) | 2022-08-26 | 2024-02-29 | Siemens Mobility GmbH | Road vehicle with a pantograph |
CN117309875B (en) * | 2023-09-20 | 2024-04-09 | 北京运达华开科技有限公司 | Non-contact type bow net contact hard point detection device and method |
KR102649465B1 (en) * | 2023-11-22 | 2024-03-20 | 주식회사 미래건설안전 | A system and a method for determining displacement of object based on image analysis regarding the object, and a marker module |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08336204A (en) * | 1995-06-06 | 1996-12-17 | Hitachi Electron Eng Co Ltd | Motion analyzer for pantograph |
JP3198267B2 (en) * | 1997-04-30 | 2001-08-13 | 株式会社東芝 | Image processing device and image forming device |
JP3406195B2 (en) * | 1997-08-26 | 2003-05-12 | 本田技研工業株式会社 | Vehicle distance measuring device |
JP4085588B2 (en) * | 2001-03-22 | 2008-05-14 | 株式会社明電舎 | Pantograph measuring device |
JP4258340B2 (en) * | 2003-10-15 | 2009-04-30 | 株式会社明電舎 | Pantograph detection device |
JP4690749B2 (en) * | 2005-03-11 | 2011-06-01 | 株式会社明電舎 | Pantograph motion measuring device by image processing |
JP4635657B2 (en) * | 2005-03-11 | 2011-02-23 | 株式会社明電舎 | Trolley wire wear measuring device by image processing |
JP4923942B2 (en) * | 2006-10-20 | 2012-04-25 | 株式会社明電舎 | Pantograph measuring device by image processing |
JP2009011648A (en) | 2007-07-06 | 2009-01-22 | Panasonic Electric Works Co Ltd | Attaching structure for wash basin |
JP5097596B2 (en) * | 2008-03-31 | 2012-12-12 | 公益財団法人鉄道総合技術研究所 | Measuring device using line sensor |
-
2009
- 2009-05-15 JP JP2009118285A patent/JP5321235B2/en active Active
-
2010
- 2010-05-13 KR KR1020117027069A patent/KR101292897B1/en active IP Right Grant
- 2010-05-13 EP EP10774953.3A patent/EP2431706B1/en active Active
- 2010-05-13 WO PCT/JP2010/058083 patent/WO2010131696A1/en active Application Filing
- 2010-05-13 CN CN201080021166.1A patent/CN102428341B/en active Active
Non-Patent Citations (1)
Title |
---|
None * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11019548B2 (en) | 2017-11-24 | 2021-05-25 | Samsung Electronics Co., Ltd. | Electronic device and communication method thereof |
US11218938B2 (en) | 2017-11-24 | 2022-01-04 | Samsung Electronics Co., Ltd. | Electronic device and communication method thereof |
Also Published As
Publication number | Publication date |
---|---|
KR101292897B1 (en) | 2013-08-02 |
EP2431706A4 (en) | 2014-04-02 |
JP2010266341A (en) | 2010-11-25 |
EP2431706A1 (en) | 2012-03-21 |
CN102428341B (en) | 2014-05-28 |
KR20120022943A (en) | 2012-03-12 |
CN102428341A (en) | 2012-04-25 |
WO2010131696A1 (en) | 2010-11-18 |
JP5321235B2 (en) | 2013-10-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2431706B1 (en) | Device for measuring displacement of pantograph and method for detecting hard spot of trolley wire | |
JP5494286B2 (en) | Overhead position measuring device | |
EP2960620A1 (en) | Wear measuring device and method for same | |
JP4832321B2 (en) | Camera posture estimation apparatus, vehicle, and camera posture estimation method | |
EP3199910B1 (en) | Line measurement device and method | |
JP2010184527A (en) | Train stop detection system and train travel speed and position detection system | |
EP3936369B1 (en) | Pantograph displacement measuring device, and trolley-wire hard-spot detection method | |
EP2821747B1 (en) | Pantograph measurement method, and pantograph measurement device | |
EP2966400A1 (en) | Overhead line position measuring device and method | |
US20200250806A1 (en) | Information processing apparatus, information processing method, and storage medium | |
JP2008299458A (en) | Vehicle monitoring apparatus and vehicle monitoring method | |
EP2151667B1 (en) | Equipment for measuring abrasion of trolley wire by image processing | |
JP2010127746A (en) | Apparatus for measuring abrasion and deflection of trolley wire by image processing | |
JP2009276910A (en) | Image processor, method and program | |
EP3885701A1 (en) | Image processing device, image processing method, and program | |
EP4082867A1 (en) | Automatic camera inspection system | |
JP3964077B2 (en) | Trolley wire support insulator height measuring device | |
JPH07244717A (en) | Travel environment recognition device for vehicle | |
JP3891730B2 (en) | Trolley wire support bracket mounting angle measuring device | |
JP2020179798A (en) | Turnout detection device and turnout detection method | |
JP2011180049A (en) | Pantograph monitoring system | |
JP4165966B2 (en) | Object recognition device | |
JP2008298733A (en) | Apparatus for measuring wear of trolley wire by image processing | |
JPH11160046A (en) | Appearance inspection method | |
JPH03286399A (en) | Image processing type traffic flow measuring instrument |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20111208 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR |
|
DAX | Request for extension of the european patent (deleted) | ||
A4 | Supplementary search report drawn up and despatched |
Effective date: 20140303 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: B60M 1/28 20060101ALI20140225BHEP Ipc: G01B 11/00 20060101AFI20140225BHEP Ipc: B60L 5/26 20060101ALI20140225BHEP Ipc: G06T 1/00 20060101ALI20140225BHEP |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: B60M 1/28 20060101ALI20170209BHEP Ipc: G01B 11/00 20060101AFI20170209BHEP Ipc: B60L 5/26 20060101ALI20170209BHEP Ipc: G06T 1/00 20060101ALI20170209BHEP |
|
INTG | Intention to grant announced |
Effective date: 20170306 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: REF Ref document number: 919489 Country of ref document: AT Kind code of ref document: T Effective date: 20170915 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602010044461 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20170816 |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG4D |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 919489 Country of ref document: AT Kind code of ref document: T Effective date: 20170816 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20171116 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170816 Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170816 Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170816 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170816 Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170816 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20171116 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20171216 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20171117 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170816 Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170816 Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170816 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170816 |
Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170816 |
Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170816 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602010044461 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 9 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170816 |
Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170816 |
Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170816 |
Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170816 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed |
Effective date: 20180517 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170816 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R119 Ref document number: 602010044461 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
GBPC | Gb: european patent ceased through non-payment of renewal fee |
Effective date: 20180513 |
|
REG | Reference to a national code |
Ref country code: BE Ref legal event code: MM Effective date: 20180531 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170816 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: MM4A |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20180531 |
Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20180531 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20180513 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20181201 |
Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20180513 |
Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20180513 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20180531 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MT Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20180513 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170816 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170816 |
Ref country code: HU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO Effective date: 20100513 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170816 |
Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170816 |
Ref country code: MK Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20170816 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170816 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20240529 Year of fee payment: 15 |