US20170082463A1 - Absolute encoder - Google Patents
- Publication number: US20170082463A1 (U.S. application Ser. No. 14/861,212)
- Authority: US (United States)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Classifications
- G01D5/00: Mechanical means for transferring the output of a sensing member; means for converting the output of a sensing member to another variable; transducers not specially adapted for a specific variable
- G01D5/26: ...characterised by optical transfer means, i.e. using infrared, visible, or ultraviolet light
- G01D5/32: ...with attenuation or whole or partial obturation of beams of light
- G01D5/34: ...the beams of light being detected by photocells
- G01D5/347: ...using displacement encoding scales
- G01D5/34776: Absolute encoders with analogue or digital scales
- G01D5/34792: Absolute encoders with only digital scales or both digital and incremental scales
- G01D5/34794: Optical encoders using the Vernier principle, i.e. incorporating two or more tracks having a (n, n+1, ...) relationship
- G01D18/00: Testing or calibrating apparatus or arrangements provided for in groups G01D1/00-G01D15/00
- G01D18/002: Automatic recalibration
- G01D18/004: Continuous recalibration
Definitions
- The present invention relates to an absolute encoder for detecting the absolute position of a measurement subject.
- Absolute encoders are used in the fields of machine tools, robots, and the like in order to accomplish highly precise positioning control.
- An absolute encoder includes, for example, a scale having a light-dark optical pattern, a light emitting element for irradiating the scale with light, a light receiving element for detecting light that has been transmitted through or reflected by the scale, and an arithmetic device disposed downstream of the light receiving element, and detects the absolute angle of the scale joined to a rotational axis of a motor or the like.
- This type of absolute encoder generally has on the scale an absolute pattern, which is made up of angle-specific patterns for detecting a rough absolute angle, and an equally spaced incremental pattern for enhancing the resolution. With this structure, the absolute encoder is capable of detecting the absolute angle at high resolution.
- In one related-art example, an absolute rotary encoder includes a rotating cylindrical body with a plurality of marks arranged on a cylindrical surface along the circumferential direction in fixed cycles, a light source for emitting light to the cylindrical surface, a detector for detecting the marks by way of a plurality of photoelectric conversion elements arranged at a pitch smaller than the cycle of the marks, and a calculation unit for calculating the absolute angle based on an output of the detector.
- The calculation unit uses correction data to correct a distortion error due to the geometric arrangement of the cylindrical surface and the detector in relation to each other.
- In another related-art example, a displacement detecting device includes a scale that has a scale pattern including incremental components, an optical system for forming an image of the scale pattern with light, a light-receiving element array for detecting the formed scale pattern image, and an arithmetic circuit for analyzing the position of the scale based on a signal of the light-receiving element array.
- The displacement detecting device removes distortion of the optical system by virtually rearranging the light receiving elements based on a distortion table, which is obtained from distortion information of the optical system.
- In the displacement detecting device, as well as the displacement detecting method and the displacement detecting program, of Japanese Patent Application Laid-open No. 2013-96757, the distortion of the optical system is corrected for each position of the detector, and deterioration in precision due to the distortion of an image forming lens can therefore be reduced.
- However, Japanese Patent Application Laid-open No. 2013-96757 has the same problem as U.S. Pat. No. 8,759,747: when the cycle of the marks is reduced for the purpose of enhancing the resolution, the light diffraction phenomenon gives different widths to the light portion and the dark portion that make up a mark, and the precision is therefore not improved by correction for each position of the detector alone.
- The present invention has been made to solve the problem described above, and it is therefore an object of the present invention to provide an absolute encoder capable of detecting the absolute angle at high resolution and with high precision.
- According to one embodiment of the present invention, there is provided an absolute encoder including: a scale including an absolute value code pattern; a light emitting element for irradiating the scale with light; an image sensor for receiving light from the scale; an A/D converter for converting an output from the image sensor into a digital output; and an absolute position computing unit, in which: the absolute position computing unit includes: an edge detecting unit for detecting, based on a signal strength of a signal from the A/D converter and a threshold level that is set in advance, an edge pixel position of the absolute value code pattern on the image sensor, and an edge direction of the absolute value code pattern at the edge pixel position; and an edge position correcting unit for correcting the edge pixel position that is acquired by the edge detecting unit in a manner that varies depending on whether the detected edge direction is a rising edge or a falling edge; and the absolute position computing unit acquires an absolute position of the scale based on the corrected edge pixel position.
- The absolute encoder according to the one embodiment of the present invention is capable of detecting the absolute position with high precision, without being affected by the diffraction of light, even when the minimum line width of the absolute value code pattern on the scale is reduced in order to enhance the resolution.
- FIG. 1 is a diagram for illustrating the configuration of an absolute encoder according to a first embodiment of the present invention.
- FIG. 2 is a graph for showing an example of the light amount distribution of light cast onto an image sensor of the absolute encoder according to the first embodiment of the present invention.
- FIG. 3 is a graph for showing an example of a waveform after correction in a light amount correcting unit of the absolute encoder according to the first embodiment of the present invention.
- FIG. 4 is a graph for showing an example of a waveform after processing in a smoothing processing unit of the absolute encoder according to the first embodiment of the present invention.
- FIG. 5 is a diagram for illustrating the operation of an edge detecting unit of the absolute encoder according to the first embodiment of the present invention.
- FIG. 6 is a diagram for illustrating the operation of the edge detecting unit of the absolute encoder according to the first embodiment of the present invention.
- FIG. 7 is a diagram for illustrating how an edge correction amount is obtained in the absolute encoder according to the first embodiment of the present invention.
- FIG. 8 is a diagram for illustrating the operation of an edge position correcting unit of the absolute encoder according to the first embodiment of the present invention.
- FIG. 9 is a diagram for illustrating the operation of a decoding unit of the absolute encoder according to the first embodiment of the present invention.
- FIG. 10 is a diagram for illustrating the operation of a phase detecting unit of the absolute encoder according to the first embodiment of the present invention.
- FIG. 11 is a diagram for illustrating the configuration of an absolute encoder according to a second embodiment of the present invention.
- FIG. 12 is a diagram for illustrating a fact that the width of a high bit and the width of a low bit change due to the effect of diffraction.
- FIG. 13 is a diagram for illustrating how an edge correction amount is obtained in the absolute encoder according to the second embodiment of the present invention.
- FIG. 14 is a graph for showing an example of measuring basic cycle width data of a high bit and a low bit in the absolute encoder according to the second embodiment of the present invention.
- FIG. 15 is a diagram for illustrating the configuration of an absolute encoder according to a third embodiment of the present invention.
- FIG. 16 is a graph for showing an example of measuring basic cycle width data of a high bit and a low bit in the absolute encoder according to the third embodiment of the present invention.
- FIG. 17 is a diagram for illustrating the configuration of an absolute encoder according to a fourth embodiment of the present invention.
- FIG. 18 is a diagram for illustrating edge groups in the absolute encoder according to the fourth embodiment of the present invention.
- FIG. 19 is a graph for showing an example of an edge position residual error in the absolute encoder according to the fourth embodiment of the present invention.
- FIG. 20 is a set of graphs each for showing a correction method of the absolute encoder according to the fourth embodiment of the present invention.
- FIG. 21 is a set of graphs each for showing the correction method of the absolute encoder according to the fourth embodiment of the present invention.
- FIG. 22 is a schematic configuration diagram for illustrating an example of the hardware configuration of an absolute position computing unit of the absolute encoder according to each embodiment of the present invention.
- The configuration of an absolute encoder 1 according to a first embodiment of the present invention is illustrated in FIG. 1.
- The basic configuration of the absolute encoder 1 includes a light emitting element 2, an image sensor 3, a scale 200, an A/D converter 4, and an absolute position computing unit 5.
- The components of the absolute encoder 1 are described one by one below.
- The light emitting element 2 is an illumination unit for irradiating the scale 200 with light.
- A point light source LED, for example, is used as the light emitting element 2.
- The image sensor 3 is a light detecting unit for receiving light from the scale 200, and is an image pick-up device such as a CCD image sensor or a CMOS image sensor.
- The image sensor 3 is one-dimensional in this embodiment, but may instead be two-dimensional.
- The scale 200 is joined to a rotational shaft 6 of a motor or the like, and is provided with one track, which has an absolute value code pattern 300.
- In the absolute value code pattern 300, a plurality of reflective portions 301 and a plurality of non-reflective portions 302 are arranged in the circumferential direction.
- The reflective portions 301 are portions that reflect light from the light emitting element 2.
- The non-reflective portions 302 are portions that absorb or transmit light from the light emitting element 2, or reflect light from the light emitting element 2 at a reflectance lower than that of the reflective portions 301.
- The reflective portions 301 and the non-reflective portions 302 function so as to modulate the light intensity distribution of light cast onto the image sensor 3.
- The absolute value code pattern 300 includes the reflective portions 301 and the non-reflective portions 302 arranged so that the angular position of the scale 200 is characterized, and uses, for example, a code string obtained by encoding pseudo-random codes such as M-series codes through Manchester encoding.
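As an illustration of how such a code string can be generated, the sketch below produces a short maximal-length (M-series) sequence with a 3-bit linear feedback shift register and Manchester-encodes it. The polynomial, seed, and function names are illustrative assumptions, not the pattern actually used on the scale 200.

```python
def lfsr_m_sequence(length=7, seed=0b001):
    # 3-bit maximal-length LFSR, polynomial x^3 + x + 1 (period 7).
    # The polynomial and seed are illustrative choices only.
    state = seed
    out = []
    for _ in range(length):
        out.append(state & 1)
        feedback = ((state >> 2) ^ state) & 1
        state = (state >> 1) | (feedback << 2)
    return out

def manchester_encode(bits):
    # Each code bit becomes a 1-0 or 0-1 pair, so the encoded string
    # never contains more than two equal bits in succession.
    out = []
    for b in bits:
        out.extend([1, 0] if b else [0, 1])
    return out
```

Because every encoded pair contains exactly one 1 and one 0, the longest run of identical bits is two, which is why the decoded bit string described later ideally contains at most two successive equal bit values.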
- The present invention is also applicable to a transmissive encoder in which the light emitting element 2 and the image sensor 3 are placed so as to face each other across the scale 200.
- In a transmissive encoder, the absolute value code pattern 300 includes transmissive portions and non-transmissive portions. Regardless of whether the absolute encoder 1 is reflective or transmissive, the absolute value code pattern 300 is not limited to a particular configuration as long as it modulates the light intensity distribution of light cast onto the image sensor 3.
- The reflective portions 301 and the non-reflective portions 302 of the scale 200 are formed by, for example, depositing a metal such as chromium on a glass substrate through vapor deposition, and patterning the resultant metal film through photolithography.
- The scale 200 is not limited to particular materials and fabrication methods as long as the reflective portions and the non-reflective portions are formed in the case of a reflective encoder, and as long as the transmissive portions and the non-transmissive portions are formed in the case of a transmissive encoder.
- The A/D converter 4 is a signal converting unit for converting an analog signal from the image sensor 3 into a digital signal.
- The absolute position computing unit 5 is a computing unit for computing the absolute position of the scale 200 based on an output from the A/D converter 4, and includes a light amount correcting unit 100, a smoothing processing unit 101, an edge detecting unit 102, an edge position correcting unit 103, a decoding unit 104, a rough detection unit 105, a phase detecting unit 106, and a high precision detection unit 107.
- An image obtained by the image sensor 3 is converted by the A/D converter 4 into digital signals, which are then input to the light amount correcting unit 100.
- The signals input to the light amount correcting unit 100 have, for example, a light amount distribution 70 shown in FIG. 2, where the axis of abscissa represents the pixel position and the axis of ordinate represents the signal strength.
- A high bit 8 in FIG. 2 indicates a pattern at the reflective portions 301 of the scale 200, and a low bit 9 indicates a pattern at the non-reflective portions 302 of the scale 200. As shown in FIG. 2, the light amount distribution 70 is uneven.
- The light amount correcting unit 100 therefore makes a correction for each pixel based on a light amount correction value, which is measured in advance, in order to turn the uneven light amount distribution into an even light amount distribution.
- A post-light amount correction light amount distribution 71 of FIG. 3 is obtained as a result.
- The post-light amount correction light amount distribution 71, which is the result of the correction in the light amount correcting unit 100, is sent to the smoothing processing unit 101, where smoothing processing is performed on the post-light amount correction light amount distribution 71.
- The smoothing processing unit 101 uses, for example, a moving average filter to acquire a post-smoothing processing light amount distribution 72 shown in FIG. 4. While this embodiment takes a moving average filter as an example, processing through a Gaussian filter or the like may be executed instead, and any method that smoothes signals can be used. The light amount correction, which precedes the smoothing processing in this embodiment, may instead be executed after the smoothing processing. The present invention is also applicable to cases where the smoothing processing is not executed.
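The two pre-processing steps can be sketched as follows. Here `reference` stands in for the light amount correction value measured in advance (for example, the sensor response to a uniformly reflective target); the function names and the use of NumPy are assumptions for illustration, not the patented implementation.

```python
import numpy as np

def flatten_light_amount(raw, reference):
    # Per-pixel correction: dividing by the pre-measured reference
    # response turns the uneven light amount distribution into an even one.
    reference = np.asarray(reference, dtype=float)
    return np.asarray(raw, dtype=float) * (reference.mean() / reference)

def smooth(signal, window=3):
    # Moving-average filter; a Gaussian filter would serve equally well.
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")
```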
- The post-smoothing processing light amount distribution 72 is sent to the edge detecting unit 102, which acquires a position on the image sensor 3 at which the signal equals a preset threshold level 10 (hereinafter referred to as edge pixel position 11).
- FIG. 5 is an enlarged view of the vicinity of the edge pixel position, which is enclosed by the broken-line frame in FIG. 4.
- The edge detecting unit 102 first determines whether or not there is an edge based on the signal strengths of an i-th pixel and an (i+1)-th pixel, which are adjacent pixels as illustrated in FIG. 5.
- The edge detecting unit 102 determines that there is an edge when the signal strength of the i-th pixel is lower than the threshold level 10 and the signal strength of the (i+1)-th pixel is higher than the threshold level 10, or when the signal strength of the i-th pixel is higher than the threshold level 10 and the signal strength of the (i+1)-th pixel is lower than the threshold level 10.
- When it is determined that there is an edge with respect to the i-th pixel and the (i+1)-th pixel, the edge detecting unit 102 next acquires, through sub-pixel processing, the edge pixel position 11, which equals the threshold level 10, by performing linear interpolation on the i-th pixel and the (i+1)-th pixel, which are on either side of the threshold level 10.
- While the edge pixel position 11 is obtained by linear interpolation based on the two pixels that are on either side of the threshold level 10 in this embodiment, more than two pixels on either side of the threshold level 10 may be used to obtain the edge pixel position 11.
- In that case, a higher-order function such as a quadratic function or a cubic function may be used for interpolation.
- The edge detecting unit 102 also detects an edge direction 50 of FIG. 6, for example, based on the signal strengths of the i-th pixel and the (i+1)-th pixel, which are on either side of the threshold level 10.
- The edge direction 50 is a rising edge 51 when the signal strength of the i-th pixel is lower than the signal strength of the (i+1)-th pixel, and is a falling edge 52 when the signal strength of the i-th pixel is greater than the signal strength of the (i+1)-th pixel.
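The threshold-crossing test and the linear sub-pixel interpolation described above can be sketched as follows (the function and variable names are illustrative):

```python
def detect_edges(signal, threshold):
    # For each adjacent pixel pair (i, i+1) straddling the threshold
    # level, interpolate linearly to get the sub-pixel edge position,
    # and classify the edge direction from the two signal strengths.
    edges = []
    for i in range(len(signal) - 1):
        a, b = signal[i], signal[i + 1]
        if (a < threshold) != (b < threshold):
            position = i + (threshold - a) / (b - a)
            direction = "rising" if a < b else "falling"
            edges.append((position, direction))
    return edges
```

Interpolating with a quadratic or cubic function over more pixels, as the text allows, would replace the single division with a local polynomial fit.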
- The edge pixel position 11 and the edge direction 50 detected by the edge detecting unit 102 are sent to the edge position correcting unit 103.
- The edge position correcting unit 103 acquires an edge correction amount from the edge pixel position 11 and the edge direction 50 detected by the edge detecting unit 102, and corrects the edge pixel position 11 in a manner that depends on the edge direction 50.
- Whether the high bit is narrow or wide depends on the distance between the image sensor 3 and the scale 200; in the example of FIG. 12, the high bit is narrow. In the case of a single slit, where light spreads due to diffraction, the high bit is wide. In the case of an encoder or other devices that have a plurality of slits, an image is formed by diffraction interference, in which the diffraction pattern of one slit interferes with the diffraction pattern of another slit, and whether the high bit is narrow or wide therefore depends on the distance.
- The high bit 8 and the low bit 9 of light cast onto the image sensor 3 have a basic cycle width fh and a basic cycle width fl, respectively, which are not equal to each other due to the effect of the diffraction of light.
- The term "basic cycle width" refers to the minimum line width of the absolute value code pattern 300, which includes the reflective portions 301 and the non-reflective portions 302.
- The edge correction amount of the i-th edge pixel position is acquired as follows.
- The edge position correcting unit 103 first identifies a space between the rising edge 51 and the falling edge 52 as a high bit, and a space between the falling edge 52 and the rising edge 51 as a low bit. Based on the high bit 8 and the low bit 9 that are adjacent to the i-th edge pixel position, the edge position correcting unit 103 acquires a distance Lh between the edge pixel positions of the high bit 8 and a distance Ll between the edge pixel positions of the low bit 9 by Expression (1) and Expression (2).
- In other words, the width of the high bit 8, namely, the distance between the edge pixel positions on either side of the high bit 8, is Lh, and the width of the low bit 9, namely, the distance between the edge pixel positions on either side of the low bit 9, is Ll.
- The distances Lh and Ll are each divided by an ideal basic cycle width F of the absolute value code pattern 300, and the quotient is rounded off to the closest whole number to obtain an integral multiple N (N is 1 or more) of the ideal basic cycle width F.
- The basic cycle width fh of the high bit 8 and the basic cycle width fl of the low bit 9 are expressed with their respective integral multiples N as follows:
- Nh = round(Lh / F) (integral multiple N of the high bit 8)
- Nl = round(Ll / F) (integral multiple N of the low bit 9)
- Nh and Nl are each a number equal to or more than 1, and the basic cycle widths are then fh = Lh / Nh and fl = Ll / Nl.
- Each integral multiple N indicates the number of successive bits (an integer). In other words, N indicates how many high bits are observed in succession, or how many low bits are observed in succession.
- In the example shown in FIG. 7, the basic cycle widths fh and fl are therefore 8 and 12, respectively.
- After the correction, the corrected basic cycle width fh′ of the high bit 8 and the corrected basic cycle width fl′ of the low bit 9 are equal to each other.
- The edge correction amount δ of the i-th edge pixel position is expressed by Expression (7): δ = (fh − fl) / 4.
- In other words, the edge correction amount δ of the i-th edge pixel position can be obtained as 1/4 of the difference between the uncorrected basic cycle width fh of the high bit 8 that is adjacent to the i-th edge pixel position and the uncorrected basic cycle width fl of the low bit 9 that is adjacent to the i-th edge pixel position.
- The edge position correcting unit 103 acquires the edge correction amount δ for each of the edge pixel positions, and makes a correction with the use of Expression (8) or Expression (9) depending on whether the edge direction 50 is the rising edge 51 or the falling edge 52.
- The edge pixel position 11 after the edge position correction processing is, for example, as illustrated in FIG. 8.
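The correction step can be sketched as follows under one sign convention. Expressions (8) and (9) are not reproduced in this text, so the direction-dependent signs below are an assumption chosen so that the corrected high-bit and low-bit widths come out equal; the function name and edge representation are likewise illustrative.

```python
def correct_edge_positions(edges, F):
    # edges: ordered (position, direction) pairs; F: ideal basic cycle width.
    corrected = []
    for i, (z, d) in enumerate(edges):
        if i == 0 or i == len(edges) - 1:
            corrected.append((z, d))  # no adjacent bit on both sides
            continue
        left, right = edges[i - 1][0], edges[i + 1][0]
        if d == "falling":            # high bit on the left, low bit on the right
            Lh, Ll = z - left, right - z
        else:                         # rising edge: low bit left, high bit right
            Ll, Lh = z - left, right - z
        fh = Lh / max(1, round(Lh / F))   # basic cycle width of the high bit
        fl = Ll / max(1, round(Ll / F))   # basic cycle width of the low bit
        delta = (fh - fl) / 4.0           # quarter-difference, Expression (7)
        z = z + delta if d == "rising" else z - delta
        corrected.append((z, d))
    return corrected
```

With fh = 8 and fl = 12 as in the example above, delta = −1, so each rising edge moves one pixel to the left and each falling edge one pixel to the right, restoring equal widths of 10.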
- The decoding unit 104 converts the high bit 8 and the low bit 9 into a 1/0 bit string 12 based on the edge direction 50 and the edge pixel position 11.
- The bit string is generated so that, for example, the bit value is 1 from the rising edge 51 to the falling edge 52, and is 0 from the falling edge 52 to the rising edge 51.
- The high bit 8 is expressed as a bit value "1", and the low bit 9 is expressed as a bit value "0".
- The decoding unit 104 calculates the integral multiples N (Nh and Nl) from the ideal basic cycle width F and the distance between edge pixel positions, and arranges, in succession, N bits each having one of the bit value "1" and the bit value "0".
- In this embodiment, pseudo-random codes such as M-series codes are encoded by Manchester encoding, and the bit string 12 therefore ideally includes at most two successive bits of the bit value "1" or the bit value "0", for example, as illustrated in FIG. 9.
- Digitization processing may instead be used to convert the basic cycle widths into a 1/0 bit string as in the related art, and the present invention is not limited to a particular method as long as the method used is capable of converting the basic cycle widths into a 1/0 bit string.
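The run-length decoding described above can be sketched as follows (names are illustrative, and the edge positions are assumed to be the already corrected ones):

```python
def decode_bit_string(edges, F):
    # Between consecutive edges, emit N copies of bit value 1 (from a
    # rising to a falling edge) or 0 (from a falling to a rising edge),
    # with N = the span divided by F, rounded to the closest whole number.
    bits = []
    for (z0, d0), (z1, _) in zip(edges, edges[1:]):
        n = max(1, round((z1 - z0) / F))
        bits.extend([1 if d0 == "rising" else 0] * n)
    return bits
```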
- The rough detection unit 105 detects a rough absolute position from the bit string 12 of FIG. 9 detected by the decoding unit 104.
- The rough detection unit 105 identifies a rough absolute position by, for example, storing in advance the bit strings that form the absolute value code pattern 300 of the scale 200 in a look-up table, and comparing the bit string 12 detected by the decoding unit 104 with the bit strings in the look-up table.
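A minimal sketch of such a look-up table, assuming the code is cyclic and every window of the chosen length is unique (a property of M-series codes); the de Bruijn sequence used in the test is only a stand-in for the actual scale pattern:

```python
def build_lookup_table(code, window):
    # Map every length-`window` cyclic slice of the code to its start index.
    doubled = code + code[:window - 1]
    return {tuple(doubled[i:i + window]): i for i in range(len(code))}

def rough_absolute_position(bit_string, table):
    # The observed bit string identifies the rough absolute position,
    # or None if it matches no stored pattern (e.g., a read error).
    return table.get(tuple(bit_string))
```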
- the phase detecting unit 106 acquires a phase shift amount δ in relation to a reference pixel position 13 of the image sensor 3 as illustrated in FIG. 10 .
- the edge position correcting unit 103 corrects the edge pixel positions of the M detected edges, and the corrected edge pixel positions are denoted by ZC(1), ZC(2), . . . , ZC(i), . . . , ZC(M).
- ZC(i) is expressed by Expression (10) with the use of the phase shift amount δ of a shift from the reference pixel position 13 .
- the phase shift amount δ is a negative value when ZC(i) is to the left of the reference pixel position 13 , and is a positive value when ZC(i) is to the right of the reference pixel position 13 .
- the phase detecting unit 106 then processes the edges other than the ZC(i) that is closest to the reference pixel center position P by acquiring an integral multiple N(i) of the basic cycle F with respect to the edge pixel position ZC(i). The integral multiples are calculated, for example, as follows:
- N(i-1) = (ZC(i-1) - ZC(i))/F
- N(i+1) = (ZC(i+1) - ZC(i))/F
- the edge pixel positions ZC(i ⁇ 1) and ZC(i+1) are expressed by Expression (11) and Expression (12).
- ZC(i-1) = P + δ + F·N(i-1) + α·N(i-1)^2 + β·N(i-1)^3 (11)
- ZC(i+1) = P + δ + F·N(i+1) + α·N(i+1)^2 + β·N(i+1)^3 (12)
- Symbols α and β represent a second-order parameter and a third-order parameter, respectively.
- the edge pixel positions are thus expressed by Expression (13) with the use of the integral multiples N, the reference pixel center position P, the phase shift amount δ, and the higher-order parameters α and β.
- the phase shift amount δ can be obtained by the least square method.
- the reference pixel position 13 can be the center pixel, or the leftmost or rightmost pixel, of the image sensor 3 , and is not particularly limited. While all edge pixel positions are used to obtain the phase shift amount δ by the least square method in this embodiment, the phase shift amount δ may be obtained directly from a difference between the reference pixel position 13 and the edge pixel position ZC(i) that is closest to the reference pixel position 13 .
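The least-squares fit can be sketched as follows: each corrected edge position is modeled as ZC = P + δ + F·N + α·N² + β·N³, with P, F, and the integral multiples N known, and δ, α, β found by linear least squares. The symbol names follow the reconstruction above and are an assumption; the synthetic data is illustrative.

```python
import numpy as np

def phase_shift(zc, n, P, F):
    """Fit ZC = P + delta + F*N + alpha*N**2 + beta*N**3 by least squares."""
    A = np.column_stack([np.ones_like(n), n ** 2, n ** 3])
    b = zc - P - F * n
    (delta, alpha, beta), *_ = np.linalg.lstsq(A, b, rcond=None)
    return delta, alpha, beta

# Synthetic edges with delta = 0.3 pixels and small higher-order terms:
n = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
zc = 64.0 + 0.3 + 4.0 * n + 0.01 * n ** 2 + 0.001 * n ** 3
delta, alpha, beta = phase_shift(zc, n, P=64.0, F=4.0)
# delta is close to 0.3
```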
- the high precision detection unit 107 adds the rough absolute position acquired by the rough detection unit 105 and the phase shift amount ⁇ acquired by the phase detecting unit 106 to obtain the absolute position of the scale 200 .
- the absolute position can be detected with high precision even when the minimum line width of the absolute value code pattern 300 is reduced for the purpose of enhancing the resolution, because the absolute position computing unit 5 includes the edge detecting unit 102 and the edge position correcting unit 103 . The edge detecting unit 102 detects the edge pixel position 11 , at which the signal crosses the threshold level 10 set in advance, and the edge direction 50 . The edge position correcting unit 103 acquires the width of the high bit 8 , which represents the reflective portions 301 of the absolute value code pattern 300 projected onto the image sensor 3 , and the width of the low bit 9 , which represents the non-reflective portions 302 , calculates the edge correction amount Δ from the two widths, and corrects the edge pixel position 11 by the edge correction amount Δ in a manner that varies depending on whether the edge direction 50 is the rising edge 51 or the falling edge 52 . The absolute position computing unit 5 uses the corrected edge pixel position to acquire the absolute position of the scale 200 .
- the absolute position computing unit 5 further includes the decoding unit 104 for converting the high bit 8 and the low bit 9 into the 1/0 bit string 12 based on the edge direction acquired by the edge detecting unit 102 and information of the edge pixel position corrected by the edge position correcting unit 103 , the rough detection unit 105 for identifying a rough absolute position from the bit string 12 acquired by the decoding unit 104 , the phase detecting unit 106 for acquiring a phase shift amount in relation to the reference pixel position 13 of the image sensor 3 based on the information of the corrected edge pixel position, and the high precision detection unit 107 for acquiring a highly precise absolute position from the rough absolute position acquired by the rough detection unit 105 and information of the phase shift amount acquired by the phase detecting unit 106 .
- the absolute position can therefore be obtained with high precision from the absolute value code pattern 300 alone.
- the need to provide a scale with two tracks, namely, an absolute pattern and an incremental pattern, in order to detect the absolute position as in the related art is thus eliminated, which means that the device size can be reduced and that the absolute position can be detected with high precision at high resolution.
- the high bit 8 and the low bit 9 that are adjacent to the edge pixel position 11 can be made equal to each other in width despite variations in the widths of the high bit 8 and the low bit 9 , which depend on the pixel position of the image sensor 3 .
- the need for a lens or the like for collimating light from the light emitting element 2 is thus eliminated, and the device can be made thin.
- the first embodiment is configured so that the edge position correcting unit 103 acquires the edge correction amount of the edge pixel position 11 .
- a second embodiment of the present invention describes a method in which an edge correction data memory 113 is provided as illustrated in FIG. 11 , the edge correction amount is obtained as a function of the pixel position of the image sensor 3 , the edge correction data memory 113 stores edge correction amount information obtained in advance, and the edge position correcting unit 103 uses the information in the edge correction data memory 113 to correct the edge pixel position 11 .
- An absolute encoder 1 of the second embodiment is the same in basic configuration as the absolute encoder 1 of the first embodiment, except that the edge correction data memory 113 is added and that the edge position correcting unit 103 uses a different computing method.
- the rest of the components are the same as those in the first embodiment, and are denoted by the same reference symbols in order to omit descriptions thereof.
- the effect of diffraction differs in the central portion and peripheral portion of the image sensor 3 because the distance from the light emitting element 2 to the image sensor 3 grows toward the peripheral portion of the image sensor 3 as illustrated in FIG. 12 . Consequently, the difference between the width of the high bit 8 and the width of the low bit 9 increases toward the peripheral portion of the image sensor 3 .
- the absolute encoder 1 of the second embodiment therefore acquires the edge correction amount as a function of the pixel position of the image sensor 3 .
- the image sensor 3 obtains an image at an appropriate angular position, and processing up through the computation in the edge detecting unit 102 is executed to obtain the edge pixel position 11 and the edge direction 50 .
- the i-th edge pixel position is given as ZC(i), and the (i+1)-th edge pixel position is given as ZC(i+1), as illustrated in FIG. 13 .
- the bit is identified as the high bit 8 if ZC(i) is the rising edge 51 , and a basic cycle width fh(xh) of the high bit 8 is calculated from a center pixel xh of the high bit 8 and the distance Lh between the edge pixel positions of the high bit 8 by Expression (14), Expression (15), and Expression (16).
- N is an integer equal to or more than 1, and represents an integral multiple of an ideal basic cycle width as in the first embodiment.
- a center pixel xl of the low bit 9 and a basic cycle width fl(xl) of the low bit 9 are obtained in the same manner.
- the bit is identified as the low bit 9 when ZC(i) is the falling edge 52 .
- the bit center position data and basic cycle width data of the high bit 8 and the low bit 9 at different pixel positions can thus be obtained.
- the center pixel data and basic cycle width data of the high bit 8 and the low bit 9 are plotted as shown in FIG. 14 .
- measurement data of the high bit 8 is denoted by H14a, an approximate curve of the high bit 8 by H14b, measurement data of the low bit 9 by L14a, and an approximate curve of the low bit 9 by L14b.
- the high bit 8 and the low bit 9 have different basic cycle width characteristics in relation to the pixel position, and the difference between the basic cycle width of the high bit 8 and the basic cycle width of the low bit 9 grows toward the peripheral portion of the image sensor 3 .
- an approximate function fh(x) for the basic cycle width data of the high bit 8 in relation to the pixel position and an approximate function fl(x) for the basic cycle width data of the low bit 9 in relation to the pixel position are obtained by a quadratic least square method.
- the obtained quadratic functions are expressed by Expression (17) and Expression (18) when the pixel position is given as x and parameters of the functions are given as fho, αh, βh, flo, αl, and βl.
- the edge correction amount is obtained by the same principle as in the first embodiment, namely, as 1/4 of the difference between the basic cycle width of the high bit 8 and the basic cycle width of the low bit 9 .
- An edge correction amount Δ(x) of the pixel position x of the image sensor 3 is obtained by Expression (19).
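The correction-data creation of this embodiment can be sketched as follows: the measured basic cycle widths of the high bit and the low bit are each fitted with a quadratic by least squares (Expressions (17) and (18)), and the edge correction amount is a quarter of their difference (Expression (19)). The synthetic width data below is illustrative, not measured.

```python
import numpy as np

def fit_quadratic(x, width):
    # returns coefficients [c2, c1, c0] of c2*x**2 + c1*x + c0
    return np.polyfit(x, width, deg=2)

x = np.linspace(-64, 64, 33)              # pixel positions
fh_data = 4.0 + 0.20 + 1e-4 * x ** 2      # high-bit widths grow off-center
fl_data = 4.0 - 0.20 - 1e-4 * x ** 2      # low-bit widths shrink off-center

ch = fit_quadratic(x, fh_data)            # Expression (17)
cl = fit_quadratic(x, fl_data)            # Expression (18)

def edge_correction(xq):
    # Expression (19): delta(x) = (fh(x) - fl(x)) / 4
    return (np.polyval(ch, xq) - np.polyval(cl, xq)) / 4.0

# At the sensor center the correction is about 0.4 / 4 = 0.1 pixel.
```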
- Calculation of the edge correction amount Δ(x) is executed in combination with a normal test prior to the shipping of the encoder, for example, and the parameters of the obtained edge correction amount function Δ(x) are saved in the edge correction data memory 113 .
- An edge position correction method used by the edge position correcting unit 103 is described next.
- the edge position correcting unit 103 acquires parameters of the edge correction amount Δ(x) from the edge correction data memory 113 .
- the edge position correcting unit 103 makes a correction with the use of Expression (20) or Expression (21), depending on whether the edge direction 50 is the rising edge 51 or the falling edge 52 .
- the edge pixel position 11 can be corrected with an even higher precision.
- an approximate function is analyzed with the use of the measured basic cycle width data of the high bit 8 and the low bit 9 , and the edge correction amount Δ is calculated from the analyzed approximate function. This prevents an error caused by a foreign object or the like at some edge pixel positions from greatly affecting other edges, and the absolute position can be detected with high precision despite an error factor such as a foreign object.
- the edge correction amount Δ is calculated after the basic cycle width characteristics of the high bit 8 and the low bit 9 , which vary depending on where the light emitting element 2 and the image sensor 3 are mounted in relation to the scale 200 , are obtained with the light emitting element 2 and the image sensor 3 mounted. The attachment tolerance of the light emitting element 2 and the image sensor 3 can therefore be relaxed.
- with the edge correction data memory 113 provided so that the edge position correcting unit 103 corrects the edge pixel position 11 by using data in the edge correction data memory 113 , the need to calculate the edge correction amount Δ each time is eliminated, and the calculation load is accordingly lightened.
- a fitting function of an even higher order may be used instead.
- the data may be sectioned into areas for linear interpolation, and any function that represents the basic cycle width characteristics of the high bit 8 and the low bit 9 can be employed.
- the value of the edge correction amount may be saved for each pixel of the image sensor 3 .
- the edge position correcting unit 103 in this case corrects an edge by the edge correction amount ⁇ that is obtained by, for example, interpolating a space between pixels through linear interpolation or the like.
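The per-pixel alternative can be sketched as follows: the correction amount is stored for each pixel, and a sub-pixel edge position is corrected by linearly interpolating between the two neighboring table entries. The table values below are illustrative.

```python
import numpy as np

pixels = np.arange(0, 8)   # pixel indices of the image sensor
table = np.array([0.00, 0.02, 0.04, 0.06, 0.08, 0.10, 0.12, 0.14])

def correction_at(edge_pos):
    """Linearly interpolate the stored per-pixel correction amounts."""
    return float(np.interp(edge_pos, pixels, table))

# An edge detected at pixel 2.5 gets the mean of the entries at 2 and 3.
delta = correction_at(2.5)   # 0.05
```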
- Data saved in the edge correction data memory 113 is not particularly limited as long as the saved data is information necessary to obtain the edge correction amount ⁇ (x) of the pixel position x.
- the present invention is applicable when data at one angular position, at least, is available.
- This embodiment is configured so that information of the edge correction amount obtained as a function of the pixel position of the image sensor 3 is measured in advance and stored in the edge correction data memory 113 .
- the edge position correcting unit 103 may acquire the edge correction amount as a function of the pixel position of the image sensor 3 to correct the edge pixel position 11 .
- the second embodiment is configured so that the edge position correcting unit 103 corrects the edge pixel position 11 with the use of the edge correction amount information in the edge correction data memory 113 that is obtained in advance.
- data in the edge correction data memory 113 may be updated regularly by providing a correction data recalculating unit 123 as illustrated in FIG. 15 .
- An absolute encoder 1 of the third embodiment is the same in basic configuration as the absolute encoder 1 of the second embodiment, except that the correction data recalculating unit 123 is added.
- the rest of the components are the same as those in the first embodiment and the second embodiment, and are denoted by the same reference symbols in order to omit descriptions thereof.
- a change in ambient temperature changes the positional relation of the light emitting element 2 and the image sensor 3 to the scale 200 .
- a change in the gap from the scale 200 to the light emitting element 2 and the image sensor 3 changes the basic cycle width characteristics of the high bit 8 and the low bit 9 as well.
- the basic cycle width characteristics of the high bit 8 and the low bit 9 at the initial attachment position are as shown in FIG. 14
- the basic cycle width characteristics of the high bit 8 and the low bit 9 that are obtained when the gap increases are as shown in FIG. 16 , for example.
- measurement data of the high bit 8 is denoted by H16a, an approximate curve of the high bit 8 by H16b, measurement data of the low bit 9 by L16a, and an approximate curve of the low bit 9 by L16b.
- such a change in the basic cycle width characteristics of the high bit 8 and the low bit 9 leads to a drop in the precision of absolute position detection when the positions of the light emitting element 2 and the image sensor 3 in relation to the scale 200 change, because the edge is corrected by the wrong edge correction amount Δ(x).
- the third embodiment is therefore configured so that the correction data recalculating unit 123 updates the edge correction amount Δ(x) obtained as a function of the pixel position of the image sensor 3 .
- the correction data recalculating unit 123 identifies the bit as the high bit 8 when the edge direction 50 is the rising edge 51 , calculates the center pixel xh and basic cycle width fh(xh) of the high bit 8 in the same manner that is used in the second embodiment to create the edge correction data, and stores the data in a memory area that is secured for the high bit 8 in the edge correction data memory 113 .
- the correction data recalculating unit 123 identifies the bit as the low bit 9 when the edge direction 50 is the falling edge 52 , calculates the center pixel xl and basic cycle width fl(xl) of the low bit 9 in the same manner that is used in the second embodiment to create the edge correction data, and stores the data in a memory area (not shown) that is secured for the low bit 9 in the edge correction data memory 113 .
- the correction data recalculating unit 123 keeps collecting information about the edge pixel position 11 and the edge direction 50 until T seconds elapse after the start of the data collection, and then uses the data in the memory area secured for the high bit 8 to obtain the parameters of Expression (17) by the quadratic least square method. Similarly, the correction data recalculating unit 123 uses the data in the memory area secured for the low bit 9 to acquire the parameters of Expression (18) by the quadratic least square method. From the acquired parameters, parameters of the edge correction amount Δ(x) are calculated by Expression (19) to rewrite the data in the edge correction data memory 113 . The data in the memory area secured for the high bit 8 and the data in the memory area secured for the low bit 9 are cleared, and the correction data recalculating unit 123 starts collecting data again.
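The recalculation cycle above can be sketched as follows: bit-center and width samples are accumulated separately for the high bit and the low bit, and once enough data is collected the quadratics and Δ(x) are refitted and the stored parameters are replaced. The trigger here is a sample count rather than the T-second timer described in the text; class and attribute names are illustrative.

```python
import numpy as np

class CorrectionRecalculator:
    def __init__(self, min_samples=20):
        self.min_samples = min_samples
        self.high = []             # (center pixel xh, width fh) samples
        self.low = []              # (center pixel xl, width fl) samples
        self.delta_coeffs = None   # polynomial coefficients of delta(x)

    def add(self, center, width, direction):
        # high bit when the edge direction is rising, low bit when falling
        (self.high if direction == 'rising' else self.low).append((center, width))
        if min(len(self.high), len(self.low)) >= self.min_samples:
            self._refit()

    def _refit(self):
        xh, fh = np.array(self.high).T
        xl, fl = np.array(self.low).T
        ch = np.polyfit(xh, fh, 2)             # Expression (17)
        cl = np.polyfit(xl, fl, 2)             # Expression (18)
        self.delta_coeffs = (ch - cl) / 4.0    # Expression (19)
        self.high.clear()                      # start collecting again
        self.low.clear()
```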
- the timing of data update may be determined based on pixel position information of the image sensor 3 .
- the pixel range of the image sensor 3 is sectioned into M areas and, when the bit center pixels xh and xl enter all of the M areas, the parameters of the edge correction amount Δ(x) are calculated from the data in the memory area secured for the high bit 8 and the data in the memory area secured for the low bit 9 to update the data in the edge correction data memory 113 .
- the data in the edge correction data memory 113 may of course be updated as the need arises, by calculating the parameters of the edge correction amount Δ(x) in the correction data recalculating unit 123 from data of one image obtained by the image sensor 3 .
- the correction data recalculating unit 123 is provided to update data in the edge correction data memory 113 , parts displacement that accompanies a change in temperature or other changes is prevented from decreasing precision, and high precision detection can therefore be maintained.
- the reliability of the encoder can be improved by comparing information of the edge correction amount Δ(x) that is calculated by the correction data recalculating unit 123 with pre-update information of the edge correction amount Δ(x) that is in the edge correction data memory 113 , determining that there is an encoder anomaly when a change between the pre-update information and the post-update information exceeds a range set in advance, and sounding an alarm or issuing an alert in other ways.
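The anomaly check above can be sketched as follows: the newly calculated correction parameters are compared with the stored ones, and an alarm is raised when any parameter changes by more than a preset range. The threshold value and names are illustrative.

```python
def check_anomaly(old_params, new_params, max_change=0.05):
    """Return True (sound the alarm) when any parameter change exceeds the range."""
    changes = [abs(n - o) for o, n in zip(old_params, new_params)]
    return max(changes) > max_change

alarm = check_anomaly([0.0, 0.001, 0.10],
                      [0.0, 0.001, 0.18])   # True: change 0.08 > 0.05
```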
- the first embodiment to the third embodiment are configured so that the edge position correcting unit 103 corrects the edge pixel position 11 in a manner that varies depending on the edge direction 50 .
- Described here is a method in which an absolute (ABS) pattern correction data memory 133 is provided as illustrated in FIG. 17 , the edge pixel position 11 is corrected in a manner suited to the absolute value code pattern 300 , and the phase detecting unit 106 uses the corrected edge pixel position 11 to calculate the phase shift amount δ.
- An absolute encoder 1 according to a fourth embodiment of the present invention is the same in basic configuration as the absolute encoder 1 of the third embodiment, except that the ABS pattern correction data memory 133 is added and that the phase detecting unit 106 executes different processing.
- the rest of the components are the same as those in the first embodiment to the third embodiment, and are denoted by the same reference symbols in order to omit descriptions thereof.
- the code pattern 300 that is used on the scale 200 of the fourth embodiment is a pattern that is obtained by encoding pseudo-random codes such as M-series codes through Manchester encoding.
- Manchester encoding converts one bit into two bits so that, for example, a bit having a value “1” is turned into “1 0” whereas a bit having a value “0” is turned into “0 1”.
- An M-series pattern that is 101110, for example, is turned into 100110101001 by Manchester encoding. In other words, in a bit string created by Manchester encoding, the number of successive “1” bits and “0” bits is two at maximum.
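The Manchester encoding described above can be sketched as follows: "1" becomes "1 0" and "0" becomes "0 1", so runs of identical bits are at most two long. Function names are illustrative.

```python
def manchester_encode(bits):
    """Encode each bit as two bits: 1 -> [1, 0], 0 -> [0, 1]."""
    out = []
    for b in bits:
        out.extend([1, 0] if b == 1 else [0, 1])
    return out

def max_run_length(bits):
    """Longest run of identical consecutive bits."""
    longest = run = 1
    for a, b in zip(bits, bits[1:]):
        run = run + 1 if a == b else 1
        longest = max(longest, run)
    return longest

encoded = manchester_encode([1, 0, 1, 1, 1, 0])
# -> [1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1], matching the 100110101001
# example above, and max_run_length(encoded) == 2
```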
- the bit string thus created by Manchester encoding is divided between the rising edge 51 and the falling edge 52 to be classified into eight groups, which are made up of groups 401 to 408 as illustrated in FIG. 18 .
- the edge pixel positions 11 of the rising edge 51 and the falling edge 52 are varied also by interference from another reflective portion 301 .
- the fourth embodiment therefore involves dividing the bit string into groups of the rising edge 51 and groups of the falling edge 52 , namely, eight groups in total, for correction.
- a method of creating correction values of the ABS pattern correction data memory 133 is described next.
- the image sensor 3 obtains an image at an appropriate angular position, processing that precedes computation in the phase detecting unit 106 is executed, and the phase detecting unit 106 calculates the phase shift amount δ of a shift from the reference pixel position 13 of the image sensor 3 by the least square method.
- the phase detecting unit 106 also calculates, from the result of the fitting by the least square method, a residual error for each edge position, and saves the edge position residual error and a bit string that corresponds to the rough absolute position acquired by the rough detection unit 105 in a residual error saving memory (not shown).
- the same computation is executed at a different angular position of the scale 200 .
- edge position residual errors are plotted in relation to the pixel position as shown in FIG. 19 .
- in FIG. 19 , the rising edge is denoted by R19 and the falling edge by F19.
- the results of the edge position residual errors in relation to the pixel position are divided into the groups of FIG. 18 , namely, eight groups in total, as shown in FIG. 20 and FIG. 21 .
- characteristics of the edge position residual errors in relation to the pixel position of the image sensor 3 vary between the rising edge 51 and the falling edge 52 .
- the residual error characteristics also slightly vary among the four groups belonging to the same edge, namely, the rising edge 51 or the falling edge 52 .
- Data of the edge position residual errors is therefore used to analyze an approximate function for the groups of the rising edge 51 and the groups of the falling edge 52 , namely, eight groups in total. For example, pixel positions are divided into sixteen areas and approximated to straight lines so as to save parameters of the straight lines of the respective areas in the ABS pattern correction data memory 133 .
- the number of the divided areas can be smaller or larger than 16, although the precision of the correction is higher when the number of the divided areas is larger.
- the least square method may be used to fit a higher-order function such as a quadratic function or a cubic function.
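The per-group correction data creation can be sketched as follows: for each of the eight edge groups, edge position residual errors are collected against pixel position, the pixel range is divided into areas, and a straight line is fitted per area by least squares. Four areas are used here instead of the sixteen in the text, purely to keep the example short; the residual data is synthetic.

```python
import numpy as np

def fit_piecewise(x, residual, x_min, x_max, n_areas=4):
    """Fit one straight line per area; return (lo, hi, slope, intercept) tuples."""
    edges = np.linspace(x_min, x_max, n_areas + 1)
    params = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x >= lo) & (x < hi)
        slope, intercept = np.polyfit(x[mask], residual[mask], 1)
        params.append((lo, hi, slope, intercept))
    return params

def correction(params, xq):
    """Evaluate the piecewise-linear correction at pixel position xq."""
    for lo, hi, slope, intercept in params:
        if lo <= xq < hi:
            return slope * xq + intercept
    return 0.0

x = np.linspace(0, 100, 200, endpoint=False)
residual = 0.001 * (x - 50)    # a simple linear residual for illustration
params = fit_piecewise(x, residual, 0, 100)
# correction(params, 30.0) is close to 0.001 * (30 - 50) = -0.02
```

In the described device, one such parameter set would be saved in the ABS pattern correction data memory 133 for each of the eight groups.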
- creation of the ABS pattern correction data is executed in combination with a normal test prior to the shipping of the encoder, for example.
- An image obtained by the image sensor 3 is processed by the method described in the first embodiment to the third embodiment, up through the computation in the rough detection unit 105 , and a bit string in the look-up table that corresponds to the rough absolute position is sent to the phase detecting unit 106 along with the edge pixel position 11 and the edge direction 50 .
- the phase detecting unit 106 identifies, for each edge pixel position 11 , a group to which the edge pixel position 11 belongs out of the groups of FIG. 18 , based on the edge direction 50 , the bit string that corresponds to the rough absolute position, and the two adjacent bits immediately before and after the edge pixel position 11 .
- the phase detecting unit 106 next acquires from the ABS pattern correction data memory 133 correction parameters based on the identified group, and calculates an edge correction amount at the edge pixel position 11 from the obtained correction parameters.
- the edge pixel position 11 is corrected by adding the edge correction amount Δ2(x) to the edge pixel position 11 in both the case of the rising edge 51 and the case of the falling edge 52 .
- the phase detecting unit 106 uses the thus corrected edge pixel position 11 to acquire the phase shift amount δ, and the absolute position is calculated with high precision.
- where the ABS pattern correction data memory 133 is provided, a bit string is divided between the rising edge 51 and the falling edge 52 into eight groups in total, and the edge pixel position 11 is corrected by an edge correction amount obtained in advance for each group separately, an error due to the effect of diffraction is eliminated, and the absolute position can be detected with high precision.
- an ABS pattern correction data recalculating unit 133 a , which is indicated by the broken line in FIG. 17 , may be provided as in the third embodiment to update the ABS pattern correction data memory 133 .
- FIG. 22 is a schematic configuration diagram for illustrating an example of the hardware configuration of the absolute position computing unit 5 in the absolute encoder according to each embodiment of the present invention.
- an interface (I/F) 551 , a processor 552 , a memory 553 , and an alarm device 554 are connected to a bus line BL by bus connection.
- the I/F 551 receives signals from the A/D converter 4 and others.
- the memory 553 stores a program of processing executed by the processor 552 , and various types of data relevant to the processing.
- the alarm device 554 sounds an alarm or issues an alert in other ways in the event of, for example, an encoder anomaly.
- the functions of the light amount correcting unit 100 , the smoothing processing unit 101 , the edge detecting unit 102 , the edge position correcting unit 103 , the decoding unit 104 , the rough detection unit 105 , the phase detecting unit 106 , the high precision detection unit 107 , the correction data recalculating unit 123 , the ABS pattern correction data recalculating unit 133 a , and other units in FIG. 1 , FIG. 11 , FIG. 15 , and FIG. 17 are stored as a program in, for example, the memory 553 , and are executed by the processor 552 .
- the edge correction data memory 113 in FIG. 11 , FIG. 15 , and FIG. 17 and the ABS pattern correction data memory 133 in FIG. 17 correspond to the memory 553 .
- the memory 553 also stores, among others, the light amount correction values measured in advance and the look-up table for bit strings forming the absolute value code pattern 300 , which are described in the first embodiment, and the calculated edge position residual errors and the bit string corresponding to the rough absolute position acquired by the rough detection unit 105 , which are described in the fourth embodiment.
- the residual error saving memory is built from the memory 553 .
- the functions of the light amount correcting unit 100 , the smoothing processing unit 101 , the edge detecting unit 102 , the edge position correcting unit 103 , the decoding unit 104 , the rough detection unit 105 , the phase detecting unit 106 , the high precision detection unit 107 , the correction data recalculating unit 123 , the ABS pattern correction data recalculating unit 133 a , and other units, and generation of the data written in the memories to be used by the respective units may be configured by digital circuits that execute the respective functions, instead of the processor.
- the first embodiment to the fourth embodiment of the present invention can be used in combination or alone.
- while the first embodiment to the fourth embodiment of the present invention describe a reflective optical system, the present invention is also applicable to a transmissive optical system.
- the present invention is not limited to the rotary encoder for detecting the rotation angle described in the embodiments, and is also applicable to linear encoders for measuring the position on a straight line.
- while the first embodiment to the fourth embodiment of the present invention describe the case where only one track having the code pattern 300 is provided on the scale 200 , the present invention is also applicable to encoders that have a plurality of tracks.
Description
- 1. Field of the Invention
- The present invention relates to an absolute encoder for detecting the absolute position of a measurement subject.
- 2. Description of the Related Art
- Absolute encoders are used in the field of machine tools, robots, and the like in order to accomplish highly precise positioning control. An absolute encoder includes, for example, a scale having a light-dark optical pattern, a light emitting element for irradiating the scale with light, a light receiving element for detecting light that has been transmitted through or reflected by the scale, and an arithmetic device disposed in the downstream of the light receiving element, and detects the absolute angle of the scale joined to a rotational axis of a motor or the like.
- This type of absolute encoder generally has on the scale an absolute pattern, which is made up of angle-specific patterns for detecting a rough absolute angle, and an equally spaced incremental pattern for enhancing the resolution. Structured as this, the absolute encoder is capable of detecting the absolute angle at high resolution.
- However, the improvement in resolution is making heretofore ignored errors non-negligible, and the importance of more precise detection methods is growing.
- Heretofore, methods as disclosed in U.S. Pat. No. 8,759,747 and Japanese Patent Application Laid-open No. 2013-96757 have been proposed as methods with which high precision detection is accomplished.
- In U.S. Pat. No. 8,759,747, for example, an absolute rotary encoder includes a rotating cylindrical body with a plurality of marks arranged on a cylindrical surface along the circumferential direction in fixed cycles, a light source for emitting light to the cylindrical surface, a detector for detecting the marks by way of a plurality of photoelectric conversion elements arranged at a pitch smaller than the cycle of the marks, and a calculation unit for calculating the absolute angle based on an output of the detector. The calculation unit uses correction data to correct a distortion error due to the geometric arrangement of the cylindrical surface and the detector in relation to each other.
- In Japanese Patent Application Laid-open No. 2013-96757, a displacement detecting device includes a scale that has a scale pattern including incremental components, an optical system for forming an image of the scale pattern with light, a light-receiving element array for detecting the formed scale pattern image, and an arithmetic circuit for analyzing the position of the scale based on a signal of the light-receiving element array. The displacement detecting device removes distortion of the optical system by virtually rearranging the light receiving elements based on a distortion table, which is obtained from distortion information of the optical system.
- However, U.S. Pat. No. 8,759,747 and Japanese Patent Application Laid-open No. 2013-96757 have the following problem:
- The absolute rotary encoder of U.S. Pat. No. 8,759,747 corrects the effect of the cylindrical surface for each position of the detector, and can therefore eliminate that effect. However, there is a problem in that reducing the cycle of the marks for the purpose of enhancing the resolution gives different widths to the light portion and the dark portion of a mark in the received optical signal due to the light diffraction phenomenon, and the precision is therefore not improved by correction for each position of the detector alone.
- In the displacement detecting device, as well as the displacement detecting method and the displacement detecting program, of Japanese Patent Application Laid-open No. 2013-96757, the distortion of the optical system is corrected for each position of the detector, and deterioration in precision due to the distortion of an image forming lens can therefore be reduced. However, Japanese Patent Application Laid-open No. 2013-96757 has the same problem as U.S. Pat. No. 8,759,747: reducing the cycle of the marks for the purpose of enhancing the resolution gives different widths to the light portion and the dark portion of a mark due to the light diffraction phenomenon, and the precision is not improved by correction for each position of the detector alone.
- The present invention has been made to solve the problem described above, and it is therefore an object of the present invention to provide an absolute encoder capable of detecting the absolute angle at high resolution and with high precision.
- According to one embodiment of the present invention, there is provided an absolute encoder, including: a scale including an absolute value code pattern; a light emitting element for irradiating the scale with light; an image sensor for receiving light from the scale; an A/D converter for converting an output from the image sensor into a digital output; and an absolute position computing unit, in which: the absolute position computing unit includes: an edge detecting unit for detecting, based on a signal strength of a signal from the A/D converter and a threshold level that is set in advance, an edge pixel position of the absolute value code pattern on the image sensor, and an edge direction of the absolute value code pattern at the edge pixel position; and an edge position correcting unit for correcting the edge pixel position that is acquired by the edge detecting unit in a manner that varies depending on whether the detected edge direction is a rising edge or a falling edge; and the absolute position computing unit acquires an absolute position of the scale based on the corrected edge pixel position.
- The absolute encoder according to the one embodiment of the present invention is capable of detecting the absolute position with high precision, without being affected by the diffraction of light, even when the scale is reduced in the minimum line width of the absolute value code pattern in order to enhance the resolution.
-
FIG. 1 is a diagram for illustrating the configuration of an absolute encoder according to a first embodiment of the present invention. -
FIG. 2 is a graph for showing an example of the light amount distribution of light cast onto an image sensor of the absolute encoder according to the first embodiment of the present invention. -
FIG. 3 is a graph for showing an example of a waveform after correction in a light amount correcting unit of the absolute encoder according to the first embodiment of the present invention. -
FIG. 4 is a graph for showing an example of a waveform after processing in a smoothing processing unit of the absolute encoder according to the first embodiment of the present invention. -
FIG. 5 is a diagram for illustrating the operation of an edge detecting unit of the absolute encoder according to the first embodiment of the present invention. -
FIG. 6 is a diagram for illustrating the operation of the edge detecting unit of the absolute encoder according to the first embodiment of the present invention. -
FIG. 7 is a diagram for illustrating how an edge correction amount is obtained in the absolute encoder according to the first embodiment of the present invention. -
FIG. 8 is a diagram for illustrating the operation of an edge position correcting unit of the absolute encoder according to the first embodiment of the present invention. -
FIG. 9 is a diagram for illustrating the operation of a decoding unit of the absolute encoder according to the first embodiment of the present invention. -
FIG. 10 is a diagram for illustrating the operation of a phase detecting unit of the absolute encoder according to the first embodiment of the present invention. -
FIG. 11 is a diagram for illustrating the configuration of an absolute encoder according to a second embodiment of the present invention. -
FIG. 12 is a diagram for illustrating a fact that the width of a high bit and the width of a low bit change due to the effect of diffraction. -
FIG. 13 is a diagram for illustrating how an edge correction amount is obtained in the absolute encoder according to the second embodiment of the present invention. -
FIG. 14 is a graph for showing an example of measuring basic cycle width data of a high bit and a low bit in the absolute encoder according to the second embodiment of the present invention. -
FIG. 15 is a diagram for illustrating the configuration of an absolute encoder according to a third embodiment of the present invention. -
FIG. 16 is a graph for showing an example of measuring basic cycle width data of a high bit and a low bit in the absolute encoder according to the third embodiment of the present invention. -
FIG. 17 is a diagram for illustrating the configuration of an absolute encoder according to a fourth embodiment of the present invention. -
FIG. 18 is a diagram for illustrating edge groups in the absolute encoder according to the fourth embodiment of the present invention. -
FIG. 19 is a graph for showing an example of an edge position residual error in the absolute encoder according to the fourth embodiment of the present invention. -
FIG. 20 is a set of graphs each for showing a correction method of the absolute encoder according to the fourth embodiment of the present invention. -
FIG. 21 is a set of graphs each for showing the correction method of the absolute encoder according to the fourth embodiment of the present invention. -
FIG. 22 is a schematic configuration diagram for illustrating an example of the hardware configuration of an absolute position computing unit of the absolute encoder according to each embodiment of the present invention. - Now, an absolute encoder according to each of embodiments of the present invention is described with reference to the drawings. Note that, in each of the embodiments, the same or corresponding portions are denoted by the same reference symbols, and the overlapping description thereof is omitted.
- The configuration of an
absolute encoder 1 according to a first embodiment of the present invention is illustrated in FIG. 1. The basic configuration of the absolute encoder 1 includes a light emitting element 2, an image sensor 3, a scale 200, an A/D converter 4, and an absolute position computing unit 5. The components of the absolute encoder 1 are described one by one below. - The
light emitting element 2 is an illumination unit for irradiating the scale 200 with light. A point light source LED, for example, is used as the light emitting element 2. - The
image sensor 3 is a light detecting unit for receiving light from the scale 200, and is an image pick-up device such as a CCD image sensor or a CMOS image sensor. The image sensor 3 is one-dimensional in this embodiment, but may instead be two-dimensional. - The
scale 200 is joined to a rotational shaft 6 of a motor or the like, and is provided with one track, which has an absolute value code pattern 300. In the absolute value code pattern 300, a plurality of reflective portions 301 and a plurality of non-reflective portions 302 are arranged in the circumferential direction. The reflective portions 301 are portions that reflect light from the light emitting element 2. The non-reflective portions 302 are portions that absorb or transmit light from the light emitting element 2, or reflect light from the light emitting element 2 at a reflectance lower than that of the reflective portions 301. The reflective portions 301 and the non-reflective portions 302 function so as to modulate the light intensity distribution of light cast onto the image sensor 3. - The absolute
value code pattern 300 includes the reflective portions 301 and the non-reflective portions 302 so that the angular position of the scale 200 is characterized, and uses, for example, a code string that is obtained by encoding pseudo-random codes such as M-series codes through Manchester encoding. - While this embodiment takes as an example a reflective encoder in which the
light emitting element 2 and the image sensor 3 are both placed on one side of the scale 200, the present invention is also applicable to a transmissive encoder in which the light emitting element 2 and the image sensor 3 are placed so as to face each other across the scale 200. In the case of the transmissive encoder, the absolute value code pattern 300 includes transmissive portions and non-transmissive portions. Regardless of whether the absolute encoder 1 is reflective or transmissive, the absolute value code pattern 300 is not limited to a particular configuration as long as the absolute value code pattern 300 modifies the light intensity distribution of light cast onto the image sensor 3. - The
reflective portions 301 and non-reflective portions 302 of the scale 200 are formed by, for example, depositing a metal such as chromium through vapor deposition on a glass substrate, and patterning the resultant metal film through photolithography. The scale 200 is not limited to particular materials and fabrication methods as long as the reflective portions and the non-reflective portions are formed in the case of a reflective encoder and as long as the transmissive portions and the non-transmissive portions are formed in the case of a transmissive encoder. - The A/
D converter 4 is a signal converting unit for converting an analog signal from the image sensor 3 into a digital signal. - The absolute
position computing unit 5 is a computing unit for computing the absolute position of the scale 200 based on an output from the A/D converter 4, and includes a light amount correcting unit 100, a smoothing processing unit 101, an edge detecting unit 102, an edge position correcting unit 103, a decoding unit 104, a rough detection unit 105, a phase detecting unit 106, and a high precision detection unit 107. - The operation of the absolute
position computing unit 5 is now described. - First, an image obtained by the
image sensor 3 is converted by the A/D converter 4 into digital signals, which are then input to the light amount correcting unit 100. The signals input to the light amount correcting unit 100 have, for example, a light amount distribution 70 shown in FIG. 2, where the axis of abscissa represents the pixel position and the axis of ordinate represents the signal strength. A high bit 8 in FIG. 2 indicates a pattern at the reflective portions 301 of the scale 200, and a low bit 9 indicates a pattern in the non-reflective portions 302 of the scale 200. As shown in FIG. 2, in the absolute value code pattern 300 of the scale 200, which is projected onto the image sensor 3, the light amount distribution of the high bit 8 and the low bit 9 is uneven due to the effects of the light amount distribution of the light emitting element 2 itself, gain fluctuations among pixels of the image sensor 3, and the like. The light amount correcting unit 100 therefore makes a correction for each pixel based on a light amount correction value, which is measured in advance, in order to turn the uneven light amount distribution into an even light amount distribution. A post-light amount correction light amount distribution 71 of FIG. 3, for example, is obtained as a result. - The post-light amount correction
light amount distribution 71, which is the result of the correction in the light amount correcting unit 100, is sent to the smoothing processing unit 101, where smoothing processing is performed on the post-light amount correction light amount distribution 71. The smoothing processing unit 101 uses, for example, a moving average filter to acquire, for example, a post-smoothing processing light amount distribution 72 shown in FIG. 4. While this embodiment takes a moving average filter as an example, processing through a Gaussian filter or the like may be executed instead, and any method that smoothes signals can be used. Light amount correction, which precedes the smoothing processing in this embodiment, may be executed after the smoothing processing. The present invention is also applicable to cases where the smoothing processing is not executed. - The post-smoothing processing
light amount distribution 72 is sent to the edge detecting unit 102, which acquires an edge position on the image sensor 3 that equals a preset threshold level 10 (hereinafter referred to as edge pixel position 11). -
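The threshold-crossing search that the edge detecting unit 102 performs, as described in the following paragraphs (comparison of adjacent pixels against the threshold level, linear interpolation for the sub-pixel position, and classification into rising and falling edges), can be sketched as follows. All function and variable names here are illustrative, not from the patent:

```python
def detect_edges(signal, threshold):
    """Return (sub-pixel position, direction) for every threshold crossing
    between adjacent pixels i and i+1 of the sampled light amount signal."""
    edges = []
    for i in range(len(signal) - 1):
        a, b = signal[i], signal[i + 1]
        if (a < threshold) != (b < threshold):
            # Linear interpolation between the two pixels straddling the threshold.
            position = i + (threshold - a) / (b - a)
            direction = "rising" if a < b else "falling"
            edges.append((position, direction))
    return edges
```

Higher-order interpolation (a quadratic or cubic function, as the text mentions) would replace the linear step while keeping the same crossing test.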
FIG. 5 is an enlarged view of the vicinity of the edge pixel position, which is enclosed by the broken line frame in FIG. 4. - The
edge detecting unit 102 first determines whether or not there is an edge based on the signal strengths of an i-th pixel and an (i+1)-th pixel, which are adjacent pixels as illustrated in FIG. 5. The edge detecting unit 102 determines that there is an edge when the signal strength of the i-th pixel is lower than the threshold level 10 and the signal strength of the (i+1)-th pixel is higher than the threshold level 10, or when the signal strength of the i-th pixel is higher than the threshold level 10 and the signal strength of the (i+1)-th pixel is lower than the threshold level 10. - When it is determined that there is an edge with respect to the i-th pixel and the (i+1)-th pixel, the
edge detecting unit 102 next acquires through sub-pixel processing the edge pixel position 11, which equals the threshold level 10, by performing linear interpolation on the i-th pixel and the (i+1)-th pixel, which are on either side of the threshold level 10. - While the
edge pixel position 11, which equals the threshold level 10, is obtained by linear interpolation based on two pixels that are on either side of the threshold level 10 in this embodiment, two or more pixels that are on either side of the threshold level 10 may be used to obtain the edge pixel position 11. Instead of linear interpolation, a higher-order function such as a quadratic function or a cubic function may be used for interpolation. - In addition to the
edge pixel position 11, the edge detecting unit 102 detects an edge direction 50 of FIG. 6, for example, based on the signal strengths of the i-th pixel and the (i+1)-th pixel, which are on either side of the threshold level 10. The edge direction 50 is a rising edge 51 when the signal strength of the i-th pixel is lower than the signal strength of the (i+1)-th pixel, and is a falling edge 52 when the signal strength of the i-th pixel is greater than the signal strength of the (i+1)-th pixel. - The
edge pixel position 11 and edge direction 50 detected by the edge detecting unit 102 are sent to the edge position correcting unit 103. The edge position correcting unit 103 acquires an edge correction amount from the edge pixel position 11 and edge direction 50 detected by the edge detecting unit 102, and corrects the pixel position of the edge pixel position 11 based on the edge direction 50. - How the
edge pixel position 11 is corrected by the edge position correcting unit 103 is now described with reference to FIG. 7. - The following description is of a case where the high bit is narrow. Whether the high bit is narrow or wide depends on the distance between the
image sensor 3 and the scale 200. In the case of a single slit, where light spreads due to diffraction, the high bit is wide. In the case of an encoder or other devices that have a plurality of slits, an image is formed by diffraction interference in which the diffraction pattern of one slit interferes with the diffraction pattern of another slit, and the high bit is therefore wide or narrow depending on the distance. - As illustrated in
FIG. 7, the high bit 8 and low bit 9 of light cast onto the image sensor 3 have a basic cycle width fh and a basic cycle width fl, respectively, which are not equal to each other due to the effect of the diffraction of light. The term “basic cycle width” refers to the minimum line width of the absolute value code pattern 300, which includes the reflective portions 301 and the non-reflective portions 302.
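Numerically, this width imbalance can be removed by shifting each edge of a high bit outward by a quarter of the width difference, which is the correction the embodiment derives below as Expressions (5) to (9). A sketch with hypothetical edge positions (a 10-pixel ideal cycle, with the high bit narrowed to 8 pixels by diffraction); names are illustrative:

```python
def equalize_high_bit(rising_x, falling_x, next_rising_x):
    """Shift the rising edge left and the falling edge right by
    delta = (fl - fh) / 4 so corrected high- and low-bit widths agree."""
    fh = falling_x - rising_x       # high-bit width, narrowed by diffraction
    fl = next_rising_x - falling_x  # adjacent low-bit width, widened
    delta = (fl - fh) / 4
    return rising_x - delta, falling_x + delta

# Hypothetical positions: high bit spans 6..14 (width 8), low bit 14..26 (width 12).
xr, xf = equalize_high_bit(6, 14, 26)
```

With the next rising edge shifted left by the same delta, the adjacent low bit also becomes 10 pixels wide, so both bits recover the ideal cycle.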
- The edge
position correcting unit 103 first identifies a space between the rising edge 51 and the falling edge 52 as a high bit, and a space between the falling edge 52 and the rising edge 51 as a low bit. Based on the high bit 8 and the low bit 9 that are adjacent to the i-th edge pixel position, the edge position correcting unit 103 acquires a distance Lh between the edge pixel positions of the high bit 8 and a distance Ll between the edge pixel positions of the low bit 9 by Expression (1) and Expression (2). - The width of the
high bit 8, namely, the distance between the edge pixel positions on either side of the high bit 8, is Lh. The width of the low bit 9, namely, the distance between the edge pixel positions on either side of the low bit 9, is Ll. - As illustrated in
FIG. 7, Lh=fh and Ll=fl are satisfied when the high bit 8 and the low bit 9 have their respective basic cycle widths. -
Lh=ZC(i)−ZC(i−1) (1) -
Ll=ZC(i+1)−ZC(i) (2) - The distances Lh and Ll are each divided by an ideal basic cycle width F of the absolute
value code pattern 300, and the quotient is rounded off to the closest whole number to obtain an integral multiple N (N is 1 or more) of the ideal basic cycle width F. The basic cycle width fh of the high bit 8 and the basic cycle width fl of the low bit 9 are expressed by their respective integral multiples N as follows: -
fh=Lh/N (3) -
fl=Ll/N (4) - Because N of Lh and N of Ll are obtained separately (Nh≈Lh/F: N of the
high bit 8, Nl≈Ll/F: N of the low bit 9, Nh and Nl are each a number equal to or more than 1), the basic cycle widths are expressed more minutely as follows: -
fh=Lh/Nh (3a) -
fl=Ll/Nl (4a) - Each integral multiple N (Nh or Nl) indicates the number of successive bits (an integer). In other words, N indicates how many high bits are observed in succession, or how many low bits are observed in succession.
- For example, when the ideal basic cycle width F of the absolute
value code pattern 300 is ten pixels and the edge positions ZC(i−1), ZC(i), and ZC(i+1) in FIG. 7 are assumed as 6, 14, and 26, respectively, Lh is 8 and Ll is 12.
- When ZC(i+2) is 44 in this example, Lh=44−26=18 and Nh=Lh/F≈2, and the basic cycle width fh is therefore fh=18/2=9.
- When an objective edge correction amount is given as δ, the corrected basic cycle width of the
high bit 8 is given as fh′, and the corrected basic cycle width of the low bit 9 is given as fl′, fh′ and fl′ are expressed by Expression (5) and Expression (6). -
fh′=ZC(i)−ZC(i−1)+2δ=fh+2δ (5) -
fl′=ZC(i+1)−ZC(i)−2δ=fl−2δ (6) - The corrected basic cycle width fh′ of the
high bit 8 and the corrected basic cycle width fl′ of the low bit 9 are equal to each other. From Expression (5) and Expression (6), the edge correction amount δ of the i-th edge pixel position is expressed by Expression (7). -
δ=(fl−fh)/4 (7) - This means that the edge correction amount δ of the i-th edge pixel position can be obtained as ¼ of a difference between the uncorrected basic cycle width fh of the
high bit 8 that is adjacent to the i-th edge pixel position and the uncorrected basic cycle width fl of the low bit 9 that is adjacent to the i-th edge pixel position. Accordingly, when the edge pixel position 11 is given as x (=ZC(i)), the corrected edge pixel position 11 of the rising edge 51 is given as XR, and the corrected edge pixel position 11 of the falling edge 52 is given as XF, the edge position correcting unit 103 acquires the edge correction amount δ for each of the edge pixel positions, and makes a correction with the use of Expression (8) or Expression (9) depending on the edge direction 50, i.e., the rising edge 51 or the falling edge 52. The edge pixel position 11 after the edge position correction processing is, for example, as illustrated in FIG. 8. -
XR=x−δ (8) -
XF=x+δ (9) - Next, the
decoding unit 104 converts the high bit 8 and the low bit 9 into a 1/0 bit string 12 based on the edge direction 50 and the edge pixel position 11. The bit string is generated so that, for example, the bit value is 1 from the rising edge 51 to the falling edge 52, and is 0 from the falling edge 52 to the rising edge 51. In short, the high bit 8 is expressed as a bit value “1” and the low bit 9 is expressed as a bit value “0”. The decoding unit 104, as in the edge position correcting unit 103, calculates the integral multiples N (Nh and Nl) from the ideal basic cycle width F and the distance between edge pixel positions, and arranges, in succession, N bits each having one of the bit value “1” and the bit value “0”. In this embodiment, pseudo-random codes such as M-series codes are encoded by Manchester encoding, and the bit string 12 therefore ideally includes two successive bits of the bit value “1” or the bit value “0” at maximum, for example, as illustrated in FIG. 9. - While the basic cycle widths are converted into a 1/0 bit string based on the
edge direction 50 and the edge pixel position 11 in this embodiment, digitization processing may instead be used to convert the basic cycle widths into a 1/0 bit string as in the related art, and the present invention is not limited to a particular method as long as the method used is capable of converting the basic cycle widths into a 1/0 bit string. - Next, the
rough detection unit 105 detects a rough absolute position from the bit string 12 of FIG. 9 detected by the decoding unit 104. The rough detection unit 105 identifies a rough absolute position by, for example, storing in advance bit strings that form the absolute value code pattern 300 of the scale 200 in a look-up table, and comparing the bit string 12 detected by the decoding unit 104 with the bit strings in the look-up table. - Next, the
phase detecting unit 106 acquires a phase shift amount θ in relation to a reference pixel position 13 of the image sensor 3 as illustrated in FIG. 10. - How the
phase detecting unit 106 acquires the phase shift amount θ is now described. - In the case where the
edge detecting unit 102 detects M edges, the edge position correcting unit 103 corrects the edge pixel positions of the M detected edges, and the corrected edge pixel positions are denoted by ZC(1), ZC(2), . . . , ZC(i), . . . , ZC(M). When the center position of the reference pixel position 13 is given as P, and an edge pixel position that is closest to P is given as ZC(i), ZC(i) is expressed by Expression (10) with the use of the phase shift amount θ of a shift from the reference pixel position 13. -
ZC(i)=P+θ (10) - The phase shift amount θ is a negative value when ZC(i) is to the left of the
reference pixel position 13, and is a positive value when ZC(i) is to the right of the reference pixel position 13. - The
phase detecting unit 106 then processes other edges than the ZC(i) that is closest to the reference pixel center position P by acquiring an integral multiple N(i) of the basic cycle width F with respect to the edge pixel position ZC(i). Examples of the integral multiple N(i) are calculated as follows: -
N(i−1)=(ZC(i−1)−ZC(i))/F -
N(i+1)=(ZC(i+1)−ZC(i))/F - The integer multiple N(i) is calculated as N(i)=(ZC(i)−ZC(i))/F=0. In the example of
FIG. 10 , N(i−1)=2, N(i+1)=2, and N(i+2)=1. Using the integer multiple N of the basic cycle width F, the edge pixel positions ZC(i−1) and ZC(i+1) are expressed by Expression (11) and Expression (12). -
ZC(i−1)=P+θ+F×N(i−1)+αN(i−1)²+βN(i−1)³ (11) -
ZC(i+1)=P+θ+F×N(i+1)+αN(i+1)²+βN(i+1)³ (12)
-
ZC(j)=P+θ+F×N(j)+αN(j)²+βN(j)³ (j=1, 2, . . . , M) (13)
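Treating P and F as known, Expression (13) is linear in θ, α, and β, so these can be estimated by ordinary least squares over all M corrected edge positions. A numpy sketch under that model, with illustrative names:

```python
import numpy as np

def phase_shift(zc, n, P, F):
    """Least-squares solution of ZC(j) = P + theta + F*N(j) + alpha*N(j)^2
    + beta*N(j)^3 for theta, alpha, beta (Expression (13))."""
    zc = np.asarray(zc, dtype=float)
    n = np.asarray(n, dtype=float)
    # Design matrix for the unknowns (theta, alpha, beta).
    A = np.column_stack([np.ones_like(n), n ** 2, n ** 3])
    b = zc - P - F * n
    (theta, alpha, beta), *_ = np.linalg.lstsq(A, b, rcond=None)
    return theta, alpha, beta
```

With noise-free synthetic data the fit recovers θ exactly; in practice the second- and third-order terms absorb slowly varying distortion across the sensor.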
- The
reference pixel position 13 can be the center pixel, or the leftmost or rightmost pixel, of the image sensor 3, and is not particularly limited. While all edge pixel positions are used to obtain the phase shift amount θ in the form of the least square method in this embodiment, the phase shift amount θ may be obtained directly from a difference between the center position of the reference pixel position 13 and the edge pixel position ZC(i) that is closest to the reference pixel position 13. - Lastly, the high
precision detection unit 107 adds the rough absolute position acquired by the rough detection unit 105 and the phase shift amount θ acquired by the phase detecting unit 106 to obtain the absolute position of the scale 200. - According to the configuration described above, the absolute position can be detected with high precision even when the minimum line width of the absolute
value code pattern 300 is reduced for the purpose of enhancing the resolution. This is because the absolute position computing unit 5 includes the edge detecting unit 102 and the edge position correcting unit 103. The edge detecting unit 102 detects the edge pixel position 11, which crosses the threshold level 10 set in advance, and the edge direction 50. The edge position correcting unit 103 acquires the width of the high bit 8, which represents the reflective portions 301 of the absolute value code pattern 300 projected onto the image sensor 3, and the width of the low bit 9, which represents the non-reflective portions 302, calculates the edge correction amount δ from these two widths, and corrects the edge pixel position 11 by the edge correction amount δ in a manner that varies depending on whether the edge direction 50 is the rising edge 51 or the falling edge 52. The absolute position computing unit 5 then uses the corrected edge pixel position to detect the absolute position of the scale 200. - The absolute
position computing unit 5 further includes the decoding unit 104 for converting the high bit 8 and the low bit 9 into the 1/0 bit string 12 based on the edge direction acquired by the edge detecting unit 102 and information of the edge pixel position corrected by the edge position correcting unit 103, the rough detection unit 105 for identifying a rough absolute position from the bit string 12 acquired by the decoding unit 104, the phase detecting unit 106 for acquiring a phase shift amount in relation to the reference pixel position 13 of the image sensor 3 based on the information of the corrected edge pixel position, and the high precision detection unit 107 for acquiring a highly precise absolute position from the rough absolute position acquired by the rough detection unit 105 and information of the phase shift amount acquired by the phase detecting unit 106. The absolute position can therefore be obtained with high precision from the absolute value code pattern 300 alone. The need to provide a scale with two tracks, namely, an absolute pattern and an incremental pattern, in order to detect the absolute position as in the related art is thus eliminated, which means that the device size can be reduced and that the absolute position can be detected with high precision at high resolution. - In addition, with the edge correction amount calculated from the widths of the
high bit 8 and the low bit 9 that are adjacent to the edge pixel position 11, these adjacent bits can be made equal to each other in width despite variations in the widths of the high bit 8 and the low bit 9, which depend on the pixel position of the image sensor 3. A lens or the like for collimating light from the light emitting element 2 is thus eliminated, and the device can be made thin. - The first embodiment is configured so that the edge
position correcting unit 103 acquires the edge correction amount of the edge pixel position 11. A second embodiment of the present invention describes a method in which an edge correction data memory 113 is provided as illustrated in FIG. 11, the edge correction amount is obtained as a function of the pixel position of the image sensor 3, the edge correction data memory 113 stores edge correction amount information obtained in advance, and the edge position correcting unit 103 uses the information in the edge correction data memory 113 to correct the edge pixel position 11. - An
absolute encoder 1 of the second embodiment is the same in basic configuration as the absolute encoder 1 of the first embodiment, except that the edge correction data memory 113 is added and that the edge position correcting unit 103 uses a different computing method. The rest of the components are the same as those in the first embodiment, and are denoted by the same reference symbols in order to omit descriptions thereof. - In the case where an image forming lens or a similar component is not used, the effect of diffraction differs in the central portion and peripheral portion of the
image sensor 3 because the distance from the light emitting element 2 to the image sensor 3 grows toward the peripheral portion of the image sensor 3 as illustrated in FIG. 12. Consequently, the difference between the width of the high bit 8 and the width of the low bit 9 increases toward the peripheral portion of the image sensor 3. The absolute encoder 1 of the second embodiment therefore acquires the edge correction amount as a function of the pixel position of the image sensor 3. - A description is given on a method of calculating the edge correction amount of the pixel position of the
image sensor 3 from data about the basic cycle widths of the high bit 8 and the low bit 9. - First, with the
absolute encoder 1 mounted to a motor, the image sensor 3 obtains an image at an appropriate angular position, and processing up through the computation in the edge detecting unit 102 is executed to obtain the edge pixel position 11 and the edge direction 50. When the i-th edge pixel position is given as ZC(i) and the (i+1)-th edge pixel position is given as ZC(i+1) as illustrated in FIG. 13, the bit is identified as the high bit 8 if ZC(i) is the rising edge 51, and a basic cycle width fh(xh) of the high bit 8 is calculated from a center pixel xh of the high bit 8 and the distance Lh between the edge pixel positions of the high bit 8 by Expression (14), Expression (15), and Expression (16). -
Lh=ZC(i+1)−ZC(i) (14) -
xh=(ZC(i+1)+ZC(i))/2 (15) -
fh=Lh/N (16) - The symbol N is an integer equal to or more than 1, and represents an integral multiple of an ideal basic cycle width as in the first embodiment.
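Expressions (14) to (16) turn each detected high bit into a (center pixel, basic cycle width) sample. A sketch taking (position, direction) pairs from an edge detector; names are illustrative:

```python
def high_bit_samples(edges, F):
    """Collect (xh, fh) pairs per Expressions (14)-(16): for each high bit,
    the center pixel and the width normalised by the bit count N."""
    samples = []
    for (x0, d0), (x1, _) in zip(edges, edges[1:]):
        if d0 != "rising":            # a high bit starts at a rising edge
            continue
        L = x1 - x0                   # Expression (14)
        xh = (x1 + x0) / 2            # Expression (15)
        N = max(1, round(L / F))      # integral multiple of the ideal cycle
        samples.append((xh, L / N))   # Expression (16)
    return samples
```

Low-bit samples (xl, fl) follow the same pattern, keyed on falling edges; fitting quadratics to both sample sets gives Expressions (17) and (18).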
- A center pixel xl of the
low bit 9 and a basic cycle width fl(xl) of the low bit 9 are obtained in the same manner. In the case of the low bit 9, the bit is identified as the low bit 9 when ZC(i) is the falling edge 52.
- By changing the angular position of the
scale 200, the bit center position data and basic cycle width data of the high bit 8 and the low bit 9 at a different pixel position can be obtained. For example, when a measurement subject is measured 1,800 times at an angle pitch of 0.2 degrees, the center pixel data and basic cycle width data of the high bit 8 and the low bit 9 are plotted as shown in FIG. 14. Measurement data of the high bit 8 is denoted by H14a, an approximate curve of the high bit 8 is denoted by H14b, measurement data of the low bit 9 is denoted by L14a, and an approximate curve of the low bit 9 is denoted by L14b. As shown in FIG. 14, the high bit 8 and the low bit 9 have different basic cycle width characteristics in relation to the pixel position, and the difference between the basic cycle width of the high bit 8 and the basic cycle width of the low bit 9 grows toward the peripheral portion of the image sensor 3. - Next, an approximate function fh(x) for the basic cycle width data of the
high bit 8 in relation to the pixel position and an approximate function fl(x) for the basic cycle width data of the low bit 9 in relation to the pixel position are obtained by a quadratic least square method. The obtained quadratic functions are expressed by Expression (17) and Expression (18) when the pixel position is given as x and parameters of the functions are given as fho, αh, βh, flo, αl, and βl. -
fh(x)=fho+αh×x+βh×x 2 (17) -
fl(x)=flo+αl×x+βl×x 2 (18) - The edge correction amount is obtained by the same principle as in the first embodiment, namely, as ¼ of the difference between the basic cycle width of the
high bit 8 and the basic cycle width of thelow bit 9. An edge correction amount δ(x) of the pixel position x of theimage sensor 3 is obtained by Expression (19). -
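A minimal sketch of the fit in Expressions (17) and (18) and of the correction amount of Expression (19), in plain Python with a hand-rolled normal-equation solver (the embodiment does not prescribe any particular implementation):

```python
# Least-squares fit of f(x) = f0 + a*x + b*x^2 to (pixel, basic-cycle-width)
# samples, done separately for the high-bit and low-bit data, followed by the
# edge correction amount delta(x) = (fh(x) - fl(x)) / 4.

def fit_quadratic(xs, fs):
    """Return (f0, a, b) minimizing sum((f - (f0 + a*x + b*x**2))**2)."""
    S = [sum(x**k for x in xs) for k in range(5)]                # power sums
    T = [sum(f * x**k for x, f in zip(xs, fs)) for k in range(3)]
    A = [[S[0], S[1], S[2], T[0]],                               # normal equations
         [S[1], S[2], S[3], T[1]],
         [S[2], S[3], S[4], T[2]]]
    for i in range(3):                                           # Gaussian elimination
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))         # partial pivoting
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, 3):
            m = A[r][i] / A[i][i]
            A[r] = [a - m * b for a, b in zip(A[r], A[i])]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                                          # back substitution
        c[i] = (A[i][3] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    return tuple(c)

def edge_correction(ph, pl):
    """Expression (19): delta(x) = (fh(x) - fl(x)) / 4, as a callable."""
    return lambda x: ((ph[0] - pl[0]) + (ph[1] - pl[1]) * x
                      + (ph[2] - pl[2]) * x * x) / 4.0
```

Fitting once per data set and storing only the six parameters is what allows the edge correction data memory 113 to stay small while δ(x) remains evaluable at any pixel.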
- Correction by the edge correction amount δ(x) is made in combination with a normal test prior to the shipping of the encoder, for example, and the parameters of the obtained edge correction amount function δ(x) are saved in the edge correction data memory 113.
- An edge position correction method used by the edge position correcting unit 103 is described next.
- After the edge detecting unit 102 calculates the edge pixel position 11 and the edge direction 50, the edge position correcting unit 103 acquires the parameters of the edge correction amount δ(x) from the edge correction data memory 113. With the edge pixel position given as x, the corrected edge pixel position of the rising edge 51 given as XR(x), and the corrected edge pixel position of the falling edge 52 given as XF(x), the edge position correcting unit 103 makes a correction with the use of Expression (20) or Expression (21), depending on whether the edge direction 50 is the rising edge 51 or the falling edge 52.
- XR(x) = x − δ(x)   (20)
- XF(x) = x + δ(x)   (21)
- According to the configuration described above, where the basic cycle width data of the high bit 8 and the low bit 9 in relation to the pixel position of the image sensor 3 is measured in advance, and the edge correction amount δ is obtained from the measured data as a function of the pixel position of the image sensor 3, the edge pixel position 11 can be corrected with even higher precision.
- In addition, an approximate function is fitted to the measured basic cycle width data of the high bit 8 and the low bit 9, and the edge correction amount δ is calculated from that approximate function. This prevents an error caused by a foreign object or the like at some edge pixel positions from strongly affecting other edges, and the absolute position can be detected with high precision despite an error factor such as a foreign object.
- Further, the edge correction amount δ is calculated after the basic cycle width characteristics of the high bit 8 and the low bit 9, which vary depending on where the light emitting element 2 and the image sensor 3 are mounted in relation to the scale 200, are obtained with the light emitting element 2 and the image sensor 3 mounted. The attachment tolerance of the light emitting element 2 and the image sensor 3 can therefore be relaxed.
- Moreover, with the edge
correction data memory 113 provided so that the edge position correcting unit 103 corrects the edge pixel position 11 by using data in the edge correction data memory 113, the need to calculate the edge correction amount δ each time is eliminated, and the calculation load is accordingly lightened.
- While a quadratic function is fitted to the basic cycle width data of the high bit 8 and the low bit 9 in the second embodiment, a fitting function of an even higher order may be used instead. Alternatively, the data may be sectioned into areas for linear interpolation; any function that represents the basic cycle width characteristics of the high bit 8 and the low bit 9 can be employed.
- Instead of saving in the edge correction data memory 113 the parameters of the edge correction amount function δ(x) that are obtained in advance, the value of the edge correction amount may be saved for each pixel of the image sensor 3. The edge position correcting unit 103 in this case corrects an edge by the edge correction amount δ that is obtained by, for example, interpolating between pixels through linear interpolation or the like. Data saved in the edge correction data memory 113 is not particularly limited as long as the saved data is the information necessary to obtain the edge correction amount δ(x) of the pixel position x.
- While a measurement subject is measured 1,800 times at a pitch of 0.2 degrees to obtain the basic cycle width data of the high bit 8 and the low bit 9 in this embodiment, the present invention is applicable when data at one angular position, at least, is available.
- This embodiment is configured so that information of the edge correction amount obtained as a function of the pixel position of the image sensor 3 is measured in advance and stored in the edge correction data memory 113. Instead of providing the edge correction data memory 113, the edge position correcting unit 103 may, as in the first embodiment, acquire the edge correction amount as a function of the pixel position of the image sensor 3 to correct the edge pixel position 11.
- The second embodiment is configured so that the edge
position correcting unit 103 corrects the edge pixel position 11 with the use of the edge correction amount information in the edge correction data memory 113 that is obtained in advance. Alternatively, data in the edge correction data memory 113 may be updated regularly by providing a correction data recalculating unit 123 as illustrated in FIG. 15.
- An absolute encoder 1 of the third embodiment is the same in basic configuration as the absolute encoder 1 of the second embodiment, except that the correction data recalculating unit 123 is added. The rest of the components are the same as those in the first embodiment and the second embodiment, and are denoted by the same reference symbols in order to omit descriptions thereof.
- A change in ambient temperature changes the positional relation of the light emitting element 2 and the image sensor 3 to the scale 200. For example, a change in the gap from the scale 200 to the light emitting element 2 and the image sensor 3 changes the basic cycle width characteristics of the high bit 8 and the low bit 9 as well. In the case where the basic cycle width characteristics of the high bit 8 and the low bit 9 at the initial attachment position are as shown in FIG. 14, the basic cycle width characteristics of the high bit 8 and the low bit 9 that are obtained when the gap increases are as shown in FIG. 16, for example. Measurement data of the high bit 8 is denoted by H16a, an approximate curve of the high bit 8 is denoted by H16b, measurement data of the low bit 9 is denoted by L16a, and an approximate curve of the low bit 9 is denoted by L16b.
- Such a change in the basic cycle width characteristics of the high bit 8 and the low bit 9 leads to a drop in the precision of absolute position detection when the positions of the light emitting element 2 and the image sensor 3 in relation to the scale 200 change, because the edge is then corrected by the wrong edge correction amount δ(x). The third embodiment is therefore configured so that the correction data recalculating unit 123 updates the edge correction amount δ(x) obtained as a function of the pixel position of the image sensor 3.
- The operation of the correction
data recalculating unit 123 is now described.
- Information about the edge pixel position 11 and edge direction 50 calculated by the edge detecting unit 102 is sent to the correction data recalculating unit 123 as well as to the edge position correcting unit 103. The correction data recalculating unit 123 identifies the bit as the high bit 8 when the edge direction 50 is the rising edge 51, calculates the center pixel xh and basic cycle width fh(xh) of the high bit 8 in the same manner that is used in the second embodiment to create the edge correction data, and stores the data in a memory area that is secured for the high bit 8 in the edge correction data memory 113. Similarly, the correction data recalculating unit 123 identifies the bit as the low bit 9 when the edge direction 50 is the falling edge 52, calculates the center pixel xl and basic cycle width fl(xl) of the low bit 9 in the same manner that is used in the second embodiment to create the edge correction data, and stores the data in a memory area (not shown) that is secured for the low bit 9 in the edge correction data memory 113.
- The correction data recalculating unit 123 keeps collecting information about the edge pixel position 11 and the edge direction 50 until T seconds elapse from the start of the data collection, and then uses the data in the memory area secured for the high bit 8 to obtain the parameters of Expression (17) by the quadratic least square method. Similarly, the correction data recalculating unit 123 uses the data in the memory area secured for the low bit 9 to acquire the parameters of Expression (18) by the quadratic least square method. From the acquired parameters, the parameters of the edge correction amount δ(x) are calculated by Expression (19) to rewrite the data in the edge correction data memory 113. The data in the memory area secured for the high bit 8 and the data in the memory area secured for the low bit 9 are then cleared, and the correction data recalculating unit 123 starts collecting data again.
- While the data in the edge
correction data memory 113 is updated after T seconds elapse from the start of the data collection in the third embodiment, the timing of the data update may instead be determined based on pixel position information of the image sensor 3. For example, the pixel range of the image sensor 3 is sectioned into M areas and, when the bit center pixels xh and xl have entered all of the M areas, the parameters of the edge correction amount δ(x) are calculated from the data in the memory area secured for the high bit 8 and the data in the memory area secured for the low bit 9 to update the data in the edge correction data memory 113. Thus, there are various possible modes with regard to the timing of updating the data in the edge correction data memory 113. The data in the edge correction data memory 113 may of course be updated as the need arises, by calculating the parameters of the edge correction amount δ(x) in the correction data recalculating unit 123 from the data of one image obtained by the image sensor 3.
- According to this configuration, where the correction data recalculating unit 123 is provided to update the data in the edge correction data memory 113, displacement of parts caused by temperature changes and the like is prevented from degrading precision, and high-precision detection can therefore be maintained.
- In addition, the reliability of the encoder can be improved by comparing the information of the edge correction amount δ(x) that is calculated by the correction data recalculating unit 123 with the pre-update information of the edge correction amount δ(x) that is in the edge correction data memory 113, determining that there is an encoder anomaly when the change between the pre-update information and the post-update information exceeds a range set in advance, and sounding an alarm or issuing an alert in other ways.
- The first embodiment to the third embodiment are configured so that the edge
position correcting unit 103 corrects the edge pixel position 11 in a manner that varies depending on the edge direction 50. Described here is a method in which an absolute (ABS) pattern correction data memory 133 is provided as illustrated in FIG. 17, the edge pixel position 11 is corrected in a manner suited to the absolute value code pattern 300, and the phase detecting unit 106 uses the corrected edge pixel position 11 to calculate the phase shift amount θ.
- An absolute encoder 1 according to a fourth embodiment of the present invention is the same in basic configuration as the absolute encoder 1 of the third embodiment, except that the ABS pattern correction data memory 133 is added and that the phase detecting unit 106 executes different processing. The rest of the components are the same as those in the first embodiment to the third embodiment, and are denoted by the same reference symbols in order to omit descriptions thereof.
- The code pattern 300 that is used on the scale 200 of the fourth embodiment is a pattern that is obtained by encoding pseudo-random codes such as M-series codes through Manchester encoding. Manchester encoding converts one bit into two bits so that, for example, a bit having a value "1" is turned into "1 0" whereas a bit having a value "0" is turned into "0 1". An M-series pattern that is 101110, for example, is turned into 100110101001 by Manchester encoding. In other words, in a bit string created by Manchester encoding, the number of successive "1" bits and "0" bits is two at maximum.
- The bit string thus created by Manchester encoding is divided between the rising edge 51 and the falling edge 52 to be classified into eight groups, which are made up of groups 401 to 408 as illustrated in FIG. 18.
- As has been described, when attention is paid to one of the reflective portions 301, light reflected by the reflective portion 301 causes the edge pixel positions 11 of the rising edge 51 and the falling edge 52 to vary because of the light diffraction phenomenon, with the result that the high bit 8 and the low bit 9 have widths different from each other. However, the edge pixel positions 11 of the rising edge 51 and the falling edge 52 are varied also by interference from another reflective portion 301. The fourth embodiment therefore involves dividing the bit string into groups of the rising edge 51 and groups of the falling edge 52, namely, eight groups in total, for correction.
- A method of creating correction values of the ABS pattern
correction data memory 133 is described next.
- First, with the absolute encoder 1 mounted to a motor, the image sensor 3 obtains an image at an appropriate angular position, processing that precedes the computation in the phase detecting unit 106 is executed, and the phase detecting unit 106 calculates the phase shift amount θ of a shift from the reference pixel position 13 of the image sensor 3 by the least square method. The phase detecting unit 106 also calculates, from the result of the fitting by the least square method, a residual error for each edge position, and saves the edge position residual error and a bit string that corresponds to the rough absolute position acquired by the rough detection unit 105 in a residual error saving memory (not shown).
- The same computation is executed at a different angular position of the
scale 200. For example, when a measurement subject is measured 1,800 times at an angle pitch of 0.2 degrees, edge position residual errors are plotted in relation to the pixel position as shown in FIG. 19. Denoted by R19 is the rising edge and denoted by F19 is the falling edge. Based on the bit string in the residual error saving memory, the results of the edge position residual errors in relation to the pixel position are divided into the groups of FIG. 18, namely, eight groups in total, as shown in FIG. 20 and FIG. 21. As shown in FIG. 20 and FIG. 21, the characteristics of the edge position residual errors in relation to the pixel position of the image sensor 3 vary between the rising edge 51 and the falling edge 52. The residual error characteristics also vary slightly among the four groups belonging to the same edge, namely, the rising edge 51 or the falling edge 52. Data of the edge position residual errors is therefore used to analyze an approximate function for the groups of the rising edge 51 and the groups of the falling edge 52, namely, eight groups in total. For example, pixel positions are divided into sixteen areas and approximated to straight lines so as to save the parameters of the straight lines of the respective areas in the ABS pattern correction data memory 133.
- While pixel positions are divided into sixteen areas and approximated to straight lines in the fourth embodiment, the number of divided areas can be smaller or larger than 16, although the precision of the correction is higher when the number of divided areas is larger. Instead of dividing into areas, the least square method may be used to fit a higher-order function such as a quadratic function or a cubic function.
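The area-wise straight-line correction described above can be sketched as a simple table lookup. The real group definitions (groups 401 to 408 of FIG. 18) and the fitted line parameters reside in the ABS pattern correction data memory 133 and are not reproduced here; the sensor width, the table contents, and the group numbering below are made-up stand-ins:

```python
# Hedged sketch: per-group, piecewise-linear residual-error correction over a
# pixel range split into 16 areas. TABLE and N_PIXELS are hypothetical.

N_AREAS, N_PIXELS = 16, 1024            # 16 areas as in the text; width assumed

# TABLE[group][area] = (slope, intercept) of the fitted residual-error line;
# all-zero rows mean "no correction" for that group in this toy example.
TABLE = [[(0.0, 0.0)] * N_AREAS for _ in range(8)]
TABLE[0] = [(0.002, -0.1)] * N_AREAS    # hypothetical parameters for one group

def residual_correction(x, group, table=TABLE):
    """Evaluate the group's piecewise-linear correction at pixel position x."""
    area = min(int(x * N_AREAS / N_PIXELS), N_AREAS - 1)
    slope, intercept = table[group][area]
    return slope * x + intercept
```

An edge at pixel 200 classified into the first group would then be shifted by `residual_correction(200.0, 0)` before the least-squares phase fit, mirroring how the phase detecting unit 106 applies the stored parameters.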
- The creation and saving of the ABS pattern correction data are executed in combination with a normal test prior to the shipping of the encoder, for example.
- Processing executed in the phase detecting unit 106 is described next.
- An image obtained by the image sensor 3 is processed by the method described in the first embodiment to the third embodiment, up through the computation in the rough detection unit 105, and a bit string in the look-up table that corresponds to the rough absolute position is sent to the phase detecting unit 106 along with the edge pixel position 11 and the edge direction 50. The phase detecting unit 106 identifies, for each edge pixel position 11, a group to which the edge pixel position 11 belongs out of the groups of FIG. 18, based on the edge direction 50, the bit string that corresponds to the rough absolute position, and the two adjacent pixels in front of and past the edge pixel position 11.
- The phase detecting unit 106 next acquires from the ABS pattern correction data memory 133 the correction parameters for the identified group, and calculates an edge correction amount at the edge pixel position 11 from the obtained correction parameters. When the calculated correction amount is given as the edge correction amount δ2(x), the edge pixel position 11 is corrected by adding δ2(x) to the edge pixel position 11 in the case of the rising edge 51 and in the case of the falling edge 52 both. The phase detecting unit 106 uses the thus corrected edge pixel position 11 to acquire the phase shift amount θ, and the absolute position is calculated with high precision.
- According to this configuration, where the ABS pattern correction data memory 133 is provided, a bit string is divided between the rising edge 51 and the falling edge 52 into eight groups in total, and the edge pixel position 11 is corrected by an edge correction amount obtained in advance for each group separately, an error due to the effect of diffraction is eliminated, and the absolute position can be detected with high precision.
- While the data in the ABS pattern correction data memory 133 is obtained in advance in the fourth embodiment, an ABS pattern correction data recalculating unit 133a, which is indicated by the broken line in FIG. 17, may be provided as in the third embodiment to update the ABS pattern correction data memory 133.
FIG. 22 is a schematic diagram illustrating an example of the hardware configuration of the absolute position computing unit 5 in the absolute encoder according to each embodiment of the present invention. In FIG. 22, an interface (I/F) 551, a processor 552, a memory 553, and an alarm device 554 are connected to a bus line BL by bus connection. The I/F 551 receives signals from the A/D converter 4 and others. The memory 553 stores a program of the processing executed by the processor 552, and various types of data relevant to the processing. The alarm device 554 sounds an alarm or issues an alert in other ways in the event of, for example, an encoder anomaly.
- The functions of the light amount correcting unit 100, the smoothing processing unit 101, the edge detecting unit 102, the edge position correcting unit 103, the decoding unit 104, the rough detection unit 105, the phase detecting unit 106, the high precision detection unit 107, the correction data recalculating unit 123, the ABS pattern correction data recalculating unit 133a, and other units in FIG. 1, FIG. 11, FIG. 15, and FIG. 17 are stored as a program in, for example, the memory 553, and are executed by the processor 552. The edge correction data memory 113 in FIG. 11, FIG. 15, and FIG. 17 and the ABS pattern correction data memory 133 in FIG. 17 correspond to the memory 553.
- The memory 553 also stores, among others, the light amount correction values measured in advance and the look-up table for the bit strings forming the absolute value code pattern 300, which are described in the first embodiment, and the calculated edge position residual errors and the bit string corresponding to the rough absolute position acquired by the rough detection unit 105, which are described in the fourth embodiment. The residual error saving memory is built from the memory 553.
- The functions of the light amount correcting unit 100, the smoothing processing unit 101, the edge detecting unit 102, the edge position correcting unit 103, the decoding unit 104, the rough detection unit 105, the phase detecting unit 106, the high precision detection unit 107, the correction data recalculating unit 123, the ABS pattern correction data recalculating unit 133a, and other units, as well as the generation of the data written in the memories used by the respective units, may be implemented by digital circuits that execute the respective functions, instead of by the processor.
- The first embodiment to fourth embodiment of the present invention can be used in combination or alone.
- While the first embodiment to fourth embodiment of the present invention describe a reflective optical system, the present invention is also applicable to a transmissive optical system. The present invention is not limited to the rotary encoder for detecting the rotation angle described in the embodiments, and is also applicable to linear encoders for measuring the position on a straight line.
- While the first embodiment to fourth embodiment of the present invention describe the case where only one track having the code pattern 300 is provided on the scale 200, the present invention is also applicable to encoders that have a plurality of tracks.
- The present invention has been described through preferred embodiments. However, it should be understood that other alterations and changes can be made within the spirit and scope of the present invention. The appended claims are therefore intended to encompass all modifications and changes that are within the true spirit and scope of the present invention.
Claims (12)
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/861,212 US9605981B1 (en) | 2015-09-22 | 2015-09-22 | Absolute encoder |
PCT/JP2016/061644 WO2017051559A1 (en) | 2015-09-22 | 2016-04-04 | Absolute encoder |
CN201680054040.1A CN108027259B (en) | 2015-09-22 | 2016-04-04 | Absolute encoder |
KR1020187007021A KR102008632B1 (en) | 2015-09-22 | 2016-04-04 | Absolute encoder |
JP2017511358A JP6355827B2 (en) | 2015-09-22 | 2016-04-04 | Absolute encoder |
DE112016004275.2T DE112016004275T5 (en) | 2015-09-22 | 2016-04-04 | Absolute encoders |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/861,212 US9605981B1 (en) | 2015-09-22 | 2015-09-22 | Absolute encoder |
Publications (2)
Publication Number | Publication Date |
---|---|
US20170082463A1 true US20170082463A1 (en) | 2017-03-23 |
US9605981B1 US9605981B1 (en) | 2017-03-28 |
Family
ID=55863155
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/861,212 Expired - Fee Related US9605981B1 (en) | 2015-09-22 | 2015-09-22 | Absolute encoder |
Country Status (6)
Country | Link |
---|---|
US (1) | US9605981B1 (en) |
JP (1) | JP6355827B2 (en) |
KR (1) | KR102008632B1 (en) |
CN (1) | CN108027259B (en) |
DE (1) | DE112016004275T5 (en) |
WO (1) | WO2017051559A1 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
WO2019198264A1 (en) * | 2018-04-12 | 2019-10-17 | Mitsubishi Electric Corporation | Encoder, absolute positioning encoder method, and absolute positioning encoder system |
US10795151B2 (en) | 2018-04-12 | 2020-10-06 | Mitsubishi Electric Research Laboratories, Inc. | Methods and systems for terahertz-based positioning |
CN112585431A (en) * | 2018-06-07 | 2021-03-30 | P·M·约翰逊 | Linear and rotary multi-track absolute position encoder and method of use |
US11162818B2 (en) * | 2018-09-27 | 2021-11-02 | Melexis Technologies Sa | Sensor device, system and related method |
US11341342B2 (en) * | 2018-02-14 | 2022-05-24 | Aeolus Robotics Corporation Limited | Optical encoder and method of operating the same |
CN116481582A (en) * | 2023-06-21 | 2023-07-25 | 深圳深蕾科技股份有限公司 | Precision detection system of incremental photoelectric encoder |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7102058B2 (en) | 2017-05-22 | 2022-07-19 | 株式会社ミツトヨ | Photoelectric encoder |
CN108827353B (en) * | 2018-07-03 | 2020-06-02 | 吉林大学 | Pseudo-random code and increment code synchronization method |
JP7078486B2 (en) * | 2018-08-01 | 2022-05-31 | 株式会社トプコン | Angle detection system and angle detection method |
US10886932B2 (en) | 2018-09-11 | 2021-01-05 | Tt Electronics Plc | Method and apparatus for alignment adjustment of encoder systems |
KR102082476B1 (en) * | 2018-10-17 | 2020-02-27 | 한국표준과학연구원 | 2D Absolute Position Measuring Method And Absolute Position Measuring Apparatus |
WO2022044323A1 (en) * | 2020-08-31 | 2022-03-03 | 三菱電機株式会社 | Absolute encoder |
CN112284429B (en) * | 2020-12-31 | 2021-03-26 | 深圳煜炜光学科技有限公司 | Method and device for correcting uniformity of laser radar code disc |
CN113686365B (en) * | 2021-09-02 | 2022-06-17 | 北京精雕科技集团有限公司 | Absolute position measuring device |
CN115077574B (en) * | 2022-04-28 | 2023-10-20 | 横川机器人(深圳)有限公司 | Inductance type absolute value encoder based on environmental induction |
Family Cites Families (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS56142404A (en) * | 1980-04-09 | 1981-11-06 | Nec Corp | System for measuring plate width |
JPH0776684B2 (en) * | 1985-07-03 | 1995-08-16 | 北陽電機株式会社 | Optical contour measuring device |
DE3825295C2 (en) * | 1988-07-26 | 1994-05-11 | Heidelberger Druckmasch Ag | Device for detecting the position of a paper edge |
JPH0518782A (en) * | 1991-07-10 | 1993-01-26 | Sony Corp | Absolute position detecting device |
JP3412897B2 (en) | 1994-02-18 | 2003-06-03 | 三菱電機株式会社 | Absolute encoder |
JP2002286425A (en) * | 2001-03-23 | 2002-10-03 | Omron Corp | Displacement sensor |
JP2003247856A (en) * | 2002-02-26 | 2003-09-05 | Toyota Motor Corp | Apparatus and method for detecting position |
JP2003262649A (en) * | 2002-03-08 | 2003-09-19 | Fuji Electric Co Ltd | Speed detector |
US6664535B1 (en) * | 2002-07-16 | 2003-12-16 | Mitutoyo Corporation | Scale structures and methods usable in an absolute position transducer |
JP4292571B2 (en) * | 2003-03-31 | 2009-07-08 | 株式会社デンソー | Magnetic sensor adjustment method and magnetic sensor adjustment device |
JP2005249452A (en) * | 2004-03-02 | 2005-09-15 | Konica Minolta Medical & Graphic Inc | Linear encoder, image reading device and image recording device |
KR100544207B1 (en) * | 2004-07-30 | 2006-01-23 | 삼성전자주식회사 | Method and apparatus for adjusting alignment of image forming device |
JP4885630B2 (en) * | 2006-07-05 | 2012-02-29 | 株式会社ミツトヨ | Two-dimensional encoder and its scale |
DE102007045362A1 (en) * | 2007-09-22 | 2009-04-02 | Dr. Johannes Heidenhain Gmbh | Position measuring device |
DE102007061287A1 (en) | 2007-12-19 | 2009-06-25 | Dr. Johannes Heidenhain Gmbh | Position measuring device and method for absolute position determination |
JP5011201B2 (en) | 2008-05-01 | 2012-08-29 | 株式会社ミツトヨ | Absolute position measuring encoder |
JP5103267B2 (en) * | 2008-05-13 | 2012-12-19 | 株式会社ミツトヨ | Absolute position measuring encoder |
JP5560873B2 (en) * | 2010-04-21 | 2014-07-30 | 株式会社ニコン | Encoder and encoder position detection method |
JP5832088B2 (en) | 2010-12-15 | 2015-12-16 | キヤノン株式会社 | Rotary encoder |
US20120283986A1 (en) * | 2011-05-03 | 2012-11-08 | Ashok Veeraraghavan | System and Method for Measuring Positions |
JP6074672B2 (en) | 2011-10-28 | 2017-02-08 | 株式会社ミツトヨ | Displacement detection device, displacement detection method, and displacement detection program |
US20130204574A1 (en) * | 2012-02-07 | 2013-08-08 | Amit Agrawal | Method for Estimating Positions Using Absolute Encoders |
US20150377654A1 (en) * | 2012-02-07 | 2015-12-31 | Mitsubishi Electric Research Laboratories, Inc. | Method and System for Estimating Positions Using Absolute Encoders |
DE112014002505T5 (en) * | 2013-05-21 | 2016-04-28 | Mitsubishi Electric Corporation | Method for self-calibrating a rotary encoder |
JP6149740B2 (en) * | 2014-01-23 | 2017-06-21 | 三菱電機株式会社 | Absolute encoder |
2015
- 2015-09-22 US US14/861,212 patent/US9605981B1/en not_active Expired - Fee Related
2016
- 2016-04-04 WO PCT/JP2016/061644 patent/WO2017051559A1/en active Application Filing
- 2016-04-04 DE DE112016004275.2T patent/DE112016004275T5/en not_active Withdrawn
- 2016-04-04 CN CN201680054040.1A patent/CN108027259B/en active Active
- 2016-04-04 JP JP2017511358A patent/JP6355827B2/en active Active
- 2016-04-04 KR KR1020187007021A patent/KR102008632B1/en active IP Right Grant
Also Published As
Publication number | Publication date |
---|---|
CN108027259A (en) | 2018-05-11 |
WO2017051559A1 (en) | 2017-03-30 |
US9605981B1 (en) | 2017-03-28 |
KR102008632B1 (en) | 2019-08-07 |
CN108027259B (en) | 2020-09-18 |
KR20180040629A (en) | 2018-04-20 |
JP2017531783A (en) | 2017-10-26 |
JP6355827B2 (en) | 2018-07-11 |
DE112016004275T5 (en) | 2018-05-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9605981B1 (en) | Absolute encoder | |
EP2466267B1 (en) | Absolute rotary encoder | |
JP5379761B2 (en) | Absolute encoder | |
JP5837201B2 (en) | Method and apparatus for determining position | |
US8785838B2 (en) | Absolute rotary encoder | |
US6304190B1 (en) | Method for determining the absolute angular position of the steering wheel of a motor vehicle, and optoelecronic steering angle sensor | |
US8227744B2 (en) | Absolute position length measurement type encoder | |
JP4703059B2 (en) | Photoelectric encoder | |
JP2016014574A (en) | Absolute encoder | |
US20150377654A1 (en) | Method and System for Estimating Positions Using Absolute Encoders | |
JP6149740B2 (en) | Absolute encoder | |
EP2477006A1 (en) | High resolution absolute linear encoder | |
WO2018163424A1 (en) | Absolute encoder | |
EP2275782B1 (en) | High resolution absolute rotary encoder | |
US9534936B2 (en) | Reference signal generation apparatus and reference signal generation system | |
JP5974154B2 (en) | Rotary encoder | |
JPH0335111A (en) | Absolute position detecting device | |
JP5701740B2 (en) | Optical encoder | |
WO2017043249A1 (en) | Method and apparatus for determining position on scale | |
JP2018119816A (en) | Absolute encoder, industrial machine, and product manufacturing method | |
JP2014106210A (en) | Absolute encoder and method for finding absolute position | |
WO2014132631A1 (en) | Absolute encoder | |
JP6023561B2 (en) | Measuring device, measuring method, and absolute encoder |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MITSUBISHI ELECTRIC RESEARCH LABORATORIES, INC., M Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:THORNTON, JAY E.;AGRAWAL, AMIT;SIGNING DATES FROM 20150818 TO 20150910;REEL/FRAME:036623/0104 Owner name: MITSUBISHI ELECTRIC CORPORATION, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:NOGUCHI, TAKUYA;NASU, OSAMU;NAKAJIMA, HAJIME;AND OTHERS;SIGNING DATES FROM 20150723 TO 20150729;REEL/FRAME:036623/0045 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20210328 |