WO2022264576A1 - Distance measurement device and distance measurement method - Google Patents


Info

Publication number
WO2022264576A1
Authority
WO
WIPO (PCT)
Prior art keywords
color pattern
light
color
pixel
pattern
Prior art date
Application number
PCT/JP2022/011891
Other languages
French (fr)
Japanese (ja)
Inventor
Kazuhiko Muraoka (村岡 和彦)
Original Assignee
Sony Semiconductor Solutions Corporation
Priority date
Filing date
Publication date
Application filed by Sony Semiconductor Solutions Corporation
Publication of WO2022264576A1


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B 11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 3/00 Measuring distances in line of sight; Optical rangefinders
    • G01C 3/02 Details
    • G01C 3/06 Use of electric means to obtain final indication

Definitions

  • The present disclosure relates to a distance measurement device and a distance measurement method.
  • A method is generally known in which a light source image projected onto the surface of an object from a light source at one viewpoint is observed by a camera placed at another viewpoint, and the shape of the object is measured from the observation.
  • Such active methods project two-dimensional color patterns such as stripe patterns.
  • The present disclosure provides a distance measuring device and a distance measuring method capable of suppressing the influence of the color of the object to be measured.
  • According to one aspect of the present disclosure, a distance measuring device is provided that comprises: a projection unit that sequentially projects a predetermined first color pattern and a predetermined second color pattern onto a measurement target; and an information processing unit that measures the distance to the measurement target based on the first color pattern and the second color pattern, wherein the light of the first color pattern and the light of the second color pattern projected onto the same projection area are light of different wavelength bands.
  • The light projected onto the same projection area by the first color pattern and the second color pattern may be composed of a combination of light of three wavelength bands: light of at least one of the three wavelength bands may be projected in the first color pattern, and light of the remaining wavelength bands of the three may be projected in the second color pattern.
  • Each of the light in the three wavelength bands may be light in the red band, the green band, or the blue band.
  • An imaging unit that sequentially images the measurement target onto which the first color pattern and the second color pattern are projected may also be provided.
  • The imaging unit may include a timing generation unit that generates a control signal for controlling the projection timing of the first color pattern and the second color pattern in the projection unit and the imaging timing of the imaging unit.
  • The imaging unit may have a pixel array in which imaging pixels are arranged two-dimensionally, each imaging pixel including: a red pixel that is sensitive to light in the red band and whose sensitivity to the green and blue bands is suppressed relative to light in the red band; a green pixel that is sensitive to light in the green band and whose sensitivity to the red and blue bands is suppressed relative to light in the green band; and a blue pixel that is sensitive to light in the blue band and whose sensitivity to the red and green bands is suppressed relative to light in the blue band.
  • Each of the red pixel, the green pixel, and the blue pixel within the imaging pixel may generate a larger pixel signal for one of the first color pattern and the second color pattern.
  • For each of the red pixel, the green pixel, and the blue pixel in the imaging pixel, the imaging unit may convert a voltage corresponding to the difference between a first accumulated charge amount accumulated according to the first color pattern and a second accumulated charge amount accumulated according to the second color pattern into a digital pixel signal.
  • Each of the red pixel, the green pixel, and the blue pixel in the imaging pixel may be controlled such that the imaging time for the first color pattern and the imaging time for the second color pattern are equal.
  • The information processing unit may include an image generation unit that generates a color image of the measurement target based on the output signals of the imaging pixels for the first color pattern and the output signals of the imaging pixels for the second color pattern.
  • The information processing unit may include a measuring unit that, based on the digital pixel signals, sequentially detects patterns from digital images of the first color pattern and the second color pattern, associates the detected patterns with at least one of the first color pattern and the second color pattern projected by the projection unit, and generates the distance to each portion of the measurement target.
  • The first color pattern and the second color pattern may be composed of a plurality of partitioned regions, and the color of the light in each partitioned region may be light of any one of the red band, the green band, and the blue band, or of any combination of two of the red band, the green band, and the blue band.
  • The plurality of partitioned regions may be grouped, and the color of the light may differ for each partitioned region within a group.
  • All combinations of two adjacent colors may be different.
  • The colors of the plurality of partitioned regions may form a cyclic pattern based on a de Bruijn sequence.
  • The plurality of partitioned regions may form a tile pattern arranged two-dimensionally in a matrix.
  • For each of the red pixel, the green pixel, and the blue pixel in the imaging pixel, the imaging unit may convert a voltage corresponding to the first accumulated charge amount accumulated according to the first color pattern into a first digital signal, convert a voltage corresponding to the second accumulated charge amount accumulated according to the second color pattern into a second digital signal, and use the difference between them as the digital pixel signal.
  • The information processing unit may include a pattern generation unit that generates the first color pattern and the second color pattern, and a signal containing information on the first color pattern and the second color pattern may be supplied to the projection unit.
  • The imaging unit may be composed of pixels in which the infrared wavelength band is divided into three wavelength bands and each pixel is sensitive only to its respective wavelength band, and the first color pattern and the second color pattern may be configured in each of the three wavelength bands of the infrared region.
  • The imaging unit may be composed of pixels in which the wavelength band is divided into at least two wavelength bands and each pixel is sensitive only to its respective wavelength band.
  • According to another aspect of the present disclosure, a distance measurement method is provided that includes sequentially projecting a predetermined first color pattern and a predetermined second color pattern onto a measurement target and measuring the distance to the measurement target based on the first color pattern and the second color pattern, wherein the light of the first color pattern and the light of the second color pattern projected onto the same projection area are light of different wavelength bands.
  • A diagram for explaining the principle of measuring the distance to a measurement target.
  • A block diagram showing a more detailed configuration example of the distance measuring device according to the embodiment.
  • A schematic diagram showing a configuration example of a pixel array.
  • A diagram schematically showing a color arrangement example of a first color pattern and a second color pattern.
  • A diagram schematically showing a projection example of the first color pattern.
  • A diagram showing an example of projection onto a region that does not reflect specific light.
  • A table showing, in addition to the table shown in FIG. 6A, the reaction of each pixel.
  • A diagram showing a configuration example of imaging pixels.
  • A block diagram showing an example of an ADC circuit forming an AD converter.
  • A schematic diagram of differential exposure processing.
  • A diagram showing an example of a shutter timing chart.
  • A diagram showing an example of a floating diffusion transfer timing chart.
  • A diagram showing an example of a read timing chart.
  • A diagram showing a case where a pixel has sensitivity to the first color pattern.
  • A diagram showing a case where a pixel has sensitivity to the second color pattern.
  • A table showing the luminance when a pattern is projected onto a reflective object.
  • A table showing the result of performing differential AD conversion on an object having reflectance (R: 50%, G: 100%, B: 75%).
  • A table showing the results of conversion to significant color components.
  • A table showing results when the pattern shown in FIG. 6A is subjected to differential AD conversion according to the present embodiment.
  • A table showing the results of measuring an object that does not reflect blue light.
  • A table showing color stripe patterns generated with a de Bruijn sequence.
  • A diagram showing an example of extension of a cyclic pattern by the de Bruijn sequence algorithm.
  • A table showing a color tile pattern with color variations extended in the column direction.
  • A diagram showing the arrangement of the color tile pattern example of FIG. 26.
  • A diagram showing a timing chart for reading a counter difference and a digital difference.
  • A diagram showing the waveform of the comparator input of the ADC.
  • A diagram showing a case where a specific pixel has sensitivity to the second color pattern.
  • The distance measuring device and the distance measuring method described below may include components and functions that are not shown or described. The following description does not exclude components or features that are not shown or described.
  • FIG. 1 is a diagram showing a configuration example of a distance measuring device 1 according to this embodiment.
  • the distance measuring device 1 includes a projection section 100 , an imaging section 200 and an information processing section 300 .
  • the projection unit 100 according to the present embodiment is, for example, a projector, and sequentially projects a plurality of types of color patterns onto the measurement object T100.
  • a region T100a indicates a convex region of the measurement object T100.
  • The information processing section 300 includes, for example, a CPU (Central Processing Unit) or an MPU (Micro Processing Unit), and implements each processing section by executing a program stored in a storage section.
  • the information processing section 300 generates a plurality of types of color patterns to be projected from the projection section 100 and supplies them to the projection section 100 . Further, the information processing section 300 processes images of a plurality of types of color patterns captured by the imaging section 200, and generates distance information and color information of the measurement target T100.
  • the program used by the information processing section 300 may be stored in the storage section, or may be stored in a storage medium such as a DVD (Digital Versatile Disc), a cloud computer, or the like.
  • The program may be executed by a CPU (Central Processing Unit) or an MPU (Micro Processing Unit) using a RAM (Random Access Memory) or the like as a work area, or each processing section may be implemented by an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array).
  • the imaging unit 200 is, for example, a camera, and captures images of multiple types of color patterns projected onto the measurement object T100.
  • FIG. 2 is a diagram for explaining the principle of measuring the distance to the measurement object T100.
  • the information processing section 300 generates distance information based on the principle of triangulation.
  • Let A be the viewpoint of the projection unit 100, B be the viewpoint of the imaging unit 200, and L be the length of the line segment AB connecting A and B.
  • The distance L and the angles of the projection unit 100 and the imaging unit 200 with respect to the line segment AB are set in the information processing unit 300 in advance.
  • The distance to a point P on the measurement object T100 is calculated by formula (1).
  • The angle α on the projection unit 100 side is set in advance for each projection angle of the color patterns, which will be described later.
  • The angle β is calculated using the positional information of the color pattern on the image captured by the imaging unit 200. That is, the distance to and the surface shape of the measurement object T100 are measured by calculating the angles α and β for each point P at the edge of each color pattern over the entire surface of the measurement object T100.
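  • The following minimal Python sketch illustrates this triangulation step. It assumes that formula (1) is the standard triangulation relation d = L * sin(α) * sin(β) / sin(α + β) for the triangle ABP, with α the angle at the projection-unit viewpoint A and β the angle at the imaging-unit viewpoint B; since formula (1) itself is not reproduced in this excerpt, the sketch is indicative only.

      import math

      def triangulate_distance(baseline_l, alpha_deg, beta_deg):
          """Perpendicular distance from the baseline AB to a point P.

          alpha_deg: angle at the projection-unit viewpoint A (known per projected stripe)
          beta_deg:  angle at the imaging-unit viewpoint B (derived from the pixel position)
          Assumes d = L * sin(alpha) * sin(beta) / sin(alpha + beta).
          """
          a = math.radians(alpha_deg)
          b = math.radians(beta_deg)
          return baseline_l * math.sin(a) * math.sin(b) / math.sin(a + b)

      # Example: a 0.2 m baseline, projection angle 60 deg, imaging angle 70 deg.
      print(round(triangulate_distance(0.2, 60.0, 70.0), 4))  # ~0.2125 (same unit as L)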
  • FIG. 3 is a block diagram showing a more detailed configuration example of the distance measuring device 1 according to this embodiment.
  • the projection unit 100 is, for example, a projector. This projector is composed of, for example, a light source, a liquid crystal panel, and a lens.
  • the projection unit 100 sequentially projects the first color pattern (pattern1) and the second color pattern (pattern2) onto the measurement object T100 according to the synchronization signal generated by the imaging unit 200 .
  • the liquid crystal panel is transmissive and has elements that transmit or block the light emitted from the light source. These elements are arranged in a matrix to generate an image input to the projector.
  • The liquid crystal panel has red (R), green (G), and blue (B) pixels arranged in a matrix, and is sequentially driven by the image input to the projector, that is, by the first color pattern and the second color pattern.
  • Light emitted from the light source passes through the liquid crystal panel and is projected onto the screen by the lens.
  • the first color pattern and the second color pattern are sequentially projected onto the measurement object T100 in accordance with the synchronization signal generated by the imaging unit 200.
  • red, green, and blue may be simply described as R, G, and B.
  • In the projector, by controlling the turning on of the light source, the difference in scanning speed between the pixel array 206 of the imaging unit 200 and the projection unit 100 is suppressed, and the pixel array 206 and the projection unit 100 can operate in synchronization.
  • In addition to the light source control function, the projector can update the first color pattern and the second color pattern on the liquid crystal panel before the light emission timing. Details of the update timing will be described later.
  • A digital light processing (DLP) element or the like may be used instead of the liquid crystal panel as the image generation element of the projector. In this case, light emitted from the light source is reflected by the DLP element and projected onto the screen through a lens. Since the projection of the projector can be controlled by the light emission of the light source, such a projector can also be used as the projection unit 100 according to the present embodiment.
  • the imaging unit 200 has a lens 202 and an image sensor 204 .
  • An image of the measurement object T100 is formed on the image sensor 204 via the lens 202 and captured by photoelectric conversion. The image sensor 204 includes a pixel array 206, an AD converter 208, a signal processor 210, an image output interface 212, a timing controller 214, a pixel controller 216, and a timing output interface 218.
  • FIG. 4 is a schematic diagram showing a configuration example of the pixel array 206.
  • the pixel array 206 is configured by arranging a plurality of imaging pixels P206 in a matrix.
  • each imaging pixel P206 has an RGB pixel.
  • The imaging pixel P206 includes a red (R) pixel that photoelectrically converts light passing through a red filter, a green (G) pixel that photoelectrically converts light passing through a green filter, and a blue (B) pixel that photoelectrically converts light passing through a blue filter.
  • each pixel uses a type that transfers accumulated charges (here, photoelectrons) accumulated by photoelectric conversion of the photodiode PD to the floating diffusion FD and accumulates them (see Patent Document 2).
  • With a pixel of the type in which a photodiode PD and a floating diffusion FD form a pair, the difference between a voltage corresponding to the first accumulated charge amount, corresponding to the first color pattern, and a voltage corresponding to the second accumulated charge amount, corresponding to the second color pattern, can be AD-converted at once to generate a digital pixel value.
  • The color pattern according to the present embodiment is composed of a combination of red light corresponding to the transmission band of the red filter, green light corresponding to the transmission band of the green filter, and blue light corresponding to the transmission band of the blue filter.
  • the red (R) color pixel is configured such that its sensitivity is suppressed and does not react to blue light and green light in the color pattern.
  • the green (G) color pixel is configured to have reduced sensitivity and not react to red and blue light in the color pattern.
  • the blue (B) color pixel is configured such that it has reduced sensitivity and does not react to red and green light in the color pattern.
  • Each imaging pixel P206 has a red (R) pixel that is sensitive to light in the red band and whose sensitivity to the green and blue bands is suppressed relative to light in the red band, a green (G) pixel that is sensitive to light in the green band and whose sensitivity to the red and blue bands is suppressed relative to light in the green band, and a blue (B) pixel that is sensitive to light in the blue band and whose sensitivity to the red and green bands is suppressed relative to light in the blue band.
  • In this embodiment, a pixel of the type in which a photodiode PD and a floating diffusion FD are paired is used, but the present invention is not limited to this.
  • In this embodiment, the pixels are composed of red (R) pixels, green (G) pixels, and blue (B) pixels, but the configuration is not limited to this.
  • For example, the infrared wavelength band may be divided into three separate wavelength bands, and pixels having sensitivity only to each wavelength band may be configured.
  • In this case, the color pattern is composed of each of the three separate wavelength bands.
  • Hereinafter, the red (R) pixel, green (G) pixel, and blue (B) pixel may simply be referred to as the R pixel, G pixel, and B pixel, respectively.
  • The pixel control unit 216 controls the shutter, floating diffusion FD transfer, and readout for each of the R, G, and B pixels that make up each imaging pixel P206 via horizontally wired control lines.
  • The control lines are shared in the horizontal direction, and row addresses (vertical addresses) are assigned from top to bottom.
  • Wirings VSL1 to VSLn arranged in the vertical direction are signal lines for supplying R, G, and B pixel data to the AD converter 208 (see FIG. 3); the signal charge accumulated in the pixels selected by the control lines is input to the AD converter 208.
  • a voltage corresponding to the accumulated signal charge accumulated in each pixel of the pixel array 206 is supplied to the AD converter 208 via the wirings VSL1 to VSLn.
  • the AD conversion unit 208 AD-converts the voltage corresponding to the accumulated signal charge and converts it into a digital pixel signal, which is digital data.
  • the AD converter 208 performs, for example, differential AD conversion.
  • The AD conversion unit 208 linearly converts the voltage corresponding to the accumulated signal charge (photoelectrons) of each pixel accumulated by exposure, and computes the difference between the first accumulated charge amount of the first frame, corresponding to imaging of the first color pattern, and the second accumulated charge amount of the second frame, corresponding to imaging of the second color pattern. Details of an operation example of the AD conversion unit 208 will be described later.
  • the signal processing unit 210 supplies the digital image data generated by the AD conversion unit 208 to the image output interface 212 .
  • the signal processing unit 210 can perform various signal processing such as noise reduction processing and threshold processing.
  • the image output interface 212 performs level conversion to the signal line.
  • the image output interface 212 then supplies the image data generated by the AD converter 208 to the information processor 300 .
  • The timing control unit 214 performs control so that the projection timing of the first color pattern and the second color pattern from the projection unit 100, the imaging timing of the first color pattern and the second color pattern in each pixel of the pixel array 206, and the AD conversion timing of the AD conversion unit 208 are synchronized. More specifically, the timing control unit 214 controls the drive timing of the differential AD conversion for the AD conversion unit 208, and controls, for the pixel control unit 216, the imaging timing of the first color pattern and the second color pattern in each pixel of the pixel array 206.
  • The pixel control unit 216 controls the imaging timing of the first color pattern and the second color pattern in each pixel of the pixel array 206 according to the timing control of the timing control unit 214, as described above. More specifically, it controls the timing of initialization of each pixel of the pixel array 206, photoelectric conversion in the photodiode PD, transfer from the photodiode PD to the floating diffusion FD, and so on.
  • the timing output interface 218 converts the level of the control signal including the synchronization timing information generated by the timing control unit 214 to the signal line. Then, the timing output interface 218 supplies the information processing section 300 with a control signal including information on the synchronization timing.
  • The information processing unit 300 includes a timing input interface 302, a pattern generation unit 304, a projector output interface 306, a light emission control unit 308, an image input interface 310, a second signal processing unit 312, an image generation unit 314, a three-dimensional measurement calculation unit 316, and a data output interface 318.
  • the timing input interface 302 inputs a control signal including synchronization timing information generated by the timing control section 214 .
  • the pattern generator 304 generates the first color pattern and the second color pattern according to the synchronization timing generated by the timing controller 214 .
  • the projector output interface 306 supplies an image signal including information on the first color pattern and the second color pattern generated by the pattern generation unit 304 to the projection unit 100 .
  • the light emission control unit 308 controls the light emission of the light source of the projection unit 100 in correspondence with the projection timing of the first color pattern and the second color pattern according to the synchronization timing generated by the timing control unit 214 . That is, the light emission control unit 308 supplies a light emission control signal including information on light emission timing to the projection unit 100 via the projector output interface 306 .
  • the image input interface 310 inputs image data captured by the imaging unit 200 and supplies the data to the second signal processing unit 312 .
  • the second signal processing unit 312 performs RGB normalization and binarization processing of the image data captured by the imaging unit 200 . Details of these processes will be described later.
  • the image generation unit 314 generates normal RGB color captured image data based on the RGB-normalized image data.
  • the image generator 314 then supplies the captured image data to the data output interface 318 .
  • the image generator 314 can also perform gamma conversion, frequency processing, noise reduction processing, and the like on the captured image data.
  • The three-dimensional measurement calculation unit 316 uses the binarized image data to obtain the projection angle α of the first color pattern and the second color pattern and the angle β at the corresponding location in the image captured by the imaging unit 200. The three-dimensional measurement calculation unit 316 then calculates the distance d of each point based on formula (1), generates three-dimensional measurement data, and supplies it to the data output interface 318. In this way, the three-dimensional measurement calculation unit 316 detects patterns from the digital images obtained by imaging the first color pattern and the second color pattern, associates the detected patterns with at least one of the first color pattern and the second color pattern projected by the projection unit 100, and generates the distance to each portion of the measurement object T100. Note that the three-dimensional measurement calculation unit according to this embodiment corresponds to the measuring unit.
  • the data output interface 318 supplies captured image data and three-dimensional measurement data supplied from the image generation unit 314 to a storage unit (storage), a screen, and the like.
  • the three-dimensional measurement data means the distance to each part of the measurement target T100.
  • FIG. 5 is a diagram schematically showing a color arrangement example of the first color pattern and the second color pattern.
  • The first color pattern and the second color pattern are repeatedly and sequentially projected onto the measurement target T100 by the projection unit 100.
  • In FIG. 5, the first color pattern and the second color pattern are shown on separate lines, but they are projected in order onto the same position.
  • For example, the stripe designated C6 in the second color pattern is projected onto the same position as the stripe C1 in the first color pattern.
  • C1 to C6 correspond to color codes (see FIG. 6B), which will be described later.
  • FIG. 6A shows the combinations of red (R) light, green (G) light, and blue (B) light for the first color pattern (pattern1) and the second color pattern (pattern2).
  • FIG. 6B is a table showing color code examples.
  • Color number C1 corresponds to red, color number C2 to pink, color number C3 to green, color number C4 to yellow, color number C5 to blue, and color number C6 to sky blue.
  • The stripe number (color bar number) in FIG. 6A corresponds to the position of each stripe in FIG. 5.
  • Projected colors (colors) indicate colors corresponding to the color codes (C1 to C6) in FIG. 6B.
  • R/G/B indicates a combination of red (R: red) light, green (G: green) light, and blue (B: blue) light.
  • Red light is indicated by 1/0/0 and consists of red (R) light.
  • Pink light is indicated by 1/0/1 and is composed of red (R) light and blue (B) light.
  • Green light is indicated by 0/1/0 and is composed of green (G) light.
  • Yellow light is indicated by 1/1/0 and is composed of red (R) light and green (G) light.
  • Blue light is indicated by 0/0/1 and is composed of blue (B) light.
  • Sky blue light is indicated by 0/1/1 and is composed of green (G) light and blue (B) light.
  • For example, where red light of color number C1 (1/0/0) is projected in the first color pattern, R/G/B in the second color pattern is 0/1/1; that is, sky blue light of color number C6 is projected onto the same position.
  • In this way, the light projected onto the same projection area by the first color pattern and the second color pattern is composed of a combination of light of three wavelength bands: red (R) light, green (G) light, and blue (B) light. In the first color pattern, light of at least one of the three wavelength bands (for example, red) is projected, and in the second color pattern, light of the remaining wavelength bands (for example, green (G) light and blue (B) light) is projected.
  • In this description, the relationship in which the constituent colors of the first color pattern, for example 1/0/0 [R/G/B], and the constituent colors of the second color pattern, for example 0/1/1 [R/G/B], add up to 1/1/1 [R/G/B] is referred to as exclusive or complementary.
  • When red light of color number C1, composed of 1/0/0 [R/G/B], is projected in the first color pattern, the R pixel reacts, that is, performs photoelectric conversion; but when sky blue light of color number C6, composed of 0/1/1 [R/G/B], is projected in the second color pattern, the R pixel does not perform photoelectric conversion.
  • The G pixel does not perform photoelectric conversion when the red light of color number C1, composed of 1/0/0 [R/G/B], is projected, but performs photoelectric conversion when the sky blue light of color number C6, composed of 0/1/1 [R/G/B], is projected.
  • Similarly, the B pixel does not perform photoelectric conversion when the red light of color number C1 is projected, but performs photoelectric conversion when the sky blue light of color number C6 is projected.
  • In this way, each of the red, green, and blue pixels in the imaging pixel P206 generates a larger pixel signal for one of the first color pattern and the second color pattern.
  • Likewise, the R pixel performs photoelectric conversion when pink light of color number C2, composed of 1/0/1 [R/G/B], is projected in the first color pattern, but does not perform photoelectric conversion when green light of color number C3, composed of 0/1/0 [R/G/B], is projected in the second color pattern.
  • The G pixel does not perform photoelectric conversion when the pink light of color number C2, composed of 1/0/1 [R/G/B], is projected, but performs photoelectric conversion when the green light of color number C3, composed of 0/1/0 [R/G/B], is projected in the second color pattern.
  • The B pixel performs photoelectric conversion when the pink light of color number C2, composed of 1/0/1 [R/G/B], is projected, but does not perform photoelectric conversion when the green light of color number C3, composed of 0/1/0 [R/G/B], is projected in the second color pattern.
  • In this way, the first color pattern and the second color pattern are projected as exclusively combined projection light: (Color No. C1, Color No. C6), (Color No. C2, Color No. C3), (Color No. C3, Color No. C2), (Color No. C4, Color No. C5), (Color No. C5, Color No. C4), and (Color No. C6, Color No. C1). Note that the combinations are not limited to these as long as they are exclusive.
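  • The exclusive (complementary) pairing described above can be illustrated with the following minimal Python sketch. The 3-bit R/G/B codes and color numbers follow the table described above; the helper function names are hypothetical.

      # Color codes C1-C6 as (R, G, B) on/off triplets, as in the table above.
      COLOR_CODES = {
          "C1": (1, 0, 0),  # red
          "C2": (1, 0, 1),  # pink
          "C3": (0, 1, 0),  # green
          "C4": (1, 1, 0),  # yellow
          "C5": (0, 0, 1),  # blue
          "C6": (0, 1, 1),  # sky blue
      }

      def complementary_code(rgb):
          """Return the exclusive (complementary) R/G/B triplet: the two add up to 1/1/1."""
          return tuple(1 - c for c in rgb)

      def second_pattern(first_pattern):
          """Derive the second color pattern from the first, stripe by stripe."""
          inverse = {code: name for name, code in COLOR_CODES.items()}
          return [inverse[complementary_code(COLOR_CODES[c])] for c in first_pattern]

      # Example: stripes C1..C6 in the first pattern map to C6, C3, C2, C5, C4, C1.
      print(second_pattern(["C1", "C2", "C3", "C4", "C5", "C6"]))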
  • FIG. 7 is a diagram schematically showing a projection example of the first color pattern.
  • the observation point P is the part where the color changes.
  • The pattern generator 304 assigns IDs R1, R2, R3, R4, ... to the stripes of color number C1.
  • the pattern generator 304 assigns IDs such as pi1 to pin, G1 to Gn, Y1 to Yn, B1 to Bn, sb1 to sbn, etc. to the color numbers C2 to C6.
  • FIG. 8 is a diagram showing an example of projection onto a region T100a where the specific light is not reflected.
  • FIG. 8A shows a front view and a top view of the convex region T100a of the measurement object T100.
  • the convex region T100a has a yellow color. Therefore, blue (B) light is absorbed by the yellow of the convex region T100a and is hardly reflected.
  • FIG. 8B shows a state in which the first color pattern is projected onto the convex region T100a of the measurement object T100.
  • FIG. 8C shows a state in which the second color pattern is projected onto the convex region T100a of the measurement target T100. Areas B1 and B2 that are imaged as black in the convex area T100a are also shown.
  • FIG. 9 is a table showing the reaction of each pixel RGB as imaging R/G/B in addition to the table shown in FIG. 6A.
  • blue (B) light is absorbed by the yellow of the convex region T100a and is hardly reflected.
  • the reaction of the B pixel indicated by imaging R/G/B is shown as 0.
  • For the blue light of color number C5 (Color No. 5), composed of projection 0/0/1 [R/G/B], the imaging result is 0/0/0 [R/G/B], so almost no optical signal is obtained from any of the RGB pixels.
  • the imaging pixel P206 indicated by imaging 0/0/0 [R/G/B] is imaged as a black area B1 (see FIG. 8).
  • However, the first color pattern and the second color pattern are combined exclusively (or complementarily), for example (Color No. C4, Color No. C5) and (Color No. C5, Color No. C4). Therefore, the yellow light of color number C4 (Color No. C4), composed of 1/1/0 [R/G/B] and projected onto the same position, has a reflection component.
  • FIG. 10 is a table in the case of projecting only the first color pattern as a comparative example.
  • In this comparative example, with the blue (B) light the imaging result becomes 0/0/0 [R/G/B], and almost no optical signal is obtained from any RGB pixel. For this reason, in the area where the blue light of stripe number 5 is projected, it becomes impossible to distinguish whether the object is a black area or a yellow area.
  • On the other hand, as shown in FIG. 9, when the exclusively combined first color pattern and second color pattern are projected, a reflection component is obtained for at least one of the first color pattern and the second color pattern even if the convex region T100a is colored. Therefore, when the exclusively combined first color pattern and second color pattern are projected, it is possible to determine whether the object is a black area (or the object is at infinity) or a yellow area (where a specific color is absorbed).
  • In this way, the light projected onto the same projection area by the first color pattern and by the second color pattern is light of different wavelength bands.
  • a color pattern can be configured such that even if light in one wavelength band is absorbed by the measurement target T100, light in the other wavelength band is reflected.
  • the projection light is a combination of red (R: red) light, green (G: green) light, and blue (B: blue) light, but the present invention is not limited to this.
  • the light projected onto the same projection area for the first color pattern and the second color pattern may be light of different wavelength bands.
  • the wavelength band may be divided into at least two wavelength bands, and the pixels forming the imaging unit 200 (see FIG. 2) may be configured to have sensitivity only in each wavelength band.
  • FIG. 11 is a diagram showing a configuration example of the imaging pixel P206. As shown in FIG. 11, each pixel forming the imaging pixel P206 has a photodiode PD, transistors SW1, SW2, SW3, and AMP.
  • the control wires consist of three control lines RST, TRG and SEL.
  • the photodiode PD has one end connected to the ground GND and the other end connected to one end of the transistor SW1.
  • the other end of this transistor SW1 is connected to the floating diffusion FD.
  • One end of the transistor SW2 is connected to the floating diffusion FD, and the other end is connected to one end of the transistor AMP.
  • One end of the transistor AMP is connected to the VSS line, and the other end is connected to one end of the transistor SW3.
  • the other end of transistor SW3 is connected to the VSL line.
  • the gate of the transistor SW1 is connected to the control line TRG
  • the gate of the transistor SW2 is connected to the control line RST
  • the gate of the transistor SW3 is connected to the control line SEL
  • the gate of the transistor AMP is connected to the floating diffusion FD.
  • Hereinafter, the photodiode PD may be simply referred to as the PD, and the floating diffusion FD as the FD.
  • FIG. 12 is a block diagram showing an example of an ADC circuit forming the AD conversion section 208.
  • The ADC circuit (see FIG. 12) has multiple AZ circuits, a comparator, and a counter.
  • the DAC circuit has an internal counter (not shown), and outputs a voltage value corresponding to the counter value while decrementing.
  • The AZ circuit adjusts the voltage of the input signal to the AZ reference voltage during the period indicated by the auto-zero signal, and holds the voltage offset between the input and the output. The AZ circuit can maintain this voltage offset beyond the indicated period.
  • This ADC circuit has two AZ circuits for the input from the DAC circuit and the input from the VSL line. The AZ circuits output the voltage offset to the comparator.
  • FIG. 13 is a schematic diagram of the differential exposure processing. An example of the differential exposure processing will be described with reference to FIG. 13. The image Im130 indicates the driving timing of the image sensor 204; the horizontal axis indicates time, and the vertical axis indicates the vertical address of the pixel array 206 (see FIG. 4). The image sensor 204 sequentially processes each vertical address, so the oblique lines show how the vertical address is incremented and processed.
  • An image Im132 shows a flash strobe signal, which is a control signal for the timing of projector light emission.
  • An image Im134 indicates generation and update timings of the first color pattern and the second color pattern, respectively.
  • An image Im136 indicates the timing of projector light emission.
  • Image Im138 shows the projection timings of the first color pattern and the second color pattern.
  • As shown in the image Im130, in the pixel array 206 (see FIG. 4), pixel reset (shutter), FD transfer, readout ("read"), and AD conversion are performed sequentially for each row.
  • The left line L130 indicates the pixel reset ("shutter") timing, the center line L132 indicates the FD transfer timing, and the right line L134 indicates the readout ("read") timing.
  • pixel reset may be described as a shutter.
  • FD transfer is performed at the central timing of pixel reset (shutter) and readout "read".
  • The parallelogram formed from the pixel reset (shutter) to the FD transfer and the parallelogram formed from the FD transfer to the readout ("read") are congruent.
  • The exposure from the pixel reset (shutter) to the FD transfer is defined as the first frame, and the exposure from the FD transfer to the readout ("read") is defined as the second frame. That is, the first frame corresponds to imaging of the first color pattern, and the second frame corresponds to imaging of the second color pattern.
  • The timing generation unit 214 (see FIG. 3) in the image sensor 204 generates the pixel reset (shutter) start timing, the FD transfer start timing, the read start timing, and the flash strobe timing, which is the projector light emission timing.
  • A control signal including information on the pixel reset (shutter) start timing, the FD transfer start timing, the read start timing, and the projector light emission timing is supplied to the pixel control unit 216 and used as a trigger for pixel control.
  • The flash strobe signal, which is the control signal for the projector light emission timing, is sent from the timing output I/F 218 of the image sensor 204 to the pattern generation unit 304 and the light emission control unit 308 via the timing input I/F 302 of the information processing unit 300.
  • The projection unit 100 emits light in accordance with the flash strobe signal, which is the control signal for the projector light emission timing, received from the image sensor 204.
  • The flash emission periods are set to timings at which the pixel reset (shutter), FD transfer, and readout ("read") operations are stopped in each of the two exposures, and the lengths of the two flash emission (flash strobe) periods are made equal.
  • the first color pattern and the second color pattern are updated during the period in which the light source does not emit light.
  • Screen update is triggered by the timing at which the flash strobe signal received from the imaging unit 200 changes from high to low.
  • the screen of the first color pattern (Pattern 1) is displayed in advance, and the screen of the second color pattern (Pattern 2) is updated at the changing point of the first flash strobe signal from High to Low. .
  • the screen is updated with the first color pattern (Pattern 1) at the point where the second flash strobe signal changes from High to Low, and this is repeated thereafter.
  • the projection unit 100 emits light by controlling the light source of the projection unit 100 .
  • a light emitting diode (LED), a halogen lamp, a xenon flash, or the like can be used as the light source.
  • Each pixel of the imaging pixel P206 sequentially accumulates the signal charge corresponding to the first color pattern and the signal charge corresponding to the second color pattern, and supplies them to the VSL line. In other words, these pixels are sequentially driven through the shutter, FD transfer, and readout ("read") operations.
  • FIG. 14 is a diagram illustrating an example of a shutter timing chart.
  • Signal levels of the control lines RST, TRG, and SEL are shown in order from the top.
  • When the control line TRG goes high (High), the transistor SW1 becomes conductive, and the charge of the photodiode PD is conducted to the FD.
  • When the control line RST is high, the transistor SW2 is brought into a conductive state, and the charge of the floating diffusion FD is conducted to the VSS line.
  • the charges in the photodiode PD and the floating diffusion FD flow to the VSS line and the charges are cleared.
  • After that, the transistors SW1 and SW2 are cut off, and the charge of the photodiode PD no longer flows out.
  • Hereinafter, the cut-off state may be described as a non-conducting state.
  • FIG. 15 is a diagram showing an example of a transfer timing chart of the floating diffusion FD. Signal levels of the control lines RST, TRG, and SEL are shown in order from the top.
  • By setting the control line TRG to high, the transistor SW1 becomes conductive, and the charge of the photodiode PD is transferred to and held in the floating diffusion FD.
  • the control line TRG is set to low to cut off the transistor SW1, so that the charges accumulated by the exposure of the photodiode PD are no longer transferred to the FD.
  • In this way, the charge accumulated in the photodiode PD by the exposure of the first frame, which corresponds to the first color pattern, is moved to the floating diffusion FD and held there.
  • FIG. 16 is a diagram showing an example of a read timing chart. Signal levels of the control lines RST, TRG, and SEL are shown in order from the top. As shown in FIG. 16, in the read control, auto zero ("Auto zero"), FD reset, FD transfer, and AD conversion are performed in succession.
  • First, the transistor SW3 is brought into conduction by setting the control line SEL to high. The voltage of the FD is amplified by the amplifying action of the transistor AMP and conducted to the VSL line, so that a voltage corresponding to the charge amount of the first-frame exposure accumulated in the FD appears on the VSL line.
  • the AD converter 208 records and holds the offset between the potential of the VSL line and the AZ reference voltage by means of the AZ circuit mechanism.
  • Next, the control line SEL is set to low again, the transistor SW3 is rendered non-conducting, and changes in the charge amount of the FD are no longer conducted to the VSL line.
  • In the FD reset, the transistor SW2 is made conductive by setting the control line RST to high, and the charge in the FD is conducted to the VSS line and reset. The transistor SW2 is then rendered non-conductive.
  • Next, the transistor SW1 is made conductive by setting the control line TRG to high. This transfers the charge of the photodiode PD, which was exposed and accumulated during the second frame, that is, the second color pattern, to the FD, where it is held.
  • The control line SEL is then set to high to turn on the transistor SW3, the voltage of the FD is amplified by the amplifying action of the transistor AMP and conducted to the VSL line, and the AD converter 208 AD-converts the voltage value conducted to the VSL line.
  • In this way, the difference between the voltage proportional to the accumulated exposure charge of the first frame, corresponding to the first color pattern, and the voltage proportional to the accumulated exposure charge of the second frame, corresponding to the second color pattern, is AD-converted.
  • SEL is set to low to bring the transistor SW3 into a non-conducting state.
  • FIGS. 17 and 18 are diagrams showing the waveforms of the comparator inputs (see FIG. 12), namely VSL (after passing through the AZ circuit) L170 and DAC (after passing through the AZ circuit), when the first color pattern and the second color pattern are projected.
  • FIG. 17 is a diagram showing a case where pixels have sensitivity to the first color pattern, that is, a case of photoelectric conversion.
  • the first color pattern and the second color pattern are configured exclusively, so photoelectric conversion is performed in only one of the first color pattern and the second color pattern in a specific pixel.
  • In the VSL (after passing through the AZ circuit) waveform L170 in FIG. 17, the accumulated charges (electrons) are negative, so the more charge there is, the larger the negative voltage swing. In FIG. 17, the downward direction corresponds to a lower (more negative) voltage.
  • the control states in FIG. 17 correspond to the read “read” timing chart in FIG.
  • the voltage of charge accumulated in FD appears at VSL (after passing through AZ) L170. Therefore, since there is a value corresponding to the accumulated charge in the first color pattern imaged in the first frame, the voltage of VSL (after passing through AZ) L170 greatly swings downward.
  • At the auto-zero timing, the AZ circuit (see FIG. 12) records the offset. During the subsequent FD reset and FD transfer, the VSL line is in a high-impedance (Hi-Z) state, so it is maintained without changing significantly. The value of the second frame is then read; for a pixel sensitive to the first color pattern, the downward swing of the VSL (after passing through the AZ circuit) voltage is smaller than that of the first frame, so the waveform swings upward greatly.
  • AD conversion is performed after the VSL (after passing through AZ) voltage L170 is stabilized. This shows single-slope AD conversion that linearly changes the output of the DAC circuit (see FIG. 12) from higher voltage to lower voltage.
  • the ADC counter increments at regular intervals from 0 at the slope start of the DAC. At the intersection of the VSL (after AZ) voltage L170 and the DAC voltage, the comparator flips and the counter stops counting.
  • the value of this counter becomes digital data and is supplied to the signal processing unit 210 .
  • The counter and the slope of the DAC circuit are synchronized, and the count value corresponding to the AZ reference voltage is constant and preset. Therefore, by subtracting the count value of the AZ reference voltage from the count value at which the comparator inverts, the AD conversion result can be obtained as a positive or negative value. As shown in FIG. 17, if the pixel is sensitive to the first color pattern (pattern1), the result is a negative value.
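  • The counting scheme can be illustrated with the minimal Python sketch below. The voltage values, LSB size, and reference level are made-up numbers chosen only to reproduce the sign behavior described above (a negative code for a pixel sensitive to the first color pattern, a positive code for a pixel sensitive to the second); it is not the device's actual implementation.

      def single_slope_adc(vsl_after_az, az_reference=1.0, slope_start=2.0,
                           lsb_volts=0.001):
          """Signed single-slope conversion as sketched above.

          The DAC ramps down linearly from slope_start while a counter increments
          from 0; the counter stops where the DAC crosses the VSL voltage (taken
          after the auto-zero circuit). Subtracting the preset count of the AZ
          reference voltage yields a signed result.
          """
          count_at_vsl = round((slope_start - vsl_after_az) / lsb_volts)
          count_at_reference = round((slope_start - az_reference) / lsb_volts)
          return count_at_vsl - count_at_reference

      # Pixel sensitive to pattern1: at conversion time the VSL voltage sits above
      # the AZ reference (the second frame added little charge) -> negative code.
      print(single_slope_adc(vsl_after_az=1.1))   # -100
      # Pixel sensitive to pattern2: VSL ends up below the reference -> positive code.
      print(single_slope_adc(vsl_after_az=0.9))   # +100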
  • FIG. 18 is a diagram showing the case where the pixels have sensitivity to the second color pattern, that is, the case of photoelectric conversion.
  • the AD conversion method is the same procedure as in FIG. 17 described above.
  • In this case, the amount of light reaching the photoelectric conversion element of the pixel is small in the first frame and large in the second frame, so at readout the VSL voltage swings downward and the result of the AD conversion is a positive value.
  • a specific pixel has a negative value when it is sensitive to the first color pattern (pattern1), and a positive value when it is sensitive to the second color pattern (pattern2).
  • In this way, it is possible to determine the color codes of the first color pattern (pattern1) and the second color pattern (pattern2) with a single AD conversion.
  • Furthermore, the combination of the first color pattern (pattern1) and the second color pattern (pattern2) with the image sensor 204 that performs differential AD conversion reduces the influence of ambient light, which becomes noise, and makes it possible to acquire the two color patterns at the same time.
  • The first frame, which contains the first color pattern (pattern1), and the second frame, which contains the second color pattern (pattern2), are both also exposed to reflected ambient light, and this becomes noise in the measurement. Conventionally, a darkroom-like environment is therefore prepared to suppress such noise light during distance measurement; with the present method, the degree of freedom in the measurement environment increases.
  • Let N1 be the ambient-light exposure component of the first frame and N2 be that of the second frame. Since the two exposure periods are equal, N1 ≈ N2. Therefore, by performing the differential AD conversion, in which the exposure of the first frame is subtracted from the exposure of the second frame, the ambient-light reflection components N1 and N2 cancel out, and only the information of the exposure components of the color patterns P1C and P2C is extracted.
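  • A small numerical sketch of this cancellation, with made-up charge values, is shown below; it only illustrates that equal ambient components N1 and N2 drop out of the frame difference.

      # Hypothetical accumulated-charge contributions for one pixel (arbitrary units).
      pattern1_signal = 0.0    # this pixel's filter blocks the first pattern's light
      pattern2_signal = 180.0  # and passes the second pattern's light
      ambient_n1 = 40.0        # ambient-light exposure in the first frame
      ambient_n2 = 40.0        # ambient-light exposure in the second frame (N1 ~= N2)

      frame1 = pattern1_signal + ambient_n1
      frame2 = pattern2_signal + ambient_n2

      # Differential AD conversion subtracts the first-frame exposure from the second.
      difference = frame2 - frame1
      print(difference)  # 180.0 -> ambient components cancel, leaving only P2C - P1C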
  • Moreover, the image sensor 204 according to the present embodiment separates colors using color filters that each pass only one of R, G, and B, as described above, so each pixel holds only a value corresponding to the emission of either the first color pattern (pattern1) or the second color pattern (pattern2). For example, as described above, when the stripes of Color bar No. 1 are projected, the R pixel reacts only to the first color pattern, and the G pixel and the B pixel react only to the second color pattern.
  • Alternatively, AD conversion could be performed twice, once in the first period and once in the second period, followed by arithmetic subtraction.
  • However, AD conversion consumes considerable power, the amount of data is large, and the arithmetic processing also consumes power.
  • In contrast, by performing the AD conversion at once as described above, the power consumption, the amount of data, and the arithmetic processing can be reduced.
  • the difference is obtained by the analog circuit, but the corresponding processing may be performed by digital calculation.
  • FIG. 19 is a table showing luminance when the color pattern shown in FIG. 6A is projected onto an object with reflectance (R: 50%, G: 100%, B: 75%). That is, it represents the brightness of the surface when projected onto an object whose reflectance is R (red) 50%, G (green) 100%, and B (blue) 75%.
  • The values are expressed with a resolution of 0 to 255.
  • FIG. 20 is a table showing the result of performing differential AD conversion by projecting the color pattern shown in FIG. 6A onto an object with reflectance (R: 50%, G: 100%, B: 75%).
  • The values are expressed with a resolution of -255 to 255.
  • Components detected for the first color pattern (pattern1) take negative values, and components detected for the second color pattern (pattern2) take positive values.
  • the values of (R, G, B) with resolution in the range of -255 to 255 are supplied to the second signal processing unit 312 as an image signal.
  • The second signal processing unit 312 sets the result to 1 when the value AD after the differential AD conversion is larger than a threshold value TH (assumed to be a positive number), that is, when AD > TH; sets the result to -1 when AD is smaller than the threshold multiplied by -1, that is, when AD < -TH; and sets the result to 0 when -TH ≤ AD ≤ TH.
  • the influence of noise can be excluded by setting the threshold value TH to a numerical value that takes noise into account.
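  • A minimal Python sketch of this ternarization is shown below; the threshold and the example values are illustrative only (the example uses the differential values that Color bar No. 1 would give on an object with reflectance R: 50%, G: 100%, B: 75%, as in the tables described above).

      def significant_component(ad_value, threshold):
          """Ternarize one differential AD value as described above.

          Returns 1 (responds to pattern2), -1 (responds to pattern1) or
          0 (below the noise threshold). The threshold is a positive number
          chosen with the expected noise level in mind.
          """
          if ad_value > threshold:
              return 1
          if ad_value < -threshold:
              return -1
          return 0

      # Color bar No. 1 gives differential values of roughly (-127, 255, 191).
      print([significant_component(v, threshold=32) for v in (-127, 255, 191)])  # [-1, 1, 1]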
  • FIG. 21 is a table showing the results of conversion into significant color components. That is, it is a table showing an example of the results of conversion into significant color components by the second signal processing unit 312 based on the values after minute AD conversion.
  • FIG. 22 is a table showing the result when the pattern shown in FIG. 6A is subjected to differential AD conversion according to this embodiment.
  • the calculation result obtained by subtracting the first color pattern (pattern1) from the second color pattern (pattern2) according to the equation (2) is shown.
  • The row of values after the theoretical differential AD conversion matches the values generated by the second signal processing unit 312. In this way, the pattern can be identified from the image sensor measurement even when the reflectance of the object is taken into account.
  • The second signal processing unit 312 can also estimate and complement a measurement point from its surrounding conditions, for example from the color results of its left and right neighbors.
  • For example, the left neighbor of Color bar No. 1 is No. 6, the left neighbor of No. 2 is No. 1, and the right neighbor of No. 2 is No. 3; complementary guessing can be performed using features such as the fact that the number to the right of No. 1 is No. 2.
  • In the case of stripes, the measurement points are the color change points, so the second signal processing unit 312 detects all change points, such as the boundary between No. 1 and No. 2 and the boundary between No. 2 and No. 3, and records the image position of each change point and which Color bar No. it belongs to.
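  • A minimal Python sketch of this change-point detection is shown below; the data layout (a per-pixel list of Color bar Nos. for one image row) is an assumption made for illustration.

      def detect_change_points(color_bar_ids):
          """Record stripe boundaries in one image row.

          color_bar_ids: the Color bar No. determined for each pixel of the row
          (None where no significant color could be determined). Returns a list
          of (pixel_x, left_no, right_no) tuples, one per detected change point.
          """
          changes = []
          for x in range(1, len(color_bar_ids)):
              left, right = color_bar_ids[x - 1], color_bar_ids[x]
              if left is not None and right is not None and left != right:
                  changes.append((x, left, right))
          return changes

      # A row covering stripes No. 1, 2 and 3 yields boundaries at columns 3 and 5.
      print(detect_change_points([1, 1, 1, 2, 2, 3, 3]))  # [(3, 1, 2), (5, 2, 3)]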
  • The three-dimensional measurement calculation unit 316 calculates the angle between the imaging unit 200 and the measurement point from the image position, and calculates the angle between the projection unit 100 and the measurement point from the Color bar No. information. The distance to the measurement point is then calculated by triangulation from these two angles and a third piece of information, the predetermined distance L between the projection unit 100 and the imaging unit 200.
  • the three-dimensional measurement calculation unit 316 calculates distances at all measurement points P, and stores the data in storage or displays the data on a display via the data output I/F 318 .
  • the result for that color is 0 when there is no reflection.
  • FIG. 20 shows the brightness of each color reflected by illuminating an object with reflectance of R: 50%, G: 100%, and B: 75%.
  • Numerical values are normalized expressions in the range from 0 to 255. Thus, 50% is 127, 100% is 255, and 75% is 191.
  • For Color bar No. 1 in FIG. 20, the second signal processing unit 312 takes the absolute values of the differential results and obtains {127, 255, 191}.
  • If the white color components are {255, 255, 255} and the reflectance of the measurement object is {50%, 100%, 75%}, the colors {127, 255, 191} are obtained, which matches the value calculated from Color bar No. 1 in FIG. 20.
  • The second signal processing unit 312 (see FIG. 3) and the image generation unit 314 (see FIG. 3) perform this series of transformations on all pixels obtained by the image sensor 204.
  • The image generation unit 314 generates two-dimensional RGB image data based on the digital pixel signals of each imaging pixel P206, such as RGB {127, 255, 191}, and records the data in storage or displays it via the data output I/F 318.
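  • As an illustration, a minimal Python sketch of recovering an RGB value from the signed differential results of one imaging pixel is shown below; the exact normalization used by the device is not specified here, so the sketch simply takes absolute values, which reproduces the {127, 255, 191} example above.

      def reconstruct_rgb(differential_rgb):
          """Recover an RGB color value for one imaging pixel from its signed
          differential AD results.

          Because the two patterns are complementary, every color component is
          lit in exactly one of the two frames, so the absolute value of each
          signed component corresponds to the reflectance of the object under
          full (white) illumination.
          """
          return tuple(abs(v) for v in differential_rgb)

      # Color bar No. 1 on an object with reflectance (R: 50%, G: 100%, B: 75%):
      print(reconstruct_rgb((-127, 255, 191)))  # (127, 255, 191)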
  • FIG. 24 is a table showing a color stripe pattern (Color bar numbers) generated by a de Bruijn sequence. As shown in FIG. 24, the color pattern according to the present embodiment can also be constructed with a de Bruijn color arrangement or a matrix tile pattern.
  • FIG. 25 is a diagram showing an example of extending the cyclic pattern with the de Bruijn sequence algorithm. As shown in FIGS. 24 and 25, in the first color pattern (pattern1), for example, there is only one place with red on the left and green on the right; every combination of two adjacent colors occurs only once.
  • Each color component is either emitted or not emitted, that is, the patterns are created as exclusive combinations. Therefore, the second color pattern (pattern2) is composed of the color components not used in the first color pattern (pattern1). In the second color pattern (pattern2) as well, every combination of two adjacent colors is unique. These form a cycle of 30 colors; after the rightmost color of the first color pattern (pattern1), for example sky blue, the cycle returns to red and the 30-color cycle is repeated. The color pairs joined by this repetition are also unique combinations.
  • The processing described above is also possible with the color stripe pattern shown in FIG. 24.
  • The number (Color bar No.) of the projected color stripe that matches the captured color stripe pattern is searched for and identified using the de Bruijn sequence of FIG. 24.
  • Since the number of stripes increases compared to the six-color cycle, it is possible to widen the measurable depth range while maintaining the measurement granularity, or to increase the measurement granularity by narrowing the width of each color; a sketch of a plain de Bruijn generator follows this item.
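  • The sketch below shows a plain de Bruijn sequence generator for reference; it only illustrates the idea that every length-n window appears exactly once per cycle, and it does not enforce the additional constraint, used for the 30-color cycle above, that adjacent colors differ. The function name and the color list are illustrative.

```python
# Standard de Bruijn sequence generator over k symbols with window length n.
def de_bruijn(k, n):
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

colors = ["red", "pink", "green", "yellow", "blue", "sky blue"]  # C1..C6
cycle = de_bruijn(len(colors), 2)          # every ordered pair occurs once
stripes = [colors[i] for i in cycle]
print(len(cycle), stripes[:6])             # 36 stripes per cycle in this plain form
```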
  • FIG. 26 is a table showing color tile patterns in which color changes are expanded in the column direction.
  • FIG. 27 is a diagram showing the arrangement of the color tile pattern example of FIG. 26; the numbers in FIG. 27 correspond to the color tile numbers.
  • Each color component is created by a combination of emission and non-emission, and the second color pattern (pattern2) is composed of the color components not used in the first color pattern (pattern1). All combinations of two adjacent colors are unique in the second color pattern (pattern2) as well.
  • Color tile patterns can also be processed by the method already described. However, in cases such as when there is no reflection of a specific color component, in addition to estimation based on the left or right adjacent color, estimation based on the upper or lower color can be used. As a result, when there is an obstacle such as lack of reflection, the information for the complementary calculation increases, so the accuracy of the complementary operation is further improved.
  • The color tile pattern may be generated by the de Bruijn method or by another cyclic pattern.
  • FIG. 28 is a diagram showing a timing chart for reading the counter difference and the digital difference.
  • FIG. 28 is a diagram illustrating an example of performing AD conversion twice. Signal levels of the control lines RST, TRG, and SEL are shown in order from the top.
  • In the read state, auto zero ("Auto zero"), AD conversion, FD reset, FD transfer, and AD conversion are performed continuously. Note that the control of the shutter state and the FD transfer state is the same as in the earlier examples, whereas the control of the read state is different.
  • Specifically, auto zero ("Auto zero (2)"), AD conversion (2), FD reset, FD transfer, and AD conversion (3) are performed in succession.
  • In the auto zero state, the transistor SW11 of the pixel (see FIG. 11) is turned off, and the AD converter of the AD conversion unit 208 (see FIG. 4) records and stores the offset between the voltage of the VSL line and the AZ reference voltage.
  • In the AD conversion (2) state, the control line SEL is set to High, the transistor SW3 is rendered conductive, and the voltage of the FD, amplified by the transistor AMP, is transmitted to the VSL line. As a result, a voltage corresponding to the amount of charge accumulated in the FD during the first-frame exposure is conducted to the VSL line.
  • The AD converter (see FIG. 4) then performs the first AD conversion on the voltage of the VSL line.
  • Next, the FD reset is performed by setting the control line RST to High, bringing the transistor SW2 into a conducting state and draining the charge of the FD to the Vss line. The control line RST is then set to Low again to turn off the transistor SW2.
  • After that, the transistor SW1 is made conductive by setting the control line TRG to High, transferring the charge of the PD to the FD. As a result, the charge of the PD exposed and accumulated in the second frame is transferred to the FD and held.
  • Then, the control line SEL is set to High to bring the transistor SW3 into a conductive state, and the voltage of the FD is amplified by the transistor AMP and conducted to the VSL line.
  • The AD converter performs AD conversion of this value; that is, the voltage proportional to the amount of charge accumulated by the exposure in the second frame is AD-converted. Finally, the control line SEL is set to Low to bring the transistor SW3 into a non-conducting state.
  • FIG. 29 is a diagram showing the VSL (after passing through AZ) and DAC (after passing through AZ) waveforms at the ADC comparator input, corresponding to the timing chart of FIG. 28. FIG. 29 shows a case where a specific pixel has sensitivity to the first color pattern.
  • For the VSL (after passing through AZ) waveform L290 in FIG. 29, the accumulated charge is negative, so the more charge there is, the larger the negative voltage; in FIG. 29, the downward direction corresponds to a more negative voltage. The voltages of VSL (after passing through AZ) L290 and DAC (after passing through AZ) are aligned with the AZ reference voltage, and this is used as the reference point.
  • The AZ circuit then maintains this offset value and continues to output the offset-corrected signals as VSL (after passing through AZ) L290 and DAC (after passing through AZ).
  • In the AD conversion (2) state, the voltage corresponding to the charge accumulated in the FD appears on VSL. Since the charge accumulated in the first frame corresponds to the projected light, VSL (after passing through AZ) L290 swings downward greatly.
  • When VSL (after passing through AZ) L290 stabilizes, the first AD conversion is performed, counting with the ADC counter that was cleared to 0 in advance. The counter is decremented at this time because the difference processing is performed by the counter.
  • The comparator then inverts, and the ADC counter stops and holds the value. Meanwhile, the VSL line L290 is at high impedance (Hi-Z), so its level is maintained without significant change.
  • Next, the value of the second frame is read. In the second frame, this pixel has no sensitivity to the second color pattern, and mainly the exposure component of reflected ambient light is converted into stored charge. The voltage corresponding to the accumulated charge amount is therefore smaller than in the first frame, so VSL swings only slightly downward with respect to the AZ reference voltage; as shown in FIG. 29, relative to the large first-frame value this appears as an upward swing.
  • Then the second AD conversion is performed. This time, the ADC counter increments from the value held after the first AD conversion. The comparator then inverts and the counter stops counting. Since the first AD conversion counted in the negative direction, the final counter value is the difference between the two frames, as illustrated by the sketch below.
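  • The following simplified model illustrates the up/down counter behavior described above: the first conversion decrements the counter, the second increments it, so the final count equals the second-frame value minus the first-frame value. Real ramp slopes, comparator offsets, and clamping are ignored.

```python
# Toy model of the differential AD conversion by an up/down counter.
def updown_counter_difference(frame1_code, frame2_code):
    counter = 0
    counter -= frame1_code   # first AD conversion counts in the negative direction
    counter += frame2_code   # second AD conversion counts in the positive direction
    return counter           # = frame2 - frame1

# Pixel sensitive to the first color pattern: bright first frame -> negative result.
print(updown_counter_difference(200, 20))   # -180
# Pixel sensitive to the second color pattern: bright second frame -> positive result.
print(updown_counter_difference(20, 200))   # 180
```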
  • FIG. 30 is a diagram showing a case where specific pixels have sensitivity to the second color pattern.
  • The AD conversion procedure is the same as described above. Since the first frame is dark and the second frame is bright, the value of VSL (after passing through AZ) L300 in the AD conversion (2) state is small and swings only slightly downward from the AZ reference voltage, while in AD conversion (3) it swings greatly downward; the result is positive because the increment count is larger. Subsequent processing can be performed in the same manner as in FIGS. 17 and 18.
  • In the example above, the difference is obtained by decrementing and then incrementing the ADC counter; the counting direction may instead be inverted before the second conversion. Alternatively, the AD conversion of the first frame may be performed with increments during AD conversion (2) and the value sent to the signal processing unit; the ADC counter is then cleared to 0 before AD conversion (3), the AD conversion of the second frame is performed, that value is also sent to the signal processing unit, and the signal processing unit computes the difference between the two in a digital circuit.
  • As described above, according to the present embodiment, the projection unit 100 sequentially projects the first color pattern and the second color pattern onto the measurement object T100, and the information processing unit 300 measures the distance to the measurement object T100 based on the first color pattern and the second color pattern.
  • Here, the light of the first color pattern and the light of the second color pattern projected onto the same projection area are light of different wavelength bands.
  • As a result, the color patterns can be configured such that even if light in one wavelength band is absorbed by the measurement object T100, light in the other wavelength band is reflected.
  • Furthermore, the light projected onto the same projection area by the first color pattern and the second color pattern is composed of a combination of light of three wavelength bands; for the same projection area, the first color pattern projects light of at least one of the three wavelength bands, and the second color pattern projects light of the remaining wavelength bands.
  • This technology can be configured as follows.
  • A distance measuring device comprising: a projection unit that sequentially projects a predetermined first color pattern and a predetermined second color pattern onto an object to be measured; and an information processing unit that measures the distance to the measurement object based on the first color pattern and the second color pattern, wherein the light of the first color pattern and the light of the second color pattern projected onto the same projection area are light of different wavelength bands.
  • The light projected onto the same projection area by the first color pattern and the second color pattern is composed of a combination of light of three wavelength bands; for the same projection area, the first color pattern projects light of at least one of the three wavelength bands and the second color pattern projects light of the remaining wavelength bands.
  • Each of the light of the three wavelength bands is light of the red band, the green band, or the blue band.
  • The distance measuring device according to (4), wherein the imaging unit has a timing generation unit that generates a control signal for controlling the projection timing of the first color pattern and the second color pattern in the projection unit and the imaging timing of the imaging unit.
  • The distance measuring device according to (5), wherein the imaging unit has a pixel array in which imaging pixels are arranged two-dimensionally, each imaging pixel being composed of a red pixel having sensitivity to light in the red band and suppressed sensitivity to the green and blue bands, a green pixel having sensitivity to light in the green band and suppressed sensitivity to the red and blue bands, and a blue pixel having sensitivity to light in the blue band and suppressed sensitivity to the red and green bands.
  • The distance measuring device according to (6), wherein each of the red pixel, the green pixel, and the blue pixel in the imaging pixel generates more pixel signal for one of the first color pattern and the second color pattern than for the other.
  • The distance measuring device according to (6), wherein, for each of the red pixel, the green pixel, and the blue pixel in the imaging pixel, the imaging unit converts a voltage corresponding to the difference between a first accumulated charge amount accumulated according to the first color pattern and an accumulated charge amount accumulated according to the second color pattern into a digital pixel signal.
  • The distance measuring device according to (6), wherein each of the red pixel, the green pixel, and the blue pixel in the imaging pixel is controlled such that the imaging time for the first color pattern and the imaging time for the second color pattern are equal.
  • The distance measuring device according to any one of (6) to (9), wherein the information processing unit has an image generation unit that generates a color image of the measurement object based on the output signal of the imaging pixel for the first color pattern and the output signal of the imaging pixel for the second color pattern.
  • The distance measuring device, wherein the information processing unit has a measurement unit that, based on the digital pixel signals, sequentially detects patterns from the digital images of the first color pattern and the second color pattern, associates the detected patterns with at least one of the first color pattern and the second color pattern projected by the projection unit, and generates the distance to each part of the measurement object.
  • The first color pattern and the second color pattern are composed of a plurality of partitioned regions, and the color of the light in each partitioned region is light of one of the red band, the green band, and the blue band, or a combination of two of the red band, the green band, and the blue band.
  • The distance measuring device, wherein, for each of the red pixel, the green pixel, and the blue pixel in the imaging pixel, the imaging unit converts a voltage corresponding to a first accumulated charge amount accumulated according to the first color pattern into a first digital signal, converts a voltage corresponding to the accumulated charge amount accumulated according to the second color pattern into a second digital signal, and uses the difference between them as the digital pixel signal.
  • The distance measuring device according to any one of (1) to (17), wherein the information processing unit has a pattern generation unit that generates the first color pattern and the second color pattern and supplies a signal containing the information of the first color pattern and the second color pattern to the projection unit.
  • The distance measuring device according to (1), wherein the imaging unit is composed of pixels in which the infrared wavelength range is divided into three wavelength bands, each pixel having sensitivity only to its own wavelength band, and the first color pattern and the second color pattern are configured from the three wavelength bands of the infrared region.
  • The distance measuring device according to (1), wherein the imaging unit is composed of pixels in which the wavelength range is divided into at least two wavelength bands, each pixel having sensitivity only to its own wavelength band.
  • (21) A distance measuring method comprising: a projection step of sequentially projecting a predetermined first color pattern and a predetermined second color pattern onto a measurement object; and an information processing step of measuring the distance to the measurement object based on the first color pattern and the second color pattern, wherein the light of the first color pattern and the light of the second color pattern projected onto the same projection area are light of different wavelength bands.
  • 1 distance measuring device
  • 100 projection unit
  • 200 imaging unit
  • 206 pixel array
  • 214 timing control unit
  • 300 information processing unit
  • 304 pattern generation unit
  • 316 three-dimensional measurement calculation unit
  • 314 image generation unit
  • P206 imaging pixel

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

[Problem] To provide a distance measurement device and a distance measurement method in which it is possible to suppress the effect of the color of a measurement object. [Solution] According to the present disclosure, this distance measurement device comprises a projection unit for projecting a prescribed first color pattern and a prescribed second color pattern onto a measurement object in the stated order, and an information processing unit for measuring the distance to the measurement object on the basis of the first and second color patterns, the distance measurement device being such that lights projected in the same projection region in the first and second color patterns are of different wavelength bands.

Description

Distance measurement device and distance measurement method
The present disclosure relates to a distance measurement device and a distance measurement method.
As an active method applying the principle of triangulation, a method is generally known in which a light source image projected from a light source at one viewpoint onto the surface of an object is observed by a camera set at the other viewpoint to measure the shape. In such active methods, a two-dimensional color pattern such as a stripe pattern is projected.
Patent Document 1: Japanese Unexamined Patent Application Publication No. 2002-191058. Patent Document 2: Japanese Unexamined Patent Application Publication No. 2019-153822 (JP 2019-153822 A).
However, in order to image a color pattern, reflection at the wavelength of each color of the color pattern is required, so it may be difficult to measure a colored subject. In addition, since the surface of the object is illuminated with light of specific colors, the accuracy of observing the original color of the object may decrease.
Therefore, the present disclosure provides a distance measuring device and a distance measuring method capable of suppressing the influence of the color of the object to be measured.
In order to solve the above problems, according to the present disclosure, there is provided a distance measuring device comprising: a projection unit that sequentially projects a predetermined first color pattern and a predetermined second color pattern onto a measurement object; and an information processing unit that measures the distance to the measurement object based on the first color pattern and the second color pattern, wherein the light of the first color pattern and the light of the second color pattern projected onto the same projection area are light of different wavelength bands.
The light projected onto the same projection area by the first color pattern and the second color pattern may be composed of a combination of light of three wavelength bands; for the same projection area, the first color pattern may project light of at least one of the three wavelength bands, and the second color pattern may project light of the remaining wavelength bands among the three.
Each of the light of the three wavelength bands may be light of the red band, the green band, or the blue band.
An imaging unit that sequentially images the measurement object onto which the first color pattern and the second color pattern are projected may further be provided.
The imaging unit may have a timing generation unit that generates a control signal for controlling the projection timing of the first color pattern and the second color pattern in the projection unit and the imaging timing of the imaging unit.
The imaging unit may have a pixel array in which imaging pixels are arranged two-dimensionally, each imaging pixel being composed of a red pixel having sensitivity to light in the red band and suppressed sensitivity to the green and blue bands, a green pixel having sensitivity to light in the green band and suppressed sensitivity to the red and blue bands, and a blue pixel having sensitivity to light in the blue band and suppressed sensitivity to the red and green bands.
Each of the red pixel, the green pixel, and the blue pixel within the imaging pixel may generate more pixel signal for one of the first color pattern and the second color pattern than for the other.
For each of the red pixel, the green pixel, and the blue pixel in the imaging pixel, the imaging unit may convert a voltage corresponding to the difference between a first accumulated charge amount accumulated according to the first color pattern and an accumulated charge amount accumulated according to the second color pattern into a digital pixel signal.
Based on the control signal generated by the timing generation unit, each of the red pixel, the green pixel, and the blue pixel in the imaging pixel may be controlled such that the imaging time for the first color pattern and the imaging time for the second color pattern are equal.
The information processing unit may have an image generation unit that generates a color image of the measurement object based on the output signal of the imaging pixel for the first color pattern and the output signal of the imaging pixel for the second color pattern.
The information processing unit may have a measurement unit that, based on the digital pixel signals, sequentially detects patterns from the digital images of the first color pattern and the second color pattern, associates the detected patterns with at least one of the first color pattern and the second color pattern projected by the projection unit, and generates the distance to each part of the measurement object.
The first color pattern and the second color pattern may be composed of a plurality of partitioned regions, and the color of the light in each partitioned region may be light of one of the red band, the green band, and the blue band, or a combination of two of the red band, the green band, and the blue band.
The plurality of partitioned regions may be grouped, and the colors of the light of the partitioned regions within a group may all be different.
In the plurality of partitioned regions, the combinations of two adjacent colors may all be different.
The colors of the plurality of partitioned regions may form a cyclic pattern based on a de Bruijn sequence.
The plurality of partitioned regions may be a tile pattern arranged two-dimensionally in a matrix.
For each of the red pixel, the green pixel, and the blue pixel in the imaging pixel, the imaging unit may convert a voltage corresponding to a first accumulated charge amount accumulated according to the first color pattern into a first digital signal, convert a voltage corresponding to the accumulated charge amount accumulated according to the second color pattern into a second digital signal, and use the difference between them as the digital pixel signal.
The information processing unit may have a pattern generation unit that generates the first color pattern and the second color pattern, and may supply a signal containing the information of the first color pattern and the second color pattern to the projection unit.
The imaging unit may be composed of pixels in which the infrared wavelength range is divided into three wavelength bands, each pixel having sensitivity only to its own wavelength band, and the first color pattern and the second color pattern may be configured from the three wavelength bands of the infrared region.
The imaging unit may be composed of pixels in which the wavelength range is divided into at least two wavelength bands, each pixel having sensitivity only to its own wavelength band.
In order to solve the above problems, according to the present disclosure, there is provided a distance measuring method comprising: a projection step of sequentially projecting a predetermined first color pattern and a predetermined second color pattern onto a measurement object; and an information processing step of measuring the distance to the measurement object based on the first color pattern and the second color pattern, wherein the light of the first color pattern and the light of the second color pattern projected onto the same projection area are light of different wavelength bands.
FIG. 1 is a diagram showing a configuration example of a distance measuring device according to an embodiment.
FIG. 2 is a diagram for explaining the principle of measurement of the distance to a measurement object.
FIG. 3 is a block diagram showing a more detailed configuration example of the distance measuring device according to the embodiment.
FIG. 4 is a schematic diagram showing a configuration example of a pixel array.
FIG. 5 is a diagram schematically showing a color arrangement example of a first color pattern and a second color pattern.
FIG. 6A is a table showing combinations of red light, green light, and blue light in the color patterns.
FIG. 6B is a table showing color code examples.
FIG. 7 is a diagram schematically showing a projection example of the first color pattern.
FIG. 8 is a diagram showing an example of projection onto a non-reflective area.
FIG. 9 is the table shown in FIG. 6A with the reaction of each imaging R/G/B pixel added.
FIG. 10 is a table for the case where only the first color pattern is projected.
FIG. 11 is a diagram showing a configuration example of an imaging pixel.
FIG. 12 is a block diagram showing an example of an ADC circuit constituting the AD conversion unit.
FIG. 13 is a schematic diagram of differential exposure processing.
FIG. 14 is a diagram showing an example of a shutter timing chart.
FIG. 15 is a diagram showing an example of a floating diffusion transfer timing chart.
FIG. 16 is a diagram showing an example of a read timing chart.
FIG. 17 is a diagram showing a case where a pixel has sensitivity to the first color pattern.
FIG. 18 is a diagram showing a case where a pixel has sensitivity to the second color pattern.
FIG. 19 is a table showing the luminance when a pattern is projected onto an object with a given reflectance.
FIG. 20 is a table showing the result of projecting onto an object with reflectances (R: 50%, G: 100%, B: 75%) and performing differential AD conversion.
FIG. 21 is a table showing the results of conversion into significant color components.
FIG. 22 is a table showing the result when the pattern shown in FIG. 6A is subjected to the differential AD conversion according to the present embodiment.
FIG. 23 is a table showing the result of measuring an object that does not reflect blue light.
FIG. 24 is a table showing a color stripe pattern generated by a de Bruijn sequence.
FIG. 25 is a diagram showing an example of extension of a cyclic pattern by the de Bruijn sequence algorithm.
FIG. 26 is a table showing a color tile pattern in which color changes are extended in the column direction.
FIG. 27 is a diagram showing the arrangement of the color tile pattern example of FIG. 26.
FIG. 28 is a diagram showing a timing chart for reading the counter difference and the digital difference.
FIG. 29 is a diagram showing the waveforms of the ADC comparator inputs.
FIG. 30 is a diagram showing a case where a specific pixel has sensitivity to the second color pattern.
Hereinafter, embodiments of the distance measuring device and the distance measuring method will be described with reference to the drawings. The following description focuses on the main components of the distance measuring device and the distance measuring method, but the distance measuring device and the distance measuring method may include components and functions that are not illustrated or described; the following description does not exclude such components or functions.
(One embodiment)
FIG. 1 is a diagram showing a configuration example of a distance measuring device 1 according to this embodiment. As shown in FIG. 1, the distance measuring device 1 includes a projection unit 100, an imaging unit 200, and an information processing unit 300. The projection unit 100 according to the present embodiment is, for example, a projector, and sequentially projects a plurality of types of color patterns onto the measurement object T100. A region T100a indicates a convex region of the measurement object T100.
The information processing unit 300 includes, for example, a CPU (Central Processing Unit) or an MPU (Microprocessor), and configures each processing unit by executing a program stored in a storage unit. The information processing unit 300 generates the plurality of types of color patterns to be projected from the projection unit 100 and supplies them to the projection unit 100. The information processing unit 300 also processes the images of the plurality of types of color patterns captured by the imaging unit 200, and generates distance information and color information of the measurement object T100. The program used by the information processing unit 300 may be stored in the storage unit, or may be stored in a storage medium such as a DVD (Digital Versatile Disc), in a cloud computer, or the like. The program may be executed in the information processing unit 300 by a CPU or an MPU using a RAM (Random Access Memory) or the like as a work area, or may be executed by an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array).
The imaging unit 200 is, for example, a camera, and captures images of the plurality of types of color patterns projected onto the measurement object T100.
FIG. 2 is a diagram for explaining the principle of measuring the distance to the measurement object T100. As shown in FIG. 2, the information processing unit 300 generates distance information based on the principle of triangulation. Let A be the viewpoint of the projection unit 100, B be the viewpoint of the imaging unit 200, and L be the length of the line segment AB connecting A and B. In this case, the distance L and the angles of the projection unit 100 and the imaging unit 200 with respect to the line segment AB are set in the information processing unit 300 in advance.
The distance to a point P on the measurement object T100 is then calculated by equation (1). The angle α is set for each projection angle of the color patterns, which will be described later, and the angle β is calculated using the position information of the color pattern on the image captured by the imaging unit 200. That is, by calculating the angles α and β for each point P at the edges of each color pattern over the entire surface of the measurement object T100, the distance to and the surface shape of the measurement object T100 are measured.
[Equation (1)]
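As a hedged illustration only: equation (1) itself is not reproduced here, but a common triangulation form for the distance from the baseline AB to the point P, given the baseline length L and the angles α and β measured from AB at the two viewpoints, is sketched below. The function name and the choice of the perpendicular distance are assumptions for illustration, not the patent's exact formula.

```python
import math

def triangulate(L, alpha, beta):
    """Perpendicular distance from the baseline AB to the measurement point P,
    given baseline length L and the angles alpha and beta (radians) that the
    two lines of sight make with the baseline."""
    return L * math.sin(alpha) * math.sin(beta) / math.sin(alpha + beta)

# Example: a 0.1 m baseline with both lines of sight at 60 degrees to AB.
d = triangulate(0.1, math.radians(60), math.radians(60))
print(f"{d:.3f} m")  # about 0.087 m
```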
FIG. 3 is a block diagram showing a more detailed configuration example of the distance measuring device 1 according to this embodiment. As described above, the projection unit 100 is, for example, a projector composed of, for example, a light source, a liquid crystal panel, and a lens. The projection unit 100 sequentially projects the first color pattern (pattern1) and the second color pattern (pattern2) onto the measurement object T100 in accordance with the synchronization signal generated by the imaging unit 200.
The liquid crystal panel is transmissive and has elements that transmit or block the light emitted from the light source. These elements are arranged in a matrix to reproduce the image input to the projector. In the liquid crystal panel, pixels of red (R), green (G), and blue (B) are arranged in a matrix and are driven in turn by the images input to the projector, that is, by the first color pattern and the second color pattern. The light emitted from the light source passes through the liquid crystal panel and is projected onto the screen by the lens. In this way, in the present embodiment, the first color pattern and the second color pattern are sequentially projected onto the measurement object T100 in accordance with the synchronization signal generated by the imaging unit 200. In this embodiment, red, green, and blue may simply be written as R, G, and B.
In this projector, controlling the lighting of the light source suppresses the difference between the scanning speeds of the pixel array 206 of the imaging unit 200 and of the projection unit 100, enabling synchronized operation of the pixel array 206 and the projection unit 100. For example, in addition to the light source control function, the projector according to this embodiment can update the first color pattern and the second color pattern on the liquid crystal panel before the light emission timing; the details of the update timing will be described later. The image generation element used in the projector may also be a digital light processing (DLP) device or the like instead of the liquid crystal panel. In that case, the light emitted from the light source is reflected by the DLP device and projected onto the screen through the lens. Since the projection of the projector can be controlled by the light emission of the light source, such a device can also be used for the projection unit 100 according to the present embodiment.
The imaging unit 200 has a lens 202 and an image sensor 204. The image of the measurement object T100 is formed on the image sensor 204 through the lens 202 and captured by the principle of photoelectric conversion. The image sensor 204 has a pixel array 206, an AD conversion unit 208, a signal processing unit 210, an image output interface 212, a timing control unit 214, a pixel control unit 216, and a timing output interface 218.
FIG. 4 is a schematic diagram showing a configuration example of the pixel array 206. As shown in FIG. 4, the pixel array 206 is configured by arranging a plurality of imaging pixels P206 in a matrix, and each imaging pixel P206 has RGB pixels. For example, the imaging pixel P206 has a red (R) pixel that photoelectrically converts light passing through a red filter, a green (G) pixel that photoelectrically converts light passing through a green filter, and a blue (B) pixel that photoelectrically converts light passing through a blue filter.
For example, each pixel is of a type that transfers the charge (here, photoelectrons) accumulated by the photoelectric conversion of the photodiode PD to a floating diffusion FD and accumulates it there (see Patent Document 2). By using this type of pixel, in which the photodiode (PD) and the floating diffusion FD form a pair, the difference between the voltage corresponding to the first accumulated charge amount for the first color pattern and the voltage corresponding to the second accumulated charge amount for the second color pattern can be AD-converted at once to generate a digital pixel value. Details of the pixels of the pixel array 206 and of the AD conversion operation will be described later.
As will be described later, the color patterns according to the present embodiment are composed of combinations of red light corresponding to the transmission band of the red filter, green light corresponding to the transmission band of the green filter, and blue light corresponding to the transmission band of the blue filter. For this reason, the red (R) pixel is configured so that its sensitivity to the blue light and the green light in the color patterns is suppressed and it does not react to them. Similarly, the green (G) pixel is configured so that its sensitivity to the red light and the blue light in the color patterns is suppressed, and the blue (B) pixel is configured so that its sensitivity to the red light and the green light in the color patterns is suppressed. That is, each imaging pixel P206 is composed of a red (R) pixel having sensitivity to light in the red band and suppressed sensitivity to the green and blue bands, a green (G) pixel having sensitivity to light in the green band and suppressed sensitivity to the red and blue bands, and a blue (B) pixel having sensitivity to light in the blue band and suppressed sensitivity to the red and green bands.
In the present embodiment, in order to reduce power consumption and speed up processing, a pixel of the type in which the photodiode (PD) and the floating diffusion FD form a pair is used, but the present disclosure is not limited to this. For example, when the image signals for the first color pattern and the second color pattern are AD-converted sequentially, general pixels composed of photoelectric conversion elements may be used.
In the present embodiment, the imaging pixel is composed of a red (R) pixel, a green (G) pixel, and a blue (B) pixel, but the present disclosure is not limited to this. For example, the infrared wavelength range may be divided into three separate wavelength bands, and pixels having sensitivity only to each of these bands may be configured. In this case, the color patterns are composed of the three separate wavelength bands. Since wavelength bands to which the human eye has no visual sensitivity are then used for distance measurement, the distance can be measured without being perceived by human vision. In the present embodiment, the red (R) pixel, the green (G) pixel, and the blue (B) pixel may simply be referred to as the R pixel, the G pixel, and the B pixel, respectively.
As shown again in FIG. 4, the pixel control unit 216 controls shutter, floating diffusion (FD) transfer, and read operations for each of the R/G/B pixels constituting each imaging pixel P206 through control wires routed in the horizontal direction. The control wires (control wires 8 to 1) are shared in the horizontal direction, and row addresses (vertical addresses 8 to 1) are assigned from top to bottom.
The wirings VSL1 to VSLn routed in the vertical direction are signal lines that supply the data of the R/G/B pixels to the AD conversion unit 208 (see FIG. 3); the accumulated signal charge of the pixels selected by the control wires is input to the AD conversion unit 208.
A voltage corresponding to the signal charge accumulated in each pixel of the pixel array 206 is supplied to the AD conversion unit 208 via the wirings VSL1 to VSLn. The AD conversion unit 208 AD-converts the voltage corresponding to the accumulated signal charge into a digital pixel signal, which is digital data. The AD conversion unit 208 performs, for example, differential AD conversion: it linearly converts the voltage corresponding to the signal charge (photocharge) accumulated in each pixel by exposure, and can perform control to AD-convert the difference between the first accumulated charge amount of the first frame corresponding to the imaging of the first color pattern and the second accumulated charge amount of the second frame corresponding to the imaging of the second color pattern. Details of an operation example of the AD conversion unit 208 will also be described later.
The signal processing unit 210 supplies the digital image data generated by the AD conversion unit 208 to the image output interface 212. In this case, the signal processing unit 210 can perform various kinds of signal processing such as noise reduction processing and threshold processing.
The image output interface 212 performs level conversion and the like for the signal lines, and supplies the image data generated by the AD conversion unit 208 to the information processing unit 300.
The timing control unit 214 performs control to synchronize the timing at which the first color pattern and the second color pattern are projected from the projection unit 100, the imaging timing of the first color pattern and the second color pattern in each pixel of the pixel array 206, and the AD conversion timing of the AD conversion unit 208. More specifically, the timing control unit 214 controls the drive timing of the differential AD conversion for the AD conversion unit 208, and controls, via the pixel control unit 216, the imaging timing of the first color pattern and the second color pattern in each pixel of the pixel array 206.
As described above, the pixel control unit 216 controls the imaging timing of the first color pattern and the second color pattern in each pixel of the pixel array 206 in accordance with the timing control of the timing control unit 214. More specifically, it controls the timing of initialization of each pixel of the pixel array 206, photoelectric conversion in the photodiode (PD), transfer from the photodiode (PD) to the floating diffusion FD, and so on.
The timing output interface 218 performs level conversion and the like, for the signal lines, of the control signal containing the synchronization timing information generated by the timing control unit 214, and supplies the control signal containing the synchronization timing information to the information processing unit 300.
The information processing unit 300 has a timing input interface 302, a pattern generation unit 304, a projector output interface 306, a light emission control unit 308, an image input interface 310, a second signal processing unit 312, an image generation unit 314, a three-dimensional measurement calculation unit 316, and a data output interface 318. The timing input interface 302 receives the control signal containing the synchronization timing information generated by the timing control unit 214.
The pattern generation unit 304 generates the first color pattern and the second color pattern in accordance with the synchronization timing generated by the timing control unit 214.
The projector output interface 306 supplies an image signal containing the information of the first color pattern and the second color pattern generated by the pattern generation unit 304 to the projection unit 100. The light emission control unit 308 controls the light emission of the light source of the projection unit 100 in accordance with the synchronization timing generated by the timing control unit 214, in correspondence with the projection timing of the first color pattern and the second color pattern. That is, the light emission control unit 308 supplies a light emission control signal containing the light emission timing information to the projection unit 100 via the projector output interface 306.
The image input interface 310 receives the image data captured by the imaging unit 200 and supplies it to the second signal processing unit 312. The second signal processing unit 312 performs RGB normalization and binarization processing of the image data captured by the imaging unit 200. Details of these processes will be described later.
The image generation unit 314 generates ordinary RGB color captured image data based on the RGB-normalized image data, and supplies the captured image data to the data output interface 318. In this case, the image generation unit 314 can also apply gamma conversion, frequency processing, noise reduction processing, and the like to the captured image data.
Using the binarized image data, the three-dimensional measurement calculation unit 316 obtains the projection angle α of the first color pattern and the second color pattern and the angle β at the corresponding location in the image captured by the imaging unit 200. The three-dimensional measurement calculation unit 316 then calculates the distance d of each point based on equation (1), generates three-dimensional measurement data, and supplies it to the data output interface 318. In this way, the three-dimensional measurement calculation unit 316 detects the patterns from the digital images of the first color pattern and the second color pattern, associates the detected patterns with at least one of the first color pattern and the second color pattern projected by the projection unit 100, and generates the distance to each part of the measurement object T100. The three-dimensional measurement calculation unit according to the present embodiment corresponds to the measurement unit.
The data output interface 318 supplies the captured image data supplied from the image generation unit 314 and the three-dimensional measurement data to a storage unit, a screen, or the like. The three-dimensional measurement data means the distances to the respective parts of the measurement object T100.
Here, examples of the first color pattern and the second color pattern generated by the pattern generation unit 304 will be described in detail with reference to FIGS. 5, 6A, and 6B. FIG. 5 is a diagram schematically showing a color arrangement example of the first color pattern and the second color pattern. As described above, the first color pattern and the second color pattern are repeatedly projected in order onto the measurement object T100 by the projection unit 100. In FIG. 5, the first color pattern and the second color pattern are shown on separate rows, but the first color pattern and the second color pattern are projected in turn onto the same positions. For example, after the stripe indicated by C1 in the first color pattern is projected, the stripe indicated by C6 in the second color pattern is projected at the same position as the stripe C1. C1 to C6 correspond to the color codes described later (see FIG. 6B).
FIG. 6A is a table showing the combinations of red (R) light, green (G) light, and blue (B) light for the first color pattern (pattern1) and the second color pattern (pattern2). FIG. 6B is a table showing examples of the color codes. For example, color number C1 corresponds to red, color number C2 to pink, color number C3 to green, color number C4 to yellow, color number C5 to blue, and color number C6 to sky blue.
As shown again in FIG. 6A, the stripe number (Color bar No.) in FIG. 6A corresponds to the position of each stripe in FIG. 5, and the projected color (color) indicates the color corresponding to the color codes (C1 to C6) of FIG. 6B.
R/G/B indicates the combination of red (R) light, green (G) light, and blue (B) light. For example, red light is indicated by 1/0/0 and consists of red (R) light. Similarly, pink light is indicated by 1/0/1 and consists of red (R) light and blue (B) light; green light is indicated by 0/1/0 and consists of green (G) light; yellow light is indicated by 1/1/0 and consists of red (R) light and green (G) light; blue light is indicated by 0/0/1 and consists of blue (B) light; and sky blue light is indicated by 0/1/1 and consists of green (G) light and blue (B) light.
 That is, after the red light of color number C1, whose R/G/B is 1/0/0 in the first color pattern, the sky blue light of color number C6, whose R/G/B is 0/1/1 in the second color pattern, is projected. In this way, the light projected onto the same projection region by the first color pattern and the second color pattern is composed of combinations of light of three wavelength bands (red light, green light, and blue light): for a given projection region, the first color pattern projects at least one of the three wavelength bands (for example, red), and the second color pattern projects the remaining wavelength bands (for example, green and blue). In other words, light composed of 1/0/0 [R/G/B] and light composed of the exclusive combination 0/1/1 [R/G/B] are projected in turn. Here, the relationship in which the constituent colors 1/0/0 [R/G/B] of the first color pattern and 0/1/1 [R/G/B] of the second color pattern add up to 1/1/1 [R/G/B] is referred to in this embodiment as exclusive, or complementary.
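 As a rough illustration of this exclusive relationship, the following minimal sketch (written in Python; the names are illustrative only, and the bit encoding follows FIGS. 6A and 6B) derives the second-pattern color of a stripe from its first-pattern color by complementing each R/G/B bit:

    COLOR_CODES = {
        "C1": (1, 0, 0),  # red
        "C2": (1, 0, 1),  # pink
        "C3": (0, 1, 0),  # green
        "C4": (1, 1, 0),  # yellow
        "C5": (0, 0, 1),  # blue
        "C6": (0, 1, 1),  # sky blue
    }

    def complement(rgb):
        # The exclusive (complementary) combination: the two patterns add up to 1/1/1.
        return tuple(1 - c for c in rgb)

    assert complement(COLOR_CODES["C1"]) == COLOR_CODES["C6"]   # red    <-> sky blue
    assert complement(COLOR_CODES["C4"]) == COLOR_CODES["C5"]   # yellow <-> blue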
 The exclusive combination is now described more concretely. Among the RGB pixels of the imaging pixel P206 shown in FIG. 4, the R pixel reacts, that is, performs photoelectric conversion, when the red light of color number C1 (1/0/0 [R/G/B]) of the first color pattern is projected, but does not perform photoelectric conversion when the sky blue light of color number C6 (0/1/1 [R/G/B]) of the second color pattern is projected.
 Conversely, the G pixel does not perform photoelectric conversion when the red light of color number C1 (1/0/0 [R/G/B]) of the first color pattern is projected, but does perform photoelectric conversion when the sky blue light of color number C6 (0/1/1 [R/G/B]) of the second color pattern is projected. Likewise, the B pixel does not perform photoelectric conversion for the red light of color number C1 in the first color pattern, but does perform photoelectric conversion for the sky blue light of color number C6 in the second color pattern.
 In other words, under the first color pattern the R pixel performs photoelectric conversion while the remaining G and B pixels do not; under the second color pattern the R pixel does not perform photoelectric conversion while the G and B pixels do. In this exclusive relationship, the R/G/B pixels that perform photoelectric conversion swap exclusively between the first color pattern and the second color pattern. As can be seen, each of the red, green, and blue pixels in the imaging pixel P206 (see FIG. 4) generates a larger pixel signal for one of the first color pattern and the second color pattern.
 Similarly, among the RGB pixels of the imaging pixel P206, the R pixel performs photoelectric conversion when the pink light of color number C2 (1/0/1 [R/G/B]) of the first color pattern is projected, but not when the green light of color number C3 (0/1/0 [R/G/B]) of the second color pattern is projected. The G pixel does not perform photoelectric conversion for the pink light of color number C2, but does for the green light of color number C3 of the second color pattern. The B pixel performs photoelectric conversion for the pink light of color number C2, but not for the green light of color number C3 of the second color pattern.
 In this way, the projection regions corresponding to stripe numbers (Color bar No.) 1 to 6 receive exclusively combined projection light from the first and second color patterns: (Color No. C1, Color No. C6), (Color No. C2, Color No. C3), (Color No. C3, Color No. C2), (Color No. C4, Color No. C5), (Color No. C5, Color No. C4), and (Color No. C6, Color No. C1). The pairing only needs to be exclusive and is not limited to these combinations.
 FIG. 7 schematically shows an example projection of the first color pattern. The observation point P lies at a color transition. As shown in FIG. 7, the pattern generation unit 304 assigns IDs R1, R2, R3, R4, ..., Rn, from the left edge, to the red stripes of color number C1. Similarly, the pattern generation unit 304 assigns IDs such as pi1 to pin, G1 to Gn, Y1 to Yn, B1 to Bn, and sb1 to sbn to color numbers C2 through C6.
 As shown in FIG. 7, the line of interest on the object is closer to R4 than to R3, so the line of interest can easily be identified in the image captured by the imaging unit 200. The top-view projection T100c confirms that R4 is the correct identification. With the black-and-white stripes that have conventionally been used, by contrast, it is difficult to determine which line the line of interest corresponds to. Using color stripes in this way further improves the measurement accuracy in regions such as the convex region T100a.
 Next, the effect of the processing on a region T100a whose color tone does not reflect a specific light is described with reference to FIGS. 8 and 9. FIG. 8 shows an example of projection onto the region T100a, which does not reflect the specific light. Part A of FIG. 8 shows a front view and a top view of the convex region T100a of the measurement target T100. The convex region T100a is yellow, so blue (B) light is absorbed by the yellow of the convex region T100a and is hardly reflected.
 Part B of FIG. 8 shows the state in which the first color pattern is projected onto the convex region T100a of the measurement target T100. Part C of FIG. 8 shows the state in which the second color pattern is projected onto the convex region T100a. The figure also indicates the regions B1 and B2 of the convex region T100a that are imaged as black.
 FIG. 9 is the table of FIG. 6A extended with the response of each R, G, and B pixel, shown as the imaging R/G/B values. As FIG. 9 shows, blue (B) light is absorbed by the yellow of the convex region T100a and is hardly reflected, so the response of the B pixel in the imaging R/G/B columns is 0. In particular, for the blue light of color number C5, projected as 0/0/1 [R/G/B], the imaging result is 0/0/0 [R/G/B]: almost no optical signal is obtained at any of the RGB pixels. Consequently, the imaging pixel P206 (see FIG. 4) whose imaging result is 0/0/0 [R/G/B] images the area as the black region B1 (see FIG. 8).
 On the other hand, as described above, the first and second color patterns are combined exclusively (or complementarily) so that the pairings (Color No. C4, Color No. C5) and (Color No. C5, Color No. C4) are projected. As a result, the yellow light of color number C4, composed of 1/1/0 [R/G/B], does produce a reflected component.
 FIG. 10 is a table for a comparative example in which only the first color pattern is projected. As shown in FIG. 10, when only the first color pattern is projected, the blue light of stripe number 5 (Color bar No. 5), projected as 0/0/1 [R/G/B], yields an imaging result of 0/0/0 [R/G/B], so almost no optical signal is obtained at any of the RGB pixels. It is then impossible to tell whether the region onto which the blue light of stripe number 5 was projected is a black region of the object or a yellow region. By contrast, as shown in FIG. 9, when the exclusively combined first and second color patterns are projected, reflected light can be imaged under at least one of the two patterns even if the convex region T100a is colored. Therefore, projecting the exclusively combined first and second color patterns makes it possible to distinguish whether the object is a black region (or the object is at infinity) or a yellow region (a region that absorbs a specific color).
 Thus, the light projected onto the same projection region by the first color pattern and the light projected by the second color pattern are light of different wavelength bands. The color patterns can therefore be constructed so that, even if the measurement target T100 absorbs light of one wavelength band, it reflects light of the other wavelength band. In this embodiment the projection light is a combination of red (R) light, green (G) light, and blue (B) light, but the combination is not limited to this. It is sufficient that the light projected onto the same projection region by the first and second color patterns be of different wavelength bands; that is, the two patterns should be complementary so that even if light of one wavelength band is absorbed, light of the other wavelength band is reflected. In this case, the spectrum may be divided into at least two wavelength bands, and the pixels of the imaging unit 200 (see FIG. 2) may each be configured to be sensitive only to one of those bands.
 FIG. 11 shows a configuration example of the imaging pixel P206. As shown in FIG. 11, each pixel of the imaging pixel P206 has a photodiode PD and transistors SW1, SW2, SW3, and AMP. The control wires consist of three control lines: RST, TRG, and SEL.
 One end of the photodiode PD is connected to the ground GND, and the other end is connected to one end of the transistor SW1. The other end of the transistor SW1 is connected to the floating diffusion FD. One end of the transistor SW2 is connected to the floating diffusion FD, and the other end is connected to one end of the transistor AMP. One end of the transistor AMP is connected to the VSS line, and the other end is connected to one end of the transistor SW3. The other end of the transistor SW3 is connected to the VSL line. The gate of the transistor SW1 is connected to the control line TRG, the gate of the transistor SW2 is connected to the control line RST, the gate of the transistor SW3 is connected to the control line SEL, and the gate of the transistor AMP is connected to the floating diffusion FD. In the following, the photodiode PD may be written simply as PD and the floating diffusion FD simply as FD.
 FIG. 12 is a block diagram showing an example of the ADC circuit that makes up the AD conversion unit 208. As shown in FIG. 12, the ADC circuit (see FIG. 4) has a plurality of AZ circuits, a comparator, and a counter. The DAC circuit has an internal counter (not shown) and, for example, outputs a voltage corresponding to the counter value while decrementing it.
 During a period (not shown) specified by the auto-zero mechanism, the AZ circuit aligns the voltage of the input signal with the AZ reference voltage and holds the voltage offset between its input and output. The AZ circuit can maintain this voltage offset even after the specified period has elapsed. The ADC circuit has two AZ circuits, one for the input from the DAC circuit and one for the input from the VSL line; these AZ circuits supply their offset-held outputs to the comparator.
 FIG. 13 is a schematic diagram of the differential exposure processing, and an example of this processing is described below based on it. As shown in FIG. 13, the image Im130 indicates the drive timing of the image sensor 204. The horizontal axis represents time and the vertical axis represents the vertical address of the pixel array 206 (see FIG. 4). The image sensor 204 processes the vertical addresses in order; the diagonal lines in the graph illustrate how processing proceeds while the vertical address is incremented.
 The image Im132 shows the flash strobe signal, which is the control signal for the projector emission timing. The image Im134 shows the generation and update timings of the first color pattern and the second color pattern, respectively. The image Im136 shows the projector emission timing. The image Im138 shows the projection timings of the first color pattern and the second color pattern.
 As shown in the image Im130, in the pixel array 206 (see FIG. 4), pixel reset (shutter), FD transfer, readout (read), and AD conversion are performed in order for each row. The left line L130 indicates the timing of the pixel reset (shutter), the center line L132 the timing of the FD transfer, and the right line L134 the timing of the readout. In the following, the pixel reset may be referred to as the shutter.
 The FD transfer is performed at the midpoint in time between the pixel reset (shutter) and the readout (read). As a result, the interval from pixel reset to FD transfer and the interval from FD transfer to readout form congruent parallelograms, which means that the exposure time from pixel reset to FD transfer equals the exposure time from FD transfer to readout. The exposure from pixel reset to FD transfer is referred to as the first frame, and the exposure from FD transfer to readout as the second frame; that is, the first frame corresponds to imaging of the first color pattern and the second frame to imaging of the second color pattern.
 The timing generation unit 214 (see FIG. 3) in the image sensor 204 generates the pixel reset (shutter) start timing, the FD transfer start timing, the read start timing, and the flash strobe timing, which is the projector emission timing. A control signal containing the pixel reset start timing, FD transfer start timing, read start timing, and projector emission timing is supplied to the pixel control unit 216 and used as a trigger for pixel control. The flash strobe signal, which is the control signal for the projector emission timing, is supplied from the timing output I/F 212 of the image sensor 204, via the timing input I/F 302 of the information processing unit 300, to the pattern generation unit 304 and the light emission control unit 308.
 To synchronize the first and second color patterns projected from the projection unit 100 with the imaging operation of the image sensor 204, the flash strobe signal, the control signal for the projector emission timing, is issued from within the image sensor 204. Each flash emission period is a period in which the pixel reset (shutter), FD transfer, and readout operations of the corresponding exposure are stopped, and the two flash strobe periods are made equal in length.
 Because updating the projected screen in the projection unit 100 takes time, the first color pattern and the second color pattern are each updated during a period in which the light source is not emitting. The screen update is triggered by the timing at which the flash strobe signal received from the imaging unit 200 changes from high to low. The screen of the first color pattern (pattern 1) is displayed in advance, and at the first high-to-low transition of the flash strobe signal the screen is updated to the second color pattern (pattern 2). At the second high-to-low transition of the flash strobe signal the screen is updated back to the first color pattern (pattern 1), and this is repeated thereafter. The emission of the projection unit 100 is performed in this way by controlling its light source; a light emitting diode (LED), a halogen lamp, a xenon flash, or the like can be used as the light source.
 Returning to FIG. 11, each pixel of the imaging pixel P206 accumulates, in order, the signal charge corresponding to the first color pattern and the signal charge corresponding to the second color pattern, and supplies them to the VSL line. That is, these pixels are driven through the shutter, FD transfer, and readout operations in sequence.
 The timing of the shutter is now described based on FIG. 14, while also referring to FIG. 4. FIG. 14 shows an example timing chart of the shutter; the signal levels of the control lines RST, TRG, and SEL are shown from top to bottom. When the control line TRG goes high, the transistor SW1 becomes conductive and the charge of the photodiode PD is conducted to the FD. At this time, setting the control line RST high makes the transistor SW2 conductive, so the charge of the floating diffusion FD is conducted to the VSS line. Through this control, the charges of the photodiode PD and the floating diffusion FD flow to the VSS line and are cleared. Then, setting the control lines TRG and RST low again turns off the transistors SW1 and SW2 so that charge no longer flows from the photodiode PD; the shutter operation ends and exposure accumulation begins. In the following, this turned-off state may be referred to as the non-conducting state.
 FIG. 15 shows an example timing chart of the FD transfer; the signal levels of the control lines RST, TRG, and SEL are shown from top to bottom. Setting the control line TRG high makes the transistor SW1 conductive, and the charge of the photodiode PD is transferred to and held in the floating diffusion FD. The control line TRG is then set low to turn off the transistor SW1, so that charge subsequently accumulated by exposure of the photodiode PD is no longer transferred to the FD. In this way, the charge of the photodiode PD accumulated during the first-frame exposure, which corresponds to the first color pattern, is moved to and held in the floating diffusion FD.
 FIG. 16 shows an example timing chart of the readout (read); the signal levels of the control lines RST, TRG, and SEL are shown from top to bottom. As shown in FIG. 16, the readout control performs auto zero, FD reset, FD transfer, and AD conversion in succession.
 In the auto zero, setting the control line SEL high makes the transistor SW3 conductive. The voltage of the FD is thereby amplified by the amplifying action of the transistor AMP and conducted to the VSL line, so a voltage corresponding to the amount of charge accumulated in the FD during the first-frame exposure is conducted to the VSL line.
 Next, in the AD converter 208, the AZ circuit mechanism records and holds the offset between the potential of the VSL line and the AZ reference voltage. The control line SEL is then set low again to make the transistor SW3 non-conductive, so that changes in the charge amount of the FD are not conducted to the VSL line.
 Next, the FD reset sets the control line RST high to make the transistor SW2 conductive, which conducts the charge of the FD to the VSS line and resets it. After the FD is reset, the control line RST is set low again to make the transistor SW2 non-conductive.
 Next, in the FD transfer, the control line TRG is set high to make the transistor SW1 conductive, transferring the charge of the photodiode PD to the FD. In this way, the charge of the photodiode PD exposed and accumulated during the second frame, that is, under the second color pattern, is transferred to and held in the FD.
 Next, in the AD conversion, the control line SEL is set high to make the transistor SW3 conductive. The voltage of the FD is amplified by the amplifying action of the transistor AMP and conducted to the VSL line, so the information on the amount of charge accumulated during the second-frame exposure can be supplied via the VSL line. The AD converter 208 then AD-converts the voltage conducted onto the VSL line. As a result, the difference between the voltage proportional to the charge accumulated during the first-frame exposure, which corresponds to the first color pattern, and the voltage proportional to the charge accumulated during the second-frame exposure, which corresponds to the second color pattern, is AD-converted. After the AD conversion is completed, SEL is set low to make the transistor SW3 non-conductive.
 FIGS. 17 and 18 show the waveforms of the comparator inputs (see FIG. 12), VSL (after passing through the AZ circuit) L170 and DAC (after passing through the AZ circuit), when the first color pattern is presented. FIG. 17 shows the case in which the pixel is sensitive to the first color pattern, that is, the case in which it performs photoelectric conversion.
 As described above, the first and second color patterns are constructed to be exclusive, so a given pixel performs photoelectric conversion for only one of the two patterns. As indicated by VSL (after AZ) L170 in FIG. 17, the charge carries negative energy, so the more charge there is, the larger the negative voltage becomes; in FIG. 17, downward corresponds to a more negative voltage and a larger energy.
 The AD conversion for the case in which meaningful light is present in the first color pattern is performed during the readout (read) control (see FIG. 13). The control states in FIG. 17 correspond to the readout timing chart in FIG. 16. In the auto-zero (AZ) state, the voltage of the charge accumulated in the FD appears on VSL (after AZ) L170. Because there is a value corresponding to the charge accumulated under the first color pattern captured in the first frame, the voltage of VSL (after AZ) L170 swings strongly downward. When the voltage of VSL (after AZ) L170 has settled, the auto-zero AZ circuit (FIG. 12) is enabled, and the voltage of VSL (after AZ) L170 and the voltage of DAC (after AZ) are aligned to the AZ reference voltage, which becomes the reference point. The AZ circuit (FIG. 12) thereafter maintains this offset and keeps outputting the offset-added values to VSL (after AZ) L170 and DAC (after AZ). In the FD reset and FD transfer states, VSL is in high impedance (Hi-Z) and therefore remains essentially unchanged.
 In the subsequent AD conversion state, the value of the second frame is read out. In the second frame, no light corresponding to the second color pattern is captured at this pixel, and the exposure is caused only by reflected ambient light. Therefore, the VSL (after AZ) voltage L170 becomes small relative to the first-frame VSL voltage and swings strongly upward. After the VSL (after AZ) voltage L170 has settled, the AD conversion is performed. The figure shows single-slope AD conversion in which the output of the DAC circuit (see FIG. 12) is varied linearly from high voltage toward low voltage. The ADC counter starts at 0 at the beginning of the DAC slope and increments at a constant interval. Where the VSL (after AZ) voltage L170 and the DAC voltage cross, the comparator inverts and the counter stops counting. The counter value becomes digital data and is supplied to the signal processing unit 210. The counter and the slope of the DAC circuit are synchronized, and the count value corresponding to the AZ reference voltage is constant and set in advance. Therefore, by subtracting the counter value of the AZ reference voltage from the count value at which the comparator inverted, the result of the AD converter can be obtained as a positive or negative value. As shown in FIG. 17, when the pixel is sensitive to the first color pattern (pattern 1), the result is a negative value.
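 The signed interpretation of this single-slope conversion can be sketched as follows (a simplified model only; the function and argument names are illustrative and not taken from the specification):

    def signed_ad_result(comparator_stop_count, az_reference_count):
        # Signed ADC output: the count at which the comparator inverted, minus the
        # preset count corresponding to the AZ reference voltage.
        return comparator_stop_count - az_reference_count

    # Per the text, a pixel sensitive to the first color pattern ends up with a
    # negative value, and one sensitive to the second color pattern with a positive value.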
 FIG. 18 shows the case in which the pixel is sensitive to the second color pattern, that is, the case in which it performs photoelectric conversion. The AD conversion follows the same procedure as in FIG. 17 described above. Because the amount of light reaching the photoelectric conversion element of this pixel is small in the first frame (dark) and large in the second frame (bright), the amplitude of the VSL (after AZ) voltage L180 in the AZ state is small, and in the AD conversion state the voltage swings downward, so the result of the AD conversion is a positive value. Thus, for a given pixel, the result is a negative value when the pixel is sensitive to the first color pattern (pattern 1) and a positive value when it is sensitive to the second color pattern (pattern 2).
 As can be seen from the above, by using the results of the R/G/B pixels (see FIG. 4) together, the color codes of both the first color pattern (pattern 1) and the second color pattern (pattern 2) can be determined with a single AD conversion. Furthermore, the combination of the color assignments of the first and second color patterns with the image sensor 204 performing differential AD conversion makes it possible to acquire the two color patterns simultaneously while reducing the influence of ambient light, which acts as noise.
 Referring again to FIG. 13, during the first frame, which contains the first color pattern (pattern 1), and the second frame, which contains the second color pattern (pattern 2), reflected ambient light is exposed at the same time. This ambient light becomes noise in the measurement. For this reason, an environment like a darkroom is normally prepared to suppress light that becomes noise during distance measurement; in the present embodiment, however, the influence of ambient light is reduced, so a darkroom does not necessarily have to be built, and the degree of freedom of the measurement environment increases.
 The suppression of ambient light reflection according to the present embodiment is described below in more detail. Let the exposure due to ambient light reflection in the first frame period be N1 (noise) and the exposure due to ambient light reflection in the second frame period be N2 (noise). Also, let the color of the first color pattern (pattern 1) be P1C and the color of the second color pattern (pattern 2) be P2C. Then, as shown in equation (2), P1C + N1 is exposed in the first frame and P2C + N2 in the second frame.
 Since the measured scene can be assumed to be a stationary object with no change in ambient light, N1 = N2. Performing the differential AD conversion therefore subtracts the first-frame exposure from the second-frame exposure, as in equation (2):

    (P2C + N2) - (P1C + N1) = P2C - P1C    (2)
 In this way, the ambient-light exposure components N1 and N2 cancel, and only the information on the exposure components P1C and P2C of the color patterns is extracted. Moreover, when the color patterns of this embodiment are separated into color components, for each color one of the first color pattern (pattern 1) and the second color pattern (pattern 2) is 0, that is, not emitted. Since the image sensor 204 according to this embodiment separates colors using color filters that each pass one of R, G, and B as described above, for each of the R, G, and B colors only the value corresponding to emission in either the first color pattern (pattern 1) or the second color pattern (pattern 2) remains. For example, as described above, for Color bar No. 1 the color components (R, G, B) of P1C are (1, 0, 0) and those of P2C are (0, 1, 1). Solving equation (2) for each color component gives (-1, 1, 1). Since P1C is the negative contribution, replacing -1 with 1 and positive values with 0 yields (1, 0, 0); since P2C is the positive contribution, replacing 1 with 1 and negative values with 0 yields (0, 1, 1). The two colors of that Color bar No. can thus be discriminated.
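 A minimal sketch of this sign-based recovery of the two pattern colors (the function name is illustrative; the encoding follows the Color bar No. 1 example above):

    def split_patterns(diff_rgb):
        # diff_rgb holds -1 / 0 / +1 per channel after differential AD conversion
        # and thresholding. Negative: the first pattern lit this channel;
        # positive: the second pattern lit it; zero: no significant reflection.
        p1c = tuple(1 if d < 0 else 0 for d in diff_rgb)
        p2c = tuple(1 if d > 0 else 0 for d in diff_rgb)
        return p1c, p2c

    print(split_patterns((-1, 1, 1)))   # ((1, 0, 0), (0, 1, 1)): C1 followed by C6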
 In normal operation it would also be possible to perform AD conversion twice, once for the first period and once for the second period, and to subtract the results arithmetically. However, AD conversion consumes considerable power, produces a large amount of data, and the arithmetic processing also consumes power. In the present embodiment, by contrast, performing the AD conversion once reduces the power consumption, the amount of data, and the arithmetic processing. Although the difference is obtained by an analog circuit in this embodiment, the corresponding processing may instead be performed by digital computation.
 FIG. 19 is a table showing the luminance obtained when the color pattern shown in FIG. 6A is projected onto an object with reflectances of R: 50%, G: 100%, and B: 75%; that is, it shows the surface brightness when the pattern is projected onto such an object. Here the values are expressed with a resolution of 0 to 255.
 FIG. 20 is a table showing the result of projecting the color pattern shown in FIG. 6A onto an object with reflectances of R: 50%, G: 100%, and B: 75% and performing the differential AD conversion. Here the values are expressed with a resolution of -255 to 255: the first color pattern (pattern 1) yields negative values and the second color pattern (pattern 2) yields positive values. The (R, G, B) values with this -255 to 255 resolution are supplied to the second signal processing unit 312 as the image signal.
 When the value AD after the differential AD conversion is greater than a threshold TH (a positive number), that is, when AD > TH, the second signal processing unit 312 sets the result to 1; when it is smaller than -TH, that is, when AD < -TH, it sets the result to -1; and when -TH ≤ AD ≤ TH, it sets the result to 0. By setting the threshold TH to a value that takes noise into account, the influence of noise can be excluded. Although the appropriate value depends on the capability of the device, FIG. 21 shows the result for TH = 10. In other words, under normal imaging conditions the reflectance and the distance to the object are unknown, so, considering attenuation with distance and other noise, a threshold is provided around zero, and portions whose absolute value is at or above the threshold are judged to be significant light.
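 A brief sketch of this thresholding (TH = 10 as in the example above; the function name is illustrative only):

    TH = 10   # example threshold; the actual value depends on the device

    def classify(ad_value, th=TH):
        if ad_value > th:
            return 1
        if ad_value < -th:
            return -1
        return 0

    # Channel differences for Color bar No. 1 in FIG. 20 (reflectance R: 50%, G: 100%, B: 75%)
    print(tuple(classify(v) for v in (-127, 255, 191)))   # (-1, 1, 1)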
 FIG. 21 is a table showing the result of conversion into significant color components, that is, an example of the result of the second signal processing unit 312 converting the values after the differential AD conversion into significant color components.
 FIG. 22 is a table showing the result of passing the pattern shown in FIG. 6A through the differential AD conversion according to this embodiment. The row after theoretical differential AD conversion shows the calculation result of subtracting the first color pattern (pattern 1) from the second color pattern (pattern 2) in accordance with equation (2). As can be seen from the results in FIGS. 21 and 22, the values in the row after theoretical differential AD conversion match the values generated by the second signal processing unit 312. Thus the pattern can be measured by the image sensor while taking the reflectance into account.
 FIG. 23 is a table showing the result of measuring an object that does not reflect blue light. As shown in FIG. 23, when a 0 is included, the color that the measured object does not reflect can be identified, which makes it possible to narrow down the range of possible color stripe numbers (Color bar No.). For example, when the measurement result is {R, G, B} = {-1, 1, 0}, the object does not reflect B (blue), which indicates that the projected color stripe combination (Color bar combination, see FIG. 19) is number 1 or number 2.
 Based on this, the second signal processing unit 312 can infer and complement the result from the surroundings of the measurement point, for example the color results of the neighbors to the left and right. The second signal processing unit 312 can perform this complementary inference using features such as the fact that the left neighbor of Color bar No. 1 is No. 6 whereas the left neighbor of No. 2 is No. 1, and that the right neighbor of No. 2 is No. 3 whereas the right neighbor of No. 1 is No. 2. In the case of stripes, the color transition points serve as the measurement points, so the second signal processing unit 312 can send, as ranging data, all the detected transition points, such as the boundary between No. 1 and No. 2 and the boundary between No. 2 and No. 3, together with their image positions and the information on which Color bar No. boundary each one is, to the three-dimensional measurement calculation unit 316 (see FIG. 3). The three-dimensional measurement calculation unit 316 determines the angle between the imaging unit 200 and the measurement point from the image position, determines the angle between the projection unit 100 and the measurement point from the Color bar No. information, and calculates the distance to the measurement point by triangulation from these two angles and the predetermined distance L between the projection unit 100 and the imaging unit 200. The three-dimensional measurement calculation unit 316 calculates the distance at every measurement point P and, via the data output I/F 318, stores the data in storage or displays it on a display.
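 A hedged sketch of the triangulation step follows; the convention that both angles are measured from the baseline between the projection unit 100 and the imaging unit 200 is an assumption for illustration, not taken from the specification:

    import math

    def triangulate_distance(angle_camera_rad, angle_projector_rad, baseline_l):
        # Triangle formed by the imaging unit, the projection unit, and the
        # measurement point: the third angle follows from the other two, and the
        # law of sines gives the distance from the imaging unit to the point.
        third_angle = math.pi - angle_camera_rad - angle_projector_rad
        return baseline_l * math.sin(angle_projector_rad) / math.sin(third_angle)

    # Example: both angles at 60 degrees and a 0.1 m baseline put the point 0.1 m away.
    print(triangulate_distance(math.radians(60), math.radians(60), 0.1))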
 In the comparative example (see FIG. 10), on the other hand, when there is no reflection the result for that color is 0. For example, when {R, G, B} = {1, 0, 0}, it cannot be determined in the conventional example whether the measured object reflected all colors, whether there was no reflection of G (green), or whether there was no reflection of B (blue). The possibilities of No. 1, of No. 2 with B (blue) missing, and of No. 4 with G (green) missing therefore all remain. Consequently, the accuracy of the information available when inferring from the surrounding measurement results is inferior to that of the present embodiment. Furthermore, when no light is reflected back, as for No. 5, the shape of the object cannot be confirmed, so the measurement result shows the object as if part of it were missing.
 In this way, because the comparative example projects only one color pattern for distance measurement, the color of the object is difficult to identify. In the present embodiment, by contrast, the three primary colors R, G, and B are all emitted across the two color patterns, so the color of the measured object can be reproduced.
 Referring again to FIG. 20, the table shows, for an object with reflectances of R: 50%, G: 100%, and B: 75%, the luminance of each reflected color when the object is illuminated. The values are normalized to the range 0 to 255, so 50% corresponds to 127, 100% to 255, and 75% to 191. To perform color reproduction from this measurement result, the color of the measured object can be obtained by converting each color luminance to its absolute value. For example, for Color bar No. 1 in FIG. 20 the values are {-127, 255, 191}, so by taking the absolute value of the negative component the second signal processing unit 312 (see FIG. 3) can generate {127, 255, 191}.
 Now consider illuminating the measured object with white light. The color components of white are {255, 255, 255} and the reflectances of the measured object are {50%, 100%, 75%}. Multiplying these per color component and truncating the fractional part gives the color {127, 255, 191}, which matches the value calculated from Color bar No. 1 in FIG. 20. The second signal processing unit 312 (see FIG. 3) performs this series of conversions for all the pixels obtained by the image sensor 204 and supplies the results to the image generation unit 314 (see FIG. 3). The image generation unit 314 then generates two-dimensional RGB image data from the digital pixel signals of each imaging pixel P206, such as RGB {127, 255, 191}, and records it in storage or displays it on a display via the data output I/F 318.
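 The color reproduction step can be sketched numerically as follows (values taken from the Color bar No. 1 example; the variable names are illustrative only):

    measured = (-127, 255, 191)                      # differential result for Color bar No. 1 (FIG. 20)
    reconstructed = tuple(abs(v) for v in measured)  # per-channel absolute value

    white = (255, 255, 255)
    reflectance = (0.50, 1.00, 0.75)
    expected = tuple(int(w * r) for w, r in zip(white, reflectance))  # truncate fractions

    assert reconstructed == expected                 # both give (127, 255, 191)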
 FIG. 24 is a table showing a color stripe pattern (Color bar No.) generated with a de Bruijn sequence. As shown in FIG. 24, the color pattern according to this embodiment can also be constructed with a de Bruijn color arrangement or with a matrix-style tile pattern. FIG. 25 shows an example of extending the cyclic pattern using the de Bruijn sequence algorithm. As shown in FIGS. 24 and 25, for example, within the first color pattern (pattern 1) there is only one place where red is on the left and green is on the right, and likewise each other color combination occurs in only one place.
 Here too, as in the embodiment described above, each color component may be composed as emitting or not emitting, that is, created as an exclusive combination, so that the second color pattern (pattern 2) is composed of the color components not used in the first color pattern (pattern 1). In the second color pattern (pattern 2) as well, every combination of two adjacent colors is unique. The colors form a cycle of 30: to the right of the rightmost color, for example the sky blue of the first color pattern (pattern 1), comes red, and the cycle of 30 colors repeats; the colors joined by this repetition also form unique combinations. The processing described above is also possible with the color stripe pattern shown in FIG. 24. In this case, the combination with the left or right neighbor is examined, and the color stripe number (Color bar No.) matching the color stripe pattern refined with the de Bruijn sequence of FIG. 24 is searched for and identified. Similarly, when a specific color component is not reflected, complementation can be performed using the combination of the colors on both sides. Because the number of stripes is larger than in the six-color cycle, it is possible to widen the disparity range while maintaining the measurement granularity, or to increase the measurement granularity by narrowing the width of each stripe.
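 As a rough illustration of the property used here (every ordered pair of two different adjacent colors occurs exactly once, giving a cycle of 30 stripes for six colors), the following sketch builds such a cyclic sequence as an Eulerian circuit on the complete directed graph over the color codes. It illustrates the idea only and does not reproduce the specific stripe order shown in FIG. 24:

    def distinct_pair_cycle(colors):
        # Hierholzer's algorithm on the complete directed graph over the colors:
        # every ordered pair of two different adjacent colors appears exactly once
        # in the resulting cyclic sequence.
        out_edges = {c: [d for d in colors if d != c] for c in colors}
        stack, circuit = [colors[0]], []
        while stack:
            v = stack[-1]
            if out_edges[v]:
                stack.append(out_edges[v].pop())
            else:
                circuit.append(stack.pop())
        circuit.reverse()
        return circuit[:-1]          # read cyclically: 6 colors -> 30 stripes

    cycle = distinct_pair_cycle(["red", "pink", "green", "yellow", "blue", "sky blue"])
    assert len(cycle) == 30
    pairs = {(cycle[i], cycle[(i + 1) % len(cycle)]) for i in range(len(cycle))}
    assert len(pairs) == 30          # all adjacent color pairs are unique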
 FIG. 26 is a table showing a color tile pattern (Color tile No.) in which the color changes are extended in the column direction. FIG. 27 shows the arrangement of the example color tile pattern of FIG. 26; the numbers in FIG. 27 correspond to the color tile numbers (Color tile No.).
 As shown in FIGS. 26 and 27, the horizontal color order can be kept as it is while the colors are also arranged in the same order in the vertical direction. Such a pattern is treated as one unit and tiled horizontally and vertically. Here too, according to this embodiment, each color component is created as a combination of emitting and not emitting, and the second color pattern (pattern 2) is composed of the color components not used in the first color pattern (pattern 1). In the second color pattern (pattern 2) as well, every combination of two adjacent colors is unique. The color tile pattern can also be processed by the method already described. In addition, when a specific color component is not reflected, inference based on the color above or below can be used together with inference based on the color to the left or right. This increases the information available for the complementary calculation when there is an impediment such as insufficient reflection, so the accuracy of the complementation is further improved. A de Bruijn arrangement or another cyclic pattern may also be used for the color tile pattern.
 FIG. 28 is a timing chart of the readout (read) for the counter-difference or digital-difference scheme, that is, an example in which the AD conversion is performed twice. The signal levels of the control lines RST, TRG, and SEL are shown from top to bottom. As shown in FIG. 28, the readout control performs auto zero, AD conversion, FD reset, FD transfer, and AD conversion in succession. The control of the shutter state and the FD transfer state is the same as in the examples of FIGS. 14 and 15, whereas the control of the readout (read) state differs: as shown in FIG. 28, auto zero (Auto zero (2)), AD conversion (2), FD reset, FD transfer, and AD conversion (3) are performed in succession.
 In the auto zero (Auto zero (2)), the transistor SW1 of the pixel (see FIG. 11) is kept non-conductive, and the AD converter of the AD conversion unit 208 (see FIG. 4) records and holds the offset between the voltage of the VSL line and the AZ reference voltage by means of the AZ circuit mechanism. In the AD conversion (2) state, setting the control line SEL high makes the transistor SW3 conductive, and the voltage of the FD, amplified by the transistor AMP, is conducted to the VSL line. A voltage corresponding to the amount of charge accumulated in the FD during the first-frame exposure is thereby conducted to the VSL line. The AD converter (see FIG. 4) performs the first AD conversion on the voltage of the VSL line; that is, the voltage proportional to the amount of charge accumulated in the first frame is AD-converted. Setting the control line SEL low again makes the transistor SW3 non-conductive so that changes in the charge amount of the FD do not affect the VSL line.
 In the FD reset, the control line RST is set to High to render the transistor SW2 conductive, and the charge of the FD flows to the Vss line and is reset. After the FD is reset, the control line RST is set to Low again to render the transistor SW2 non-conductive. In the FD transfer, the control line TRG is set to High to render the transistor SW1 conductive, and the charge of the PD is transferred to the FD. As a result, the charge of the PD accumulated by the exposure of the second frame is transferred to and held in the FD.
 In AD conversion (3), the control line SEL is set to High to render the transistor SW3 conductive. As a result, the voltage of the FD is amplified by the transistor AMP and conducted onto the VSL line. A voltage proportional to the charge amount accumulated by the exposure in the second frame can thus be conducted via the VSL line. The AD converter performs AD conversion of this value. In other words, a voltage proportional to the charge amount accumulated by the exposure in the second frame is AD-converted. After the AD conversion is completed, the control line SEL is set to Low to render the transistor SW3 non-conductive.
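 The state sequence just described (AD conversion (2) on the frame-1 charge held on the FD, FD reset, FD transfer of the frame-2 charge, AD conversion (3)) can be summarized with a toy behavioral model; this is a hedged illustration, and the class, the numeric charge values, and the ideal unity gain are assumptions, not the actual circuit of FIG. 11.

```python
from dataclasses import dataclass

@dataclass
class PixelModel:
    """Toy behavioral model of the readout sequence of FIG. 28 (illustrative only)."""
    fd_charge: float = 0.0   # charge currently held on the FD node
    pd_charge: float = 0.0   # charge accumulated on the PD during exposure
    vsl: float = 0.0         # voltage presented on the VSL line when SEL is High
    gain: float = 1.0        # source-follower (AMP) gain, assumed ideal

    def expose(self, charge):    # exposure accumulates charge on the PD
        self.pd_charge += charge

    def fd_transfer(self):       # TRG High: move the PD charge onto the FD
        self.fd_charge += self.pd_charge
        self.pd_charge = 0.0

    def fd_reset(self):          # RST High: drain the FD to the Vss line
        self.fd_charge = 0.0

    def select(self):            # SEL High: amplified FD voltage drives the VSL line
        self.vsl = -self.gain * self.fd_charge   # more charge -> more negative voltage
        return self.vsl

px = PixelModel()
# Before readout: frame 1 was exposed and already transferred to the FD,
# frame 2 was exposed and its charge is still waiting on the PD.
px.expose(100.0); px.fd_transfer()   # frame 1 charge now held on the FD
px.expose(10.0)                      # frame 2 charge accumulated on the PD

v1 = px.select()                     # AD conversion (2): frame-1 voltage
px.fd_reset()                        # FD reset
px.fd_transfer()                     # FD transfer: frame-2 charge to the FD
v2 = px.select()                     # AD conversion (3): frame-2 voltage
print(v1, v2)                        # -100.0 -10.0
```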
 FIG. 29 is a diagram showing the waveforms of VSL (after passing through AZ) and DAC (after passing through AZ) at the comparator input of the ADC, corresponding to the timing chart of FIG. 28. FIG. 29 shows a case in which a specific pixel has sensitivity to the first color pattern. As indicated by VSL (after passing through AZ) L290 in FIG. 29, since charge has negative energy, the larger the charge, the larger the negative voltage. In FIG. 29, the downward direction corresponds to a more negative voltage and a larger energy.
 In the auto-zero (AZ) state, the voltages of VSL (after passing through AZ) L290 and DAC (after passing through AZ) are aligned with the AZ reference voltage, which is used as the reference point. The AZ circuit thereafter maintains this offset value and continues to output, to VSL (after passing through AZ) L290 and DAC (after passing through AZ), outputs to which the offset value has been added. In the AD conversion (2) state, the voltage of the charge accumulated in the FD appears on the VSL line. Since the charge accumulated in the first frame includes an amount corresponding to the projected light, VSL (after passing through AZ) L290 swings greatly downward. When VSL (after passing through AZ) L290 has stabilized, the first AD conversion is performed and counting is carried out with the ADC counter, which has been cleared to 0 in advance.
 This shows single-slope AD conversion in which the output of the DAC is changed linearly from a higher voltage to a lower voltage. However, since difference processing is performed by the counter in the present embodiment, the counter is decremented here. When the voltages of VSL (after passing through AZ) and DAC (after passing through AZ) cross, the comparator inverts, and the ADC counter stops and holds its value.
 In the FD reset and FD transfer states, the VSL line L290 is at high impedance (Hi-Z), so it remains substantially unchanged. In the subsequent AD conversion state, the value of the second frame is read. In the second frame, the pixel has no sensitivity to the second color pattern, and mainly the exposure component of reflected ambient light is converted into accumulated charge. For this reason, the voltage corresponding to the accumulated charge amount is smaller than the value of the first frame, and the line swings only slightly downward with respect to the AZ reference voltage. As shown in FIG. 29, since the first frame has a large value, the result is a swing upward relative to it. After VSL (after passing through AZ) L290 has stabilized, the second AD conversion is executed. This time, the ADC counter increments from the value of the first AD conversion. When the voltage of the VSL line and the DAC cross, the comparator inverts and the counter stops counting. Since the counting in the first AD conversion was in the negative direction, the value held in the counter is the difference result of the two frames and is supplied to the signal processing unit 210; because the decremented count is larger, the result is a negative value.
 FIG. 30 is a diagram showing a case in which a specific pixel has sensitivity to the second color pattern. The AD conversion method is the same as the procedure shown in FIG. 29. Since the first frame is dark and the second frame is bright, the value of VSL (after passing through AZ) L300 in the AD conversion (2) state is small and swings only slightly downward from the AZ reference voltage, whereas it swings greatly downward in AD conversion (3); because the incremented count is larger, the result is a positive value. This yields results equivalent to those of FIGS. 17 and 18, and the subsequent processing can be performed in the same way as in FIGS. 17 and 18. In the present embodiment, the implementation uses increment and decrement of the ADC counter; however, in order to reduce circuitry, the difference processing may also be executed by incrementing in the AD conversion (2) state as well and inverting the counter before the AD conversion (3) state. Alternatively, the AD conversion of the first frame may be performed by incrementing during AD conversion (2), and that value may be sent to the signal processing unit. Thereafter, before AD conversion (3), the ADC may be cleared to 0, the AD conversion of the second frame may be performed, and that value may be sent to the signal processing unit, which may then execute the difference between the two in a digital circuit.
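 A small numeric sketch of the two difference schemes described above is given below; the single-slope ramp range, the step size, and the example VSL values are assumptions chosen only to reproduce the sign behavior of FIGS. 29 and 30, not a model of the actual converter.

```python
def single_slope_counts(vsl_after_az, steps=1024):
    """Toy single-slope conversion: count DAC steps (1 unit each) on a falling ramp
    until it crosses the negative VSL voltage; range and scaling are assumptions."""
    dac = 0.0
    for count in range(steps + 1):
        if dac <= vsl_after_az:          # comparator inverts at the crossing
            return count
        dac -= 1.0
    return steps

def counter_difference(vsl_frame1, vsl_frame2):
    """Counter-based difference: decrement during AD conversion (2), increment
    during AD conversion (3); the counter ends up holding frame2 - frame1."""
    counter = 0
    counter -= single_slope_counts(vsl_frame1)
    counter += single_slope_counts(vsl_frame2)
    return counter

def digital_difference(vsl_frame1, vsl_frame2):
    """Alternative described in the text: convert each frame separately (clearing
    the counter in between) and subtract the two codes in the signal processing unit."""
    return single_slope_counts(vsl_frame2) - single_slope_counts(vsl_frame1)

# Pixel sensitive to the first color pattern (FIG. 29): negative result.
print(counter_difference(-800.0, -50.0))    # -750
# Pixel sensitive to the second color pattern (FIG. 30): positive result.
print(counter_difference(-50.0, -800.0))    #  750
# The digital difference yields the same value as the counter-based difference.
print(digital_difference(-800.0, -50.0))    # -750
```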
 As described above, according to the present embodiment, the projection unit 100 sequentially projects the first color pattern and the second color pattern onto the measurement target T100, and the information processing unit 300 measures the distance to the measurement target T100 based on the first color pattern and the second color pattern. In this case, the light projected onto the same projection area by the first color pattern and the light projected onto that area by the second color pattern are light of different wavelength bands. This makes it possible to configure the color patterns such that, even if light of one wavelength band is absorbed by the measurement target T100, light of the other wavelength band is reflected.
 In addition, the light projected onto the same projection area by the first color pattern and the second color pattern is composed of a combination of light of three wavelength bands; for the same projection area, light of at least one of the three wavelength bands is projected in the first color pattern, and light of the remaining wavelength bands among the three wavelength bands is projected in the second color pattern. As a result, even if light of any one of the three wavelength bands is absorbed by the measurement target T100, light of the other wavelength bands is reflected, so the distance to the measurement target T100 can be measured. In addition, the reflection intensity of each of the three wavelength bands of light can be generated as a pixel signal, and information on the colors of the measurement target T100 can be generated.
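 As a minimal sketch of how the two frames can be combined into per-band color information, the fragment below takes, for each of R, G, and B, the signal from whichever frame emitted that band; the signal values and the helper name are assumptions for illustration.

```python
def band_reflectance(pattern1_bands, frame1_signal, frame2_signal):
    """For each of R, G, B, take the signal from the frame whose pattern emitted that
    band; pattern 2 is assumed to emit exactly the bands not emitted by pattern 1."""
    return {
        band: (frame1_signal[band] if band in pattern1_bands else frame2_signal[band])
        for band in "RGB"
    }

# Example: this projection area emits R and G in pattern 1, so pattern 2 emits B.
frame1 = {"R": 0.72, "G": 0.35, "B": 0.03}   # assumed measured signals, frame 1
frame2 = {"R": 0.04, "G": 0.02, "B": 0.58}   # assumed measured signals, frame 2
print(band_reflectance("RG", frame1, frame2))  # {'R': 0.72, 'G': 0.35, 'B': 0.58}
```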
 Note that the present technology can also be configured as follows.
(1) A distance measuring device comprising:
 a projection unit that sequentially projects a predetermined first color pattern and a predetermined second color pattern onto a measurement target; and
 an information processing unit that measures a distance to the measurement target based on the first color pattern and the second color pattern,
 wherein light projected onto a same projection area by the first color pattern and light projected onto the same projection area by the second color pattern are light of different wavelength bands.
(2) The distance measuring device according to (1), wherein the light projected onto the same projection area by the first color pattern and the second color pattern is composed of a combination of light of three wavelength bands, and, for the same projection area, light of at least one of the three wavelength bands is projected in the first color pattern and light of the remaining wavelength bands among the three wavelength bands is projected in the second color pattern.
(3) The distance measuring device according to (2), wherein each of the light of the three wavelength bands is one of light of a red band, a green band, and a blue band.
(4) The distance measuring device according to (3), further comprising an imaging unit that sequentially captures images of the measurement target onto which the first color pattern and the second color pattern are projected.
(5) The distance measuring device according to (4), wherein the imaging unit includes a timing generation unit that generates a control signal for controlling projection timing of the first color pattern and the second color pattern in the projection unit and imaging timing of the imaging unit.
(6) The distance measuring device according to (5), wherein the imaging unit includes a pixel array in which imaging pixels are arranged two-dimensionally, each imaging pixel being composed of a red pixel that has sensitivity to light of the red band and whose sensitivity to the green band and the blue band is suppressed relative to light of the red band, a green pixel that has sensitivity to light of the green band and whose sensitivity to the red band and the blue band is suppressed relative to light of the green band, and a blue pixel that has sensitivity to light of the blue band and whose sensitivity to the red band and the green band is suppressed relative to light of the blue band.
(7) The distance measuring device according to (6), wherein each of the red pixel, the green pixel, and the blue pixel in the imaging pixel generates a larger pixel signal for one of the first color pattern and the second color pattern than for the other.
(8) The distance measuring device according to (6), wherein, for each of the red pixel, the green pixel, and the blue pixel in the imaging pixel, the imaging unit converts a voltage corresponding to a difference between a first accumulated charge amount accumulated according to the first color pattern and an accumulated charge amount accumulated according to the second color pattern into a digital pixel signal.
(9) The distance measuring device according to (6), wherein, based on the control signal generated by the timing generation unit, each of the red pixel, the green pixel, and the blue pixel in the imaging pixel is controlled such that an imaging time for the first color pattern and an imaging time for the second color pattern are equal.
(10) The distance measuring device according to any one of (6) to (9), wherein the information processing unit includes an image generation unit that generates a color image of the measurement target based on an output signal of the imaging pixel for the first color pattern and an output signal of the imaging pixel for the second color pattern.
(11) The distance measuring device according to (8), wherein the information processing unit includes a measurement unit that sequentially detects patterns from digital images obtained by imaging the first color pattern and the second color pattern based on the digital pixel signals, associates the detected patterns with at least one of the first color pattern and the second color pattern projected by the projection unit, and generates a distance to each portion of the measurement target.
(12) The distance measuring device according to any one of (2) to (4), wherein the first color pattern and the second color pattern are each composed of a plurality of partitioned areas, and the color of light of each partitioned area is either one of light of the red band, the green band, and the blue band, or a combination of two of the light of the red band, the green band, and the blue band.
(13) The distance measuring device according to (12), wherein the plurality of partitioned areas are grouped, and the colors of light of the partitioned areas within a group are all different.
(14) The distance measuring device according to (12), wherein, in the plurality of partitioned areas, the combinations of two adjacent colors are all different.
(15) The distance measuring device according to (14), wherein the colors of the plurality of partitioned areas form a cyclic pattern based on a de Bruijn sequence.
(16) The distance measuring device according to any one of (12) to (15), wherein the plurality of partitioned areas are a tile pattern arranged two-dimensionally in a matrix.
(17) The distance measuring device according to (6), wherein, for each of the red pixel, the green pixel, and the blue pixel in the imaging pixel, the imaging unit converts a voltage corresponding to a first accumulated charge amount accumulated according to the first color pattern into a first digital signal, converts a voltage corresponding to an accumulated charge amount accumulated according to the second color pattern into a second digital signal, and uses a difference between them as a digital pixel signal.
(18) The distance measuring device according to any one of (1) to (17), wherein the information processing unit includes a pattern generation unit that generates the first color pattern and the second color pattern, and supplies a signal having information on the first color pattern and the second color pattern to the projection unit.
(19) The distance measuring device according to (1), wherein the imaging unit is composed of pixels each having sensitivity only to a respective one of three wavelength bands obtained by dividing a wavelength band of an infrared region into the three wavelength bands, and the first color pattern and the second color pattern are each composed of the three wavelength bands of the infrared region.
(20) The distance measuring device according to (1), wherein the imaging unit is composed of pixels each having sensitivity only to a respective one of at least two wavelength bands into which a wavelength band is divided.
(21) A distance measuring method comprising:
 a projection step of sequentially projecting a predetermined first color pattern and a predetermined second color pattern onto a measurement target; and
 an information processing step of measuring a distance to the measurement target based on the first color pattern and the second color pattern,
 wherein light projected onto a same projection area by the first color pattern and light projected onto the same projection area by the second color pattern are light of different wavelength bands.
 Aspects of the present disclosure are not limited to the individual embodiments described above, but include various modifications that those skilled in the art could conceive of, and the effects of the present disclosure are not limited to the contents described above. That is, various additions, changes, and partial deletions are possible without departing from the conceptual idea and spirit of the present disclosure derived from the contents defined in the claims and their equivalents.
 1: distance measuring device, 100: projection unit, 200: imaging unit, 206: pixel array, 214: timing control unit, 300: information processing unit, 304: pattern generation unit, 316: three-dimensional measurement calculation unit, 314: image generation unit, P206: imaging pixel.

Claims (21)

  1. A distance measuring device comprising:
     a projection unit that sequentially projects a predetermined first color pattern and a predetermined second color pattern onto a measurement target; and
     an information processing unit that measures a distance to the measurement target based on the first color pattern and the second color pattern,
     wherein light projected onto a same projection area by the first color pattern and light projected onto the same projection area by the second color pattern are light of different wavelength bands.
  2. The distance measuring device according to claim 1, wherein the light projected onto the same projection area by the first color pattern and the second color pattern is composed of a combination of light of three wavelength bands, and, for the same projection area, light of at least one of the three wavelength bands is projected in the first color pattern and light of the remaining wavelength bands among the three wavelength bands is projected in the second color pattern.
  3. The distance measuring device according to claim 1, wherein each of the light of the three wavelength bands is one of light of a red band, a green band, and a blue band.
  4. The distance measuring device according to claim 1, further comprising an imaging unit that sequentially captures images of the measurement target onto which the first color pattern and the second color pattern are projected.
  5. The distance measuring device according to claim 4, wherein the imaging unit includes a timing control unit that controls projection timing of the first color pattern and the second color pattern in the projection unit and imaging timing of the imaging unit.
  6. The distance measuring device according to claim 5, wherein the imaging unit includes a pixel array in which imaging pixels are arranged two-dimensionally, each imaging pixel being composed of a red pixel that has sensitivity to light of a red band and whose sensitivity to a green band and a blue band is suppressed relative to light of the red band, a green pixel that has sensitivity to light of the green band and whose sensitivity to the red band and the blue band is suppressed relative to light of the green band, and a blue pixel that has sensitivity to light of the blue band and whose sensitivity to the red band and the green band is suppressed relative to light of the blue band.
  7. The distance measuring device according to claim 6, wherein each of the red pixel, the green pixel, and the blue pixel in the imaging pixel generates a larger pixel signal for one of the first color pattern and the second color pattern than for the other.
  8. The distance measuring device according to claim 7, wherein, for each of the red pixel, the green pixel, and the blue pixel in the imaging pixel, the imaging unit performs conversion into a digital pixel signal corresponding to a difference between a voltage corresponding to a first accumulated charge amount accumulated according to the first color pattern and a voltage corresponding to a second accumulated charge amount accumulated according to the second color pattern.
  9. The distance measuring device according to claim 6, wherein, based on a control signal generated by the timing control unit, each of the red pixel, the green pixel, and the blue pixel in the imaging pixel is controlled such that an imaging time for the first color pattern and an imaging time for the second color pattern are equal.
  10. The distance measuring device according to claim 6, wherein the information processing unit includes an image generation unit that generates a color image of the measurement target based on an output signal of the imaging pixel for the first color pattern and an output signal of the imaging pixel for the second color pattern.
  11. The distance measuring device according to claim 8, wherein the information processing unit includes a measurement unit that sequentially detects patterns from digital images obtained by imaging the first color pattern and the second color pattern based on the digital pixel signals, associates the detected patterns with at least one of the first color pattern and the second color pattern projected by the projection unit, and generates a distance to each portion of the measurement target.
  12. The distance measuring device according to claim 3, wherein the first color pattern and the second color pattern are each composed of a plurality of partitioned areas, and the color of light of each partitioned area is either one of light of the red band, the green band, and the blue band, or a combination of two of the light of the red band, the green band, and the blue band.
  13. The distance measuring device according to claim 12, wherein the plurality of partitioned areas are grouped, and the colors of light of the partitioned areas within a group are all different.
  14. The distance measuring device according to claim 12, wherein, in the plurality of partitioned areas, the combinations of two adjacent colors are all different.
  15. The distance measuring device according to claim 14, wherein the colors of the plurality of partitioned areas form a cyclic pattern based on a de Bruijn sequence.
  16. The distance measuring device according to claim 12, wherein the plurality of partitioned areas are a tile pattern arranged two-dimensionally in a matrix.
  17. The distance measuring device according to claim 6, wherein, for each of the red pixel, the green pixel, and the blue pixel in the imaging pixel, the imaging unit converts a voltage corresponding to a first accumulated charge amount accumulated according to the first color pattern into a first digital signal, converts a voltage corresponding to a second accumulated charge amount accumulated according to the second color pattern into a second digital signal, and uses a difference between them as a digital pixel signal.
  18. The distance measuring device according to claim 1, wherein the information processing unit includes a pattern generation unit that generates the first color pattern and the second color pattern, and supplies a signal having information on the first color pattern and the second color pattern to the projection unit.
  19. The distance measuring device according to claim 1, wherein the imaging unit is composed of pixels each having sensitivity only to a respective one of three wavelength bands obtained by dividing a wavelength band of an infrared region into the three wavelength bands, and the first color pattern and the second color pattern are each composed of the three wavelength bands of the infrared region.
  20. The distance measuring device according to claim 1, wherein the imaging unit is composed of pixels each having sensitivity only to a respective one of at least two wavelength bands into which a wavelength band is divided.
  21. A distance measuring method comprising:
     a projection step of sequentially projecting a predetermined first color pattern and a predetermined second color pattern onto a measurement target; and
     an information processing step of measuring a distance to the measurement target based on the first color pattern and the second color pattern,
     wherein light projected onto a same projection area by the first color pattern and light projected onto the same projection area by the second color pattern are light of different wavelength bands.
PCT/JP2022/011891 2021-06-15 2022-03-16 Distance measurement device and distance measurement method WO2022264576A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-099680 2021-06-15
JP2021099680A JP2022191058A (en) 2021-06-15 2021-06-15 Distance measurement device and distance measurement method

Publications (1)

Publication Number Publication Date
WO2022264576A1 true WO2022264576A1 (en) 2022-12-22

Family

ID=84527017

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/011891 WO2022264576A1 (en) 2021-06-15 2022-03-16 Distance measurement device and distance measurement method

Country Status (2)

Country Link
JP (1) JP2022191058A (en)
WO (1) WO2022264576A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001330417A (en) * 2000-05-19 2001-11-30 Tatsuo Sato Three-dimensional shape measuring method and apparatus using color pattern light projection
JP2004257769A (en) * 2003-02-24 2004-09-16 Nec San-Ei Instruments Ltd Multi-color infrared imaging apparatus and infrared energy data processing method
JP2013057761A (en) * 2011-09-07 2013-03-28 Olympus Corp Distance measuring device, imaging device, and distance measuring method


Also Published As

Publication number Publication date
JP2022191058A (en) 2022-12-27

Similar Documents

Publication Publication Date Title
KR102525828B1 (en) digital pixel image sensor
US9215449B2 (en) Imaging and processing using dual clocks
KR102481774B1 (en) Image apparatus and operation method thereof
US20140028804A1 (en) 3d imaging apparatus
EP4053500B1 (en) Object recognition system, signal processing method of object recognition system, and electronic device
CN108351523A (en) Stereocamera and the depth map of structure light are used using head-mounted display
JP2021032763A (en) Distance measuring system and electronic apparatus
TW201625904A (en) Optical distance measurement system with dynamic exposure time
JP6839089B2 (en) Endoscope device, how to operate the endoscope device, and recording medium
JP2014115107A (en) Device and method for measuring distance
JP2007281556A (en) Imaging element, imaging apparatus, and imaging system
US10628951B2 (en) Distance measurement system applicable to different reflecting surfaces and computer system
TWI737582B (en) Camera and inspection device
WO2022264576A1 (en) Distance measurement device and distance measurement method
JP2021004760A (en) Ranging device having external light illuminance measurement function and external light illuminance measurement method
US20160317098A1 (en) Imaging apparatus, image processing apparatus, and image processing method
JP6716295B2 (en) Processing device, imaging device, processing method, program, and recording medium
JP6927294B2 (en) Measuring device, measuring method and measuring program
JP6088013B2 (en) Camera system and method for inspecting and / or measuring objects
US9906705B2 (en) Image pickup apparatus
JP7341145B2 (en) Device that images the skin
JP2018500576A (en) Optical measurement configuration
JP2021127998A (en) Distance information acquisition device and distance information acquisition method
WO2024116745A1 (en) Image generation device, image generation method, and image generation program
Fujimoto et al. Structured light of flickering patterns having different frequencies for a projector-event-camera system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22824573

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE