WO2023171203A1 - Imaging device - Google Patents

Imaging device

Info

Publication number
WO2023171203A1
Authority
WO
WIPO (PCT)
Prior art keywords
distance
imaging
light
subject
imaging device
Prior art date
Application number
PCT/JP2023/003972
Other languages
French (fr)
Japanese (ja)
Inventor
Kunihiro Imamura (今村 邦博)
Original Assignee
Panasonic IP Management Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic IP Management Co., Ltd.
Publication of WO2023171203A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00 Measuring distances in line of sight; Optical rangefinders
    • G01C3/02 Details
    • G01C3/06 Use of electric means to obtain final indication

Definitions

  • the present invention relates to an imaging device that can measure the distance to a subject.
  • Conventionally, imaging devices are known that measure the distance to a subject by irradiating the subject with light.
  • This type of imaging device uses, for example, a method that detects parallax with a stereo camera to measure the distance to the subject, a method that detects the time difference between projecting light and receiving the reflected light (TOF: Time Of Flight), or a method that irradiates the subject with light having a unique pattern (intensity distribution) and measures the distance by detecting the pixel shift amount of the pattern relative to a reference image as parallax.
  • Patent Document 1 describes an imaging device that irradiates a subject with light having a unique pattern (intensity distribution).
  • In each of these types of imaging devices, the subject to be imaged may be composed of a transparent member such as glass and a member located behind it.
  • In this case, since light is reflected by both the transparent member in front and the member behind it, it is difficult to properly measure the distance to the transparent member.
  • In view of this problem, it is an object of the present invention to provide an imaging device that, when the subject is composed of a transparent member and a member located behind it, can appropriately measure the distance to the transparent member and can also measure the distance to the member behind it.
  • An imaging device includes a projection unit that projects light with a substantially uniform polarization direction, an imaging unit that images a subject onto which the light is projected, and a signal processing unit that processes a captured image acquired by the imaging unit to measure the distance to the subject.
  • The imaging unit includes an imaging lens, a polarizing filter that extracts light having the same polarization direction as the light emitted from the projection unit, and an imaging element that receives light from the subject via the imaging lens and the polarizing filter.
  • The signal processing unit measures the distance for each region on the captured image without limiting the number of measured distances to one.
  • Because the polarizing filter that extracts light having the same polarization direction as the light projected from the projection unit is disposed in the imaging unit, light from the projection unit that is specularly reflected at the subject surface is more easily received by the imaging element. Therefore, when the subject is composed of a transparent member and a member located behind it, the distance to the transparent member can be appropriately detected.
  • Since the signal processing unit measures distance without limiting the number of measured distances to one, a distance based on the light reflected by the rear member can also be obtained in addition to the distance to the transparent member. Therefore, when the subject is composed of a transparent member and a member behind it, the distance to the rear member can also be measured.
  • According to the present invention, when a subject is composed of a transparent member and a member located behind it, the distance to the transparent member can be measured appropriately, and the distance to the member behind the transparent member can also be measured.
  • FIG. 1 is a diagram showing the configuration of an imaging device according to an embodiment.
  • FIGS. 2A and 2B are diagrams each schematically showing a method of setting pixel blocks for a captured image according to the embodiment.
  • FIGS. 3A to 3D are diagrams each schematically showing a process of searching a reference image for a matching pixel block that matches a target pixel block on a captured image, according to an embodiment.
  • FIG. 4 is a diagram schematically illustrating an imaging state of pattern light in a case where the subject is composed of a transparent member and a member located behind the transparent member, according to the embodiment.
  • FIG. 5 is a graph illustrating the relationship between the search position and the correlation value when pattern light from two members is incident on one pixel block, according to the embodiment.
  • FIG. 6 is a flowchart illustrating processing for obtaining distance to a subject according to the embodiment.
  • FIG. 7 is a flowchart illustrating a distance acquisition process to a subject according to modification example 1.
  • FIG. 8 is a flowchart illustrating a process for obtaining a distance to a subject according to modification example 2.
  • FIG. 9 is a graph illustrating the relationship between the search position and the correlation value when pattern light from two members is incident on one pixel block according to Modification Example 2.
  • FIG. 10 is a diagram for explaining interpolation processing according to modification example 3.
  • FIG. 11 is a diagram showing the configuration of an imaging device according to modification example 4.
  • FIG. 12 is a flowchart illustrating a process for obtaining a distance to a subject according to modification example 4.
  • In the figures, the X-axis direction is the direction in which the projection unit and the imaging unit are arranged, and the positive Z-axis direction is the imaging direction of the imaging unit.
  • FIG. 1 is a diagram showing the configuration of an imaging device 1.
  • the imaging device 1 includes a projection section 10, an imaging section 20, and an image processing section 30.
  • the projection unit 10 projects pattern light in which light is distributed in a predetermined pattern onto the field of view of the imaging unit 20.
  • the direction in which the pattern light is projected by the projection unit 10 is the positive direction of the Z-axis.
  • the projection unit 10 includes a light source 11 , a collimator lens 12 , a pattern generator 13 , a projection lens 14 , and a polarizing filter 15 .
  • the light source 11 emits light of a predetermined wavelength.
  • the light source 11 is, for example, an LED (Light Emitting Diode).
  • the light source 11 emits light in an infrared wavelength band, for example.
  • the light source 11 may be another type of light source such as a semiconductor laser.
  • the collimator lens 12 converts the light emitted from the light source 11 into substantially parallel light.
  • the pattern generator 13 generates pattern light having a unique pattern (intensity distribution) using the light emitted from the light source 11.
  • the pattern generator 13 is a transmissive optical diffraction element (DOE).
  • An optical diffraction element (DOE), for example, has a diffraction pattern with a predetermined number of steps on its entrance surface. Due to the diffraction effect of this pattern, the laser light that has entered the optical diffraction element (pattern generator 13) is split into a plurality of beams and converted into light with a predetermined pattern.
  • the generated pattern is a pattern that can maintain uniqueness for each pixel block 212, which will be described later.
  • the pattern generated by the optical diffraction element is a pattern in which a plurality of dot areas (hereinafter referred to as "dots"), which are light passage areas, are randomly distributed.
  • the pattern generated by the optical diffraction element (DOE) is not limited to a pattern of dots, and may be any other pattern.
  • the pattern generator 13 may be a reflective optical diffraction element or a photomask.
  • the pattern generator 13 may be a device that generates a fixed pattern of pattern light based on a control signal, such as a DMD (Digital Mirror Device) or a liquid crystal display.
  • the projection lens 14 projects the pattern light generated by the pattern generator 13.
  • The projection lens 14 need not be a single lens and may be configured by combining a plurality of lenses. Further, a concave reflecting mirror may be used instead of the projection lens 14.
  • the optical axis of the projection lens 14 is parallel to the Z axis.
  • the polarizing filter 15 selectively transmits light in a predetermined polarization direction and blocks light in a polarization direction perpendicular to this polarization direction.
  • the polarizing filter 15 selectively transmits light with a polarization direction parallel to the X-axis and blocks light with a polarization direction parallel to the Y-axis direction. Therefore, the light transmitted through the polarizing filter 15 has a substantially uniform polarization direction.
  • The position of the polarizing filter 15 is not limited to the far side of the projection lens 14 (the positive Z side) as viewed from the light source 11; it may be between the light source 11 and the projection lens 14. Furthermore, the polarizing filter 15 need not be an independent member: a polarizing film may be integrally formed on a surface of another member through which the light from the light source 11 passes, such as a surface of the projection lens 14, the collimator lens 12, or the pattern generator 13.
  • If the light source 11 is a laser light source, the polarizing filter 15 may be omitted. In this case, the laser light source may be arranged so that its direction of linearly polarized light is parallel to the X-axis.
  • the imaging unit 20 images the subject onto which the pattern light is projected.
  • the imaging unit 20 includes an imaging element 21, an imaging lens 22, and a polarizing filter 23.
  • the image sensor 21 is a CMOS image sensor.
  • the image sensor 21 may be a CCD.
  • A filter that selectively transmits light in the wavelength band emitted by the light source 11 is formed on the imaging surface of the image sensor 21.
  • a similar filter may be arranged within the imaging section 20 separately from the imaging element 21.
  • the imaging lens 22 focuses light from the viewing range onto the imaging surface of the imaging element 21.
  • the optical axis of the imaging lens 22 is parallel to the Z-axis. That is, the optical axis of the imaging lens 22 and the optical axis of the projection lens 14 are parallel to each other.
  • the optical axis of the imaging lens 22 may be inclined with respect to the optical axis of the projection lens 14 so that the optical axis of the imaging lens 22 approaches the optical axis of the projection lens 14 in the positive Z-axis direction.
  • the polarizing filter 23 selectively transmits light in the same polarization direction as the light (pattern light) emitted from the projection unit 10, and blocks light in a polarization direction perpendicular to this polarization direction.
  • the polarizing filter 23 selectively transmits light in a polarization direction parallel to the X-axis and blocks light in a polarization direction parallel to the Y-axis direction.
  • The position of the polarizing filter 23 is not limited to the far side of the imaging lens 22 (the positive Z side) as viewed from the imaging element 21; it may be between the imaging element 21 and the imaging lens 22. Further, the polarizing filter 23 need not be an independent member: a polarizing film may be integrally formed on the surface of the imaging lens 22 through which the light from the light source 11 passes.
  • the image processing section 30 includes a signal processing section 31, a light source driving section 32, an imaging processing section 33, and a communication interface 34.
  • the signal processing section 31 includes an arithmetic processing circuit such as a microcomputer and a memory, and controls each section according to a predetermined program.
  • the signal processing unit 31 also processes pixel signals input from the imaging processing unit 33 to calculate the distance to the subject.
  • the signal processing unit 31 holds a reference image in which dots are distributed in a pattern similar to the pattern light projected from the projection unit 10.
  • the reference image corresponds to a captured image when the subject is located at a predetermined distance.
  • The signal processing unit 31 compares this reference image with the captured image captured via the imaging processing unit 33, searches for stereo corresponding points, and calculates the distance to the subject in the direction corresponding to each pixel block on the captured image.
  • Specifically, the signal processing unit 31 sets, on the captured image, a pixel block for which the distance is to be obtained (hereinafter, "target pixel block"), and searches a search range defined on the reference image for the pixel block that best matches this target pixel block (hereinafter, "matching pixel block"). The signal processing unit 31 then obtains the pixel shift amount between the pixel block located at the same position as the target pixel block on the reference image and the matching pixel block extracted by the search, and calculates the distance to the subject at the position of the target pixel block from the obtained pixel shift amount by calculation based on triangulation.
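  • For orientation, the triangulation step can be written compactly. The relations below are a standard active-stereo sketch, not formulas stated in the patent; the symbols (baseline B between the projection and imaging optical axes, focal length f of the imaging lens, pixel pitch p, reference distance Z_ref) are illustrative assumptions:

```latex
% Hedged sketch: B, f, p, Z_ref are assumed symbols, not from the patent.
% If the reference image corresponded to a subject at infinity:
Z = \frac{f\,B}{p\,d}
% With a reference image taken at a finite predetermined distance Z_ref
% (as in this patent), a pixel shift d measured against it satisfies:
\frac{1}{Z} = \frac{1}{Z_{\mathrm{ref}}} \pm \frac{p\,d}{f\,B}
% where the sign depends on the direction in which the shift is counted.
```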
  • the signal processing unit 31 performs such distance calculation for all pixels (pixel signals) of the captured image from the image sensor 21.
  • the signal processing unit 31 transmits one screen worth of distance (distance image) acquired through this processing to an external device via the communication interface 34.
  • The signal processing unit 31 may perform the distance calculation processing using a semiconductor integrated circuit such as an FPGA (Field Programmable Gate Array). Alternatively, this processing may be performed by other semiconductor integrated circuits such as a DSP (Digital Signal Processor), a GPU (Graphics Processing Unit), or an ASIC (Application Specific Integrated Circuit).
  • the light source driving section 32 drives the light source 11 under control from the signal processing section 31.
  • The imaging processing unit 33 controls the image sensor 21 and performs processing such as brightness correction and camera calibration on the pixel signals of the captured image output from the image sensor 21.
  • FIGS. 2(a) and 2(b) are diagrams schematically showing a method of setting the pixel block 212 for the image sensor 21.
  • FIG. 2(a) shows a method of setting the pixel blocks 212 for the entire imaging surface
  • FIG. 2(b) shows an enlarged view of a part of the imaging surface.
  • the imaging surface of the image sensor 21 is divided into a plurality of pixel blocks 212 each including a predetermined number of pixel regions 211.
  • the pixel area 211 is an area corresponding to one pixel on the image sensor 21.
  • one pixel block 212 is composed of nine pixel regions 211 arranged in three rows and three columns.
  • the number of pixel regions 211 included in one pixel block 212 is not limited to this.
  • Each pixel block 212 includes a portion of the projected dot group. Because the dots are distributed in a random pattern, the distribution of dots included in each pixel block 212 is unique. This allows the distance to the subject in each pixel block 212 to be appropriately calculated by the stereo corresponding point search.
  • FIGS. 3(a) to 3(d) schematically illustrate the process of searching a reference image for a matching pixel block that matches the target pixel block TB1 (the pixel block 212 for which the distance is to be obtained) on the captured image.
  • the signal processing unit 31 sets a search range R0 on the reference image.
  • a starting position ST1 of the search range R0 is set to a position corresponding to the target pixel block TB1 on the reference image.
  • the search range R0 extends from the start position ST1 in the direction of separation between the projection unit 10 and the imaging unit 20 by a predetermined number of pixels (the number of pixels corresponding to the distance detection range).
  • the vertical width of the search range R0 is the same as the vertical width of the target pixel block TB1.
  • the signal processing unit 31 sets the start position ST1 as the search position.
  • the signal processing unit 31 sets a reference pixel block RB1 having the same size as the target pixel block TB1 at this search position, and calculates a correlation value between the target pixel block TB1 and the reference pixel block RB1.
  • The correlation value is, for example, a value (SAD) obtained by calculating the differences in pixel values (luminance) between mutually corresponding pixel regions of the target pixel block TB1 and the reference pixel block RB1 and summing the absolute values of the calculated differences.
  • Alternatively, the correlation value may be obtained as a value (SSD) obtained by summing the squared values of the differences.
  • The method of calculating the correlation value is not limited to these; other calculation methods may be used as long as they yield an index of the correlation between the target pixel block TB1 and the reference pixel block RB1.
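  • As a concrete illustration of these correlation values, the sketch below computes SAD and SSD for two pixel blocks. This is a minimal sketch under the assumption that blocks are given as 8-bit grayscale NumPy arrays; the function names are ours, not the patent's:

```python
import numpy as np

def sad(target_block: np.ndarray, ref_block: np.ndarray) -> int:
    """Sum of absolute differences; a smaller value means a better match."""
    diff = target_block.astype(np.int32) - ref_block.astype(np.int32)
    return int(np.abs(diff).sum())

def ssd(target_block: np.ndarray, ref_block: np.ndarray) -> int:
    """Sum of squared differences; likewise, smaller is better."""
    diff = target_block.astype(np.int32) - ref_block.astype(np.int32)
    return int((diff * diff).sum())
```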
  • the signal processing unit 31 sets the next search position, as shown in FIG. 3(b). Specifically, the signal processing unit 31 sets a position obtained by shifting the previous search position by one pixel toward the end of the search range R0 as the current search position. Then, the signal processing unit 31 calculates the correlation value between the reference pixel block RB1 and the target pixel block TB1 at the current search position by processing similar to the above.
  • the signal processing unit 31 repeatedly executes similar processing while shifting the search position by one pixel in the terminal direction.
  • FIG. 3(c) shows a state in which the reference pixel block RB1 is set at the search position one position before the last in the search range R0, and FIG. 3(d) shows a state in which the reference pixel block RB1 is set at the last search position in the search range R0.
  • Thereafter, the signal processing unit 31 identifies, among the sequentially set search positions, the search position at which the correlation value peaks, and calculates the distance to the subject by triangulation based on the pixel shift amount between the identified search position and the start position ST1.
  • The signal processing unit 31 repeats the same process for all target pixel blocks TB1 on the captured image. After calculating the distance to the subject for all target pixel blocks TB1 in this way, the signal processing unit 31 transmits a distance image, in which the calculated distances are mapped to the target pixel blocks TB1, to an external device via the communication interface 34.
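  • Putting the steps of FIGS. 3(a) to 3(d) together, the per-block search could be sketched as follows. The block size, array layout, and camera constants are assumptions for illustration, and the distance helper uses the reference-at-infinity simplification noted earlier, whereas the patent's reference corresponds to a finite predetermined distance:

```python
import numpy as np

def search_costs(captured: np.ndarray, reference: np.ndarray,
                 row: int, col: int, block: int = 3, max_shift: int = 64) -> list:
    """Slide a reference pixel block one pixel at a time from the start
    position ST1 and record the SAD cost at each search position."""
    target = captured[row:row + block, col:col + block].astype(np.int32)
    costs = []
    for shift in range(max_shift):
        candidate = reference[row:row + block,
                              col + shift:col + shift + block].astype(np.int32)
        if candidate.shape != target.shape:   # ran past the edge of the image
            break
        costs.append(int(np.abs(target - candidate).sum()))  # SAD
    return costs                              # list index = pixel shift from ST1

def distance_from_shift(shift: int, f: float, baseline: float, pitch: float) -> float:
    """Triangulation under the reference-at-infinity simplification above."""
    return f * baseline / (shift * pitch) if shift > 0 else float("inf")
```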
  • FIG. 4 is a diagram schematically showing the imaging state of the pattern light L0 when the subject 40 is composed of a transparent member 41, such as glass, and a member 42 located behind it.
  • In this case, the pattern light L0 is reflected both at the surface of the member 41 and at the member 42 behind it. That is, part of the pattern light L0 is specularly reflected at the surface of the member 41, and the rest passes through the member 41.
  • The pattern light L0 that has passed through the member 41 is diffusely reflected at the surface of the rear member 42 and travels back toward the front member 41. Part of this pattern light then passes through the member 41.
  • The imaging lens 22 of the imaging unit 20 captures both the pattern light specularly reflected at the surface of the front member 41 and the pattern light diffusely reflected at the surface of the rear member 42, so these two types of pattern light are guided to the image sensor 21. The two types of pattern light may then be incident on the same pixel block 212 of the image sensor 21.
  • this pixel block 212 includes dots DT included in the light portion L1 and dots DT included in the light portion L2.
  • FIG. 5 is a graph illustrating the relationship between the search position and the correlation value when pattern light from two members 41 and 42 is incident on one pixel block 212.
  • SAD or SSD is assumed as the correlation value on the vertical axis.
  • The search position P1 in FIG. 5 is the search position at which the dot pattern of the light portion L1 specularly reflected at the surface of the front member 41 matches the reference image, and the search position P2 is the search position at which the dot pattern of the light portion L2 diffusely reflected at the surface of the rear member 42 matches the reference image.
  • The threshold Th0 is used to prevent a search position at which the correlation value peaks merely due to the influence of noise or the like from being detected as the search position corresponding to the distance to the subject.
  • The threshold Th0 may be a fixed value, or it may be set to a value obtained by multiplying the maximum, average, median, or mode of the correlation values obtained at all search positions in the search range R0 by a predetermined ratio, or by adding or subtracting a predetermined value to or from such a statistic.
  • Although FIG. 5 shows an example in which the two search positions P1 and P2 are obtained for the members 41 and 42, the dots DT from the members 41 and 42 may overlap each other, reducing the uniqueness of the dot pattern within the pixel block 212.
  • In that case, no correlation value peak below the threshold Th0 may be obtained for the pixel block 212.
  • If so, the distance for the pixel block 212 may be interpolated using the surrounding pixel blocks 212. As a result, a distance image in which distances are mapped to all pixel blocks 212 can be obtained.
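  • Since SAD or SSD is assumed, a "peak of the correlation value" is a local minimum of the cost curve. A hedged sketch of extracting all candidate corresponding points below the threshold Th0 follows; the ratio of 0.8 applied to the average cost is an arbitrary placeholder, one of the threshold-setting options the text allows:

```python
def candidate_points(costs: list, th0: float = None, ratio: float = 0.8) -> list:
    """Return every search position whose cost is a local minimum below Th0.
    If th0 is not given, derive it from the cost statistics, as the text
    suggests (here: a predetermined ratio times the average cost)."""
    if th0 is None:
        th0 = ratio * (sum(costs) / len(costs))
    peaks = []
    for i in range(1, len(costs) - 1):
        if costs[i] < th0 and costs[i] <= costs[i - 1] and costs[i] <= costs[i + 1]:
            peaks.append(i)   # pixel shift of one candidate corresponding point
    return peaks
```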
  • FIG. 6 is a flowchart showing the process of obtaining the distance to the subject.
  • First, the signal processing unit 31 acquires a captured image from the image sensor 21 (S101). Next, the signal processing unit 31 executes the corresponding point search process shown in FIGS. 3(a) to 3(d) on a target pixel block TB1 on the captured image (S102). Then, the signal processing unit 31 determines whether a plurality of corresponding points (matching search positions) were obtained in the corresponding point search (S103); that is, whether there are a plurality of correlation value peaks in the range smaller than the threshold Th0 shown in FIG. 5, so that a plurality of search positions matching the target pixel block TB1 were obtained.
  • If a plurality of corresponding points are obtained (S103: YES), the signal processing unit 31 selects the corresponding point with the smallest pixel shift from the reference position (the start position ST1 in FIG. 3(a)) as the corresponding point for distance acquisition (S104). Then, the signal processing unit 31 calculates the distance in the target pixel block TB1 by triangulation from the pixel shift amount of the selected corresponding point with respect to the start position ST1 (S105).
  • If only one corresponding point is obtained (S109: YES), the signal processing unit 31 calculates the distance in the target pixel block TB1 by triangulation from the pixel shift amount of that corresponding point with respect to the start position ST1 (S105).
  • If no corresponding point is obtained (S109: NO), the signal processing unit 31 advances the process to step S106 without calculating the distance for the target pixel block TB1. In this case, the signal processing unit 31 sets a flag indicating that the distance could not be obtained for the target pixel block TB1.
  • Since the corresponding point selected in step S104 is the one with the smallest pixel shift from the start position ST1, in the example of FIG. 5 it is the search position P1, at which the dot pattern of the light portion L1 specularly reflected at the surface of the front member 41 matches the reference image.
  • Because the imaging unit 20 is configured to mainly extract, by the polarizing filter 23, the pattern light specularly reflected by the subject, when the determination in step S109 is YES and only one corresponding point is obtained, that corresponding point is highly likely to be the search position at which the dot pattern of the light portion L1 specularly reflected at the surface of the frontmost member 41 matches the reference image.
  • Thus, in the process of FIG. 6, the distance is calculated when a search position at which the dot pattern of the light portion L1 specularly reflected at the surface of the frontmost member 41 matches the reference image is obtained (S104 or S109: YES; S105), and the process proceeds to step S106 without calculating the distance when such a search position is not obtained (S109: NO).
  • After performing the above processing on the target pixel block TB1, the signal processing unit 31 determines whether the distance calculation processing has been performed for all pixel blocks 212 on the captured image (S106). If the processing has not been completed for all pixel blocks 212 (S106: NO), the signal processing unit 31 sets the next pixel block 212 as the target pixel block TB1 and executes the processing from step S102 onward.
  • When the processing has been completed for all pixel blocks 212 (S106: YES), the signal processing unit 31 executes distance interpolation processing (S107) for the pixel blocks 212 for which no distance was obtained, that is, the pixel blocks 212 flagged because the determination in step S109 was NO. More specifically, the signal processing unit 31 interpolates the distance for such a pixel block 212 using the distances around it. As a result, a distance image in which distances are associated with all pixel blocks 212 is constructed.
  • The signal processing unit 31 transmits the distance image constructed in this way to an external device via the communication interface 34 (S108). The signal processing unit 31 thereby ends the process of FIG. 6.
  • As described above, since the polarizing filter 23, which extracts light having the same polarization direction as the light projected from the projection unit 10, is disposed in the imaging unit 20, light from the projection unit 10 that is specularly reflected at the subject surface is more easily received by the image sensor 21. Therefore, as shown in FIG. 4, when the subject 40 is composed of a transparent member 41 and a member 42 located behind it, the distance to the transparent member 41 can be appropriately detected.
  • When a plurality of distances are obtained for one pixel block 212, the signal processing unit 31 selects the shortest distance as the measurement result (S104, S105). Thereby, as explained with reference to FIG. 5, the distance to the transparent member 41 can be appropriately detected.
  • The projection unit 10 uses the pattern generator 13 to generate pattern light with a predetermined intensity distribution and projects it onto the subject, and the signal processing unit 31 measures the distance to the subject by comparing the captured image captured by the imaging unit 20 with the reference image, which corresponds to the captured image obtained when the subject is located at a predetermined distance. Thereby, even if the reflectance and light absorptance of the subject surface are uniform, the distance to each position on the subject surface can be measured. Moreover, the distance to the transparent member 41 can be appropriately detected by the process shown in FIG. 6.
  • The projection unit 10 includes the polarizing filter 15 for generating light with a substantially uniform polarization direction. Thereby, even if the polarization direction of the light from the light source 11 is random, light with a substantially uniform polarization direction can be projected onto the subject.
  • FIG. 7 is a flowchart illustrating a process for obtaining a distance to a subject according to modification example 1.
  • step S104 of the flowchart of FIG. 6 is replaced with step S111, and step S109 of FIG. 6 is omitted.
  • the processing in other steps in FIG. 7 is similar to the processing in the corresponding steps in FIG.
  • If a plurality of corresponding points are obtained (S103: YES), the signal processing unit 31 selects the corresponding point with the largest pixel shift from the start position ST1 as the corresponding point for distance acquisition (S111). Then, the signal processing unit 31 calculates the distance in the target pixel block TB1 by triangulation from the pixel shift amount of the selected corresponding point with respect to the start position ST1 (S105).
  • Otherwise (S103: NO), the signal processing unit 31 advances the process to step S106 without calculating the distance for the target pixel block TB1. In this case, the signal processing unit 31 sets a flag indicating that the distance could not be obtained for the target pixel block TB1, and the distance value for this target pixel block TB1 is interpolated in step S107 as in the above embodiment.
  • If the determination in step S103 is NO, only one corresponding point may have been obtained for the target pixel block TB1.
  • However, since the imaging unit 20 is configured to mainly extract, by the polarizing filter 23, the pattern light specularly reflected by the subject, when only one corresponding point is obtained in this way, that corresponding point is likely to be the search position at which the dot pattern of the light portion L1 specularly reflected at the surface of the frontmost member 41 matches the reference image. The distance acquired from that corresponding point is therefore likely to be the distance to the front transparent member 41 rather than to the rear member 42, and if it were included in the measurement result, a distance image containing only the distance to the rear member 42 could not be obtained with high accuracy.
  • For this reason, in Modification 1, step S109 of FIG. 6 is omitted, and when a plurality of corresponding points are not extracted for the target pixel block TB1, the target pixel block TB1 is uniformly treated as one for which no distance was obtained. This makes it easier for the distance image to contain only the distance to the rear member 42 and improves the accuracy of the distance image.
  • Conversely, in the process of FIG. 6 of the above embodiment, step S109 may be omitted so that the process proceeds to step S106 when the determination in step S103 is NO. This process likewise makes it possible to accurately acquire a distance image that contains only the distance to the front transparent member 41.
  • FIG. 8 is a flowchart illustrating a process for obtaining a distance to a subject according to modification example 2.
  • steps S103 and S111 in the flowchart of FIG. 7 are replaced with steps S112 and S113.
  • the processing in other steps in FIG. 8 is similar to the processing in the corresponding steps in FIGS. 6 and 7.
  • If a corresponding point is obtained within the set range (S112: YES), the signal processing unit 31 selects that corresponding point as the corresponding point for distance acquisition (S113). Then, the signal processing unit 31 calculates the distance in the target pixel block TB1 by triangulation from the pixel shift amount of the selected corresponding point with respect to the start position ST1 (S105).
  • If no corresponding point is obtained within the set range (S112: NO), the signal processing unit 31 advances the process to step S106 without calculating the distance for the target pixel block TB1. In this case, the signal processing unit 31 sets a flag indicating that the distance could not be obtained for the target pixel block TB1, and the distance value for this target pixel block TB1 is interpolated in step S107 as in the above embodiment.
  • the setting range in step S112 is set based on a command received from an external device via the communication interface 34, for example.
  • For example, the external device is a robot arm that grips an article and places it on a placement object conveyed on a belt conveyor.
  • the imaging device 1 can be installed in the gripping section of the robot arm.
  • the placed object is the subject.
  • This placed object has, for example, a transparent member 41 and a member 42 at the back thereof as shown in FIG.
  • The robot arm roughly calculates the distance between the gripping part and the placement object (corresponding to the distance to the frontmost transparent member 41 of the object) from the current position of the gripping part and the position of the object, and transmits a predetermined distance range including this distance to the signal processing unit 31.
  • the signal processing unit 31 sets the search range corresponding to the received distance range as the set range in step S112.
  • Alternatively, the robot arm may transmit to the signal processing unit 31 the approximate value of the distance between the gripping part and the placement object (corresponding to the distance to the frontmost transparent member 41 of the object), or the positions of the gripping part and the placement object. In this case, based on the received information, the signal processing unit 31 may set, as the set range in step S112, a predetermined search range centered on the search position corresponding to the distance to the frontmost transparent member 41.
  • In the above description, the set range ΔW is set so that the distance to the frontmost transparent member 41 is obtained, but ΔW may instead be set so that the distance to the rear member 42 is obtained. In this case, the corresponding point (search position P2) corresponding to the distance to the rear member is selected as the distance acquisition target. Thereby, the distance to the member 42 located behind the transparent member 41 can be appropriately detected.
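  • The embodiment (S104), Modification 1 (S111), and Modification 2 (S112/S113) differ only in which candidate corresponding point is kept. The sketch below contrasts the three policies; representing the set range ΔW as a (lo, hi) pair of pixel shifts is our assumption, and the mapping of shifts to near and far follows FIG. 5, where the smaller shift P1 belongs to the front member 41:

```python
def select_corresponding_point(shifts: list, mode: str = "nearest",
                               set_range: tuple = None):
    """shifts: candidate pixel shifts from ST1 for one target pixel block.
    Per FIG. 5, the smaller shift (P1) corresponds to the front member 41
    and the larger shift (P2) to the rear member 42.
    Returns the shift used for distance acquisition, or None (interpolate)."""
    if not shifts:
        return None
    if mode == "nearest":      # embodiment, S104: front transparent member 41
        return min(shifts)
    if mode == "farthest":     # Modification 1, S111: rear member 42
        return max(shifts)
    if mode == "range":        # Modification 2, S112/S113: within set range dW
        lo, hi = set_range
        in_range = [s for s in shifts if lo <= s <= hi]
        return in_range[0] if in_range else None
    raise ValueError(f"unknown mode: {mode}")
```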
  • FIG. 10 is a diagram for explaining interpolation processing according to modification example 3.
  • In Modification 3, the range of pixel blocks 212 used for interpolation is expanded beyond the pixel blocks 212 immediately surrounding the pixel block 212 to be interpolated.
  • Specifically, a range R12 wider than the range R11, within which diffuse reflection from fine surface irregularities or the like would normally be assumed to be dominant, is set around the pixel block 212 to be interpolated, and the distance of the pixel block 212 to be interpolated is interpolated from the distances of the pixel blocks 212 included in this range R12.
  • a conventionally known interpolation method such as linear interpolation may be applied.
  • In this way, the range used for interpolation is expanded, so the distance of the pixel block 212 to be interpolated can be interpolated more appropriately. Therefore, the quality of the distance image in which the distance to the surface of the transparent member 41 is mapped can be improved.
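  • A hedged sketch of the widened-neighborhood interpolation of Modification 3 follows. The concrete radii for R11 and R12 and the use of a plain mean are our assumptions, since the text only requires R12 to be wider than R11 and permits known methods such as linear interpolation:

```python
import numpy as np

def interpolate_block_distance(dist: np.ndarray, row: int, col: int,
                               radius: int = 2) -> float:
    """Fill a missing block distance (NaN) from the valid distances inside a
    (2*radius+1)^2 neighborhood. radius=1 models the usual range R11;
    radius>=2 models the widened range R12 of Modification 3."""
    r0, r1 = max(0, row - radius), min(dist.shape[0], row + radius + 1)
    c0, c1 = max(0, col - radius), min(dist.shape[1], col + radius + 1)
    window = dist[r0:r1, c0:c1]
    valid = window[~np.isnan(window)]
    return float(valid.mean()) if valid.size else float("nan")
```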
  • an irradiation position changing unit that changes the irradiation position of the pattern light on the subject is arranged in the imaging device 1. Furthermore, in modification example 4, an imaging position changing unit that changes the imaging position of the imaging unit 20 with respect to the subject is arranged in the imaging device 1.
  • FIG. 11 is a diagram showing the configuration of the imaging device 1 according to modification example 4.
  • the projection unit 10 is supported by a support shaft 51 parallel to the Y-axis.
  • the support shaft 51 is fixed to the center of the gear 52.
  • the gear 52 meshes with a gear 54 fixed to a drive shaft of a motor 53.
  • Motor 53 is, for example, a stepping motor.
  • When the motor 53 is driven, the gear 54 rotates, whereby the gear 52 and the support shaft 51 rotate.
  • As a result, the projection unit 10 rotates about the support shaft 51 in parallel to the X-Z plane, and the irradiation position of the pattern light on the subject 40 is changed in the X-axis direction.
  • the support shaft 51, the gear 52, the motor 53, and the gear 54 constitute an irradiation position changing unit 50 that changes the irradiation position of the pattern light onto the subject.
  • the imaging unit 20 is supported by a support shaft 61 parallel to the Y-axis.
  • the support shaft 61 is fixed to the center of the gear 62.
  • the gear 62 meshes with a gear 64 fixed to the drive shaft of a motor 63.
  • Motor 63 is, for example, a stepping motor.
  • When the motor 63 is driven, the gear 64 rotates, whereby the gear 62 and the support shaft 61 rotate.
  • the imaging unit 20 rotates about the support shaft 61 in parallel to the XZ plane. Thereby, the imaging position of the imaging unit 20 with respect to the subject 40 is changed in the X-axis direction.
  • the support shaft 61, the gear 62, the motor 63, and the gear 64 constitute an imaging position changing unit 60 that changes the imaging position of the imaging unit 20 with respect to the subject.
  • When the irradiation position of the pattern light is changed by the irradiation position changing unit 50, the dot pattern of the pattern light incident on each pixel block 212 of the image sensor 21 changes.
  • both the dot pattern of the pattern light specularly reflected on the surface of the transparent member 41 and the dot pattern of the pattern light diffusely reflected on the surface of the member 42 behind it change.
  • the combination of dot patterns of the pattern light incident from the transparent member 41 and the member 42 changes before and after the irradiation position of the pattern light on the subject is changed by the irradiation position changing unit 50.
  • Therefore, even if the uniqueness of the dot pattern has been reduced by overlapping dots, it can be restored by changing the irradiation position.
  • As a result, a plurality of corresponding points (search positions P1 and P2) may be obtained for a pixel block 212 for which none could be obtained before. The distance for that pixel block 212 can therefore be obtained without interpolation, using the corresponding points obtained after the irradiation position was changed.
  • In this case, the signal processing unit 31 may perform the distance calculation based on the corresponding points while further taking into consideration the change in the irradiation position made by the irradiation position changing unit 50.
  • FIG. 12 is a flowchart illustrating a process for obtaining a distance to a subject according to modification example 4.
  • the signal processing section 31 further controls the irradiation position changing section 50.
  • the signal processing unit 31 first executes the processes of steps S101 to S106 and S109 in the same manner as in FIG. 6, with the projection unit 10 set at the same position as in the above embodiment without changing the irradiation position.
  • the signal processing unit 31 drives the motor 53 of the irradiation position changing unit 50 by a predetermined number of steps to change the irradiation position of the pattern light (S121).
  • the signal processing unit 31 remeasures the distance for the pixel block 212 for which the distance could not be obtained in the processes of steps S101 to S106 and S109 (S122).
  • the signal processing unit 31 performs the same stereo corresponding point search as in step S102 again on these pixel blocks 212, and calculates the distances in steps S103 to S105 and S109.
  • If there is still a pixel block 212 for which the distance could not be obtained even by remeasurement, the signal processing unit 31 performs the interpolation processing on this pixel block 212 (S107). Once distances have been set for all pixel blocks 212 in this way, the signal processing unit 31 transmits one screen's worth of distance image to the external device via the communication interface 34 (S108).
  • According to the process of FIG. 12, the distance to a pixel block 212 for which no distance could be obtained by the processing of steps S102 to S105 and S109 of FIG. 6 can be obtained without interpolation, by the processing in steps S121 and S122. Therefore, the quality of the distance image can be improved.
  • In FIG. 12, the irradiation position is changed only once, but the irradiation position may be changed multiple times, remeasuring each time the distance for the pixel blocks 212 whose distance could not be obtained.
  • This reduces the number of pixel blocks 212 for which no distance is finally obtained, and hence the number of pixel blocks 212 subject to the interpolation processing in step S107. Therefore, the quality of the distance image can be further improved.
  • the process of re-measuring the distance in response to a change in the irradiation position may also be applied to the processes in FIGS. 7 and 8. Thereby, the quality of the distance image obtained by these processes can be improved.
  • In the above description, the irradiation position is changed by driving the motor 53 a few steps, but the irradiation position may instead be changed significantly, to the extent that the pattern light projection range moves to another range. This allows the distance to the subject to be measured over a wider range.
  • In this case, the signal processing unit 31 may drive the imaging position changing unit 60 so that the pattern light is imaged on the imaging element 21.
  • For example, the signal processing unit 31 stores in advance a table in which the drive direction and drive amount (number of steps) of the motor 53 are associated with the drive direction and drive amount (number of steps) of the motor 63, and, based on this table, drives the imaging position changing unit 60 by the drive direction and drive amount corresponding to the drive of the irradiation position changing unit 50.
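  • A minimal sketch of that pre-stored drive table; the entries below are invented placeholders, and only the lookup idea, matching a drive of motor 53 to a drive of motor 63, comes from the text:

```python
# (direction, steps) of motor 53 -> (direction, steps) of motor 63.
# Placeholder entries; a real table would come from calibration.
DRIVE_TABLE = {
    ("cw", 10): ("ccw", 8),
    ("cw", 20): ("ccw", 16),
    ("ccw", 10): ("cw", 8),
}

def follow_irradiation_change(direction: str, steps: int) -> tuple:
    """Return the drive of the imaging position changing unit 60 that
    corresponds to a given drive of the irradiation position changing unit 50."""
    return DRIVE_TABLE[(direction, steps)]
```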
  • the projection position and the imaging position are changed by rotating the projection unit 10 and the imaging unit 20 about the support shafts 51 and 61 parallel to the Y axis.
  • the changing method is not limited to this.
  • the projection unit 10 and the imaging unit 20 may be moved in the X-axis direction or the Y-axis direction, or the subject 40 may be moved in the X-axis direction or the Y-axis direction.
  • an XY stage 70 is used as a mounting table on which the subject 40 is mounted.
  • the configuration may be such that the subject 40 is tilted from a state parallel to the XY plane, or the relative distance between the subject 40 and the projection unit 10 and the imaging unit 20 may be changed.
  • The processes of FIGS. 6 and 7 may be performed in parallel to acquire the distances to the members 41 and 42 of the subject 40 at the same time.
  • Likewise, in the process of FIG. 8, a plurality of set ranges may be set to acquire the distances to the members 41 and 42 at the same time.
  • the number of imaging units 20 does not necessarily have to be one, and a plurality of imaging units 20 that capture images from mutually different directions may be arranged.
  • the emission wavelength of the light source 11 does not necessarily have to be in the infrared wavelength band, but may be in the visible wavelength band, for example.
  • In the above embodiment, the search range of the stereo corresponding point search is in the row direction, but the search range may be in the column direction, or may be a combination of rows and columns.
  • The imaging processing unit 33 may also correct, on the captured image, distortion caused by the distortion of the imaging lens 22 or the like.
  • In the above embodiment, a correlation value is obtained for each pixel block 212 at each search position, but correlation values between search positions may be further obtained by processing such as parabola fitting, and the corresponding point search may be performed using them.
  • The configuration of the imaging device 1 is not limited to the configuration shown in the above embodiment; for example, a photosensor array in which a plurality of photosensors are arranged in a matrix may be used as the imaging element 21.
  • In the above embodiment, the distance to the subject is calculated by a stereo corresponding point search between a reference image held in advance and the captured image. However, two imaging units may be arranged, the image captured by one of them may be used as the reference image, and the distance to the subject may be calculated by a stereo corresponding point search between the image captured by the other and this reference image.
  • In this case, these two imaging units have the same configuration as the imaging unit 20 in FIG. 1.
  • the projection unit 10 may be arranged so as to project pattern light onto a range where the imaging fields of these two imaging units overlap.
  • the polarizing filters arranged in each imaging section may be arranged so as to extract light in the same polarization direction as the light emitted from the projection section 10.
  • the number of imaging units is not limited to two, and three or more imaging units that capture images from different directions may be arranged.
  • the distance measurement method does not necessarily have to be a measurement method using stereo corresponding point search, and may be a measurement method using TOF, for example.
  • For example, a TOF configuration that performs distance measurement by a so-called flash method can be realized. In this case, the projection lens 14 may be given a diffusing action, such as that of a concave lens.
  • In this configuration, the signal processing unit 31 causes the light source 11 to emit pulsed light and measures the distance to the subject based on the time difference between the emission timing and the timing at which each pixel of the image sensor 21 receives the reflected light from the subject. That is, each pixel region serves as a unit region for distance measurement.
  • In this case as well, both the light reflected at the surface of the transparent member 41 and the light reflected at the member 42 located behind it may be incident on the same pixel.
  • In that case, the signal processing unit 31 obtains the above time difference for each of these two reflected light components and calculates a distance from each time difference. Then, the signal processing unit 31 selects, from the two distances obtained for the pixel, the shorter one, the longer one, or the one included in a predetermined set range, and generates a distance image.
  • Thereby, the distance to the transparent member 41 or to the member 42 can be appropriately acquired, as in the above embodiment and Modifications 1 and 2.
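  • For this flash-TOF variant, a hedged sketch of the per-pixel distance calculation and selection; representing the two reflections as a list of time differences is our assumption:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distances(time_deltas: list) -> list:
    """Convert emission-to-reception time differences into distances;
    the light travels to the subject and back, hence the division by two."""
    return [C * dt / 2.0 for dt in time_deltas]

def select_tof_distance(distances: list, mode: str = "nearest",
                        set_range: tuple = None):
    """Pick the shorter, longer, or in-range distance, mirroring the
    embodiment and Modifications 1 and 2."""
    if not distances:
        return None
    if mode == "nearest":      # front transparent member 41
        return min(distances)
    if mode == "farthest":     # rear member 42
        return max(distances)
    lo, hi = set_range         # predetermined set range
    in_range = [d for d in distances if lo <= d <= hi]
    return in_range[0] if in_range else None
```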
  • the TOF method does not necessarily have to be a flash method, and a method in which a circular beam is scanned two-dimensionally or a method in which a line beam is scanned in the short axis direction may be used.
  • the pattern generator 13 and projection lens 14 are omitted from the configuration of FIG. 1, and a configuration for scanning the beam is added.
  • In the configuration in which a circular beam is scanned two-dimensionally, the light collimated by the collimator lens 12 is scanned two-dimensionally by an optical deflector such as a MEMS mirror.
  • In the configuration in which a line beam is scanned, a cylindrical lens is placed after the collimator lens 12 to generate the line beam, and an optical deflector that scans the line beam in the short-axis direction is arranged near the focal position of the cylindrical lens.
  • the light source 11 may be a laser light source.
  • In this case, the polarizing filter 15 may be omitted from the projection unit 10, and the laser light source may be arranged so that the direction of its linearly polarized light is aligned with the polarization direction transmitted by the polarizing filter 23 of the imaging unit 20.
  • 1 Imaging device, 10 Projection unit, 15 Polarizing filter, 20 Imaging unit, 21 Imaging element, 22 Imaging lens, 23 Polarizing filter, 31 Signal processing unit, 40 Subject, 50 Irradiation position changing unit, 60 Imaging position changing unit, ΔW Set range

Abstract

An imaging device (1) comprises a projection unit (10) that projects light having a substantially uniform polarization direction, an imaging unit (20) that captures a subject onto which the light is projected, and a signal processing unit (31) that processes a captured image obtained by the imaging unit (20) to measure the distance to the subject. The imaging unit (20) comprises an imaging lens (22), a polarizing filter (23) that extracts light having the same polarization direction as the light emitted from the projection unit (10), and an image sensor (21) that receives light from the subject through the imaging lens (22) and the polarizing filter (23). The signal processing unit (31) measures the distance for each area on the captured image without limiting the number of measured distances to one.

Description

Imaging device
The present invention relates to an imaging device that can measure the distance to a subject.
Conventionally, imaging devices are known that measure the distance to a subject by irradiating the subject with light. This type of imaging device uses, for example, a method that detects parallax with a stereo camera to measure the distance to the subject, a method that detects the time difference between projecting light and receiving the reflected light (TOF: Time Of Flight), or a method that irradiates the subject with light having a unique pattern (intensity distribution) and measures the distance by detecting the pixel shift amount of the pattern relative to a reference image as parallax.
Patent Document 1 below describes an imaging device of the type that irradiates a subject with light having a unique pattern (intensity distribution).
Japanese Patent No. 6657880
In each of these types of imaging devices, the subject to be imaged may be composed of a transparent member such as glass and a member located behind it. In this case, since light is reflected by both the transparent member in front and the member behind it, it is difficult to properly measure the distance to the transparent member.
In view of this problem, it is an object of the present invention to provide an imaging device that, when the subject is composed of a transparent member and a member located behind it, can appropriately measure the distance to the transparent member and can also measure the distance to the member behind it.
An imaging device according to a main aspect of the present invention includes a projection unit that projects light with a substantially uniform polarization direction, an imaging unit that images a subject onto which the light is projected, and a signal processing unit that processes a captured image acquired by the imaging unit to measure the distance to the subject. The imaging unit includes an imaging lens, a polarizing filter that extracts light having the same polarization direction as the light emitted from the projection unit, and an imaging element that receives light from the subject via the imaging lens and the polarizing filter. The signal processing unit measures the distance for each region on the captured image without limiting the number of measured distances to one.
According to the imaging device of this aspect, because a polarizing filter that extracts light having the same polarization direction as the light projected from the projection unit is disposed in the imaging unit, light from the projection unit that is specularly reflected at the subject surface is more easily received by the imaging element. Therefore, when the subject is composed of a transparent member and a member located behind it, the distance to the transparent member can be appropriately detected. In addition, since the signal processing unit measures distance without limiting the number of measured distances to one, a distance based on the light reflected by the rear member can be obtained in addition to the distance to the transparent member. Thus, when the subject is composed of a transparent member and a member behind it, the distance to the rear member can also be measured.
As described above, according to the present invention, when a subject is composed of a transparent member and a member located behind it, the distance to the transparent member can be measured appropriately, and an imaging device can be provided that can also measure the distance to the member behind the transparent member.
The effects and significance of the present invention will become clearer from the description of the embodiments below. However, the embodiments shown below are merely examples of implementing the present invention, and the present invention is not limited in any way to what is described in them.
FIG. 1 is a diagram showing the configuration of an imaging device according to the embodiment.
FIGS. 2(a) and 2(b) are diagrams each schematically showing a method of setting pixel blocks for a captured image according to the embodiment.
FIGS. 3(a) to 3(d) are diagrams each schematically showing a process of searching a reference image for a matching pixel block that matches a target pixel block on a captured image, according to the embodiment.
FIG. 4 is a diagram schematically showing an imaging state of pattern light in a case where the subject is composed of a transparent member and a member located behind it, according to the embodiment.
FIG. 5 is a graph illustrating the relationship between the search position and the correlation value when pattern light from two members is incident on one pixel block, according to the embodiment.
FIG. 6 is a flowchart showing the process of obtaining the distance to a subject according to the embodiment.
FIG. 7 is a flowchart showing the process of obtaining the distance to a subject according to Modification 1.
FIG. 8 is a flowchart showing the process of obtaining the distance to a subject according to Modification 2.
FIG. 9 is a graph illustrating the relationship between the search position and the correlation value when pattern light from two members is incident on one pixel block, according to Modification 2.
FIG. 10 is a diagram for explaining the interpolation processing according to Modification 3.
FIG. 11 is a diagram showing the configuration of an imaging device according to Modification 4.
FIG. 12 is a flowchart showing the process of obtaining the distance to a subject according to Modification 4.
However, the drawings are solely for illustration and do not limit the scope of the invention.
Embodiments of the present invention will be described below with reference to the drawings. For convenience, mutually orthogonal X, Y, and Z axes are shown in each figure. The X-axis direction is the direction in which the projection unit and the imaging unit are lined up, and the positive Z-axis direction is the imaging direction of the imaging unit.
FIG. 1 is a diagram showing the configuration of an imaging device 1.
The imaging device 1 includes a projection unit 10, an imaging unit 20, and an image processing unit 30.
The projection unit 10 projects pattern light, in which light is distributed in a predetermined pattern, onto the field of view of the imaging unit 20. The projection direction of the pattern light is the positive Z-axis direction. The projection unit 10 includes a light source 11, a collimator lens 12, a pattern generator 13, a projection lens 14, and a polarizing filter 15.
The light source 11 emits light of a predetermined wavelength, for example in the infrared band. The light source 11 is, for example, an LED (Light Emitting Diode), but may be another type of light source such as a semiconductor laser. The collimator lens 12 converts the light emitted from the light source 11 into substantially parallel light.
The pattern generator 13 generates pattern light having a unique pattern (intensity distribution) from the light emitted by the light source 11. In this embodiment, the pattern generator 13 is a transmissive diffractive optical element (DOE), which has, for example, a diffraction pattern with a predetermined number of steps on its entrance surface. Through the diffractive action of this pattern, light entering the diffractive optical element (pattern generator 13) is split into a plurality of beams and converted into light of a predetermined pattern. The generated pattern is one that can maintain uniqueness for each pixel block 212, described later.
In this embodiment, the pattern generated by the diffractive optical element (DOE) is one in which a plurality of dot regions (hereinafter "dots"), which are light passage regions, are randomly distributed. However, the generated pattern is not limited to a pattern of dots and may be any other pattern. The pattern generator 13 may also be a reflective diffractive optical element or a photomask. Alternatively, the pattern generator 13 may be a device that generates fixed-pattern light under a control signal, such as a DMD (Digital Mirror Device) or a liquid crystal display.
The projection lens 14 projects the pattern light generated by the pattern generator 13. The projection lens 14 need not be a single lens and may be a combination of lenses; a concave reflecting mirror may also be used in its place. The optical axis of the projection lens 14 is parallel to the Z-axis.
The polarizing filter 15 selectively transmits light of a predetermined polarization direction and blocks light polarized perpendicular to it. Here, the polarizing filter 15 selectively transmits light polarized parallel to the X-axis and blocks light polarized parallel to the Y-axis. The light transmitted through the polarizing filter 15 therefore has a substantially uniform polarization direction.
The position of the polarizing filter 15 is not limited to the far side of the projection lens 14 as seen from the light source 11 (the positive Z side); it may be placed between the light source 11 and the projection lens 14. The polarizing filter 15 also need not be an independent member; a polarizing film may be formed integrally on the surface of another member through which the light from the light source 11 passes, such as the surface of the projection lens 14, the collimator lens 12, or the pattern generator 13.
If the light source 11 is a laser light source such as a semiconductor laser, the polarizing filter 15 may be omitted. In this case, the light source 11 (laser light source) is arranged so that the direction of its linear polarization is parallel to the X-axis.
The imaging unit 20 images the subject onto which the pattern light is projected. The imaging unit 20 includes an image sensor 21, an imaging lens 22, and a polarizing filter 23.
The image sensor 21 is a CMOS image sensor, though it may be a CCD. A filter that selectively transmits light in the emission wavelength band of the light source 11 is formed on the imaging surface of the image sensor 21. Alternatively, a similar filter may be placed within the imaging unit 20 separately from the image sensor 21.
The imaging lens 22 focuses light from the field of view onto the imaging surface of the image sensor 21. The optical axis of the imaging lens 22 is parallel to the Z-axis; that is, the optical axes of the imaging lens 22 and the projection lens 14 are parallel to each other. The optical axis of the imaging lens 22 may be tilted with respect to that of the projection lens 14 so that the two axes approach each other in the positive Z-axis direction.
The polarizing filter 23 selectively transmits light of the same polarization direction as the light (pattern light) emitted from the projection unit 10 and blocks light polarized perpendicular to it. Here, the polarizing filter 23 selectively transmits light polarized parallel to the X-axis and blocks light polarized parallel to the Y-axis.
The position of the polarizing filter 23 is not limited to the far side of the imaging lens 22 as seen from the image sensor 21 (the positive Z side); it may be placed between the image sensor 21 and the imaging lens 22. The polarizing filter 23 also need not be an independent member; a polarizing film may be formed integrally on the surface of the imaging lens 22 through which the light originating from the light source 11 passes.
The image processing unit 30 includes a signal processing unit 31, a light source drive unit 32, an imaging processing unit 33, and a communication interface 34.
The signal processing unit 31 includes an arithmetic processing circuit, such as a microcomputer, and memory, and controls each unit according to a predetermined program. The signal processing unit 31 also processes the pixel signals input from the imaging processing unit 33 to calculate the distance to the subject.
The signal processing unit 31 holds a reference image in which dots are distributed in the same pattern as the pattern light projected from the projection unit 10. The reference image corresponds to the captured image obtained when the subject is at a predefined distance. The signal processing unit 31 compares this reference image with the captured image from the imaging processing unit 33 to perform a stereo corresponding-point search, and calculates the distance to the subject in the direction corresponding to each pixel block on the captured image.
Specifically, the signal processing unit 31 sets, on the captured image, a pixel block for which distance is to be obtained (hereinafter the "target pixel block"), and searches a search range defined on the reference image for the pixel block that best fits the target pixel block (hereinafter the "matching pixel block"). The signal processing unit 31 then obtains the pixel shift between the pixel block located at the same position on the reference image as the target pixel block (hereinafter the "reference pixel block") and the matching pixel block extracted from the reference image by the search, and from this pixel shift calculates, by triangulation, the distance to the subject at the position of the target pixel block.
The signal processing unit 31 performs this distance calculation for all pixels (pixel signals) of the captured image from the image sensor 21, and transmits the resulting one screen of distances (a distance image) to an external device via the communication interface 34.
The signal processing unit 31 may execute the distance calculation with a semiconductor integrated circuit such as an FPGA (Field Programmable Gate Array). Alternatively, the processing may be executed by another semiconductor integrated circuit such as a DSP (Digital Signal Processor), GPU (Graphics Processing Unit), or ASIC (Application Specific Integrated Circuit).
The light source drive unit 32 drives the light source 11 under the control of the signal processing unit 31. The imaging processing unit 33 controls the image sensor 21 and applies processing such as brightness correction and camera calibration to the pixel signals of the captured image output from the image sensor 21.
FIGS. 2(a) and 2(b) schematically show how pixel blocks 212 are set on the image sensor 21. FIG. 2(a) shows how the pixel blocks 212 are set over the entire imaging surface, and FIG. 2(b) shows an enlarged view of part of the imaging surface.
As shown in FIGS. 2(a) and 2(b), the imaging surface of the image sensor 21 is divided into a plurality of pixel blocks 212, each containing a predetermined number of pixel regions 211. A pixel region 211 is the region corresponding to one pixel of the image sensor 21. In the example of FIGS. 2(a) and 2(b), one pixel block 212 consists of nine pixel regions 211 arranged in three rows and three columns, though the number of pixel regions 211 per pixel block 212 is not limited to this.
Dots of the pattern light projected from the projection unit 10 and reflected by the subject are projected onto the imaging surface of the image sensor 21, so each pixel block 212 contains part of the projected dot group. Because the dots are distributed in a random pattern, the distribution of dots within each pixel block 212 is also unique. As a result, the distance to the subject at each pixel block 212 can be calculated properly by the stereo corresponding-point search.
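As a concrete illustration of this block division, the following Python sketch splits a captured image into 3 x 3 pixel blocks 212. The function name and the assumption that the image dimensions are divisible by the block size are ours, not part of the specification.

```python
import numpy as np

def split_into_blocks(image: np.ndarray, n: int = 3) -> np.ndarray:
    """Divide a captured image into n x n pixel blocks (3 x 3 in the
    embodiment). Assumes image height and width are divisible by n."""
    h, w = image.shape
    # Result shape: (blocks_down, blocks_across, n, n)
    return image.reshape(h // n, n, w // n, n).swapaxes(1, 2)
```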
FIGS. 3(a) to 3(d) schematically show the process of searching the reference image for the matching pixel block that fits the target pixel block TB1 (the pixel block 212 for which distance is to be obtained) on the captured image.
First, as shown in FIG. 3(a), the signal processing unit 31 sets a search range R0 on the reference image. The start position ST1 of the search range R0 is set at the position on the reference image corresponding to the target pixel block TB1. The search range R0 extends from the start position ST1 by a predetermined number of pixels (the number corresponding to the distance detection range) in the direction in which the projection unit 10 and the imaging unit 20 are separated. The vertical width of the search range R0 equals that of the target pixel block TB1.
The signal processing unit 31 sets the start position ST1 as the search position, places at this search position a reference pixel block RB1 of the same size as the target pixel block TB1, and calculates the correlation value between the target pixel block TB1 and the reference pixel block RB1.
Here, the correlation value is obtained, for example, as the sum of the absolute values of the differences in pixel value (luminance) between mutually corresponding pixel regions of the target pixel block TB1 and the reference pixel block RB1 (SAD). Alternatively, it may be obtained as the sum of the squares of these differences (SSD). The calculation method is not limited to these; any other method may be used as long as it yields a correlation value that serves as an index of the correlation between the target pixel block TB1 and the reference pixel block RB1.
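A minimal Python sketch of the SAD and SSD correlation values described here follows; the function signature and the integer widening are illustrative assumptions.

```python
import numpy as np

def correlation_value(target: np.ndarray, ref: np.ndarray, method: str = "SAD") -> int:
    """SAD/SSD between a target pixel block TB1 and a reference pixel block
    RB1 of the same shape. For both measures, smaller means a better match."""
    diff = target.astype(np.int64) - ref.astype(np.int64)  # avoid uint8 wraparound
    if method == "SAD":
        return int(np.abs(diff).sum())      # sum of absolute differences
    return int((diff * diff).sum())         # sum of squared differences (SSD)
```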
When the processing for one reference pixel block RB1 is complete, the signal processing unit 31 sets the next search position, as shown in FIG. 3(b). Specifically, it shifts the previous search position by one pixel toward the end of the search range R0 and sets this as the current search position. The signal processing unit 31 then calculates, by the same processing as above, the correlation value between the reference pixel block RB1 at the current search position and the target pixel block TB1.
The signal processing unit 31 repeats the same processing while shifting the search position one pixel at a time toward the end of the range. FIG. 3(c) shows the reference pixel block RB1 set at the search position immediately before the last one in the search range R0, and FIG. 3(d) shows it set at the last search position.
When the processing for the last search position is complete, the signal processing unit 31 identifies, among the sequentially set search positions, the one where the correlation value peaks, and calculates the distance to the subject by triangulation from the pixel shift between the identified search position and the start position ST1. The signal processing unit 31 repeats the same processing for every target pixel block TB1 on the captured image. When the distances to the subject have been calculated for all target pixel blocks TB1, the signal processing unit 31 transmits a distance image, in which the calculated distances are mapped to the target pixel blocks TB1, to an external device via the communication interface 34.
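The triangulation step can be sketched as follows. The formula assumes, purely for illustration, a reference-plane convention in which the reference image corresponds to the nearest detectable distance z0, so that larger pixel shifts map to longer distances (a convention consistent with step S104 of FIG. 6 below, which selects the shortest shift for the nearest surface); the parameter names (focal length f, baseline B, pixel pitch p) are our assumptions, not values from this description.

```python
def distance_from_shift(shift_px: float, f: float, B: float, p: float, z0: float) -> float:
    """Distance Z from the pixel shift between the peak search position and
    the start position ST1, by reference-plane triangulation:
        1/Z = 1/z0 - shift_px * p / (f * B)
    f, B, z0 in meters; p in meters per pixel. Sign convention assumed."""
    inv_z = 1.0 / z0 - shift_px * p / (f * B)
    if inv_z <= 0.0:
        raise ValueError("shift outside the distance detection range")
    return 1.0 / inv_z
```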
FIG. 4 schematically shows the imaging state of the pattern light L0 when the subject 40 is composed of a transparent member 41, such as glass, and a member 42 located behind it.
In the subject 40 shown in FIG. 4, the pattern light L0 is reflected both at the surface of the member 41 and at the member 42 behind it. That is, part of the pattern light L0 is specularly reflected at the surface of the member 41, while the rest passes through the member 41, is diffusely reflected at the surface of the rear member 42, returns toward the front member 41, and then partly passes through it.
The imaging lens 22 of the imaging unit 20 therefore captures both the pattern light specularly reflected at the surface of the front member 41 and the pattern light diffusely reflected at the surface of the rear member 42, and both are guided to the image sensor 21. These two kinds of pattern light can then be incident on the same pixel block 212 of the image sensor 21.
For example, in FIG. 4, the light portion L1 and the light portion L2 of the pattern light L0 projected from the projection unit 10 are guided to the same pixel block 212. This pixel block 212 thus contains both the dots DT included in the light portion L1 and the dots DT included in the light portion L2.
FIG. 5 is a graph illustrating the relationship between search position and correlation value when pattern light from the two members 41 and 42 is incident on one pixel block 212. Here, SAD or SSD is assumed as the correlation value on the vertical axis.
In this example, the processing of FIGS. 3(a) to 3(d) yields two peak positions in the range below the threshold Th0. The search position P1 is where the dot pattern of the light portion L1, specularly reflected at the surface of the front member 41 in FIG. 4, fits the reference image, and the search position P2 is where the dot pattern of the light portion L2, reflected at the surface of the rear member 42 in FIG. 4, fits the reference image. Thus, for a subject such as that of FIG. 4, pattern light from the surfaces of both members 41 and 42 can enter the same pixel block 212, so two search positions can be obtained for each pixel block 212.
In FIG. 5, the threshold Th0 prevents a search position at which the correlation value peaks merely due to noise or the like from being detected as a search position corresponding to the distance to the subject. The threshold Th0 may be a fixed value, or may be set to a value obtained by multiplying the maximum, mean, median, or mode of the correlation values obtained at all search positions in the search range R0 by a predetermined ratio, or by adding or subtracting a predetermined value.
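One way the adaptive variant of Th0 could be derived is sketched below; the choice of statistic and the tuning values are assumptions, not figures from this description.

```python
import numpy as np

def threshold_th0(correlations: np.ndarray, ratio: float = 0.8, offset: float = 0.0) -> float:
    """Th0 derived from the correlation values at all search positions in R0:
    a statistic (here the median; the max, mean, or mode would also fit the
    text) scaled by a ratio and shifted by an additive offset."""
    return float(np.median(correlations)) * ratio + offset
```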
FIG. 5 shows an example in which two search positions P1 and P2 are obtained for the members 41 and 42, but the dots DT from the members 41 and 42 may overlap each other and reduce the uniqueness of the dot pattern in a pixel block 212. In that case, no correlation-value peak may be obtained within the range below the threshold Th0. The distance for such a pixel block 212 can then be interpolated from the surrounding pixel blocks 212, so that a distance image with distances mapped to all pixel blocks 212 can still be obtained.
FIG. 6 is a flowchart showing the process of obtaining the distance to the subject.
The signal processing unit 31 acquires a captured image from the image sensor 21 (S101). Next, it executes the corresponding-point search of FIGS. 3(a) to 3(d) on a target pixel block TB1 of the captured image (S102). The signal processing unit 31 then determines whether the corresponding-point search yielded multiple corresponding points (matching search positions) (S103); that is, whether multiple correlation-value peaks exist in the range below the threshold Th0 of FIG. 5, giving multiple search positions that fit the target pixel block TB1.
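The determination in S103 amounts to counting the correlation peaks below Th0. A sketch follows, assuming SAD/SSD so that the "peaks" in the text are local minima of the correlation curve:

```python
import numpy as np

def matching_shifts(correlations: np.ndarray, th0: float) -> list[int]:
    """Pixel shifts (indices from the start position ST1) of all local minima
    of the SAD/SSD curve that fall below Th0."""
    shifts = []
    for i in range(1, len(correlations) - 1):
        c = correlations[i]
        if c < th0 and c <= correlations[i - 1] and c <= correlations[i + 1]:
            shifts.append(i)
    return shifts
```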
If multiple corresponding points are obtained for the target pixel block TB1 (S103: YES), the signal processing unit 31 selects, as the corresponding point for distance acquisition, the one with the shortest pixel shift from the reference position (the start position ST1 in FIG. 3(a)) (S104). It then calculates the distance for the target pixel block TB1 by triangulation from the pixel shift of the selected corresponding point relative to the start position ST1 (S105).
If only one corresponding point is obtained for the target pixel block TB1 (S103: NO, S109: YES), the signal processing unit 31 calculates the distance for the target pixel block TB1 by triangulation from the pixel shift of that corresponding point relative to the start position ST1 (S105). If no corresponding point is obtained (S103: NO, S109: NO), the signal processing unit 31 advances to step S106 without calculating a distance for the target pixel block TB1, and sets a flag indicating that no distance was obtained for it.
The corresponding point selected in step S104 has the shortest pixel shift from the start position ST1, so in the example of FIG. 4 it is most likely the search position where the dot pattern of the light portion L1, specularly reflected at the surface of the frontmost member 41, fits the reference image. Also, as shown in FIG. 1, the imaging unit 20 mainly extracts, via the polarizing filter 23, the pattern light specularly reflected by the subject; therefore, when the determination in step S109 is YES and only one corresponding point is obtained, that corresponding point is likewise most likely the search position where the dot pattern of the light portion L1, specularly reflected at the surface of the frontmost member 41, fits the reference image.
Accordingly, in the flowchart of FIG. 6, the distance is calculated (S105) for the search position where the dot pattern of the light portion L1, specularly reflected at the surface of the frontmost member 41, fits the reference image (S104, or S109: YES); if no such search position is obtained (S109: NO), the process advances to step S106 without calculating a distance.
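Putting the branches of FIG. 6 together, the per-block selection could look like the following sketch; None stands for the "no distance obtained" flag of the S109: NO branch.

```python
def select_shift(shifts: list[int]) -> int | None:
    """FIG. 6 logic for one target pixel block TB1: several matches -> take
    the shortest shift (front transparent member 41, S104); exactly one ->
    use it (S109: YES); none -> flag for interpolation in S107.
    Modification 1 below replaces min() with max() to target the rear
    member 42 instead."""
    if len(shifts) >= 2:
        return min(shifts)
    if len(shifts) == 1:
        return shifts[0]
    return None
```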
After processing the target pixel block TB1 in this way, the signal processing unit 31 determines whether distance calculation has been performed for all pixel blocks 212 on the captured image (S106). If not (S106: NO), the signal processing unit 31 sets the next pixel block 212 as the target pixel block TB1 and executes the processing from step S102 onward.
When distance calculation is complete for all pixel blocks 212 on the captured image (S106: YES), the signal processing unit 31 interpolates distances for the pixel blocks 212 for which no distance was obtained, that is, those flagged because the determination in step S109 was NO (S107). More specifically, the signal processing unit 31 interpolates the distance of each such pixel block 212 from the distances around it. This produces a distance image in which a distance is associated with every pixel block 212.
The signal processing unit 31 transmits the distance image thus constructed to an external device via the communication interface 34 (S108), and the processing of FIG. 6 ends.
<Effects of the Embodiment>
According to the above embodiment, the following effects are achieved.
As shown in FIG. 1, the polarizing filter 23, which extracts light of the same polarization direction as the light projected from the projection unit 10, is disposed in the imaging unit 20, so light from the projection unit 10 that is specularly reflected at the subject surface is readily received by the image sensor 21. Therefore, as shown in FIG. 4, when the subject 40 is composed of a transparent member 41 and a member 42 behind it, the distance to the transparent member 41 can be detected appropriately.
As shown in FIG. 6, when multiple distances are obtained for the region for which distance is being acquired (the target pixel block TB1) (S103: YES), the signal processing unit 31 selects the shortest distance as the measurement result (S104, S105). As explained with reference to FIG. 5, this allows the distance to the transparent member 41 to be detected appropriately.
As shown in FIG. 1, the projection unit 10 uses the pattern generator 13 to generate pattern light with a predetermined intensity distribution and projects it onto the subject, and the signal processing unit 31 measures the distance to the subject by comparing the captured image from the imaging unit 20 with the reference image, that is, the captured image obtained when the subject is at a predefined distance. Distances to positions on the subject surface can thus be measured even when the surface has uniform reflectance and light absorption. Moreover, the processing of FIG. 6 allows the distance to the transparent member 41 to be detected appropriately.
As shown in FIG. 1, the projection unit 10 includes the polarizing filter 15 for generating light of substantially uniform polarization. Even if the polarization direction of the light source 11 is random, light can thus be projected onto the subject with a substantially uniform polarization direction.
<Modification 1>
In the above embodiment, as shown in FIG. 6, when multiple distances are obtained for the region for which distance is being acquired (the target pixel block TB1) (S103: YES), the shortest distance is selected as the measurement result (S104, S105). In Modification 1, by contrast, when multiple distances are obtained for that region, the longest distance is selected as the measurement result.
FIG. 7 is a flowchart showing the process of obtaining the distance to the subject according to Modification 1.
In the flowchart of FIG. 7, step S104 of the flowchart of FIG. 6 is replaced by step S111, and step S109 of FIG. 6 is omitted. The other steps in FIG. 7 are the same as the corresponding steps in FIG. 6.
Referring to FIG. 7, when the corresponding-point search (S102) extracts multiple corresponding points for the target pixel block TB1 (S103: YES), the signal processing unit 31 selects, as the corresponding point for distance acquisition, the one with the longest pixel shift from the start position ST1 (S111). It then calculates the distance for the target pixel block TB1 by triangulation from the pixel shift of the selected corresponding point relative to the start position ST1 (S105).
If multiple corresponding points are not obtained for the target pixel block TB1 (S103: NO), the signal processing unit 31 advances to step S106 without calculating a distance for it, and sets a flag indicating that no distance was obtained. The distance for this target pixel block TB1 is interpolated in step S107, as in the above embodiment.
With the configuration of Modification 1, when multiple distances are obtained for the region for which distance is being acquired (the target pixel block TB1) (S103: YES), the longest distance is selected as the measurement result (S111, S105). As explained with reference to FIG. 5, this allows the distance to the member 42 behind the transparent member 41 to be detected.
In the flowchart of FIG. 7, when the determination in step S103 is NO, exactly one corresponding point may nonetheless have been obtained for the target pixel block TB1. However, as shown in FIG. 1, the imaging unit 20 mainly extracts, via the polarizing filter 23, the pattern light specularly reflected by the subject, so a single corresponding point is most likely the search position where the dot pattern of the light portion L1, specularly reflected at the surface of the frontmost member 41, fits the reference image. The distance obtained from that corresponding point is therefore likely the distance to the front transparent member 41, not to the rear member 42. If this distance were included in the measurement result, a distance image containing only distances to the rear member 42 could not be obtained accurately.
From this standpoint, step S109 of FIG. 6 is omitted in the flowchart of FIG. 7, and when multiple corresponding points are not extracted for a target pixel block TB1, it is uniformly treated as a block for which no distance was obtained. This makes the distance image more likely to contain only distances to the rear member 42, improving its accuracy.
In the processing of FIG. 6 as well, step S109 may be omitted, as in FIG. 7, and the process may advance to step S106 when the determination in step S103 is NO. This processing also makes it possible to accurately obtain a distance image containing the distance to the front transparent member 41.
<Modification 2>
In Modification 2, when a distance within a preset range is obtained for the region for which distance is being acquired (the target pixel block TB1), that distance is acquired as the measurement result.
FIG. 8 is a flowchart showing the process of obtaining the distance to the subject according to Modification 2.
In the flowchart of FIG. 8, steps S103 and S111 of the flowchart of FIG. 7 are replaced by steps S112 and S113. The other steps in FIG. 8 are the same as the corresponding steps in FIGS. 6 and 7.
Referring to FIG. 8, when the corresponding-point search (S102) extracts a corresponding point within the set range (S112: YES), the signal processing unit 31 selects that corresponding point as the corresponding point for distance acquisition (S113). It then calculates the distance for the target pixel block TB1 by triangulation from the pixel shift of the selected corresponding point relative to the start position ST1 (S105).
If no corresponding point is obtained within the set range (S112: NO), the signal processing unit 31 advances to step S106 without calculating a distance for the target pixel block TB1, and sets a flag indicating that no distance was obtained. The distance for this target pixel block TB1 is interpolated in step S107, as in the above embodiment.
The set range of step S112 is set, for example, based on a command received from an external device via the communication interface 34. For example, if the external device is a robot arm that grips an article and places it on an object on a belt conveyor, the imaging device 1 may be installed on the gripping part of the robot arm. In this case, the placed object is the subject, and it has, for example, a transparent member 41 and a member 42 behind it, as shown in FIG. 4.
The robot arm roughly calculates the distance between the gripping part and the object (corresponding to the distance to the frontmost transparent member 41 of the object) from the current position of the gripping part and the position of the object, and transmits a predetermined distance range containing this distance to the signal processing unit 31. The signal processing unit 31 sets the search range corresponding to the received distance range as the set range of step S112.
Alternatively, the robot arm may transmit to the signal processing unit 31 the approximate distance between the gripping part and the object (corresponding to the distance to the frontmost transparent member 41) or the positions of the gripping part and the object. In this case, based on the received information, the signal processing unit 31 sets, as the set range of step S112, a predetermined search range centered on the search position corresponding to the distance to the frontmost transparent member 41.
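Under the same illustrative triangulation convention as above, converting a distance range received from the robot arm into the set range of step S112 might look like the following sketch; all parameter names are our assumptions.

```python
def search_window(z_near: float, z_far: float,
                  f: float, B: float, p: float, z0: float) -> tuple[int, int]:
    """Window of search positions (pixel shifts from ST1) corresponding to a
    distance range [z_near, z_far] sent by the external device, using
    shift = (1/z0 - 1/z) * f * B / p (nearer distances give smaller shifts)."""
    def shift(z: float) -> float:
        return (1.0 / z0 - 1.0 / z) * f * B / p
    lo, hi = sorted((shift(z_near), shift(z_far)))
    return int(lo), int(hi) + 1
```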
With the configuration of Modification 2, for example, as shown in FIG. 9, when multiple corresponding points (search positions P1 and P2) are obtained for the region for which distance is being acquired (the target pixel block TB1), the corresponding point within the set range ΔW (search position P1) is selected for distance acquisition. The distance to the frontmost transparent member 41 can therefore be obtained appropriately.
In the example of FIG. 9, the set range ΔW is set so that the distance to the frontmost transparent member 41 is obtained, but it may instead be set so that the distance to the rear member 42 is obtained. In that case, the corresponding point for the distance to the rear member (search position P2) is selected for distance acquisition, allowing the distance to the member 42 behind the transparent member 41 to be detected appropriately.
<Modification 3>
In the above embodiment, the interpolation in step S107 of FIG. 6 uses the pixel blocks 212 surrounding the pixel block 212 for which no distance was obtained. In Modification 3, the range of pixel blocks 212 used for interpolation is expanded.
FIG. 10 is a diagram for explaining the interpolation processing according to Modification 3.
For example, if fine irregularities unexpectedly arise over part of the surface of the transparent member 41 shown in FIG. 4, specular reflection is unlikely to occur on that part of the surface and diffuse reflection becomes dominant. In that range, corresponding points for the surface of the member 41 are therefore hard to extract in the corresponding-point search, and as a result the distance to the member 41 is hard to obtain.
For example, in FIG. 10, in the surrounding range R11 containing the pixel block 212 to be interpolated, diffuse reflection may become dominant due to this phenomenon, making the distance to the surface of the transparent member 41 difficult to calculate. That is, even in the pixel blocks 212 around the pixel block 212 to be interpolated, distance acquisition may become unstable, with no distance obtained, due to such unexpected fine irregularities. If only the immediately surrounding pixel blocks 212 are used for interpolation, the distance of the pixel block 212 to be interpolated may therefore not be interpolated properly.
In Modification 3, the range of pixel blocks 212 used for interpolation is therefore expanded beyond the pixel blocks 212 immediately surrounding the one to be interpolated. In the example of FIG. 10, a range R12, wider than the range R11 in which diffuse reflection can normally be assumed to dominate due to fine irregularities and the like, is set around the pixel block 212 to be interpolated, and the distance of that pixel block 212 is interpolated from the distances of the pixel blocks 212 within the range R12. Any conventionally known interpolation method, such as linear interpolation, may be applied.
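A sketch of the widened interpolation follows, assuming inverse-distance weighting over the expanded range; the weighting scheme and the radius value are our assumptions, since the text only requires a known method such as linear interpolation.

```python
import numpy as np

def interpolate_distance(dist: np.ndarray, valid: np.ndarray,
                         r: int, c: int, radius: int = 3) -> float:
    """Fill the distance of block (r, c) from valid blocks within `radius`
    blocks; Modification 3 corresponds to choosing a radius wider than the
    immediate neighborhood (radius = 1)."""
    rows, cols = dist.shape
    num = den = 0.0
    for i in range(max(0, r - radius), min(rows, r + radius + 1)):
        for j in range(max(0, c - radius), min(cols, c + radius + 1)):
            if valid[i, j] and (i, j) != (r, c):
                w = 1.0 / (abs(i - r) + abs(j - c))  # nearer blocks weigh more
                num += w * dist[i, j]
                den += w
    return num / den if den > 0.0 else float("nan")
```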
According to Modification 3, the range used for interpolation is expanded as described above, so the distance of the pixel block 212 to be interpolated can be interpolated more properly. The quality of the distance image in which the distance to the surface of the transparent member 41 is mapped can thus be improved.
<Modification 4>
In Modification 4, an irradiation position changing unit that changes the position at which the pattern light strikes the subject is provided in the imaging device 1. Modification 4 also provides an imaging position changing unit that changes the imaging position of the imaging unit 20 with respect to the subject.
FIG. 11 is a diagram showing the configuration of the imaging device 1 according to Modification 4.
The projection unit 10 is supported by a support shaft 51 parallel to the Y-axis. The support shaft 51 is fixed to the center of a gear 52, which meshes with a gear 54 fixed to the drive shaft of a motor 53, for example a stepping motor. When the motor 53 is driven, the gear 52 turns and the support shaft 51 turns with it, so the projection unit 10 pivots about the support shaft 51 parallel to the X-Z plane. The irradiation position of the pattern light on the subject 40 is thereby changed in the X-axis direction. The support shaft 51, gear 52, motor 53, and gear 54 constitute an irradiation position changing unit 50 that changes the irradiation position of the pattern light on the subject.
The imaging unit 20 is supported by a support shaft 61 parallel to the Y-axis. The support shaft 61 is fixed to the center of a gear 62, which meshes with a gear 64 fixed to the drive shaft of a motor 63, for example a stepping motor. When the motor 63 is driven, the gear 62 turns and the support shaft 61 turns with it, so the imaging unit 20 pivots about the support shaft 61 parallel to the X-Z plane. The imaging position of the imaging unit 20 with respect to the subject 40 is thereby changed in the X-axis direction. The support shaft 61, gear 62, motor 63, and gear 64 constitute an imaging position changing unit 60 that changes the imaging position of the imaging unit 20 with respect to the subject.
With this configuration, when the irradiation position changing unit 50 changes the irradiation position of the pattern light on the subject, the dot pattern of the pattern light incident on each pixel block 212 of the image sensor 21 changes. In each pixel block 212, both the dot pattern of the pattern light specularly reflected at the surface of the transparent member 41 and the dot pattern of the pattern light diffusely reflected at the surface of the member 42 behind it change. That is, in each pixel block 212, the combination of the dot patterns arriving from the transparent member 41 and the member 42 differs before and after the irradiation position changing unit 50 changes the irradiation position of the pattern light on the subject.
Consequently, a pixel block 212 whose dot pattern had lost uniqueness through the superposition of these two dot patterns before the change may recover that uniqueness once the irradiation position is changed, so that multiple corresponding points (search positions P1 and P2) may be obtained, as shown in FIG. 5. The distance for that pixel block 212 can then be obtained from the corresponding points acquired after the change, without interpolation. In this case, the signal processing unit 31 performs the distance calculation based on the corresponding points while further taking into account the change of irradiation position made by the irradiation position changing unit 50.
FIG. 12 is a flowchart showing the process of obtaining the distance to the subject according to Modification 4.
In this processing, the signal processing unit 31 additionally controls the irradiation position changing unit 50. The signal processing unit 31 first executes steps S101 to S106 and S109 as in FIG. 6, without changing the irradiation position, with the projection unit 10 set at the same position as in the above embodiment. Next, the signal processing unit 31 drives the motor 53 of the irradiation position changing unit 50 by a predetermined number of steps to change the irradiation position of the pattern light (S121). The signal processing unit 31 then remeasures the distance for the pixel blocks 212 for which no distance could be obtained in steps S101 to S106 and S109 (S122). Specifically, for these pixel blocks 212 it again performs the stereo corresponding-point search as in step S102 and the distance calculation of steps S103 to S105 and S109.
If any pixel block 212 still has no distance after this remeasurement, the signal processing unit 31 executes interpolation for it (S107). When distances have thus been set for all pixel blocks 212, the signal processing unit 31 transmits one screen of distance image to the external device via the communication interface 34.
According to Modification 4, since the irradiation position changing unit 50 can change the projection position of the pattern light on the subject as described above, the distances of pixel blocks 212 that could not be obtained by, for example, steps S102 to S105 and S109 of FIG. 12 can be obtained by the processing of steps S121 and S122 without interpolation. The quality of the distance image can thus be improved.
In the processing of FIG. 12, the irradiation position is changed only once, but it may be changed multiple times, with the distances of pixel blocks 212 that could not be obtained remeasured each time. This reduces the number of pixel blocks 212 ultimately left without a distance, and hence the number subjected to the interpolation of step S107, further improving the quality of the distance image.
 This process of re-measuring the distance in response to a change of the irradiation position may also be applied to the processes of FIGS. 7 and 8, thereby improving the quality of the distance images obtained by those processes.
 Furthermore, although the change of the irradiation position described above is on the order of driving the motor 53 by a few steps, the irradiation position may be changed by a larger amount, to the extent that the projection range of the pattern light is shifted to a different range. This allows the distance to the subject to be measured over a wider range.
 In this case, however, the change of the irradiation position may result in the pattern light no longer being projected onto the imaging element 21. In such a case, the signal processing unit 31 may drive the imaging position changing unit 60 so that the pattern light is again projected onto the imaging element 21. For example, the signal processing unit 31 may hold in advance a table that associates the drive direction and drive amount (number of steps) of the motor 53 with the drive direction and drive amount (number of steps) of the motor 63, and, based on this table, drive the imaging position changing unit 60 by the drive direction and drive amount corresponding to the drive applied to the irradiation position changing unit 50.
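 For illustration, such a correspondence table may be held as a simple lookup keyed by the drive applied to the motor 53; the table entries, the key format, and the function name below are assumptions and not values disclosed by the present embodiment.

```python
from typing import Dict, Tuple

# Hypothetical correspondence table: (direction, steps) applied to motor 53
# -> (direction, steps) to apply to motor 63 so that the pattern light
# remains projected onto the imaging element 21. Entries are placeholders.
MOTOR_TABLE: Dict[Tuple[str, int], Tuple[str, int]] = {
    ("cw", 10): ("cw", 8),
    ("cw", 20): ("cw", 17),
    ("ccw", 10): ("ccw", 8),
    ("ccw", 20): ("ccw", 17),
}

def follow_irradiation_change(direction: str, steps: int) -> Tuple[str, int]:
    """Look up the drive of motor 63 (imaging position changing unit 60)
    corresponding to a drive of motor 53 (irradiation position changing unit 50)."""
    try:
        return MOTOR_TABLE[(direction, steps)]
    except KeyError:
        raise ValueError(f"no table entry for motor 53 drive {(direction, steps)}")
```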
 In the configuration of FIG. 11, the projection position and the imaging position are changed by rotating the projection unit 10 and the imaging unit 20 about the support shafts 51 and 61, which are parallel to the Y axis; however, the method of changing the projection position and the imaging position is not limited to this. For example, the projection unit 10 and the imaging unit 20 may be moved in the X-axis or Y-axis direction, or the subject 40 may be moved in the X-axis or Y-axis direction. In the latter case, for example, an X-Y stage 70 is used as the mounting table on which the subject 40 is placed. Alternatively, the subject 40 may be tilted from a state parallel to the X-Y plane, or the relative distance between the subject 40 and the projection unit 10 and imaging unit 20 may be changed.
 <Other Modifications>
 In the above embodiment, as shown in FIG. 6, the corresponding points for the transparent member 41 are first extracted (S103, S104, S109), and distances are then calculated for the extracted corresponding points. However, when a plurality of corresponding points are extracted, distances may first be calculated for all of the corresponding points, and the shortest of these distances may then be selected as the distance corresponding to the transparent member 41. The same applies to the processes of FIGS. 7 and 8.
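 A sketch of this order of operations is shown below; it assumes the standard active-stereo triangulation relation Z = f·B/d (focal length f, baseline B, disparity d), which the present disclosure does not spell out, as well as hypothetical parameter names.

```python
def distance_to_front_member(disparities, f, B):
    """Triangulate a distance Z = f * B / d for every extracted corresponding
    point (disparity d) first, then select the shortest as the distance to
    the transparent member 41, whose front surface yields the smallest
    distance. f and B are assumed calibration values; returns None when no
    corresponding point was found for the pixel block.
    """
    distances = [f * B / d for d in disparities if d > 0]
    return min(distances) if distances else None
```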
 Furthermore, the processes of FIGS. 6 and 7 may be performed in parallel so that the distances to the members 41 and 42 of the subject 40 are obtained simultaneously. Similarly, in the process of FIG. 8, a plurality of setting ranges may be set so that the distances to the members 41 and 42 are obtained simultaneously.
 The number of imaging units 20 is not necessarily limited to one; a plurality of imaging units 20 that capture images from mutually different directions may be arranged. The emission wavelength of the light source 11 need not be in the infrared band; it may be in the visible band, for example.
 In the above embodiment, the search range of the stereo corresponding-point search is the row direction; however, the search range may be the column direction, or a direction combining rows and columns.
 The imaging processing unit 33 may correct distortion of the captured image caused by, for example, the distortion aberration of the imaging lens 22, and distortion caused by the distortion aberration of the imaging lens 22 or the like may likewise be corrected in the reference image held in advance in the signal processing unit 31.
 In the above embodiment, a correlation value is obtained for each pixel block 212; however, correlation values between pixel blocks 212 may additionally be obtained by processing such as parabola fitting, and the corresponding-point search may be performed at this finer resolution.
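 The sub-block refinement can be realized, for example, with the standard three-point parabola vertex formula; the sketch below is an illustrative assumption, since the present disclosure does not specify the fitting procedure.

```python
def subpixel_peak(c_left, c_center, c_right):
    """Refine a correlation peak to sub-block precision by fitting a
    parabola through three neighboring correlation values.

    For correlation values c(x-1), c(x), c(x+1) around an integer peak x,
    the vertex of the fitted parabola lies at x + delta with
        delta = 0.5 * (c(x-1) - c(x+1)) / (c(x-1) - 2*c(x) + c(x+1)),
    with delta in the range (-0.5, +0.5).
    """
    denom = c_left - 2.0 * c_center + c_right
    if denom == 0.0:  # flat neighborhood: keep the integer peak position
        return 0.0
    return 0.5 * (c_left - c_right) / denom
```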
 The configuration of the imaging device 1 is not limited to that shown in the above embodiment; for example, a photosensor array in which a plurality of photosensors are arranged in a matrix may be used as the imaging element 21.
 In the above embodiment, the distance to the subject is calculated by a stereo corresponding-point search between a reference image held in advance and the captured image. Alternatively, one of two captured images acquired by two imaging units arranged in a stereo configuration may be used as the reference image, and the distance to the subject may be calculated by a stereo corresponding-point search between the other captured image and this reference image. In this case, each of the two imaging units has the same configuration as the imaging unit 20 of FIG. 1. The projection unit 10 may be arranged so as to project the pattern light onto the range where the imaging fields of view of the two imaging units overlap. In this case as well, the polarizing filter arranged in each imaging unit may be arranged so as to extract light of the same polarization direction as the light emitted from the projection unit 10. The number of imaging units is not limited to two; three or more imaging units that capture images from mutually different directions may be arranged.
 The distance measurement method is not necessarily limited to the stereo corresponding-point search; for example, a TOF measurement method may be used. In this case, by omitting the pattern generator 13 from the configuration of FIG. 1, a TOF configuration that performs distance measurement by a so-called flash method can be realized. In this configuration, the projection lens 14 may have a diffusing action, for example that of a concave lens.
 In this configuration, the signal processing unit 31 causes the light source 11 to emit pulsed light, and measures the distance to the subject from the time difference between this emission timing and the timing at which each pixel of the imaging element 21 receives the light reflected from the subject. That is, the region of each pixel serves as the unit region for distance measurement.
 In this case, for the subject shown in FIG. 4, both the light reflected at the surface of the transparent member 41 and the light reflected at the member 42 behind it are incident. Based on these two reflected light components, the signal processing unit 31 obtains the above time difference for each, and calculates a distance from each time difference. The signal processing unit 31 then selects, from the two distances obtained for each pixel, the shorter one, the longer one, or the one contained in a predetermined setting range, and generates the distance image.
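 This per-pixel selection can be sketched as follows, using the familiar TOF relation d = c·Δt/2; the argument names, the selection modes, and the representation of the setting range (cf. ΔW) are illustrative assumptions.

```python
C = 299_792_458.0  # speed of light [m/s]

def tof_pixel_distance(echo_delays, mode="shortest", setting_range=None):
    """Convert per-pixel echo delays into distances (d = C * dt / 2) and
    select one per the policy in the text: the shorter distance (front
    transparent member 41), the longer one (rear member 42), or the one
    falling inside a preset range.
    """
    distances = [C * dt / 2.0 for dt in echo_delays]
    if not distances:
        return None
    if mode == "shortest":
        return min(distances)
    if mode == "longest":
        return max(distances)
    if mode == "range" and setting_range is not None:
        lo, hi = setting_range
        in_range = [d for d in distances if lo <= d <= hi]
        return in_range[0] if in_range else None
    raise ValueError(f"unknown selection mode: {mode}")
```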
 With this configuration as well, the distance to the transparent member 41 or to the member 42 can be obtained appropriately, as in the above embodiment and Modifications 1 and 2.
 When the TOF method is used, the flash method is not mandatory; a method in which a circular beam is scanned two-dimensionally, or a method in which a line beam is scanned in its short-axis direction, may be used instead. In these cases, the pattern generator 13 and the projection lens 14 are omitted from the configuration of FIG. 1, and a configuration for scanning the beam is added.
 For example, in the method in which a circular beam is scanned two-dimensionally, the light collimated by the collimator lens 12 is scanned two-dimensionally by an optical deflector such as a MEMS mirror. In the method in which a line beam is scanned in its short-axis direction, a cylindrical lens is arranged downstream of the collimator lens 12 to generate the line beam, and an optical deflector for scanning the line beam in the short-axis direction is arranged near the focal position of the cylindrical lens.
 Also in the TOF case, the light source 11 may be a laser light source. In this case, the polarizing filter 15 may be omitted from the projection unit 10, and the laser light source may be arranged so that the direction of its linearly polarized light is aligned with the direction of the polarizing filter 23 of the imaging unit 20.
 In addition, the embodiments of the present invention may be modified in various ways as appropriate within the scope of the technical ideas set forth in the claims.
 1 Imaging device
 10 Projection unit
 15 Polarizing filter
 20 Imaging unit
 21 Imaging element
 22 Imaging lens
 23 Polarizing filter
 31 Signal processing unit
 40 Subject
 50 Irradiation position changing unit
 60 Imaging position changing unit
 ΔW Setting range

Claims (8)

  1.  An imaging device comprising:
     a projection unit that projects light with a substantially uniform polarization direction;
     an imaging unit that captures an image of a subject onto which the light is projected; and
     a signal processing unit that processes the captured image acquired by the imaging unit and measures a distance to the subject, wherein
     the imaging unit includes:
      an imaging lens;
      a polarizing filter that extracts light of the same polarization direction as the light emitted from the projection unit; and
      an imaging element that receives light from the subject via the imaging lens and the polarizing filter, and
     the signal processing unit measures the distance for each region on the captured image without limiting the distance to be measured to one.
  2.  The imaging device according to claim 1, wherein the signal processing unit selects the shortest distance as a measurement result when a plurality of distances are obtained for the region.
  3.  The imaging device according to claim 1, wherein the signal processing unit selects the longest distance as a measurement result when a plurality of distances are obtained for the region.
  4.  The imaging device according to claim 1, wherein, when a distance contained in a range preset for the region is obtained, the signal processing unit acquires this distance as a measurement result.
  5.  The imaging device according to any one of claims 1 to 4, further comprising an irradiation position changing unit that changes an irradiation position of the light on the subject.
  6.  The imaging device according to any one of claims 1 to 5, further comprising an imaging position changing unit that changes an imaging position of the imaging unit with respect to the subject.
  7.  The imaging device according to any one of claims 1 to 6, wherein the projection unit projects pattern light having a predetermined intensity distribution, and the signal processing unit measures the distance to the subject by comparing the captured image captured by the imaging unit with a reference image, which is a captured image obtained when the subject is at a predefined distance.
  8.  The imaging device according to any one of claims 1 to 7, wherein the projection unit includes a polarizing filter for generating the light of the substantially uniform polarization direction.
PCT/JP2023/003972 2022-03-08 2023-02-07 Imaging device WO2023171203A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-035198 2022-03-08
JP2022035198 2022-03-08

Publications (1)

Publication Number Publication Date
WO2023171203A1

Family

ID=87936617

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/003972 WO2023171203A1 (en) 2022-03-08 2023-02-07 Imaging device

Country Status (1)

Country Link
WO (1) WO2023171203A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05203413A (en) * 1992-01-29 1993-08-10 Hitachi Ltd Noncontact method and instrument for measuring displacement and noncontact film thickness measuring instrument
JP2003057168A (en) * 2001-08-20 2003-02-26 Omron Corp Road-surface judging apparatus and method of installing and adjusting the same
US8547531B2 (en) * 2010-09-01 2013-10-01 The United States Of America As Represented By The Administrator Of The National Aeronautics Space Administration Imaging device
JP2015197340A (en) * 2014-03-31 2015-11-09 国立大学法人 東京大学 inspection system and inspection method
JP2020004085A (en) * 2018-06-28 2020-01-09 キヤノン株式会社 Image processor, image processing method and program
WO2020214959A1 (en) * 2019-04-17 2020-10-22 The Regents Of The University Of Michigan Multidimensional materials sensing systems and methods

Similar Documents

Publication Publication Date Title
CN107037443B (en) Method for determining distance based on triangulation principle and distance measuring unit
US6741082B2 (en) Distance information obtaining apparatus and distance information obtaining method
WO2011102025A1 (en) Object detection device and information acquisition device
EP2813809A1 (en) Device and method for measuring the dimensions of an objet and method for producing an item using said device
EP1946376B1 (en) Apparatus for and method of measuring image
US7502100B2 (en) Three-dimensional position measurement method and apparatus used for three-dimensional position measurement
JP4718486B2 (en) System and method for optical navigation using projected fringe technique
JP2013190394A (en) Pattern illumination apparatus and distance measuring apparatus
JP2013257162A (en) Distance measuring device
JP2021113832A (en) Surface shape measurement method
JP3414624B2 (en) Real-time range finder
WO2023171203A1 (en) Imaging device
JP2023176026A (en) Method for determining scan range
JP6362058B2 (en) Test object measuring apparatus and article manufacturing method
JP2014238299A (en) Measurement device, calculation device, and measurement method for inspected object, and method for manufacturing articles
JP4973836B2 (en) Displacement sensor with automatic measurement area setting means
JP2011242230A (en) Shape measuring device
JP6149990B2 (en) Surface defect detection method and surface defect detection apparatus
JP4339165B2 (en) Light receiving center detection method, distance measuring device, angle measuring device, and optical measuring device
JP2006313143A (en) Irregularity inspection device and method thereof
JP6888429B2 (en) Pattern irradiation device, imaging system and handling system
JP2022023609A (en) Survey device
JP6820516B2 (en) Surface shape measurement method
JP4788968B2 (en) Focal plane tilt type confocal surface shape measuring device
JP2007333458A (en) Peripheral obstacle detector

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23766383

Country of ref document: EP

Kind code of ref document: A1