WO2013031447A1 - Object detection device and information acquisition device - Google Patents

Object detection device and information acquisition device

Info

Publication number
WO2013031447A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixels
segment
area
distance
segment area
Prior art date
Application number
PCT/JP2012/069125
Other languages
French (fr)
Japanese (ja)
Inventor
Hiroyuki Muto (武藤 裕之)
Original Assignee
Sanyo Electric Co., Ltd. (三洋電機株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sanyo Electric Co., Ltd.
Publication of WO2013031447A1

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/002 - Specific input/output arrangements not covered by G06F3/01 - G06F3/16
    • G06F3/005 - Input arrangements through a video camera
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304 - Detection arrangements using opto-electronic means
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 - Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 - Systems determining position data of a target
    • G01S17/46 - Indirect determination of position data
    • G01S17/48 - Active triangulation systems, i.e. using the transmission and reflection of electromagnetic waves other than radio waves
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G01S17/89 - Lidar systems specially adapted for specific applications for mapping or imaging

Definitions

  • The present invention relates to an object detection apparatus that detects an object in a target area based on the state of reflected light when light is projected onto the target area, and to an information acquisition apparatus suitable for use in such an object detection apparatus.
  • Object detection devices using light have been developed in various fields.
  • An object detection apparatus using a so-called distance image sensor can detect not only a planar image on a two-dimensional plane but also the shape and movement of the detection target object in the depth direction.
  • In such a device, light in a predetermined wavelength band is projected from a laser light source or an LED (light-emitting diode) onto the target area, and the reflected light is received by a light receiving element such as a CMOS image sensor.
  • In a distance image sensor of the type that irradiates the target region with laser light having a predetermined dot pattern, the reflected light of the dot-pattern laser light from the target region is received by a light receiving element. Then, based on the light receiving position of each dot on the light receiving element, the distance to each part of the detection target object (the irradiation position of each dot on the detection target object) is detected using triangulation (see, for example, Non-Patent Document 1).
  • In such an apparatus, the dot pattern captured by the image sensor when a reference plane is placed at a position separated by a predetermined distance is compared with the dot pattern captured by the image sensor at the time of actual measurement to detect the distance.
  • Specifically, a plurality of areas are set in the dot pattern obtained with respect to the reference plane.
  • The object detection device detects the distance to the target object for each area based on the position, on the dot pattern captured at the time of actual measurement, of the dots included in each area. The object detection device then detects the shape, contour, movement, and the like of the object based on the distance information obtained for each area.
  • The more finely the areas are set, the higher the distance detection resolution becomes.
  • For example, the resolution of distance detection can be remarkably enhanced by setting the areas so that adjacent areas are shifted from each other by one pixel on the image sensor.
  • However, increasing the resolution of distance detection also increases the amount of calculation required for distance detection; on the other hand, the resolution of distance detection need only be set to such an extent that the shape and contour of the detection target object can be grasped from the acquired distance information.
  • the present invention has been made in view of this point, and an object of the present invention is to provide an information acquisition apparatus and an object detection apparatus that can suppress an increase in the amount of calculation while maintaining the accuracy of object detection.
  • The first aspect of the present invention relates to an information acquisition apparatus that acquires information on a target area using light.
  • The information acquisition device according to this aspect includes: a projection optical system that projects laser light with a predetermined dot pattern onto a target area; a light receiving optical system that is arranged side by side with the projection optical system, spaced apart from it by a predetermined distance, and that images the target area with an image sensor; a storage unit that holds a reference dot pattern imaged by the light receiving optical system when the laser light is irradiated onto a reference surface; and a distance acquisition unit that acquires the distance to the position in the target area corresponding to a segment area in the reference dot pattern by collating the dots in the segment area with a measured dot pattern acquired by imaging the target area during distance measurement.
  • Here, the distance acquisition unit sets the size of each segment area to a quadrangular shape of 5 pixels or more vertically and 5 pixels or more horizontally on the image sensor, and sets the interval between adjacent segment areas to 2 pixels or more and to not more than an upper limit value corresponding to the size of the segment area.
  • the second aspect of the present invention relates to an object detection apparatus.
  • the object detection apparatus according to this aspect includes the information acquisition apparatus according to the first aspect.
  • According to the present invention, it is possible to provide an information acquisition device and an object detection device that can suppress an increase in the amount of calculation while maintaining the accuracy of object detection.
  • In the embodiment described below, an information acquisition device of the type that irradiates a target area with laser light having a predetermined dot pattern is exemplified.
  • FIG. 1 shows a schematic configuration of the object detection apparatus according to the present embodiment.
  • the object detection device includes an information acquisition device 1 and an information processing device 2.
  • the television 3 is controlled by a signal from the information processing device 2.
  • The information acquisition device 1 projects infrared light over the entire target area and receives the reflected light with a CMOS image sensor, thereby acquiring the distance to each part of the objects in the target area (hereinafter referred to as "three-dimensional distance information").
  • the acquired three-dimensional distance information is sent to the information processing apparatus 2 via the cable 4.
  • the information processing apparatus 2 is, for example, a controller for TV control, a game machine, a personal computer, or the like.
  • the information processing device 2 detects an object in the target area based on the three-dimensional distance information received from the information acquisition device 1, and controls the television 3 based on the detection result.
  • the information processing apparatus 2 detects a person based on the received three-dimensional distance information and detects the movement of the person from the change in the three-dimensional distance information.
  • When the information processing device 2 is a controller for television control, an application program is installed that detects a person's gesture from the received three-dimensional distance information and outputs a control signal to the television 3 in accordance with the gesture.
  • the user can cause the television 3 to execute a predetermined function such as channel switching or volume up / down by making a predetermined gesture while watching the television 3.
  • When the information processing device 2 is a game machine, an application program is installed that detects the person's movement from the received three-dimensional distance information, and operates a character on the television screen and changes the game battle situation in accordance with the detected movement. In this case, the user can experience the realism of playing the game as the character on the television screen by making predetermined movements while watching the television 3.
  • FIG. 2 is a diagram showing the configuration of the information acquisition device 1 and the information processing device 2.
  • the information acquisition apparatus 1 includes a projection optical system 100 and a light receiving optical system 200 as a configuration of an optical unit.
  • the projection optical system 100 and the light receiving optical system 200 are arranged in the information acquisition apparatus 1 so as to be aligned in the X-axis direction.
  • the projection optical system 100 includes a laser light source 110, a collimator lens 120, a leakage mirror 130, a diffractive optical element (DOE: Diffractive Optical Element) 140, and an FMD (Front Monitor Diode) 150.
  • the light receiving optical system 200 includes an aperture 210, an imaging lens 220, a filter 230, and a CMOS image sensor 240.
  • In addition, the information acquisition apparatus 1 includes, as circuit components, a CPU (Central Processing Unit) 21, a laser driving circuit 22, a PD signal processing circuit 23, an imaging signal processing circuit 24, an input/output circuit 25, and a memory 26.
  • the laser light source 110 outputs laser light in a narrow wavelength band with a wavelength of about 830 nm in a direction away from the light receiving optical system 200 (X-axis negative direction).
  • the collimator lens 120 converts the laser light emitted from the laser light source 110 into light slightly spread from parallel light (hereinafter simply referred to as “parallel light”).
  • The leakage mirror 130 is composed of a multilayer film of dielectric thin films, and the number of layers and the film thickness are designed so that the reflectance is slightly lower than 100% and the transmittance is substantially smaller than the reflectance.
  • the leakage mirror 130 reflects most of the laser light incident from the collimator lens 120 side in the direction toward the DOE 140 (Z-axis direction) and transmits the remaining part in the direction toward the FMD 150 (X-axis negative direction).
  • the DOE 140 has a diffraction pattern on the incident surface. Due to the diffraction effect of the diffraction pattern, the laser light incident on the DOE 140 is converted into a dot pattern laser light and irradiated onto the target region.
  • the diffraction pattern has, for example, a structure in which a step type diffraction hologram is formed in a predetermined pattern. The diffraction hologram is adjusted in pattern and pitch so as to convert the laser light converted into parallel light by the collimator lens 120 into laser light of a dot pattern.
  • the DOE 140 irradiates the target region with the laser beam incident from the leakage mirror 130 as a laser beam having a dot pattern that spreads radially.
  • the size of each dot in the dot pattern depends on the beam size of the laser light when entering the DOE 140.
  • the FMD 150 receives the laser light transmitted through the leakage mirror 130 and outputs an electrical signal corresponding to the amount of light received.
  • the laser light reflected from the target area enters the imaging lens 220 through the aperture 210.
  • the aperture 210 stops the light from the outside so as to match the F number of the imaging lens 220.
  • the imaging lens 220 collects the light incident through the aperture 210 on the CMOS image sensor 240.
  • the filter 230 is an IR filter (Infrared Filter) that transmits light in the infrared wavelength band including the emission wavelength (about 830 nm) of the laser light source 110 and cuts the wavelength band of visible light.
  • the CMOS image sensor 240 receives the light collected by the imaging lens 220 and outputs a signal (charge) corresponding to the amount of received light to the imaging signal processing circuit 24 for each pixel.
  • In this embodiment, the signal output speed is increased so that the signal (charge) of each pixel can be output to the imaging signal processing circuit 24 with high responsiveness after light reception at the pixel.
  • CPU 21 controls each unit according to a control program stored in memory 26.
  • the CPU 21 is provided with the functions of a laser control unit 21a for controlling the laser light source 110 and a distance acquisition unit 21b for generating three-dimensional distance information.
  • the laser drive circuit 22 drives the laser light source 110 according to a control signal from the CPU 21.
  • the PD signal processing circuit 23 amplifies and digitizes the voltage signal corresponding to the amount of received light output from the FMD 150 and outputs it to the CPU 21.
  • Based on this signal, the CPU 21 determines whether to increase or decrease the light amount of the laser light source 110 through processing by the laser control unit 21a.
  • In accordance with this determination, the laser control unit 21a transmits a control signal for changing the light emission amount of the laser light source 110 to the laser driving circuit 22. Thereby, the power of the laser light emitted from the laser light source 110 is controlled to be substantially constant.
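  • (Illustration: a minimal sketch, in Python, of the kind of light-amount feedback described above. The target level, gain, and function name are assumptions for illustration, not taken from the patent.)

```python
# Hypothetical sketch of the FMD 150 -> PD signal processing circuit 23 ->
# laser control unit 21a feedback: nudge the drive level so that the
# monitored light amount stays substantially constant.
def adjust_drive_level(drive_level: float, fmd_reading: float,
                       target: float = 1.0, gain: float = 0.1) -> float:
    error = target - fmd_reading       # positive when received light is low
    return drive_level + gain * error  # increase drive when light is low
```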
  • the imaging signal processing circuit 24 controls the CMOS image sensor 240 and sequentially takes in the signal (charge) of each pixel generated by the CMOS image sensor 240 for each line. Then, the captured signals are sequentially output to the CPU 21. Based on the signal (imaging signal) supplied from the imaging signal processing circuit 24, the CPU 21 calculates the distance from the information acquisition device 1 to each part of the detection target by processing by the distance acquisition unit 21b.
  • the input / output circuit 25 controls data communication with the information processing apparatus 2.
  • the information processing apparatus 2 includes a CPU 31, an input / output circuit 32, and a memory 33.
  • the information processing apparatus 2 has a configuration for performing communication with the television 3 and for reading information stored in an external memory such as a CD-ROM and installing it in the memory 33.
  • the configuration of these peripheral circuits is not shown for the sake of convenience.
  • the CPU 31 controls each unit according to a control program (application program) stored in the memory 33.
  • the CPU 31 is provided with the function of the object detection unit 31a for detecting an object in the image.
  • a control program is read from a CD-ROM by a drive device (not shown) and installed in the memory 33, for example.
  • the object detection unit 31a detects a person in the image and its movement from the three-dimensional distance information supplied from the information acquisition device 1. Then, a process for operating the character on the television screen according to the detected movement is executed by the control program.
  • When the control program is a program for television control, the object detection unit 31a detects a person in the image and the person's movement (gesture) from the three-dimensional distance information supplied from the information acquisition device 1. Then, processing for controlling the functions of the television 3 (channel switching, volume adjustment, etc.) is executed by the control program in accordance with the detected movement (gesture).
  • the input / output circuit 32 controls data communication with the information acquisition device 1.
  • FIG. 3 is a perspective view showing an installation state of the projection optical system 100 and the light receiving optical system 200.
  • the projection optical system 100 and the light receiving optical system 200 are disposed on the base plate 300.
  • the optical members constituting the projection optical system 100 are installed in the housing 100a, and the housing 100a is installed on the base plate 300. Thereby, the projection optical system 100 is arranged on the base plate 300.
  • Reference numerals 150a and 240a denote FPCs (flexible printed circuit boards) for supplying signals from the FMD 150 and the CMOS image sensor 240 to a circuit board (not shown), respectively.
  • the optical member constituting the light receiving optical system 200 is installed in the holder 200a, and this holder 200a is attached to the base plate 300 from the back surface of the base plate 300. As a result, the light receiving optical system 200 is disposed on the base plate 300.
  • On the base plate 300, the periphery of the arrangement position of the light receiving optical system 200 is raised by one step in the Z-axis direction, so that the light receiving optical system 200 is positioned higher in the Z-axis direction than the projection optical system 100.
  • the positions of the exit pupil of the projection optical system 100 and the entrance pupil of the light receiving optical system 200 substantially coincide with each other in the Z-axis direction. Further, the projection optical system 100 and the light receiving optical system 200 are arranged with a predetermined distance in the X-axis direction so that the projection center of the projection optical system 100 and the imaging center of the light-receiving optical system 200 are aligned on a straight line parallel to the X axis. Installed at.
  • the installation interval between the projection optical system 100 and the light receiving optical system 200 is set according to the distance between the information acquisition device 1 and the reference plane of the target area.
  • the distance between the reference plane and the information acquisition device 1 varies depending on how far away the target is to be detected. The closer the distance to the target to be detected is, the narrower the installation interval between the projection optical system 100 and the light receiving optical system 200 is. Conversely, as the distance to the target to be detected increases, the installation interval between the projection optical system 100 and the light receiving optical system 200 increases.
  • FIG. 4A is a diagram schematically showing the irradiation state of the laser light on the target region
  • FIG. 4B is a diagram schematically showing the light receiving state of the laser light in the CMOS image sensor 240.
  • FIG. 4B shows the light receiving state when a flat surface (screen) exists in the target area and a person is present in front of the screen.
  • During distance measurement, the projection optical system 100 irradiates the target region with laser light having a dot pattern (hereinafter, the entire laser light having this pattern is referred to as "DP light").
  • the luminous flux region of DP light is indicated by a solid line frame.
  • In the DP light, dot regions (hereinafter simply referred to as "dots") in which the intensity of the laser light is increased by the diffractive action of the DOE 140 are scattered according to the dot pattern.
  • The DP light reflected thereby is distributed on the CMOS image sensor 240 as shown in FIG. 4B.
  • the entire DP light receiving area on the CMOS image sensor 240 is indicated by a dashed frame, and the DP light receiving area incident on the imaging effective area of the CMOS image sensor 240 is indicated by a solid frame.
  • The effective imaging area of the CMOS image sensor 240 is the area, among the areas where the CMOS image sensor 240 receives the DP light, in which the sensor outputs a signal, and has a size of, for example, VGA (640 pixels × 480 pixels).
  • FIG. 5 is a diagram for explaining a reference pattern setting method used in the distance detection method.
  • a flat reflection plane RS perpendicular to the Z-axis direction is disposed at a position at a predetermined distance Ls from the projection optical system 100.
  • the emitted DP light is reflected by the reflection plane RS and enters the CMOS image sensor 240 of the light receiving optical system 200.
  • an electrical signal for each pixel in the effective imaging area is output from the CMOS image sensor 240.
  • The output electric signal value (pixel value) for each pixel is developed on the memory 26 of FIG. 2.
  • an image including all pixel values obtained by reflection from the reflection surface RS is referred to as a “reference image”, and the reflection surface RS is referred to as a “reference surface”.
  • FIG. 5B shows a state in which the light receiving surface is seen through in the positive Z-axis direction from the back side of the CMOS image sensor 240. The same applies to the subsequent drawings.
  • a plurality of segment areas having a predetermined size are set for the reference pattern area thus set.
  • the size of the segment area is determined in consideration of the contour extraction accuracy of the object based on the obtained distance information and the load of the calculation amount of distance detection for the CPU 21. The relationship between the size of the segment area and the contour extraction accuracy of the object will be described later with reference to FIG.
  • Each segment area has a size of 7 pixels × 7 pixels, and the center pixel of each segment area is indicated by a cross. That is, each segment area is set to a square whose side length is seven times the interval between adjacent pixels.
  • the segment areas are set so that adjacent segment areas are arranged at intervals of one pixel in the X-axis direction and the Y-axis direction with respect to the reference pattern area. That is, a certain segment area is set at a position shifted by one pixel with respect to a segment area adjacent to the segment area in the X-axis direction and the Y-axis direction. At this time, each segment area is dotted with dots in a unique pattern. Therefore, the pattern of pixel values in the segment area is different for each segment area.
  • the relationship between the segment area interval and the distance detection resolution in the comparative example will be described later with reference to FIG.
  • Information on the position of the reference pattern area on the CMOS image sensor 240, the pixel values (reference pattern) of all pixels included in the reference pattern area, and information on the segment areas set for the reference pattern area are stored in the memory 26 of FIG. 2. These pieces of information stored in the memory 26 are hereinafter referred to as the "reference template".
  • the CPU 21 calculates the distance to each part of the object based on the shift amount of the dot pattern in each segment area obtained from the reference template.
  • The DP light (DPn) corresponding to a predetermined segment area Sn on the reference pattern is reflected by the object and is incident on a region Sn′ different from the segment area Sn. Since the projection optical system 100 and the light receiving optical system 200 are adjacent in the X-axis direction, the displacement direction of the region Sn′ with respect to the segment area Sn is parallel to the X axis. In the case of FIG. 5A, since the object is at a position closer than the distance Ls, the region Sn′ is displaced in the positive X-axis direction with respect to the segment area Sn. If the object were at a position farther than the distance Ls, the region Sn′ would be displaced in the negative X-axis direction with respect to the segment area Sn.
  • Using the distance Ls, the distance Lr from the projection optical system 100 to the portion of the object irradiated with the DP light (DPn) is calculated by triangulation based on the displacement of the region Sn′ with respect to the segment area Sn.
  • Similarly, the distance from the projection optical system 100 is calculated for the portions of the object corresponding to the other segment areas.
  • (For the triangulation method, see, for example, Non-Patent Document 1: The 19th Annual Conference of the Robotics Society of Japan (September 18-20, 2001), Proceedings, pp. 1279-1280.)
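  • (Illustration: a minimal sketch, in Python, of a common structured-light triangulation relation of this kind. The baseline b, focal length f, and pixel pitch p are hypothetical values; the patent itself only states that triangulation is used.)

```python
# For a dot observed at reference distance Ls, a surface at distance Lr
# shifts the dot by d pixels along the X axis (positive when closer than Ls):
#     d * p = b * f * (1/Lr - 1/Ls)
# where b is the projection/light-receiving baseline, f the imaging focal
# length, and p the pixel pitch (all assumed example values below).
def distance_from_shift(d_pixels: float, Ls: float = 1.0, b: float = 0.05,
                        f: float = 0.004, p: float = 3e-6) -> float:
    inv_Lr = d_pixels * p / (b * f) + 1.0 / Ls
    return 1.0 / inv_Lr

print(distance_from_shift(0.0))   # 1.0: on the reference plane
print(distance_from_shift(10.0))  # ~0.87: closer than the reference plane
```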
  • In this distance detection, it is detected to which position on the CMOS image sensor 240 the segment area Sn of the reference template has been displaced at the time of actual measurement. This detection is performed by collating the dot pattern of the DP light irradiated onto the CMOS image sensor 240 at the time of actual measurement with the dot pattern included in the segment area Sn.
  • an image made up of all the pixel values obtained from the DP light irradiated to the imaging effective area on the CMOS image sensor 240 at the time of actual measurement will be referred to as “measured image”.
  • The effective imaging area of the CMOS image sensor 240 at the time of actual measurement has, for example, the VGA size (640 pixels × 480 pixels), as in the case of acquiring the reference image.
  • FIGS. 6A to 6E are diagrams for explaining such a distance detection method.
  • FIG. 6A is a diagram showing the reference pattern area set in the reference image on the CMOS image sensor 240,
  • FIG. 6B is a diagram showing an actually measured image on the CMOS image sensor 240 at the time of actual measurement.
  • FIGS. 6C to 6E are diagrams for explaining a method for collating the dot pattern of the DP light included in the actual measurement image and the dot pattern included in the segment area of the reference template.
  • In FIGS. 6A and 6B, only some of the segment areas are shown for convenience.
  • In FIG. 6B, as in FIG. 4B, a person is present in front of the reference plane as the detection target object, and the image of the person appears.
  • a search range Ri is set for the segment area Si on the actual measurement image.
  • the search range Ri has a predetermined width in the X-axis direction.
  • At the time of actual measurement, the segment area Si is fed pixel by pixel in the X-axis direction within the search range Ri, and at each feed position, the dot pattern of the segment area Si is compared with the dot pattern on the measured image.
  • a region corresponding to each feed position on the actually measured image is referred to as a “comparison region”.
  • a plurality of comparison areas having the same size as the segment area Si are set in the search range Ri, and the comparison areas adjacent in the X-axis direction are shifted by one pixel from each other.
  • The search range Ri is determined by the distances that can be detected in the directions in which the detection target object moves away from and approaches the information acquisition device 1 relative to the reference plane. In FIG. 6, the search range Ri is set to the range from a position shifted by x pixels in the negative X-axis direction to a position shifted by x pixels in the positive X-axis direction, relative to the pixel position on the measured image corresponding to the pixel position of the segment area Si on the reference image.
  • At each feed position, the degree of matching between the dot pattern of the segment area Si stored in the reference template and the dot pattern of the DP light on the measured image is obtained. The segment area Si is fed only in the X-axis direction within the search range Ri because, normally, the dot pattern of a segment area set in the reference template is displaced only within a predetermined range in the X-axis direction at the time of actual measurement.
  • However, for segment areas at the edges of the reference pattern area, the dot pattern corresponding to the segment area may protrude from the measured image in the X-axis direction.
  • For example, when the dot pattern corresponding to the segment area S1 at the most negative X-axis end of the reference pattern area is reflected by an object farther than the reference plane, that dot pattern is positioned further in the negative X-axis direction than the measured image. In this case, since the dot pattern corresponding to the segment area is not within the effective imaging area of the CMOS image sensor 240, this area cannot be matched properly. However, since areas other than such edge regions can be matched appropriately, the influence on object distance detection is small.
  • the effective imaging area of the CMOS image sensor 240 at the time of actual measurement can be made larger than the effective imaging area of the CMOS image sensor 240 at the time of acquiring the reference image.
  • For example, if the effective imaging area at the time of acquiring the reference image is set to the VGA size (640 pixels × 480 pixels), the effective imaging area at the time of actual measurement may be set larger by 30 pixels in each of the positive and negative X-axis directions. As a result, the measured image becomes larger than the reference image, but the edge regions can then also be matched appropriately.
  • the pixel value of each pixel in the reference pattern area and the pixel value of each pixel in each segment area of the measured image are binarized and stored in the memory 26.
  • For example, when the pixel values of the reference image and the measured image have 8-bit gradations, among the pixel values of 0 to 255, pixels whose value is equal to or greater than a predetermined threshold are converted to a pixel value of 1, pixels whose value is less than the threshold are converted to a pixel value of 0, and the converted values are stored in the memory 26.
  • Then, the degree of similarity between each comparison region and the segment area Si is obtained. That is, the difference between the pixel value of each pixel in the segment area Si and the pixel value of the corresponding pixel in the comparison region is obtained.
  • A value Rsad, obtained by summing the absolute values of these differences over all pixels in the comparison region, is acquired as the value indicating the degree of similarity.
  • As shown in FIG. 6D, the value Rsad is obtained for all the comparison regions in the search range Ri for the segment area Si.
  • FIG. 6E is a graph schematically showing the magnitude of the value Rsad at each feed position in the search range Ri.
  • The minimum value Bt1 is identified among the obtained values Rsad.
  • Next, the second smallest value Bt2 is identified among the obtained values Rsad. If the difference value Es between the minimum value Bt1 and the second smallest value Bt2 is less than a threshold value, the search for the segment area Si is regarded as an error.
  • the comparison area Ci corresponding to the minimum value Bt1 is determined as the movement area of the segment area Si.
  • In FIG. 6B, the comparison area Ci is detected at a position shifted by α pixels in the positive X-axis direction from the pixel position Si0 on the measured image (the same position as the pixel position of the segment area Si on the reference image). This is because the dot pattern of the DP light on the measured image is displaced in the positive X-axis direction from the segment area Si on the reference image by the detection target object (person) present at a position closer than the reference plane.
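  • (Illustration: a minimal sketch, in Python with NumPy, of the search just described: binarized patterns, Rsad as a sum of absolute differences, and the Bt1/Bt2 error check. The array names, threshold value, and omission of edge handling are assumptions.)

```python
import numpy as np

def match_segment(segment: np.ndarray, measured: np.ndarray,
                  x0: int, y0: int, search_x: int = 30,
                  threshold: int = 10):
    """segment: binarized (0/1) dot pattern of segment area Si.
    measured: binarized measured image; (x0, y0) is the position Si0.
    Returns the pixel deviation amount, or None on a search error.
    (Assumes the search range stays inside the image; edges omitted.)"""
    h, w = segment.shape
    shifts = list(range(-search_x, search_x + 1))        # search range Ri
    rsad = []
    for dx in shifts:
        comp = measured[y0:y0 + h, x0 + dx:x0 + dx + w]  # comparison region
        rsad.append(int(np.abs(segment.astype(int) - comp.astype(int)).sum()))
    order = np.argsort(rsad)
    bt1, bt2 = rsad[order[0]], rsad[order[1]]  # smallest, second smallest
    if bt2 - bt1 < threshold:                  # Es below threshold: error
        return None
    return shifts[order[0]]                    # pixel deviation amount
```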
  • segment area search is performed for all the segment areas from segment area S1 to segment area Sn.
  • When the segment area Si is detected, the amount of deviation of the detected position from the pixel position Si0 (hereinafter referred to as the "pixel deviation amount") is expressed in gray scales from white to black, and this value is stored in the memory 26 as the distance information of the segment area Si.
  • A gradation closer to black is assigned as the detected position of the segment area Si shifts in the negative X-axis direction within the search range Ri.
  • When the search for the segment area Si results in an error, the gradation corresponding to the position shifted furthest in the negative X-axis direction in the search range Ri, that is, the gradation closest to black, is stored in the memory 26.
  • an image in which the matching measurement result is expressed by white and black gradations is referred to as a “measurement image”.
  • FIG. 7 is a diagram showing a distance measurement example when the above distance detection method is used in a state where segment areas are set at intervals of one pixel as in the above comparative example.
  • FIG. 7A is a measured image obtained by irradiating DP light and imaging the reflected light in a state where a flat screen is disposed at a distance of 1000 mm from the information acquisition device 1 and a person stands at a distance of 800 mm from the device. The person holds out a hand about 200 mm forward. That is, in the figure, the distance from the information acquisition device 1 increases in the order of the person's hand, the person's trunk, and the screen.
  • FIG. 7B shows the measurement image obtained by matching the measured image of FIG. 7A with segment areas of 7 pixels × 7 pixels set at intervals of one pixel, as in FIG. 5C.
  • FIG. 7C shows the measurement image obtained by matching the measured image of FIG. 7A with segment areas of 15 pixels × 15 pixels set at intervals of one pixel.
  • The matching in FIG. 7 uses a reference image obtained by irradiating DP light, with only the screen arranged as the reference plane at the same position as the screen in the measured image of FIG. 7A, and imaging the reflected DP light.
  • the screen portion is shown with gradation close to black, and the person portion is shown with gradation close to white.
  • the hand portion of the person is shown with a gradation closer to white compared to other body portions. In this way, it can be seen that a rough human shape can be recognized in each measurement result.
  • The errors at the boundary between the person and the screen occur because dots reflected by the screen and dots reflected by the person are mixed in the comparison areas corresponding to the segment areas there, so that the difference value Es between the smallest Rsad and the second smallest Rsad (see FIG. 6E) does not exceed the threshold value.
  • The error at the position of the belt is caused by a decrease in the dots received by the CMOS image sensor 240, because the belt is black and has a low reflectance.
  • As described above, when the size of the segment area is small, the uniqueness of the dot pattern included in the segment area decreases, and error areas occur frequently. Therefore, in order to acquire distance information appropriately, a certain segment area size is required, such as the 15 pixels × 15 pixels of FIG. 7C.
  • On the other hand, as the size of the segment area increases, the apparent sharpness of the contour of the object in the measurement image deteriorates.
  • Hereinafter, the apparent sharpness of the contour of the object is referred to as the "sense of resolution" (or simply "resolution").
  • the sense of resolution is a measure of the fineness of an image felt by a person and is a subjective evaluation standard for a person.
  • FIG. 8 is a diagram for explaining the relationship between the size of the segment area and the resolution.
  • FIG. 8A is a diagram schematically showing the area around the right hand in the measured image of FIG. 7A, and
  • FIG. 8B is a partial enlarged view of the area A1 inside the right hand shown in FIG. 8A.
  • FIG. 8C is a measurement image of the right-hand periphery obtained as a result of matching with segment areas of 7 pixels × 7 pixels set at intervals of one pixel.
  • FIG. 8D is a measurement image of the right-hand periphery obtained as a result of matching with segment areas of 21 pixels × 21 pixels set at intervals of one pixel.
  • In FIG. 8B, the position S0 where the pixel deviation amount of the segment area S (7 pixels × 7 pixels) is 0 on the measured image is shown. Since the index finger is at a shorter distance than the reference surface, the dots reflected by the index finger are imaged by the light receiving optical system 200 shifted in the negative X-axis direction. The size of the segment area S is approximately the same as the size of the index finger, so the comparison areas of the segment area S include only dots reflected by the index finger. Therefore, the segment area S matches the comparison area C1 shifted in the positive X-axis direction, and the distance information of the index finger can be obtained appropriately. For example, in the area A′1 shown in FIG. 8C, it can be seen that the contour of the index finger can be recognized.
  • Similarly, FIG. 8B shows the position S′0 where the pixel deviation amount of the segment area S′ (21 pixels × 21 pixels) is 0 on the measured image.
  • In the comparison areas of the segment area S′, dots reflected by the screen and dots reflected by the index finger are mixed, and far more of the dots included in a comparison area are reflected from the screen than from the index finger. Since the screen is at the same position as the reference plane, the positions of the dots reflected by the screen hardly deviate from their positions in the reference image. Therefore, the segment area S′ has a higher matching rate with the dots reflected by the screen than with those reflected by the index finger, and matches the comparison area C′, so that the distance information of the screen is obtained.
  • In this case, the index finger cannot be recognized, and only the distance information of the screen is obtained. Further, comparing FIG. 8C and FIG. 8D, it can be seen that in FIG. 8D the tip of the thumb, the gap between the middle finger and the ring finger, and the like cannot be recognized, and the sense of resolution is considerably inferior.
  • FIG. 9 is a diagram for explaining the relationship between the interval between the segment areas and the resolution of distance detection in the comparative example.
  • a segment area sx is set in an area where the segment area s0 is shifted by one pixel in the positive direction of the X axis.
  • the segment area sy is set in an area where the segment area s0 is shifted by one pixel in the positive Y-axis direction.
  • the size of the segment area is set to 7 pixels ⁇ 7 pixels.
  • the interval between the center p0 of the segment region s0, the center px of the segment region sx, and the center py of the segment region sy is an interval corresponding to one pixel corresponding to the shift width of the segment region.
  • In this case, the distance to the target object is detected at a fineness corresponding to one pixel vertically and horizontally. That is, as shown in FIG. 9B, each position conceptually indicated by a circle is a position where the distance can be detected, and this defines the distance detection resolution.
  • the distance can be detected in units of one pixel, and the distance can be detected with substantially the same resolution as the resolution of the actual measurement image and the reference image output from the CMOS image sensor 240.
  • In view of this, the inventors of the present application performed a simulation in which a plurality of distance detection resolutions (segment area intervals) were set for each segment area size, and compared the degree of degradation of the sense of resolution of the resulting measurement images.
  • In the simulation, an image smaller than the measurement image is generated by extracting the gradation at every predetermined number of pixels in the X-axis and Y-axis directions from the measurement image obtained with the segment areas set at one-pixel intervals, and the image obtained by enlarging the generated image back to the same size as the measurement image is shown.
  • FIG. 10 is a diagram showing a simulation result.
  • The upper part of FIG. 10 shows the measurement images of the right-hand part when the segment area size is 7 pixels × 7 pixels,
  • the middle part shows the measurement images of the right-hand part when the segment area size is 15 pixels × 15 pixels, and
  • the lower part shows the measurement images of the right-hand part when the segment area size is 21 pixels × 21 pixels.
  • From these simulation results, the inventors of the present application subjectively selected, for each segment area size, the measurement image with the largest segment area interval whose sense of resolution of the hand contour is not impaired compared with the case where the interval between the segment areas is one pixel.
  • When the size of the segment area is 7 pixels × 7 pixels, the sense of resolution of the hand contour is not substantially impaired until the interval between the segment areas is 2 pixels.
  • When the interval between the segment areas is 3 pixels, the sharpness (sense of resolution) of the finger contours is lost compared with the case of 1 pixel.
  • When the size of the segment area is 15 pixels × 15 pixels, the sense of resolution of the hand contour is not substantially impaired until the interval between the segment areas is 4 pixels; when the interval is 8 pixels, the sharpness (sense of resolution) of the finger contours is largely lost compared with the case of 1 pixel.
  • Likewise, when the size of the segment area is 21 pixels × 21 pixels, the sense of resolution of the hand contour is not substantially impaired until the interval between the segment areas is 4 pixels; when the interval is 8 pixels, the sharpness (sense of resolution) of the finger contours is largely lost compared with the case of 1 pixel.
  • When the size of the segment area is 7 pixels × 7 pixels, 2, the integer closest to the division value 1.75 obtained by dividing 7 by 4, can be obtained as the upper limit number of pixels of the segment area interval. Similarly, when the size of the segment area is 15 pixels × 15 pixels, 4, the integer closest to the division value 3.75 obtained by dividing 15 by 4, can be obtained; and when the size of the segment area is 21 pixels × 21 pixels, 5, the integer closest to the division value 5.25 obtained by dividing 21 by 4, can be obtained. These results substantially coincide with the intervals of the measurement images selected in the above evaluation of the sense of resolution.
  • Accordingly, in the present embodiment, the integer closest to the division value, that is, the integer obtained by rounding the division value to the nearest whole number, is used as the number of pixels defining the upper limit of the segment area interval.
  • Alternatively, an integer obtained by rounding the fractional part of the division value up or down may be used as the number of pixels defining the interval between the segment areas. For example, when the size of the segment area is 15 pixels × 15 pixels, the division value may be rounded down to set the upper limit of the segment area interval to 3 pixels.
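  • (Illustration: a minimal sketch, in Python, of the rounding rule described above; the function name and the selectable rounding mode are illustrative.)

```python
import math

def interval_upper_limit(side_pixels: int, mode: str = "nearest") -> int:
    """Upper limit of the segment area interval for a square segment area:
    one side divided by 4, taken to an integer per the chosen rounding."""
    q = side_pixels / 4
    if mode == "nearest":
        return int(q + 0.5)    # 7 -> 2, 15 -> 4, 21 -> 5
    if mode == "down":
        return math.floor(q)   # 15 -> 3 (the round-down variant above)
    return math.ceil(q)        # round-up variant

print([interval_upper_limit(s) for s in (7, 15, 21)])  # [2, 4, 5]
```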
  • As described above, the upper limit of the segment area interval increases as the size of the segment area increases. This is because, the larger the segment area, the lower the sense of resolution at a one-pixel interval to begin with, and the fewer the matching errors.
  • When the size of the segment area is 7 pixels × 7 pixels, the sense of resolution of the measurement image at a one-pixel interval is originally high, but since the segment area is small, matching errors are likely to occur. The matching errors and the decrease in the sense of resolution caused by widening the interval between the segment areas therefore combine, so that the sense of resolution deteriorates greatly even if the interval is increased only slightly. Accordingly, in order to obtain a sense of resolution in the measurement image that is not inferior to the case of a one-pixel interval, the interval between the segment areas cannot be increased much.
  • Conversely, when the size of the segment area is 21 pixels × 21 pixels, the sense of resolution of the measurement image at a one-pixel interval is originally low, but since the segment area is large, matching errors hardly occur. For this reason, even if the interval between the segment areas is widened, the sense of resolution is unlikely to deteriorate further, and a sense of resolution not much different from that at a one-pixel interval can be obtained even when the interval is widened considerably.
  • the interval between the segment areas can be set larger as the size of the segment area becomes larger.
  • Therefore, in the present embodiment, the interval between the segment areas is set with an upper limit of about 1/4 of the number of pixels included in one side of the segment area.
  • the processing load on the CPU 21 can be greatly reduced without significantly impairing the resolution.
  • the interval between the segment areas is set to about 1/4 of the number of pixels included in one side of the segment area.
  • In order to reduce the processing burden on the CPU 21 compared with the case of a one-pixel interval, the interval between the segment areas needs to be set to 2 pixels or more and not more than the above upper limit value.
  • As described above, the upper limit value is set to about 1/4 of the number of pixels included in one side of the segment area. For this reason, unless the size of the segment area is 5 pixels × 5 pixels or more, the interval between the segment areas cannot be set to 2 pixels or more.
  • That is, when the size of the segment area is 4 pixels × 4 pixels, the division value obtained by dividing 4 by 4 is 1, and the interval between the segment areas cannot be set larger than 1 pixel.
  • When the size of the segment area is 5 pixels × 5 pixels, the division value obtained by dividing 5 by 4 is 1.25, and the interval between the segment areas can be set to 2 pixels by rounding the fractional part up. Therefore, when applying the findings of the above simulation, the size of the segment area needs to be 5 pixels × 5 pixels or more.
  • FIG. 11 is a diagram for explaining the relationship between the interval between segment areas and the resolution of distance detection in the present embodiment.
  • In the present embodiment, as described above, the interval between the segment areas is set to about 1/4 of the number of pixels included in one side of the segment area.
  • For example, when the interval between the segment areas is 2 pixels, a segment area Sx is set to the right of the segment area S0, in the area obtained by shifting the segment area S0 by 2 pixels in the positive X-axis direction.
  • the segment area Sy is set in an area where the segment area S0 is shifted by two pixels in the positive Y-axis direction.
  • In this case, the interval between the center P0 of the segment area S0, the center Px of the segment area Sx, and the center Py of the segment area Sy is a distance corresponding to two pixels, matching the shift width of the segment areas.
  • In this case, the distance to the target object is detected at a fineness corresponding to two pixels vertically and horizontally. That is, as shown in FIG. 11B, each position conceptually indicated by a circle is a position where the distance can be detected, and this is the distance detection resolution. Therefore, in the present embodiment, compared with the comparative example, the resolution is halved and the amount of computation is 1/4. For example, the four circles at a two-pixel interval correspond to 16 circles at a one-pixel interval.
  • Similarly, when the interval between the segment areas is 4 pixels, a segment area S′x is set to the right of the segment area S′0, in the area obtained by shifting the segment area S′0 by 4 pixels in the positive X-axis direction.
  • the segment area S′y is set in an area where the segment area S′0 is shifted by 4 pixels in the positive Y-axis direction.
  • In this case, the interval between the center P′0 of the segment area S′0, the center P′x of the segment area S′x, and the center P′y of the segment area S′y is a distance corresponding to four pixels, matching the shift width of the segment areas.
  • In this case, the distance to the target object is detected at a fineness corresponding to four pixels vertically and horizontally. That is, as shown in FIG. 11C, each position conceptually indicated by a circle is a position where the distance can be detected, and this is the distance detection resolution. Therefore, compared with the case where the interval between the segment areas is one pixel, the resolution is reduced to 1/4 and the amount of computation is 1/16. For example, the four circles at a four-pixel interval correspond to 64 circles at a one-pixel interval.
  • In this way, compared with the case where the interval between the segment areas is set to one pixel, the amount of calculation is reduced to 1/(the square of the number of pixels of the segment area interval).
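  • (Illustration: a rough sketch, in Python, of this 1/(interval squared) reduction, assuming a hypothetical VGA-sized reference pattern area and ignoring edge effects.)

```python
def segment_count(width: int = 640, height: int = 480,
                  side: int = 15, interval: int = 1) -> int:
    """Approximate number of segment areas (hence matching searches)."""
    nx = (width - side) // interval + 1
    ny = (height - side) // interval + 1
    return nx * ny

base = segment_count(interval=1)
for n in (2, 4):
    print(n, segment_count(interval=n) / base)  # ~1/4 and ~1/16 of base
```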
  • Thus, even when the segment area needs to have a certain size, such as 15 pixels × 15 pixels, in order to acquire distance information appropriately, the interval between the segment areas can be widened accordingly, and the amount of calculation can be greatly reduced.
  • In addition, since the interval between the segment areas is set to about 1/4 of the number of pixels included in one side of the segment area, the amount of calculation can be reduced while maintaining the accuracy of object detection.
  • FIG. 12A is a diagram showing a flow of an outline of processing up to the setting of the segment area in the present embodiment.
  • FIG. 12B shows a flow of processing for setting the size and interval of the segment area in this embodiment, and
  • FIG. 12C shows a table T that holds the values of the size and interval of the segment areas.
  • These processes are performed by a setter using a setting device when setting up the information acquisition device 1.
  • the size of the segment area and the interval between the segment areas are set (S1). Specifically, as shown in FIG. 12B, the setter first sets the vertical and horizontal sizes of the segment area (S101).
  • the vertical and horizontal sizes of the set segment area are stored in the memory 26.
  • the same value is used for the vertical and horizontal sizes of the segment areas, and the segment areas are set to be square.
  • Next, the setting device divides the number of pixels included in one side of the set segment area by 4 (S102), and rounds the division value to the nearest integer (S103). The setting device stores the value thus obtained in the memory 26 as the segment area interval, in association with the size of the segment area, as in the table T shown in FIG. 12C (S104). The setting device then determines whether the setter has input an instruction to complete the setting of the segment area size and interval (S105). If no completion instruction has been input (S105: NO), the setting device returns the process to S101 and accepts the setting of another segment area size and interval.
  • When an instruction to complete the setting is input (S105: YES), the setting device ends the setting process of the segment area size and the segment area interval.
  • In this way, a segment area interval (an appropriate distance detection resolution) that does not deteriorate the sense of resolution of the measurement image is determined for each segment area size set in S101.
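  • (Illustration: a minimal sketch, in Python, of the S101 to S104 flow; the table T is modeled as a dictionary, and the interactive size entry is replaced by a fixed list of the example sizes.)

```python
def build_table_T(sizes=(7, 15, 21)) -> dict:
    table = {}
    for side in sizes:            # S101: vertical/horizontal size (square)
        q = side / 4              # S102: divide one side by 4
        interval = int(q + 0.5)   # S103: round to the nearest integer
        table[side] = interval    # S104: store size -> interval
    return table

print(build_table_T())  # {7: 2, 15: 4, 21: 5}
```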
  • Next, the setter causes the CPU 21, via the setting device, to irradiate DP light with the reference plane arranged in the target area and to acquire a reference image (S2). Then, the setter causes the CPU 21 to set the segment areas based on the acquired reference image and the segment area size and interval set in S1 (S3).
  • FIG. 13 is a diagram showing a flow of segment area setting processing.
  • Referring to FIG. 13, the CPU 21 sets the reference pattern area on the reference image (S301), and stores information on the position of the reference pattern area on the CMOS image sensor 240 and the pixel values of all pixels included in the reference pattern area in the memory 26 (S302).
  • the setter specifies the size of the segment area via the setting device.
  • Next, the CPU 21 reads the vertical and horizontal size of the segment area stored in the memory 26 (S303).
  • Then, the segment area interval corresponding to the size of the segment area is read from the table T generated in S104 and set in the variable n (S304).
  • In the present embodiment, a plurality of segment area sizes can be selected, and the interval between the segment areas is set according to the selected size. For example, when a segment area size of 15 pixels × 15 pixels is selected, 4 pixels is set in the variable n as the interval between the segment areas.
  • Next, the CPU 21 determines whether or not the position of the segment area Si has reached the right end of the reference pattern area (S308). If the position of the segment area Si has not reached the right end (S308: NO), 1 is added to i (S309), and the segment area Si is set by designating the area shifted by n pixels in the positive X-axis direction from the position of the preceding segment area (S310). Thereafter, the CPU 21 returns the process to S307.
  • When the position of the segment area Si has reached the right end of the reference pattern area (S308: YES), it is determined whether or not the position of the segment area Si has reached the lower end of the reference pattern area (S311).
  • the segment area Si When the segment area Si is set from the left end at the upper end of the reference pattern area to the right end at the lower end, and the position information of the segment area Si is stored in the memory 26 (S311: YES), FIG. 11 (a) or FIG. 11 (b).
  • the segment areas are set so as to be arranged at intervals of n pixels in the X-axis direction and the Y-axis direction in accordance with the size of the segment area, and the processing ends. Note that the pattern of pixel values in each segment area differs for each segment area.
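The placement loop above (S305 to S312) can be sketched as follows; this is a hedged reconstruction in Python, with the reference pattern dimensions and the boundary handling assumed. Setting n = 1 reproduces the one-pixel-interval layout of the comparative example.

```python
def set_segment_areas(pattern_w: int, pattern_h: int, side: int, n: int):
    """Enumerate the top-left positions of segment areas laid out at
    n-pixel intervals in the X and Y directions over the reference
    pattern area, from the upper-left corner to the lower-right."""
    positions = []
    y = 0
    while y + side <= pattern_h:        # until Si reaches the lower end
        x = 0
        while x + side <= pattern_w:    # until Si reaches the right end
            positions.append((x, y))
            x += n                      # shift n pixels in the X direction
        y += n                          # move down n pixels, back to the left end
    return positions

# Example: 15 x 15 segment areas at 4-pixel intervals on a VGA-sized area.
print(len(set_segment_areas(640, 480, 15, 4)))
```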
  • In this way, information on the position of the reference pattern area on the CMOS image sensor 240, the pixel values (reference pattern) of all pixels included in the reference pattern area, and information on the positions of the segment areas set for the reference pattern area are stored in the memory 26 as the reference template. In the present embodiment, only the information on the positions on the CMOS image sensor 240 is stored in the memory 26 as segment area information, but the pixel values within each segment area may be stored as well.
  • At the time of distance detection, the CPU 21 of FIG. 2 calculates the distance to each part of the object based on the shift amount of the dot pattern in each segment area obtained from the reference template. As described above, the distance is calculated by triangulation using the displacement amount of the segment area.
  • According to the present embodiment, the interval between the segment areas is set larger according to the segment area size, so that an increase in the amount of calculation required for distance detection can be suppressed.
  • Furthermore, the interval between the segment areas is set to about 1/4 of the number of pixels on one side of the segment area, so that an increase in the amount of calculation for distance detection can be suppressed while the object detection accuracy is maintained.
  • In the above embodiment, the segment area interval corresponding to the segment area size is set in the table T in advance; however, instead of holding the interval in a table, the interval may be calculated directly from the segment area size that is read.
  • FIG. 14 is a diagram showing a flow of the segment area setting process in this case.
  • Steps that are the same as in FIG. 13 of the above embodiment are denoted by the same step numbers, and their description is omitted.
  • In this case, the CPU 21 divides the number of pixels on one side of the segment area by 4 (S321) and rounds off the fraction after the decimal point (S322). The value thus obtained is set in the variable n as the segment area interval (S304). Thereafter, as in the above embodiment, the segment areas Si are set at n-pixel intervals (S305 to S312).
  • Also in this case, the interval between the segment areas is set to about 1/4 of the number of pixels on one side of the segment area, so the amount of distance calculation can be reduced while the object detection accuracy is maintained.
  • Note that the interval between the segment areas may be made slightly smaller or larger depending on the required detection accuracy.
  • In the above embodiment, the size of the segment area can be selected by the setter, but the size of the segment area may instead be determined in advance.
  • In the above embodiment, the interval between the segment areas is set to about 1/4 of the number of pixels on one side of the segment area, so that the interval increases with the segment area size; however, as long as the interval can be increased according to the segment area size, other values may be set.
  • For example, the segment area interval may be set to any number of pixels between 2 pixels and an upper limit value, the upper limit value being approximately 1/4 of the number of pixels on one side of the segment area.
  • Alternatively, the optimum value of the segment area interval may be determined in advance, individually, from among the pixel counts between 2 pixels and the upper limit value, and the segment area interval may be decided by holding that value in a table.
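A hedged sketch of keeping an interval within the range just described, at least 2 pixels and at most roughly 1/4 of the pixels on one side; the helper name and the exact rounding are assumptions.

```python
def clamp_interval(requested: int, side_pixels: int) -> int:
    """Clamp a requested segment area interval to [2, upper limit],
    with an upper limit of about 1/4 of one side of the segment area."""
    upper = max(2, int(side_pixels / 4 + 0.5))
    return min(max(requested, 2), upper)

print(clamp_interval(1, 15))  # -> 2 (raised to the 2-pixel lower bound)
print(clamp_interval(6, 15))  # -> 4 (clipped to the upper limit for 15 x 15)
```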
  • In the above embodiment, the segment areas have equal vertical and horizontal sizes and are square, but they may be set so that one side is longer than the other.
  • In that case, the interval between the segment areas may be determined based on the number of pixels on either one of the sides.
  • In the above embodiment, an error is determined based on whether the difference between the Rsad value with the highest matching rate and the Rsad value with the next highest matching rate exceeds a threshold.
  • Alternatively, an error may be determined based on whether the Rsad value with the highest matching rate exceeds a predetermined threshold.
  • In the above embodiment, the pixel values of the pixels included in the segment area and the comparison area are binarized before the matching rate between the segment area and the comparison area is calculated; however, matching may instead be performed using the pixel values as they are.
  • In the above embodiment, the pixel values obtained by the CMOS image sensor 240 are binarized as they are; however, the pixel values may be binarized, or converted to multiple values, after correction processing such as predetermined pixel weighting and background light removal has been applied.
  • In the above embodiment, the distance information is obtained by triangulation and stored in the memory 26; however, the displacement amount (pixel shift amount) of the segment area may be acquired as the distance information without calculating the distance by triangulation.
  • In the above embodiment, the FMD 150 is used in the projection optical system 100, but the FMD 150 may be omitted.
  • In the above embodiment, the filter 230 is disposed to remove light in wavelength bands other than that of the laser light irradiated onto the target region; however, when little light other than the laser light irradiated onto the target region is incident, the filter 230 can be omitted.
  • the arrangement position of the aperture 210 may be between any two imaging lenses.
  • the CMOS image sensor 240 is used as the light receiving element, but a CCD image sensor can be used instead. Furthermore, the configurations of the projection optical system 100 and the light receiving optical system 200 can be changed as appropriate. Further, the information acquisition device 1 and the information processing device 2 may be integrated, or the information acquisition device 1 and the information processing device 2 may be integrated with a television, a game machine, or a personal computer.
  • DESCRIPTION OF SYMBOLS: 1 ... Information acquisition device; 21 ... CPU (distance acquisition unit); 21b ... Distance acquisition unit; 24 ... Imaging signal processing circuit (distance acquisition unit); 26 ... Memory (storage unit); 100 ... Projection optical system; 110 ... Laser light source; 120 ... Collimator lens; 140 ... DOE (diffractive optical element); 200 ... Light receiving optical system


Abstract

Provided are an object detection device and an information acquisition device able to suppress an increase in the amount of computation while maintaining the precision of object detection. The information acquisition device (1) is provided with: a projection optical system (100); a light receiving optical system (200) that images a target region by means of a CMOS image sensor (240); a memory (26) that stores a reference image captured when laser light is irradiated onto a reference surface; and a distance acquisition unit that acquires distance by setting segment regions in the reference image and matching the dots within the segment regions against the measured image captured during distance measurement. The distance acquisition unit sets the size of the segment regions to a square shape of at least five pixels vertically and five pixels horizontally, and sets the spacing between adjacent segment regions to at least two pixels and no greater than an upper limit value that depends on the size of the segment regions. As a result, the increase in computation can be suppressed without loss of object detection precision.

Description

Object detection device and information acquisition device
The present invention relates to an object detection device that detects an object in a target area based on the state of reflected light when light is projected onto the target area, and to an information acquisition device suitable for use in such an object detection device.
Conventionally, object detection devices using light have been developed in various fields. An object detection device using a so-called distance image sensor can detect not only a planar image on a two-dimensional plane but also the shape and movement of the detection target object in the depth direction. In such an object detection device, light in a predetermined wavelength band is projected from a laser light source or an LED (Light Emitting Diode) onto a target area, and the reflected light is received by a light receiving element such as a CMOS image sensor. Various types of distance image sensors are known.
In a distance image sensor of the type that irradiates a target area with laser light having a predetermined dot pattern, the reflected light of the dot-pattern laser light from the target area is received by a light receiving element. Based on the light receiving positions of the dots on the light receiving element, the distance to each part of the detection target object (the irradiation position of each dot on the detection target object) is detected using triangulation (for example, Non-Patent Document 1).
In such an object detection device, the dot pattern captured by the image sensor when a reference plane is placed at a position separated by a predetermined distance is compared with the dot pattern captured by the image sensor at the time of actual measurement, and the distance is detected. For example, a plurality of areas are set in the dot pattern for the reference plane. The object detection device detects the distance to the target object for each area based on the position to which the dots included in that area have moved on the dot pattern captured at the time of actual measurement. The object detection device then detects the shape, contour, movement, and the like of the object based on the distance information obtained for each area.
In this case, the larger the number of areas set in the dot pattern, the more finely the distance to the target object can be detected in the in-plane direction of the imaging area, and the higher the distance detection resolution. For example, by setting the areas so that adjacent areas are shifted from each other by only one pixel on the image sensor, the distance detection resolution can be increased significantly. However, when the areas are set in this way, the amount of calculation for distance detection becomes enormous and the processing load increases. The distance detection resolution need only be set to such a level that the shape and contour of the detection target object can be grasped from the acquired distance information.
The present invention has been made in view of this point, and an object thereof is to provide an information acquisition device and an object detection device capable of suppressing an increase in the amount of calculation while maintaining the accuracy of object detection.
A first aspect of the present invention relates to an information acquisition device that acquires information on a target area using light. The information acquisition device according to this aspect includes: a projection optical system that projects laser light in a predetermined dot pattern onto a target area; a light receiving optical system, arranged side by side with the projection optical system at a predetermined lateral distance, that images the target area with an image sensor; a storage unit that holds a reference dot pattern imaged by the light receiving optical system when the laser light is irradiated onto a reference plane; and a distance acquisition unit that sets segment areas in the reference dot pattern and acquires, for the position in the target area corresponding to each segment area, a distance by matching the dots in the segment area against a measured dot pattern acquired by imaging the target area at the time of distance measurement. The distance acquisition unit sets the size of each segment area to a quadrangular shape of at least 5 pixels vertically and at least 5 pixels horizontally on the image sensor, and sets the interval between adjacent segment areas to a distance of at least 2 pixels and at most an upper limit value corresponding to the size of the segment area.
A second aspect of the present invention relates to an object detection device. The object detection device according to this aspect includes the information acquisition device according to the first aspect.
According to the present invention, it is possible to provide an information acquisition device and an object detection device capable of suppressing an increase in the amount of calculation while maintaining the accuracy of object detection.
The effects and significance of the present invention will become more apparent from the following description of the embodiment. However, the embodiment described below is merely one example of implementing the present invention, and the present invention is not limited in any way by the following embodiment.
FIG. 1 is a diagram showing the configuration of the object detection device according to the embodiment.
FIG. 2 is a diagram showing the configurations of the information acquisition device and the information processing device according to the embodiment.
FIG. 3 is a perspective view showing the external appearance of the projection optical system and the light receiving optical system according to the embodiment.
FIG. 4 is a diagram showing the irradiation state of the laser light onto the target area and the light receiving state of the laser light on the image sensor according to the embodiment.
FIG. 5 is a diagram illustrating the method of generating the reference pattern according to the embodiment.
FIG. 6 is a diagram illustrating the distance detection method according to the embodiment.
FIG. 7 is a diagram showing distance detection results according to the embodiment.
FIG. 8 is a diagram illustrating the relationship between the segment area size and the perceived resolution according to the embodiment.
FIG. 9 is a diagram illustrating the segment area size and the distance detection resolution according to a comparative example.
FIG. 10 is a diagram showing simulation results in which a plurality of distance detection resolutions are set for each segment area size.
FIG. 11 is a diagram illustrating the segment area size and the distance detection resolution according to the embodiment.
FIG. 12 is a diagram showing the flow of the process of setting the segment area size and the segment area interval according to the embodiment.
FIG. 13 is a diagram showing the flow of the segment area setting process according to the embodiment.
FIG. 14 is a diagram illustrating the setting of the segment area interval and the flow of the segment area setting process according to a modification.
Hereinafter, an embodiment of the present invention will be described with reference to the drawings. The present embodiment exemplifies an information acquisition device of the type that irradiates a target area with laser light having a predetermined dot pattern.
First, FIG. 1 shows the schematic configuration of the object detection device according to the present embodiment. As illustrated, the object detection device includes an information acquisition device 1 and an information processing device 2. The television 3 is controlled by signals from the information processing device 2.
The information acquisition device 1 projects infrared light onto the entire target area and receives the reflected light with a CMOS image sensor, thereby acquiring the distance to each part of objects in the target area (hereinafter, "three-dimensional distance information"). The acquired three-dimensional distance information is sent to the information processing device 2 via a cable 4.
The information processing device 2 is, for example, a controller for television control, a game machine, a personal computer, or the like. The information processing device 2 detects objects in the target area based on the three-dimensional distance information received from the information acquisition device 1, and controls the television 3 based on the detection result.
For example, the information processing device 2 detects a person based on the received three-dimensional distance information and detects the person's movement from changes in the three-dimensional distance information. For example, when the information processing device 2 is a controller for television control, an application program is installed on the information processing device 2 that detects the person's gestures from the received three-dimensional distance information and outputs control signals to the television 3 according to the gestures. In this case, the user can cause the television 3 to execute predetermined functions, such as switching channels or turning the volume up or down, by making predetermined gestures while watching the television 3.
Also, for example, when the information processing device 2 is a game machine, an application program is installed on the information processing device 2 that detects the person's movement from the received three-dimensional distance information, operates a character on the television screen according to the detected movement, and changes the state of the game. In this case, the user can enjoy the sense of realism of playing the game as a character on the television screen by making predetermined movements while watching the television 3.
FIG. 2 is a diagram showing the configurations of the information acquisition device 1 and the information processing device 2.
The information acquisition device 1 includes a projection optical system 100 and a light receiving optical system 200 as the configuration of its optical unit. The projection optical system 100 and the light receiving optical system 200 are arranged in the information acquisition device 1 so as to be aligned in the X-axis direction.
The projection optical system 100 includes a laser light source 110, a collimator lens 120, a leakage mirror 130, a diffractive optical element (DOE) 140, and an FMD (Front Monitor Diode) 150. The light receiving optical system 200 includes an aperture 210, an imaging lens 220, a filter 230, and a CMOS image sensor 240. In addition, the information acquisition device 1 includes, as the configuration of its circuit unit, a CPU (Central Processing Unit) 21, a laser driving circuit 22, a PD signal processing circuit 23, an imaging signal processing circuit 24, an input/output circuit 25, and a memory 26.
The laser light source 110 outputs laser light in a narrow wavelength band with a wavelength of about 830 nm in the direction away from the light receiving optical system 200 (X-axis negative direction). The collimator lens 120 converts the laser light emitted from the laser light source 110 into light slightly spread from parallel light (hereinafter simply referred to as "parallel light").
The leakage mirror 130 consists of a multilayer film of dielectric thin films, and the number of layers and the film thicknesses are designed so that the reflectance is slightly lower than 100% and the transmittance is several orders smaller than the reflectance. The leakage mirror 130 reflects most of the laser light incident from the collimator lens 120 side in the direction toward the DOE 140 (Z-axis direction) and transmits the remaining portion in the direction toward the FMD 150 (X-axis negative direction).
The DOE 140 has a diffraction pattern on its incident surface. Due to the diffractive action of this pattern, the laser light incident on the DOE 140 is converted into dot-pattern laser light and irradiated onto the target area. The diffraction pattern has, for example, a structure in which a step-type diffraction hologram is formed in a predetermined pattern. The pattern and pitch of the diffraction hologram are adjusted so as to convert the laser light collimated by the collimator lens 120 into dot-pattern laser light.
The DOE 140 irradiates the target area with the laser light incident from the leakage mirror 130 as radially spreading dot-pattern laser light. The size of each dot in the dot pattern corresponds to the beam size of the laser light when it enters the DOE 140.
The FMD 150 receives the laser light transmitted through the leakage mirror 130 and outputs an electrical signal corresponding to the amount of light received.
The laser light reflected from the target area enters the imaging lens 220 through the aperture 210.
The aperture 210 stops down the light from outside to match the F-number of the imaging lens 220. The imaging lens 220 focuses the light incident through the aperture 210 onto the CMOS image sensor 240. The filter 230 is an IR filter (infrared filter) that transmits light in the infrared wavelength band including the emission wavelength (about 830 nm) of the laser light source 110 and cuts the visible wavelength band.
The CMOS image sensor 240 receives the light focused by the imaging lens 220 and outputs, for each pixel, a signal (charge) corresponding to the amount of light received to the imaging signal processing circuit 24. Here, the signal output speed of the CMOS image sensor 240 is increased so that the signal (charge) of each pixel can be output to the imaging signal processing circuit 24 with high responsiveness from the reception of light at that pixel.
The CPU 21 controls each unit according to a control program stored in the memory 26. Through this control program, the CPU 21 is provided with the functions of a laser control unit 21a for controlling the laser light source 110 and a distance acquisition unit 21b for generating three-dimensional distance information.
The laser driving circuit 22 drives the laser light source 110 according to control signals from the CPU 21. The PD signal processing circuit 23 amplifies and digitizes the voltage signal, corresponding to the amount of received light, output from the FMD 150 and outputs it to the CPU 21. Based on the signal supplied from the PD signal processing circuit 23, the CPU 21 determines, through processing by the laser control unit 21a, whether to increase or decrease the light amount of the laser light source 110. When it is determined that the light amount of the laser light source 110 needs to be changed, the laser control unit 21a transmits a control signal for changing the emission amount of the laser light source 110 to the laser driving circuit 22. In this way, the power of the laser light emitted from the laser light source 110 is controlled to be substantially constant.
The imaging signal processing circuit 24 controls the CMOS image sensor 240 and sequentially takes in the signal (charge) of each pixel generated by the CMOS image sensor 240, line by line. It then outputs the captured signals sequentially to the CPU 21. Based on the signals (imaging signals) supplied from the imaging signal processing circuit 24, the CPU 21 calculates the distance from the information acquisition device 1 to each part of the detection target through processing by the distance acquisition unit 21b. The input/output circuit 25 controls data communication with the information processing device 2.
The information processing device 2 includes a CPU 31, an input/output circuit 32, and a memory 33. In addition to the configuration shown in the figure, the information processing device 2 includes a configuration for communicating with the television 3 and a drive device for reading information stored in an external memory such as a CD-ROM and installing it in the memory 33; for convenience, these peripheral circuits are not illustrated.
The CPU 31 controls each unit according to a control program (application program) stored in the memory 33. Through this control program, the CPU 31 is provided with the function of an object detection unit 31a for detecting objects in an image. Such a control program is, for example, read from a CD-ROM by a drive device (not shown) and installed in the memory 33.
For example, when the control program is a game program, the object detection unit 31a detects a person and his or her movement in the image from the three-dimensional distance information supplied from the information acquisition device 1. Then, the control program executes processing for operating a character on the television screen according to the detected movement.
When the control program is a program for controlling the functions of the television 3, the object detection unit 31a detects a person and his or her movement (gesture) in the image from the three-dimensional distance information supplied from the information acquisition device 1. Then, the control program executes processing for controlling functions of the television 3 (channel switching, volume adjustment, and the like) according to the detected movement (gesture).
The input/output circuit 32 controls data communication with the information acquisition device 1.
FIG. 3 is a perspective view showing the installed state of the projection optical system 100 and the light receiving optical system 200.
The projection optical system 100 and the light receiving optical system 200 are disposed on a base plate 300. The optical members constituting the projection optical system 100 are installed in a housing 100a, and this housing 100a is installed on the base plate 300. Thus, the projection optical system 100 is disposed on the base plate 300. Reference numerals 150a and 240a denote FPCs (flexible printed circuit boards) for supplying signals from the FMD 150 and the CMOS image sensor 240, respectively, to a circuit board (not shown).
The optical members constituting the light receiving optical system 200 are installed in a holder 200a, and this holder 200a is attached to the base plate 300 from the back surface of the base plate 300. Thus, the light receiving optical system 200 is disposed on the base plate 300. Since the optical members of the light receiving optical system 200 are lined up in the Z-axis direction, the light receiving optical system 200 is taller in the Z-axis direction than the projection optical system 100. In order to keep the height in the Z-axis direction low, the base plate 300 is raised one step in the Z-axis direction around the position where the light receiving optical system 200 is disposed.
In the installed state shown in FIG. 3, the positions of the exit pupil of the projection optical system 100 and the entrance pupil of the light receiving optical system 200 substantially coincide in the Z-axis direction. The projection optical system 100 and the light receiving optical system 200 are installed side by side at a predetermined distance in the X-axis direction, so that the projection center of the projection optical system 100 and the imaging center of the light receiving optical system 200 are aligned on a straight line parallel to the X axis.
The installation interval between the projection optical system 100 and the light receiving optical system 200 is set according to the distance between the information acquisition device 1 and the reference plane of the target area. The distance between the reference plane and the information acquisition device 1 varies depending on how distant the target to be detected is. The closer the target to be detected, the narrower the installation interval between the projection optical system 100 and the light receiving optical system 200. Conversely, the more distant the target to be detected, the wider the installation interval between the projection optical system 100 and the light receiving optical system 200.
FIG. 4(a) schematically shows the irradiation state of the laser light onto the target area, and FIG. 4(b) schematically shows the light receiving state of the laser light at the CMOS image sensor 240. For convenience, FIG. 4(b) shows the light receiving state when the target area contains a flat surface (screen) and a person is present in front of the screen.
As shown in FIG. 4(a), the projection optical system 100 irradiates the target area with laser light having a dot pattern (hereinafter, the entirety of the laser light having this pattern is referred to as "DP light"). In FIG. 4(a), the luminous flux region of the DP light is indicated by a solid-line frame. Within the luminous flux of the DP light, dot regions (hereinafter simply "dots"), in which the intensity of the laser light is raised by the diffractive action of the DOE 140, are scattered according to the dot pattern produced by that diffractive action.
When a flat surface (screen) is present in the target area, the DP light reflected by it is distributed on the CMOS image sensor 240 as shown in FIG. 4(b).
In FIG. 4(b), the entire light receiving region of the DP light on the CMOS image sensor 240 is indicated by a dashed frame, and the light receiving region of the DP light incident on the effective imaging area of the CMOS image sensor 240 is indicated by a solid frame. The effective imaging area of the CMOS image sensor 240 is the region, within the area where the CMOS image sensor 240 receives DP light, in which the sensor outputs signals, and has a size of, for example, VGA (640 pixels × 480 pixels). The light of Dt0 on the target area shown in FIG. 4(a) enters the position Dt'0 shown in FIG. 4(b) on the CMOS image sensor 240. The image of the person in front of the screen is captured on the CMOS image sensor 240 inverted vertically and horizontally.
Here, the distance detection method will be described with reference to FIGS. 5 and 6.
FIG. 5 is a diagram illustrating the method of setting the reference pattern used in the distance detection method.
As shown in FIG. 5(a), a flat reflection plane RS perpendicular to the Z-axis direction is placed at a position at a predetermined distance Ls from the projection optical system 100. The emitted DP light is reflected by the reflection plane RS and enters the CMOS image sensor 240 of the light receiving optical system 200. As a result, an electrical signal for each pixel in the effective imaging area is output from the CMOS image sensor 240. The output electrical signal value (pixel value) for each pixel is developed in the memory 26 of FIG. 2. Hereinafter, the image consisting of all the pixel values obtained by reflection from the reflection plane RS is called the "reference image", and the reflection plane RS is called the "reference plane". Then, as shown in FIG. 5(b), a "reference pattern area" is set on the reference image. FIG. 5(b) shows the state in which the light receiving surface is seen through in the Z-axis positive direction from the back side of the CMOS image sensor 240; the same applies to FIG. 6 and subsequent figures.
A plurality of segment areas having a predetermined size are set for the reference pattern area thus set. The size of the segment areas is determined in consideration of the accuracy of object contour extraction from the obtained distance information and the computational load of distance detection on the CPU 21. The relationship between the segment area size and the object contour extraction accuracy will be described later with reference to FIG. 8.
With reference to FIG. 5(c), the segment areas set in the reference pattern area in a comparative example will be described. In FIG. 5(c), for convenience, the size of each segment area is shown as 7 pixels × 7 pixels, and the center pixel of each segment area is marked with a cross. That is, each segment area is set as a square whose side length is seven times the interval between adjacent pixels.
In the comparative example, as shown in FIG. 5(c), the segment areas are set so that adjacent segment areas are lined up at one-pixel intervals in the X-axis and Y-axis directions with respect to the reference pattern area. That is, each segment area is set at a position shifted by one pixel from the segment areas adjacent to it in the X-axis and Y-axis directions. At this time, dots are scattered in each segment area in a unique pattern, so the pattern of pixel values differs for each segment area. The narrower the interval between adjacent segment areas, the larger the number of segment areas contained in the reference pattern area, and the higher the distance detection resolution in the in-plane direction (X-Y plane direction) of the target area. The relationship between the segment area interval and the distance detection resolution in the comparative example will be described later with reference to FIG. 9.
In this way, information on the position of the reference pattern area on the CMOS image sensor 240, the pixel values (reference pattern) of all pixels included in the reference pattern area, and the information of the segment areas set for the reference pattern area are stored in the memory 26 of FIG. 2. These pieces of information stored in the memory 26 are hereinafter referred to as the "reference template".
The CPU 21 of FIG. 2 refers to the reference template when calculating the distance from the projection optical system 100 to each part of the detection target object. When calculating the distance, the CPU 21 calculates the distance to each part of the object based on the shift amount of the dot pattern in each segment area obtained from the reference template.
For example, as shown in FIG. 5(a), when an object is at a position closer than the distance Ls, the DP light (DPn) corresponding to a given segment area Sn on the reference pattern is reflected by the object and enters a region Sn' different from the segment area Sn. Since the projection optical system 100 and the light receiving optical system 200 are adjacent in the X-axis direction, the displacement direction of the region Sn' with respect to the segment area Sn is parallel to the X axis. In the case of FIG. 5(a), since the object is closer than the distance Ls, the region Sn' is displaced in the X-axis positive direction with respect to the segment area Sn. If the object were farther than the distance Ls, the region Sn' would be displaced in the X-axis negative direction with respect to the segment area Sn.
Based on the displacement direction and displacement amount of the region Sn' with respect to the segment area Sn, the distance Lr from the projection optical system 100 to the part of the object irradiated with the DP light (DPn) is calculated by triangulation using the distance Ls. The distance from the projection optical system 100 is similarly calculated for the parts of the object corresponding to the other segment areas. Details of this calculation method are given, for example, in Non-Patent Document 1 (Proceedings of the 19th Annual Conference of the Robotics Society of Japan, September 18-20, 2001, pp. 1279-1280).
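The document defers the details of this calculation to Non-Patent Document 1. As a hedged sketch only, a standard active-triangulation relation can be written as below; the baseline b, focal length f, and pixel pitch p are assumed parameters that are not given numerically in this document.

```python
def distance_from_shift(alpha_pixels: float, Ls_mm: float, baseline_mm: float,
                        focal_mm: float, pixel_pitch_mm: float) -> float:
    """Standard active-stereo triangulation: a shift of alpha pixels in the
    X-axis positive direction (object closer than the reference plane)
    corresponds to a sensor displacement x = alpha * p, and
        1 / Lr = 1 / Ls + x / (f * b)."""
    x_mm = alpha_pixels * pixel_pitch_mm
    return 1.0 / (1.0 / Ls_mm + x_mm / (focal_mm * baseline_mm))

# Illustrative values only (none of these numbers appear in the document):
print(distance_from_shift(10, Ls_mm=1000, baseline_mm=25,
                          focal_mm=4, pixel_pitch_mm=0.006))  # ~625 mm
```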
In this distance calculation, it is detected to which position the segment area Sn of the reference template has been displaced at the time of actual measurement. This detection is performed by matching the dot pattern obtained from the DP light irradiated onto the CMOS image sensor 240 at the time of actual measurement against the dot pattern included in the segment area Sn. Hereinafter, the image consisting of all the pixel values obtained from the DP light irradiated onto the effective imaging area of the CMOS image sensor 240 at the time of actual measurement is called the "measured image". The effective imaging area of the CMOS image sensor 240 at the time of actual measurement has a size of, for example, VGA (640 pixels × 480 pixels), the same as when the reference image is acquired.
FIGS. 6(a) to 6(e) illustrate the distance detection method. FIG. 6(a) shows the reference pattern area set in the reference image on the CMOS image sensor 240; FIG. 6(b) shows the measured image on the CMOS image sensor 240 at the time of actual measurement; and FIGS. 6(c) to 6(e) illustrate the method of matching the dot pattern of the DP light included in the measured image against the dot pattern included in a segment area of the reference template. For convenience, only some of the segment areas are shown in FIGS. 6(a) and 6(b). Also, for convenience, the measured image of FIG. 6(b) shows a person present in front of the reference plane as the detection target object, as in FIG. 4(b), with the image of the person reflected in it.
When searching for the displacement position of the segment area Si of FIG. 6(a) at the time of actual measurement, a search range Ri is set on the measured image for the segment area Si, as shown in FIG. 6(b). The search range Ri has a predetermined width in the X-axis direction. The segment area Si is fed pixel by pixel in the X-axis direction within the search range Ri, and at each feed position, the dot pattern of the segment area Si is compared with the dot pattern on the measured image. Hereinafter, the region corresponding to each feed position on the measured image is called a "comparison region". A plurality of comparison regions having the same size as the segment area Si are set in the search range Ri, and comparison regions adjacent in the X-axis direction are shifted from each other by one pixel.
The search range Ri is determined by how far the detection target object should be detectable in the directions away from and toward the information acquisition device 1 relative to the reference plane. In FIG. 6, the search range Ri is set to the range from the position shifted by x pixels in the X-axis negative direction to the position shifted by x pixels in the X-axis positive direction, relative to the pixel position on the measured image corresponding to the pixel position of the segment area Si on the reference image.
While the segment area Si is fed pixel by pixel in the X-axis direction within the search range Ri, the degree of matching between the dot pattern of the segment area Si stored in the reference template and the dot pattern of the DP light of the measured image is obtained at each feed position. The segment area Si is fed only in the X-axis direction within the search range Ri in this way because, as described above, the dot pattern of a segment area set by the reference template is normally displaced only within a predetermined range in the X-axis direction at the time of actual measurement.
Note that at the time of actual measurement, depending on the position of the detection target object, the dot pattern corresponding to a segment area may run off the measured image in the X-axis direction. For example, if the dot pattern corresponding to the segment area S1 on the most X-axis negative side of the reference pattern area is reflected by an object farther away than the reference plane, the dot pattern corresponding to the segment area S1 is positioned in the X-axis negative direction beyond the measured image. In this case, the dot pattern corresponding to that segment area is not within the effective imaging area of the CMOS image sensor 240, so this region cannot be matched properly. However, regions other than such edge regions can be matched properly, so the influence on object distance detection is small.
To match the edge regions properly as well, the effective imaging area of the CMOS image sensor 240 at the time of actual measurement may be made larger than the effective imaging area of the CMOS image sensor 240 at the time of reference image acquisition. For example, if the effective imaging area is set to the size of VGA (640 pixels × 480 pixels) at the time of reference image acquisition, then at the time of actual measurement the effective imaging area is set 30 pixels larger in each of the X-axis positive and negative directions. This makes the measured image larger than the reference image, but the edge regions can then also be matched properly.
When detecting the degree of matching, first, the pixel value of each pixel in the reference pattern area and the pixel value of each pixel in each segment area of the measured image are binarized and held in the memory 26. For example, when the pixel values of the reference image and the measured image are 8-bit grayscale, pixels with values of at least a predetermined threshold, among the pixel values 0 to 255, are converted to the pixel value 1, and pixels with values below the threshold are converted to the pixel value 0, and these values are held in the memory 26. Then, the similarity between a comparison region and the segment area Si is obtained. That is, the difference between the pixel value of each pixel of the segment area Si and the pixel value of the corresponding pixel of the comparison region is obtained, and the value Rsad, obtained by summing these differences over all pixels of the comparison region, is acquired as the value indicating the similarity.
For example, as shown in FIG. 6(c), when one segment area contains m columns × n rows of pixels, the difference between the pixel value T(i, j) of the pixel in column i, row j of the segment area and the pixel value I(i, j) of the corresponding pixel of the comparison region is obtained. Differences are obtained for all pixels of the segment area, and the value Rsad of the equation shown in FIG. 6(c) is obtained as their sum: Rsad = Σ_{i=1}^{m} Σ_{j=1}^{n} |I(i, j) − T(i, j)|. The smaller the value Rsad, the higher the similarity between the segment area and the comparison region.
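A minimal Python/NumPy sketch of the binarization and the Rsad computation just described; the threshold value of 128 is an assumption, since the document says only "a predetermined threshold".

```python
import numpy as np

def binarize(img: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Binarize 8-bit pixel values: values >= threshold become 1,
    values below the threshold become 0."""
    return (img >= threshold).astype(np.uint8)

def rsad(segment: np.ndarray, comparison: np.ndarray) -> int:
    """Rsad: the sum over all m x n pixels of |I(i, j) - T(i, j)|.
    A smaller Rsad means a higher similarity."""
    return int(np.abs(comparison.astype(int) - segment.astype(int)).sum())
```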
In this way, as shown in FIG. 6(d), the value Rsad is obtained for all comparison regions in the search range Ri for the segment area Si. FIG. 6(e) is a graph schematically showing the magnitude of the value Rsad at each feed position in the search range Ri. After the value Rsad has been obtained for all comparison regions in the search range Ri for the segment area Si, the minimum value Bt1 among the obtained values Rsad is first referenced. Next, the second smallest value Bt2 among the obtained values Rsad is referenced. If the difference value Es between the minimum value Bt1 and the second smallest value Bt2 is less than a threshold, the search for the segment area Si is treated as an error. Otherwise, if the difference value Es is at least the threshold, the comparison region Ci corresponding to the minimum value Bt1 is determined to be the region to which the segment area Si has moved. As in FIG. 6(d), the comparison region Ci is detected at a position shifted by α pixels in the X-axis positive direction from the pixel position Si0 on the measured image that corresponds to the pixel position of the segment area Si on the reference image. This is because the dot pattern of the DP light on the measured image has been displaced in the X-axis positive direction relative to the segment area Si on the reference image by the detection target object (the person) present at a position closer than the reference plane.
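Building on binarize() and rsad() above, a hedged sketch of the search over the range Ri with the minimum / second-minimum error test; it assumes both images are already binarized, that the search range stays inside the measured image, and an error threshold chosen by the caller.

```python
def search_segment(segment, measured, top, left, x_range, err_threshold):
    """Slide the segment area over the search range Ri (-x_range .. +x_range
    pixels around its reference position) and return the pixel shift alpha
    of the best-matching comparison region Ci, or None on error."""
    h, w = segment.shape
    scores = []
    for dx in range(-x_range, x_range + 1):
        comparison = measured[top:top + h, left + dx:left + dx + w]
        scores.append((rsad(segment, comparison), dx))
    scores.sort()
    (bt1, alpha), (bt2, _) = scores[0], scores[1]
    if bt2 - bt1 < err_threshold:  # Es = Bt2 - Bt1 below the threshold
        return None                # the search is treated as an error
    return alpha
```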
 こうして、実測時に取得されたDP光のドットパターンから、各セグメント領域の変位位置が探索されると、上記のように、その変位位置に基づいて、三角測量法により、各セグメント領域に対応する検出対象物体の部位までの距離が求められる。 Thus, when the displacement position of each segment region is searched from the dot pattern of DP light acquired at the time of actual measurement, detection corresponding to each segment region is performed by triangulation based on the displacement position as described above. The distance to the part of the target object is obtained.
 The same segment-area search is then performed for all the segment areas, from segment area S1 to segment area Sn.
 Next, a measurement example in which the inventor of the present application measured matching using the above distance detection method will be described. In this measurement example, when a segment area Si is detected, a value expressed as a gradation from white to black according to the amount of deviation of the detected position of the segment area Si from the pixel position Si0 (hereinafter, the "pixel deviation amount") is stored in the memory 26 as the distance information of the segment area Si. A gradation closer to black is assigned the further the detected position of the segment area Si deviates in the negative X-axis direction within the search range Ri, and a gradation closer to white is assigned the further it deviates in the positive X-axis direction. When the search for the segment area Si results in an error, the gradation corresponding to the position deviated furthest in the negative X-axis direction within the search range Ri, that is, the blackest gradation, is stored in the memory 26. An image in which the matching measurement results are expressed in gradations of white and black in this way is referred to as a "measurement image".
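 A minimal sketch of how such a measurement image could be built from the pixel deviation amounts; the 8-bit range and the linear mapping are assumptions for illustration:

```python
def shift_to_gray(alpha_px, shift_min, shift_max):
    """Map a pixel deviation amount to an 8-bit gray level: the most negative
    shift in the search range maps to black (0), the most positive to white
    (255); a search error is stored as the blackest gradation."""
    if alpha_px is None:            # search error
        return 0
    scale = 255.0 / (shift_max - shift_min)
    return int(round((alpha_px - shift_min) * scale))
```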
 FIG. 7 is a diagram showing an example of distance measurement using the above distance detection method with segment areas set at one-pixel intervals, as in the above comparative example.
 FIG. 7(a) is a measured image obtained by irradiating DP light, with a flat screen placed at a distance of 1000 mm from the information acquisition device 1 and a person standing at a distance of 800 mm from the information acquisition device 1, and imaging the reflected DP light. The person is thrusting a hand forward by roughly 200 mm. That is, in the figure, the distance from the information acquisition device 1 increases in the order of the person's hand, the person's torso, and the screen.
 FIG. 7(b) shows a measurement image obtained by measuring matching against the measured image of FIG. 7(a) with segment areas of 7 pixels × 7 pixels set at one-pixel intervals, as in FIG. 5(c). FIG. 7(c) shows a measurement image obtained by measuring matching against the measured image of FIG. 7(a) with segment areas of 15 pixels × 15 pixels set at one-pixel intervals. In each case, matching was performed using a reference image obtained by irradiating DP light with only the screen placed, as the reference plane, at the same position as the screen in the measured image of FIG. 7(a), and imaging the reflected DP light.
 Referring to FIGS. 7(b) and 7(c), the screen portion is shown in gradations close to black, and the person portion in gradations close to white. The hand portion of the person is shown in gradations still closer to white than the rest of the body. Thus, the rough shape of the person can be recognized in each measurement result.
 In the case of FIG. 7(b), many black dots appear in the figure; that is, many areas are included in which the segment-area search resulted in an error. In the case of FIG. 7(c), on the other hand, black dots are almost entirely absent outside the boundary between the person and the screen, the boundary between the person's torso and hand, and the position of the black belt. That is, in most areas the segment-area search did not result in an error, and distance information was obtained properly.
 The errors at the boundary between the person and the screen arise because the comparison area corresponding to the segment area contains comparable amounts of the dot pattern reflected by the screen and the dot pattern reflected by the person, so that the difference between the minimum Rsad and the second smallest Rsad shown in FIG. 6(e) does not exceed the threshold. The error at the position of the belt arises because the belt is black, with low reflectance, so that fewer dots of the pattern are received by the CMOS image sensor 240.
 Thus, when the segment area is small, the uniqueness of the dot pattern contained in the segment area decreases, and error areas occur frequently. Therefore, in order to acquire distance information properly, the segment area must have a certain size, such as the 15 pixels × 15 pixels of FIG. 7(c).
 However, as the segment area becomes larger, the apparent sharpness of the contours of objects in the measurement image deteriorates. Hereinafter, the apparent sharpness of an object's contours is referred to as the "sense of resolution". The sense of resolution is a measure of how fine an image appears to a person and is a subjective evaluation criterion.
 FIG. 8 is a diagram for explaining the relationship between the size of the segment area and the sense of resolution.
 FIG. 8(a) schematically shows the area around the right hand in the measured image shown in FIG. 7(a), and FIG. 8(b) is a partially enlarged view of the area A1 around the index finger of the right hand shown in FIG. 8(a). FIG. 8(c) is a measurement image of the area around the right hand obtained by measuring matching with segment areas of 7 pixels × 7 pixels set at one-pixel intervals. FIG. 8(d) is a measurement image of the area around the right hand obtained by measuring matching with segment areas of 21 pixels × 21 pixels set at one-pixel intervals.
 Referring to FIG. 8(b), the position S0 at which the pixel deviation amount on the measured image is zero is shown for a segment area S of 7 pixels × 7 pixels. Since the index finger is closer than the reference plane, the dots reflected by the index finger are imaged by the light receiving optical system 200 shifted in the negative X-axis direction. The segment area S is roughly the same size as the index finger, so the comparison area for the segment area S contains almost exclusively dots reflected by the index finger. Accordingly, the segment area S matches the comparison area C1 shifted in the positive X-axis direction, and the distance information of the index finger is obtained properly. For example, in the area A'1 shown in FIG. 8(c), the contour of the index finger can be recognized.
 FIG. 8(b) also shows the position S'0 at which the pixel deviation amount on the measured image is zero for a segment area S' of 21 pixels × 21 pixels. In the comparison area for the segment area S', dots reflected by the screen and dots reflected by the index finger are mixed, and considerably more of the dots in the comparison area are reflected by the screen than by the index finger. Since the screen is at the same position as the reference plane, the positions of the dots reflected by the screen hardly deviate from their positions in the reference image. The segment area S' therefore has a higher matching rate with the dots reflected by the screen than with those reflected by the index finger, and matches the comparison area C', so that the distance information of the screen is obtained. For this reason, in the area A''1 shown in FIG. 8(d), the index finger cannot be recognized, and only the distance information of the screen is obtained. Comparing FIG. 8(c) with FIG. 8(d), in FIG. 8(d) the tip of the thumb, the gap between the middle finger and the ring finger, and the like cannot be recognized, and the sense of resolution is considerably inferior.
 Thus, when dot patterns from different distance positions are mixed within a segment area, the single piece of distance information with the highest matching rate is obtained. Accordingly, the larger the segment area, the more the apparent sharpness (sense of resolution) of object contours in the measurement image deteriorates.
 FIG. 9 is a diagram for explaining the relationship between the interval between segment areas and the resolution of distance detection in the comparative example.
 As shown in FIG. 9(a), in the comparative example a segment area sx is set to the right of the segment area s0, in an area obtained by shifting the segment area s0 by one pixel in the positive X-axis direction. Similarly, a segment area sy is set in an area obtained by shifting the segment area s0 by one pixel in the positive Y-axis direction. The segment areas are set to a size of 7 pixels × 7 pixels.
 In the comparative example, the interval between the center p0 of the segment area s0 and the centers px and py of the segment areas sx and sy is an interval of one pixel, corresponding to the shift width of the segment areas.
 When the segment areas are determined in this way, the distance to the target object is detected with a fineness corresponding to one pixel vertically and horizontally. That is, as shown in FIG. 9(b), each position conceptually indicated by a circle is one position at which a distance can be detected, and this constitutes the resolution of distance detection. In the comparative example, distances can be detected in units of one pixel, that is, with a resolution roughly equal to that of the measured image and reference image output from the CMOS image sensor 240.
 However, when the resolution is this high, the computational burden on the CPU 21 increases correspondingly. On the other hand, as shown in FIG. 9(c), when the segment area is enlarged to 15 pixels × 15 pixels, the sense of resolution of the measurement image deteriorates considerably even though the resolution is raised in this way. In such a case it may be preferable to lower the resolution somewhat and lighten the processing burden on the CPU 21.
 Therefore, the inventor of the present application performed simulations in which several levels of distance-detection resolution were set for each segment-area size, and compared the degree of deterioration in the sense of resolution of the measurement images. In the simulations, an image smaller than the measurement image is generated by extracting gradations every predetermined number of pixels in the X-axis and Y-axis directions from a measurement image obtained with segment areas set at one-pixel intervals, and the image shown is the generated image enlarged back to the same size as the measurement image. Extracting gradations from the measurement image while skipping one pixel yields an image equivalent to the measurement image obtained when adjacent segment areas are set at two-pixel intervals, and extracting while skipping two or three pixels yields images equivalent to the measurement images obtained when adjacent segment areas are set at three- and four-pixel intervals, respectively. In the simulations, these images are presented as the measurement images for the cases where adjacent segment areas are set at the corresponding pixel intervals.
 FIG. 10 is a diagram showing the simulation results.
 The upper row of FIG. 10 shows measurement images of the right-hand portion for a segment-area size of 7 pixels × 7 pixels, the middle row shows those for 15 pixels × 15 pixels, and the lower row shows those for 21 pixels × 21 pixels.
 In the figure, measurement images are shown, in order from the left, for segment-area intervals of 1 pixel, 2 pixels, 3 pixels, 4 pixels, and 8 pixels. The further to the left, the higher the resolution of distance measurement; the further to the right, the lower. That is, toward the left the measurement image contains more areas to which gradations are assigned (gradation areas), each of small size, while toward the right it contains fewer gradation areas, each of large size.
 From the measurement images of these simulation results, the inventor of the present application subjectively selected those in which the sense of resolution of the hand contour was not impaired relative to the measurement image obtained with a segment-area interval of one pixel.
 First, in the upper row of the figure, where the segment-area size is 7 pixels × 7 pixels, the sense of resolution of the hand contour is substantially unimpaired up to a segment-area interval of 2 pixels. At an interval of 3 pixels, the sharpness (sense of resolution) of the finger contours is lost compared with the 1-pixel case.
 Next, in the middle row, where the segment-area size is 15 pixels × 15 pixels, the sense of resolution of the hand contour is substantially unimpaired up to a segment-area interval of 4 pixels. At an interval of 8 pixels, the sharpness (sense of resolution) of the finger contours is largely lost compared with the 1-pixel case.
 Further, in the lower row, where the segment-area size is 21 pixels × 21 pixels, the sense of resolution of the hand contour is substantially unimpaired up to a segment-area interval of 4 pixels. At an interval of 8 pixels, the sharpness (sense of resolution) of the finger contours is largely lost compared with the 1-pixel case.
 From the above, it can be seen that even if the interval between segment areas is set to approximately 1/4 of the number of pixels in one side of the segment area, the apparent sharpness (sense of resolution) of object contours is not degraded.
 For example, when the segment area is 7 pixels × 7 pixels, 2, the integer closest to the quotient 1.75 obtained by dividing 7 by 4, can be taken as the upper limit, in pixels, of the segment-area interval. When the segment area is 15 pixels × 15 pixels, 4, the integer closest to the quotient 3.75 obtained by dividing 15 by 4, can be taken as the upper-limit number of pixels. Further, when the segment area is 21 pixels × 21 pixels, 5, the integer closest to the quotient 5.25 obtained by dividing 21 by 4, can be taken as the upper limit of a suitable segment-area interval. These results substantially coincide with the pixel counts corresponding to the measurement images selected in the above evaluation of the sense of resolution.
 In the above, the integer closest to the quotient was used as the number of pixels defining the segment-area interval; however, depending on the accuracy required for object contour extraction, the quotient rounded down to an integer, or the quotient rounded up to an integer, may instead be used as the number of pixels defining the interval. For example, when the segment area is 15 pixels × 15 pixels, the fractional part may be discarded and the upper limit of the segment-area interval set to 3 pixels.
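 In code terms, the three rounding options read as follows, as a small sketch (the helper name is hypothetical; "nearest" rounds half up, matching the flow of FIG. 12(b) described later):

```python
import math

def interval_upper_limit(side_px, mode="nearest"):
    """Upper limit of the segment-area interval: one side of the segment area
    divided by 4, made integral by the chosen rounding mode."""
    q = side_px / 4.0
    if mode == "floor":
        return math.floor(q)
    if mode == "ceil":
        return math.ceil(q)
    return math.floor(q + 0.5)      # round half up (nearest integer)

# interval_upper_limit(7) -> 2, interval_upper_limit(15) -> 4,
# interval_upper_limit(21) -> 5, as in the examples above.
```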
 Thus, the upper limit of the segment-area interval increases as the segment area becomes larger. This is because, the larger the segment area, the more the sense of resolution at a one-pixel segment-area interval is already degraded, and the fewer matching errors occur.
 For example, when the segment area is 7 pixels × 7 pixels, the sense of resolution of the measurement image at a one-pixel interval is high to begin with, so a measurement image compared against it must also have a high sense of resolution. However, because a 7 × 7-pixel segment area is small, matching errors occur easily. These matching errors, combined with the drop in resolution caused by widening the segment-area interval, greatly degrade the sense of resolution even when the interval is widened only slightly. Accordingly, the segment-area interval cannot be widened much if the measurement image is to retain a sense of resolution comparable to that at a one-pixel interval.
 In contrast, when the segment area is 21 pixels × 21 pixels, the sense of resolution of the measurement image at a one-pixel interval is low to begin with, so a measurement image compared against it may also have a low sense of resolution. Moreover, because a 21 × 21-pixel segment area is large, matching errors rarely occur. The sense of resolution is therefore unlikely to deteriorate even when the segment-area interval is widened, and even with a greatly widened interval a measurement image can be obtained whose sense of resolution compares well with that at a one-pixel interval.
 From the above examination, the segment-area interval can be set larger as the segment area becomes larger. Specifically, according to the size of the segment area, the interval is set with approximately 1/4 of the number of pixels in one side of the segment area as its upper limit. This makes it possible to greatly reduce the processing burden on the CPU 21 without significantly impairing the sense of resolution. Most desirably, the interval is set to approximately 1/4 of the number of pixels in one side of the segment area.
 To reduce the processing burden on the CPU 21 relative to the one-pixel-interval case, the segment-area interval must be set to 2 pixels or more and to no more than the above upper limit. As described above, the upper limit is approximately 1/4 of the number of pixels in one side of the segment area; therefore, unless the segment area is at least 5 pixels × 5 pixels, the interval cannot be set to 2 pixels or more.
 For example, if the segment area is 4 pixels × 4 pixels, the quotient of 4 divided by 4 is 1, and the interval cannot be set larger than 1. In contrast, if the segment area is 5 pixels × 5 pixels, the quotient of 5 divided by 4 is 1.25, and the interval can be set to 2 (rounding the fractional part up), which is close to 1.25. Therefore, when applying the findings of the above simulation results, the segment area must be at least 5 pixels × 5 pixels.
 FIG. 11 is a diagram for explaining the relationship between the interval between segment areas and the resolution of distance detection in the present embodiment. Here, the segment-area interval is set to approximately 1/4 of the number of pixels in one side of the segment area.
 As shown in FIG. 11(a), when the segment area is 7 pixels × 7 pixels, a segment area Sx is set to the right of the segment area S0, in an area obtained by shifting the segment area S0 by two pixels in the positive X-axis direction. Similarly, a segment area Sy is set in an area obtained by shifting the segment area S0 by two pixels in the positive Y-axis direction.
 In this case, the interval between the center P0 of the segment area S0 and the centers Px and Py of the segment areas Sx and Sy is a distance of two pixels, corresponding to the shift width of the segment areas.
 When the segment areas are determined in this way, the distance to the target object is detected at a spacing corresponding to two pixels vertically and horizontally. That is, as shown in FIG. 11(b), each position conceptually indicated by a circle is one position at which a distance can be detected, and this constitutes the resolution of distance detection. In the present embodiment, therefore, the resolution is 1/2 of that of the above comparative example, and the amount of computation is 1/4. For example, the 4 circles at a two-pixel segment-area interval correspond to 16 circles at a one-pixel interval.
 As shown in FIG. 11(c), when the segment area is 15 pixels × 15 pixels, a segment area S'x is set to the right of the segment area S'0, in an area obtained by shifting the segment area S'0 by four pixels in the positive X-axis direction. Similarly, a segment area S'y is set in an area obtained by shifting the segment area S'0 by four pixels in the positive Y-axis direction.
 In this case, the interval between the center P'0 of the segment area S'0 and the centers P'x and P'y of the segment areas S'x and S'y is four pixels, corresponding to the shift width of the segment areas.
 When the segment areas are determined in this way, the distance to the target object is detected at a spacing corresponding to four pixels vertically and horizontally. That is, as shown in FIG. 11(c), each position conceptually indicated by a circle is one position at which a distance can be detected, and this constitutes the resolution of distance detection. In the present embodiment, therefore, the resolution is 1/4 of that at a one-pixel segment-area interval, and the amount of computation is 1/16. For example, the 4 circles at a four-pixel segment-area interval correspond to 64 circles at a one-pixel interval.
 As described above, when the segment-area interval is widened in accordance with the size of the segment areas, the amount of computation can be held to 1/(the square of the interval in pixels) of that required when the interval is set to one pixel. As noted above, the segment area must have a certain size, such as 15 pixels × 15 pixels, in order to suppress matching errors, so the interval can be made correspondingly large and the amount of computation greatly reduced. Moreover, as the simulation results show, up to an interval of approximately 1/4 of the number of pixels in one side of the segment area, the sense of resolution is not impaired relative to a one-pixel interval, so the amount of computation can be reduced while the accuracy of object detection is maintained.
 FIG. 12(a) is a diagram showing the outline flow of the processing up to the setting of the segment areas in the present embodiment. FIG. 12(b) is a diagram showing the flow of the processing for setting the size and interval of the segment areas, and FIG. 12(c) shows a table T that holds the segment-area size and interval values. These processes are performed with a setting device by the setup operator when the information acquisition device 1 is set up.
 Referring to FIG. 12(a), first, the size of the segment areas and the interval between them are set (S1). Specifically, as shown in FIG. 12(b), the setup operator first sets the vertical and horizontal size of the segment area (S101). The set vertical and horizontal size is stored in the memory 26. Here, the same value is used for the vertical and horizontal sizes, so that the segment areas are square.
 Next, the setting device divides the number of pixels in one side of the set segment area by 4 (S102) and rounds the quotient to the nearest integer (S103). The setting device sets the value thus obtained in the memory 26 as the segment-area interval, associated with the segment-area size, as in the table T shown in FIG. 12(c) (S104). The setting device then determines whether the setup operator has input an instruction to complete the setting of the segment-area size and interval (S105). If no completion instruction has been input (S105: NO), the setting device returns to S101 and accepts the setting of another segment-area size and interval. If a completion instruction has been input (S105: YES), the setting device ends the processing for setting the segment-area size and interval. In this way, a distance-detection resolution is determined that is appropriate in the sense that the sense of resolution of the measurement image is not degraded for the segment-area size set in S101.
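 A sketch of the table-building loop of S101 to S105, reusing the hypothetical interval_upper_limit helper above (the dict representation of the table T is an assumption for illustration):

```python
def build_interval_table(sizes):
    """For each segment-area side length entered by the operator, divide by 4
    (S102), round half up (S103), and record the result as the interval
    associated with that size (S104)."""
    return {side: interval_upper_limit(side, mode="nearest") for side in sizes}

table_t = build_interval_table([7, 15, 21])   # {7: 2, 15: 4, 21: 5}
```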
 Then, returning to FIG. 12(a), the setup operator places the reference plane in the target area and, via the setting device, causes the CPU 21 to irradiate DP light and acquire a reference image (S2). The CPU 21 is then caused to set the segment areas on the basis of the acquired reference image and the segment-area size and interval set in S1 (S3).
 FIG. 13 is a diagram showing the flow of the segment-area setting processing.
 Referring to FIG. 13, the CPU 21 first sets a reference pattern area on the reference image (S301), and information on the position of the reference pattern area on the CMOS image sensor 240 and the pixel values of all the pixels contained in the reference pattern area (the reference pattern) are stored in the memory 26 (S302).
 Next, the setup operator specifies the segment-area size via the setting device. In response, the CPU 21 identifies the vertical and horizontal size of the segment area stored in the memory 26 (S303). The segment-area interval corresponding to that size is then read from the table generated in S104 and set in a variable n (S304). Several segment-area sizes are selectable, and the interval is set according to the selected size. For example, when a segment-area size of 15 pixels × 15 pixels is selected, 4 pixels is set in the variable n as the segment-area interval.
 Next, the CPU 21 sets a variable i to 1 (S305) and sets the first segment area Si (Si = S1) by designating a rectangular area of the prescribed vertical and horizontal size whose vertex is the upper-left corner of the reference pattern area (S306).
 Information on the position of the segment area Si on the CMOS image sensor 240 is then stored in the memory 26 (S307).
 Next, the CPU 21 determines whether the position of the segment area Si has reached the right end of the reference pattern area (S308). If it has not (S308: NO), 1 is added to i (S309), and the next segment area Si is set by designating an area shifted by n pixels in the positive X-axis direction from the position of the previous segment area (S310). The CPU 21 then returns the processing to S307.
 When segment areas Si have been set at n-pixel intervals from the left end to the right end of the reference pattern area and their position information has been stored in the memory 26 (S308: YES), the CPU 21 determines whether the position of the segment area Si has reached the lower end of the reference pattern area (S311).
 If the position of the segment area Si has not reached the lower end of the reference pattern area (S311: NO), 1 is added to i (S312), and the next segment area Si is set by designating an area shifted by n pixels in the positive Y-axis direction and located at the left end of the reference pattern area (S313). The CPU 21 then returns the processing to S307.
 When segment areas Si have been set from the upper-left to the lower-right of the reference pattern area and their position information has been stored in the memory 26 (S311: YES), the segment areas have been set so as to line up at n-pixel intervals in the X-axis and Y-axis directions according to the segment-area size, as shown in FIG. 11(a) or FIG. 11(b), and the processing ends. The pattern of pixel values within each segment area differs from segment area to segment area.
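 A minimal sketch of the raster-order setting loop of S305 to S313; the coordinate convention (upper-left corners, row-major order) is an assumption for illustration:

```python
def set_segment_areas(ref_h, ref_w, seg, n):
    """Enumerate segment-area positions over a reference pattern area of
    ref_h x ref_w pixels: seg x seg areas laid out at an n-pixel pitch,
    advancing along the X axis until the right end is reached, then shifting
    n pixels along the Y axis and returning to the left end."""
    positions = []
    y = 0
    while y + seg <= ref_h:             # S311: lower end not yet reached
        x = 0
        while x + seg <= ref_w:         # S308: right end not yet reached
            positions.append((y, x))    # S307: store the position of Si
            x += n                      # S310: shift n pixels along X
        y += n                          # S313: next row, back to the left end
    return positions
```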
 In this way, information on the position of the reference pattern area on the CMOS image sensor 240, the pixel values of all the pixels contained in the reference pattern area (the reference pattern), and information on the positions of the segment areas set on the reference pattern area are stored in the memory 26 as a reference template. Although only the information on the positions on the CMOS image sensor 240 is stored in the memory 26 as the segment-area information, the pixel values within the segment areas may also be stored.
 Then, when calculating the distance from the projection optical system 100 to the detection target object, the CPU 21 of FIG. 2 calculates the distance to each part of the object on the basis of the amount of deviation of the dot pattern within each segment area obtained from the reference template. As described above, the distance is calculated by triangulation using the displacement amount of the segment area.
 As described above, according to the present embodiment, the segment-area interval is set larger according to the size of the segment areas, so that an increase in the amount of computation required for distance detection can be suppressed.
 Further, according to the present embodiment, the segment-area interval is set to approximately 1/4 of the number of pixels in one side of the segment area, so that an increase in the amount of computation required for distance detection can be suppressed while the accuracy of object detection is maintained.
 Although an embodiment of the present invention has been described above, the present invention is in no way limited to the above embodiment, and various modifications other than the above are also possible.
 For example, in the above embodiment the segment-area interval corresponding to each segment-area size was set in the table T in advance; however, the interval need not be held in a table and may instead be computed directly when the segment-area size is read out.
 FIG. 14 is a diagram showing the flow of the segment-area setting processing in this case. In the figure, processes identical to those of FIG. 13 in the above embodiment are given the same reference numerals, and their description is omitted.
 In this modification, the processing of S102 to S104 in FIG. 12(b) is omitted, and the table T shown in FIG. 12(c) is omitted.
 When the vertical and horizontal size of the segment area stored in the memory 26 has been set (S303), the CPU 21 divides the number of pixels in one side of the segment area by 4 (S321) and rounds the quotient to the nearest integer (S322). The value thus obtained is set in the variable n as the segment-area interval (S304). Thereafter, as in the above embodiment, the segment areas Si are set at n-pixel intervals (S305 to S312).
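 In code terms, this modification amounts to replacing the table lookup of S304 with a direct computation; reusing the hypothetical helper sketched earlier:

```python
def interval_for(side_px):
    """S321-S322: compute the interval on the fly instead of reading table T."""
    return interval_upper_limit(side_px, mode="nearest")
```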
 In this modification as well, as in the above embodiment, an increase in the amount of computation required for distance detection can be suppressed while the accuracy of object detection is maintained. In addition, the table T need not be held in the memory 26, so the capacity of the memory 26 can be reduced.
 In the above embodiment, the segment-area interval was set to approximately 1/4 of the number of pixels in one side of the segment area, thereby reducing the amount of distance computation while maintaining object detection accuracy; however, the interval may be made somewhat smaller, or larger, depending on the required detection accuracy.
 In the above embodiment, the segment-area size was selectable by the setup operator; however, the size may instead be determined in advance.
 In the above embodiment, the segment-area interval was set to approximately 1/4 of the number of pixels in one side of the segment area; however, other values may be set as long as the interval increases with the size of the segment area. For example, with approximately 1/4 of the number of pixels in one side of the segment area as an upper limit, the interval may be set to any number of pixels from 2 up to that upper limit. In this case, an optimal interval may be determined individually in advance from among the pixel counts between 2 and the upper limit, and held in a table from which the segment-area interval is then determined.
 In the above embodiment, the segment areas were set to be square, with equal vertical and horizontal sizes; however, they may be set so that one side is longer than the other. In that case, the segment-area interval is determined on the basis of the number of pixels in one of the sides.
 In the above embodiment, an error in distance detection was determined on the basis of whether the difference between the Rsad with the highest matching rate and the Rsad with the next-highest matching rate exceeds a threshold; however, an error may instead be determined on the basis of whether the Rsad with the highest matching rate exceeds a predetermined threshold.
 In the above embodiment, the pixel values of the pixels contained in the segment area and the comparison area were binarized before the matching rate between them was calculated; however, matching may also be performed using the pixel values obtained by the CMOS image sensor 240 as they are. Further, in the above embodiment the pixel values obtained by the CMOS image sensor 240 were binarized as they were; however, the pixel values may be binarized, or converted to multiple levels, after correction processing such as weighting of predetermined pixels and removal of background light.
 In the above embodiment, distance information was obtained by triangulation and stored in the memory 26; however, when the main purpose is contour extraction of an object, the displacement amount (pixel deviation amount) of the segment area may be acquired as the distance information, without computing distances by triangulation.
 In the above embodiment, the FMD 150 was used in the projection optical system 100; however, the FMD 150 may be omitted.
 In the above embodiment, the filter 230 was arranged to remove light in wavelength bands other than that of the laser light irradiated onto the target area; however, the filter 230 can be omitted when, for example, a circuit configuration is provided that removes the signal components of light other than the laser light irradiated onto the target area from the signal output by the CMOS image sensor 240. The aperture 210 may also be arranged between any two of the imaging lenses.
 In the above embodiment, the CMOS image sensor 240 was used as the light receiving element; however, a CCD image sensor may be used instead. The configurations of the projection optical system 100 and the light receiving optical system 200 may also be changed as appropriate. Further, the information acquisition device 1 and the information processing device 2 may be integrated, or the information acquisition device 1 and the information processing device 2 may be integrated with a television, a game machine, or a personal computer.
 The embodiment of the present invention can be modified in various ways as appropriate within the scope of the technical idea set forth in the claims.
 Description of Reference Numerals
     1 … Information acquisition device
    21 … CPU (distance acquisition unit)
   21b … Distance acquisition unit
    24 … Imaging signal processing circuit (distance acquisition unit)
    26 … Memory (storage unit)
   100 … Projection optical system
   110 … Laser light source
   120 … Collimator lens
   140 … DOE (diffractive optical element)
   200 … Light receiving optical system

Claims (7)

  1.  An information acquisition device for acquiring information on a target area using light, comprising:
     a projection optical system which projects laser light onto the target area in a predetermined dot pattern;
     a light receiving optical system which is arranged side by side with the projection optical system, separated laterally by a predetermined distance, and which images the target area with an image sensor;
     a storage unit which holds a reference dot pattern imaged by the light receiving optical system when the laser light is irradiated onto a reference plane; and
     a distance acquisition unit which sets segment areas in the reference dot pattern and acquires a distance for the position in the target area corresponding to each segment area by collating a measured dot pattern, acquired by imaging the target area at the time of distance measurement, against the dots in the segment area,
     wherein the distance acquisition unit sets the size of the segment areas to a rectangular shape of at least 5 pixels vertically and at least 5 pixels horizontally on the image sensor, and sets the interval between adjacent segment areas to a distance of at least 2 pixels and no more than an upper limit value corresponding to the size of the segment areas.
  2.  The information acquisition device according to claim 1, wherein the upper limit value is set to a number of pixels close to the value obtained by dividing the number of pixels in one side of the segment area by 4.
  3.  The information acquisition device according to claim 2, wherein the upper limit value is set to the number of pixels closest to the value obtained by dividing the number of pixels in one side of the segment area by 4.
  4.  The information acquisition device according to claim 2, wherein the upper limit value is set to a value obtained by rounding the value obtained by dividing the number of pixels in one side of the segment area by 4 to the nearest integer, rounding it up, or rounding it down.
  5.  The information acquisition device according to any one of claims 1 to 4, wherein the distance acquisition unit sets the interval between adjacent segment areas to the upper limit value.
  6.  The information acquisition device according to any one of claims 1 to 5, wherein the projection optical system comprises a laser light source, a collimator lens on which the laser light emitted from the laser light source is incident, and a diffractive optical element which converts, by diffraction, the laser light transmitted through the collimator lens into light with a dot pattern.
  7.  An object detection device comprising the information acquisition device according to any one of claims 1 to 6.