WO2013031447A1 - Object detection device and information acquisition device - Google Patents

Object detection device and information acquisition device

Info

Publication number
WO2013031447A1
WO2013031447A1 PCT/JP2012/069125 JP2012069125W WO2013031447A1 WO 2013031447 A1 WO2013031447 A1 WO 2013031447A1 JP 2012069125 W JP2012069125 W JP 2012069125W WO 2013031447 A1 WO2013031447 A1 WO 2013031447A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixels
segment
area
distance
segment area
Prior art date
Application number
PCT/JP2012/069125
Other languages
English (en)
Japanese (ja)
Inventor
Hiroyuki Muto (武藤 裕之)
Original Assignee
Sanyo Electric Co., Ltd. (三洋電機株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sanyo Electric Co., Ltd.
Publication of WO2013031447A1

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/002Specific input/output arrangements not covered by G06F3/01 - G06F3/16
    • G06F3/005Input arrangements through a video camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/46Indirect determination of position data
    • G01S17/48Active triangulation systems, i.e. using the transmission and reflection of electromagnetic waves other than radio waves
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging

Definitions

  • the present invention relates to an object detection apparatus that detects an object in a target area based on a state of reflected light when light is projected onto the target area, and an information acquisition apparatus suitable for use in the object detection apparatus.
  • An object detection device using light has been developed in various fields.
  • An object detection apparatus using a so-called distance image sensor can detect not only a planar image on a two-dimensional plane but also the shape and movement of the detection target object in the depth direction.
  • In such a distance image sensor, light in a predetermined wavelength band is projected from a laser light source or an LED (Light Emitting Diode) onto the target area, and the reflected light is received by a light-receiving element such as a CMOS image sensor.
  • In a distance image sensor of the type that irradiates the target area with laser light having a predetermined dot pattern, the reflected light of the dot-pattern laser light from the target area is received by the light-receiving element. Based on the light-receiving position of each dot on the light-receiving element, the distance to each part of the detection target object (the irradiation position of each dot on the detection target object) is detected using triangulation (for example, Non-Patent Document 1).
  • the dot pattern captured by the image sensor when the reference plane is arranged at a position separated by a predetermined distance is compared with the dot pattern captured by the image sensor at the time of actual measurement to detect the distance.
  • a plurality of areas are set in the dot pattern with respect to the reference plane.
  • the object detection device detects the distance to the target object for each region based on the position on the dot pattern captured at the time of actual measurement of the dots included in each region. Then, the object detection device detects the shape, contour, movement, and the like of the object based on the distance information obtained for each region.
  • As the interval between the set areas is narrowed, the distance detection resolution increases.
  • For example, the resolution of distance detection can be remarkably enhanced by setting the areas so that adjacent areas are shifted from each other by one pixel on the image sensor.
  • However, when the areas are set this densely, the amount of calculation required for distance detection also increases accordingly. On the other hand, the resolution of distance detection may only need to be high enough that the shape and contour of the detection target object can be grasped from the acquired distance information.
  • the present invention has been made in view of this point, and an object of the present invention is to provide an information acquisition apparatus and an object detection apparatus that can suppress an increase in the amount of calculation while maintaining the accuracy of object detection.
  • A first aspect of the present invention relates to an information acquisition device that acquires information on a target area.
  • The information acquisition device according to this aspect includes: a projection optical system that projects laser light with a predetermined dot pattern onto a target area; a light receiving optical system that is arranged side by side with the projection optical system, spaced apart from it by a predetermined distance, and that images the target area with an image sensor; a storage unit that holds a reference dot pattern captured by the light receiving optical system when the laser light is irradiated onto a reference surface; and a distance acquisition unit that acquires the distance to the position in the target area corresponding to each segment area set in the reference dot pattern, by collating the dots in the segment area with a measured dot pattern acquired by imaging the target area during distance measurement.
  • The distance acquisition unit sets the size of each segment area to a quadrangle of 5 pixels or more in the vertical direction and 5 pixels or more in the horizontal direction on the image sensor, and sets the interval between adjacent segment areas to 2 pixels or more and to no more than an upper limit value corresponding to the size of the segment area.
  • the second aspect of the present invention relates to an object detection apparatus.
  • the object detection apparatus according to this aspect includes the information acquisition apparatus according to the first aspect.
  • an information acquisition device and an object detection device that can suppress an increase in the amount of calculation while maintaining the accuracy of object detection.
  • an information acquisition device of a type that irradiates a target area with laser light having a predetermined dot pattern is exemplified.
  • FIG. 1 shows a schematic configuration of the object detection apparatus according to the present embodiment.
  • the object detection device includes an information acquisition device 1 and an information processing device 2.
  • the television 3 is controlled by a signal from the information processing device 2.
  • The information acquisition device 1 projects infrared light over the entire target area and receives the reflected light with a CMOS image sensor, thereby acquiring the distance to each part of the objects in the target area (hereinafter referred to as "three-dimensional distance information").
  • the acquired three-dimensional distance information is sent to the information processing apparatus 2 via the cable 4.
  • the information processing apparatus 2 is, for example, a controller for TV control, a game machine, a personal computer, or the like.
  • the information processing device 2 detects an object in the target area based on the three-dimensional distance information received from the information acquisition device 1, and controls the television 3 based on the detection result.
  • the information processing apparatus 2 detects a person based on the received three-dimensional distance information and detects the movement of the person from the change in the three-dimensional distance information.
  • When the information processing device 2 is a controller for television control, an application program is installed that detects a person's gesture from the received three-dimensional distance information and outputs a control signal to the television 3 in accordance with the gesture.
  • the user can cause the television 3 to execute a predetermined function such as channel switching or volume up / down by making a predetermined gesture while watching the television 3.
  • When the information processing device 2 is a game machine, an application program is installed that detects a person's movement from the received three-dimensional distance information, operates a character on the television screen according to the detected movement, and changes the game battle situation. In this case, the user can experience the realistic sensation of playing the game as the character on the television screen by making predetermined movements while watching the television 3.
  • FIG. 2 is a diagram showing the configuration of the information acquisition device 1 and the information processing device 2.
  • the information acquisition apparatus 1 includes a projection optical system 100 and a light receiving optical system 200 as a configuration of an optical unit.
  • the projection optical system 100 and the light receiving optical system 200 are arranged in the information acquisition apparatus 1 so as to be aligned in the X-axis direction.
  • the projection optical system 100 includes a laser light source 110, a collimator lens 120, a leakage mirror 130, a diffractive optical element (DOE: Diffractive Optical Element) 140, and an FMD (Front Monitor Diode) 150.
  • the light receiving optical system 200 includes an aperture 210, an imaging lens 220, a filter 230, and a CMOS image sensor 240.
  • The information acquisition device 1 includes, as circuit components, a CPU (Central Processing Unit) 21, a laser driving circuit 22, a PD signal processing circuit 23, an imaging signal processing circuit 24, an input/output circuit 25, and a memory 26.
  • the laser light source 110 outputs laser light in a narrow wavelength band with a wavelength of about 830 nm in a direction away from the light receiving optical system 200 (X-axis negative direction).
  • the collimator lens 120 converts the laser light emitted from the laser light source 110 into light slightly spread from parallel light (hereinafter simply referred to as “parallel light”).
  • the leakage mirror 130 is composed of a multilayer film of dielectric thin films, and the number of layers and the thickness of the film are designed so that the reflectance is slightly lower than 100% and the transmittance is several steps smaller than the reflectance.
  • the leakage mirror 130 reflects most of the laser light incident from the collimator lens 120 side in the direction toward the DOE 140 (Z-axis direction) and transmits the remaining part in the direction toward the FMD 150 (X-axis negative direction).
  • the DOE 140 has a diffraction pattern on the incident surface. Due to the diffraction effect of the diffraction pattern, the laser light incident on the DOE 140 is converted into a dot pattern laser light and irradiated onto the target region.
  • the diffraction pattern has, for example, a structure in which a step type diffraction hologram is formed in a predetermined pattern. The diffraction hologram is adjusted in pattern and pitch so as to convert the laser light converted into parallel light by the collimator lens 120 into laser light of a dot pattern.
  • the DOE 140 irradiates the target region with the laser beam incident from the leakage mirror 130 as a laser beam having a dot pattern that spreads radially.
  • the size of each dot in the dot pattern depends on the beam size of the laser light when entering the DOE 140.
  • the FMD 150 receives the laser light transmitted through the leakage mirror 130 and outputs an electrical signal corresponding to the amount of light received.
  • the laser light reflected from the target area enters the imaging lens 220 through the aperture 210.
  • the aperture 210 stops the light from the outside so as to match the F number of the imaging lens 220.
  • the imaging lens 220 collects the light incident through the aperture 210 on the CMOS image sensor 240.
  • the filter 230 is an IR filter (Infrared Filter) that transmits light in the infrared wavelength band including the emission wavelength (about 830 nm) of the laser light source 110 and cuts the wavelength band of visible light.
  • the CMOS image sensor 240 receives the light collected by the imaging lens 220 and outputs a signal (charge) corresponding to the amount of received light to the imaging signal processing circuit 24 for each pixel.
  • In the present embodiment, the signal output speed of the CMOS image sensor 240 is increased so that the signal (charge) of each pixel can be output to the imaging signal processing circuit 24 with high responsiveness after light is received at that pixel.
  • CPU 21 controls each unit according to a control program stored in memory 26.
  • the CPU 21 is provided with the functions of a laser control unit 21a for controlling the laser light source 110 and a distance acquisition unit 21b for generating three-dimensional distance information.
  • the laser drive circuit 22 drives the laser light source 110 according to a control signal from the CPU 21.
  • the PD signal processing circuit 23 amplifies and digitizes the voltage signal corresponding to the amount of received light output from the FMD 150 and outputs it to the CPU 21.
  • Based on the signal supplied from the PD signal processing circuit 23, the CPU 21 determines whether to increase or decrease the light amount of the laser light source 110 by processing in the laser control unit 21a, and the laser control unit 21a transmits a control signal for changing the light emission amount of the laser light source 110 to the laser driving circuit 22. Thereby, the power of the laser light emitted from the laser light source 110 is controlled to be substantially constant.
  • the imaging signal processing circuit 24 controls the CMOS image sensor 240 and sequentially takes in the signal (charge) of each pixel generated by the CMOS image sensor 240 for each line. Then, the captured signals are sequentially output to the CPU 21. Based on the signal (imaging signal) supplied from the imaging signal processing circuit 24, the CPU 21 calculates the distance from the information acquisition device 1 to each part of the detection target by processing by the distance acquisition unit 21b.
  • the input / output circuit 25 controls data communication with the information processing apparatus 2.
  • the information processing apparatus 2 includes a CPU 31, an input / output circuit 32, and a memory 33.
  • the information processing apparatus 2 has a configuration for performing communication with the television 3 and for reading information stored in an external memory such as a CD-ROM and installing it in the memory 33.
  • the configuration of these peripheral circuits is not shown for the sake of convenience.
  • the CPU 31 controls each unit according to a control program (application program) stored in the memory 33.
  • the CPU 31 is provided with the function of the object detection unit 31a for detecting an object in the image.
  • a control program is read from a CD-ROM by a drive device (not shown) and installed in the memory 33, for example.
  • the object detection unit 31a detects a person in the image and its movement from the three-dimensional distance information supplied from the information acquisition device 1. Then, a process for operating the character on the television screen according to the detected movement is executed by the control program.
  • In the case of television control, the object detection unit 31a detects a person in the image and the person's movement (gesture) from the three-dimensional distance information supplied from the information acquisition device 1. Then, processing for controlling the functions of the television 3 (channel switching, volume adjustment, etc.) in accordance with the detected movement (gesture) is executed by the control program.
  • the input / output circuit 32 controls data communication with the information acquisition device 1.
  • FIG. 3 is a perspective view showing an installation state of the projection optical system 100 and the light receiving optical system 200.
  • the projection optical system 100 and the light receiving optical system 200 are disposed on the base plate 300.
  • the optical members constituting the projection optical system 100 are installed in the housing 100a, and the housing 100a is installed on the base plate 300. Thereby, the projection optical system 100 is arranged on the base plate 300.
  • Reference numerals 150a and 240a denote FPCs (flexible printed circuit boards) for supplying signals from the FMD 150 and the CMOS image sensor 240 to a circuit board (not shown), respectively.
  • the optical member constituting the light receiving optical system 200 is installed in the holder 200a, and this holder 200a is attached to the base plate 300 from the back surface of the base plate 300. As a result, the light receiving optical system 200 is disposed on the base plate 300.
  • Since the light receiving optical system 200 is taller in the Z-axis direction than the projection optical system 100, the portion of the base plate 300 around the arrangement position of the light receiving optical system 200 is raised by one step in the Z-axis direction.
  • the positions of the exit pupil of the projection optical system 100 and the entrance pupil of the light receiving optical system 200 substantially coincide with each other in the Z-axis direction. Further, the projection optical system 100 and the light receiving optical system 200 are arranged with a predetermined distance in the X-axis direction so that the projection center of the projection optical system 100 and the imaging center of the light-receiving optical system 200 are aligned on a straight line parallel to the X axis. Installed at.
  • the installation interval between the projection optical system 100 and the light receiving optical system 200 is set according to the distance between the information acquisition device 1 and the reference plane of the target area.
  • the distance between the reference plane and the information acquisition device 1 varies depending on how far away the target is to be detected. The closer the distance to the target to be detected is, the narrower the installation interval between the projection optical system 100 and the light receiving optical system 200 is. Conversely, as the distance to the target to be detected increases, the installation interval between the projection optical system 100 and the light receiving optical system 200 increases.
  • FIG. 4A is a diagram schematically showing the irradiation state of the laser light on the target region
  • FIG. 4B is a diagram schematically showing the light receiving state of the laser light in the CMOS image sensor 240.
  • FIG. 4B shows the light receiving state when a flat surface (screen) is present in the target area and a person is present in front of the screen.
  • The projection optical system 100 irradiates the target area with laser light having a dot pattern (hereinafter, the entire laser light having this pattern is referred to as "DP light").
  • the luminous flux region of DP light is indicated by a solid line frame.
  • In the DP light, dot regions (hereinafter simply referred to as "dots") in which the intensity of the laser light is increased by the diffraction action of the DOE 140 are scattered according to the dot pattern.
  • The DP light reflected by the target area is distributed on the CMOS image sensor 240 as shown in FIG. 4B.
  • the entire DP light receiving area on the CMOS image sensor 240 is indicated by a dashed frame, and the DP light receiving area incident on the imaging effective area of the CMOS image sensor 240 is indicated by a solid frame.
  • The effective imaging area of the CMOS image sensor 240 is the area, among the areas in which the CMOS image sensor 240 can receive the DP light, from which a signal is output as a sensor, and has a size of, for example, VGA (640 pixels × 480 pixels).
  • FIG. 5 is a diagram for explaining a reference pattern setting method used in the distance detection method.
  • a flat reflection plane RS perpendicular to the Z-axis direction is disposed at a position at a predetermined distance Ls from the projection optical system 100.
  • the emitted DP light is reflected by the reflection plane RS and enters the CMOS image sensor 240 of the light receiving optical system 200.
  • an electrical signal for each pixel in the effective imaging area is output from the CMOS image sensor 240.
  • The output electric signal value (pixel value) of each pixel is developed in the memory 26 of FIG. 2.
  • an image including all pixel values obtained by reflection from the reflection surface RS is referred to as a “reference image”, and the reflection surface RS is referred to as a “reference surface”.
  • FIG. 5B shows a state in which the light receiving surface is seen through in the positive direction of the Z axis from the back side of the CMOS image sensor 240. The same applies to the drawings after FIG.
  • a plurality of segment areas having a predetermined size are set for the reference pattern area thus set.
  • The size of the segment areas is determined in consideration of the contour extraction accuracy for the object based on the obtained distance information and of the computational load of distance detection on the CPU 21. The relationship between the size of the segment area and the contour extraction accuracy for the object will be described later with reference to FIG. 8.
  • Each segment area is indicated by 7 pixels × 7 pixels, and the center pixel of each segment area is indicated by a cross. That is, the segment area is set to a square whose side length is seven times the interval between adjacent pixels.
  • the segment areas are set so that adjacent segment areas are arranged at intervals of one pixel in the X-axis direction and the Y-axis direction with respect to the reference pattern area. That is, a certain segment area is set at a position shifted by one pixel with respect to a segment area adjacent to the segment area in the X-axis direction and the Y-axis direction. At this time, each segment area is dotted with dots in a unique pattern. Therefore, the pattern of pixel values in the segment area is different for each segment area.
  • The relationship between the segment area interval and the distance detection resolution in the comparative example will be described later with reference to FIG. 9.
  • Information on the position of the reference pattern area on the CMOS image sensor 240, the pixel values (reference pattern) of all pixels included in the reference pattern area, and information on the segment areas set for the reference pattern area are stored in the memory 26 of FIG. 2. These pieces of information stored in the memory 26 are hereinafter referred to as the "reference template".
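  • As an editorial illustration of the kind of data the reference template holds, the following sketch shows one possible in-memory layout in Python. The field names (pattern_origin, reference_pattern, segment_positions, etc.) are assumptions made for this example and are not terminology from the specification.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ReferenceTemplate:
    """Illustrative container for the information stored in the memory 26 as the reference template."""
    pattern_origin: Tuple[int, int]            # position of the reference pattern area on the CMOS image sensor (x, y)
    pattern_size: Tuple[int, int]              # width and height of the reference pattern area in pixels
    reference_pattern: List[List[int]]         # pixel values of all pixels in the reference pattern area
    segment_size: int                          # side length of each square segment area, in pixels
    segment_positions: List[Tuple[int, int]]   # positions of the segment areas set on the reference pattern area
```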
  • the CPU 21 calculates the distance to each part of the object based on the shift amount of the dot pattern in each segment area obtained from the reference template.
  • When an object is present at a position closer than the distance Ls, the DP light corresponding to a predetermined segment area Sn on the reference pattern is reflected by the object and is incident on a region Sn′ different from the segment area Sn. Since the projection optical system 100 and the light receiving optical system 200 are adjacent to each other in the X-axis direction, the displacement direction of the region Sn′ with respect to the segment area Sn is parallel to the X axis. In the case of FIG. 5A, since the object is at a position closer than the distance Ls, the region Sn′ is displaced in the X-axis positive direction with respect to the segment area Sn. If the object is at a position farther than the distance Ls, the region Sn′ is displaced in the X-axis negative direction with respect to the segment area Sn.
  • Based on the displacement of the region Sn′ with respect to the segment area Sn, the distance Lr from the projection optical system 100 to the portion of the object irradiated with the DP light (DPn) is calculated by triangulation using the distance Ls.
  • the distance from the projection optical system 100 is calculated for the part of the object corresponding to another segment area.
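  • The specification does not spell out the triangulation formula itself. As a minimal sketch, the following assumes the standard active-triangulation model for a projector/camera pair, in which the pixel shift of a dot is proportional to the difference between the inverse distances of the object and the reference plane; the baseline, focal length, and reference distance used here are assumed example values.

```python
def distance_from_pixel_shift(shift_px: float,
                              ls_mm: float = 1000.0,       # distance Ls to the reference plane (assumed)
                              baseline_mm: float = 75.0,   # spacing of projection and light receiving optics (assumed)
                              focal_px: float = 600.0) -> float:
    """Illustrative triangulation: a positive shift (X-axis positive direction) means the
    object is closer than the reference plane, as in FIG. 5A."""
    inverse_distance = 1.0 / ls_mm + shift_px / (focal_px * baseline_mm)
    return 1.0 / inverse_distance

# A dot shifted by +5 pixels maps to an object nearer than the 1000 mm reference plane.
print(round(distance_from_pixel_shift(5.0)))  # -> 900 (mm)
```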
  • Non-Patent Document 1: The 19th Annual Conference of the Robotics Society of Japan (September 18–20, 2001), Proceedings, pp. 1279–1280.
  • At the time of actual measurement, it is detected to which position on the CMOS image sensor 240 the segment area Sn of the reference template has been displaced. This detection is performed by collating the dot pattern obtained from the DP light irradiated onto the CMOS image sensor 240 at the time of actual measurement with the dot pattern included in the segment area Sn.
  • an image made up of all the pixel values obtained from the DP light irradiated to the imaging effective area on the CMOS image sensor 240 at the time of actual measurement will be referred to as “measured image”.
  • The effective imaging area of the CMOS image sensor 240 at the time of actual measurement is, for example, the size of VGA (640 pixels × 480 pixels), as in the case of acquiring the reference image.
  • FIGS. 6A to 6E are diagrams for explaining such a distance detection method.
  • FIG. 6A is a diagram showing a reference pattern region set in a standard image on the CMOS image sensor 240
  • FIG. 6B is a diagram showing an actually measured image on the CMOS image sensor 240 at the time of actual measurement.
  • FIGS. 6C to 6E are diagrams for explaining a method for collating the dot pattern of the DP light included in the actual measurement image and the dot pattern included in the segment area of the reference template.
  • FIGS. 6A and 6B only a part of the segment areas is shown for convenience.
  • In FIG. 6B, as in FIG. 4B, a person is present in front of the reference plane as a detection target object, and the image of the person appears in the actually measured image.
  • a search range Ri is set for the segment area Si on the actual measurement image.
  • the search range Ri has a predetermined width in the X-axis direction.
  • the segment area Si is sent pixel by pixel in the search range Ri in the X-axis direction, and the dot pattern of the segment area Si is compared with the dot pattern on the measured image at each feed position.
  • a region corresponding to each feed position on the actually measured image is referred to as a “comparison region”.
  • a plurality of comparison areas having the same size as the segment area Si are set in the search range Ri, and the comparison areas adjacent in the X-axis direction are shifted by one pixel from each other.
  • The search range Ri is determined by the detectable distance range over which the detection target object may move away from or toward the information acquisition device 1 relative to the reference plane. In FIG. 6, the search range Ri is set to the range extending from a position shifted by x pixels in the X-axis negative direction to a position shifted by x pixels in the X-axis positive direction, relative to the pixel position on the actually measured image corresponding to the pixel position of the segment area Si on the reference image.
  • At the time of matching, the degree of matching between the dot pattern of the segment area Si stored in the reference template and the dot pattern of the DP light in the measured image is obtained at each feed position. The segment area Si is sent only in the X-axis direction within the search range Ri because, normally, the dot pattern of a segment area set by the reference template is displaced only within a predetermined range in the X-axis direction at the time of actual measurement.
  • the dot pattern corresponding to the segment area may protrude from the actual measurement image in the X-axis direction.
  • For example, when the dot pattern corresponding to the segment area S1 at the most negative end of the reference pattern area in the X-axis direction is reflected by an object farther than the reference plane, that dot pattern is positioned further in the X-axis negative direction than the actually measured image. In this case, since the dot pattern corresponding to the segment area is not within the effective imaging area of the CMOS image sensor 240, this area cannot be properly matched. However, since areas other than such edge regions can be appropriately matched, the influence on object distance detection is small.
  • To cope with this, the effective imaging area of the CMOS image sensor 240 at the time of actual measurement can be made larger than the effective imaging area at the time of acquiring the reference image.
  • For example, when the effective imaging area at the time of acquiring the reference image is set to the size of VGA (640 pixels × 480 pixels), the effective imaging area at the time of actual measurement is set to a size larger by 30 pixels in each of the X-axis positive and negative directions. As a result, the actually measured image becomes larger than the reference image, but the edge regions can also be appropriately matched.
  • In this matching, the pixel value of each pixel in the reference pattern area and the pixel value of each pixel in the actually measured image are binarized and stored in the memory 26.
  • For example, when the pixel values of the reference image and the actually measured image are 8-bit gradations, a pixel whose value (0 to 255) is equal to or greater than a predetermined threshold is given a pixel value of 1, a pixel whose value is less than the threshold is given a pixel value of 0, and the binarized values are stored in the memory 26.
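  • A minimal sketch of this binarization step, assuming 8-bit images held as nested lists and an illustrative threshold value:

```python
def binarize(image, threshold=128):
    """Convert 8-bit pixel values (0-255) to 1 (>= threshold) or 0 (< threshold)."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]
```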
  • the similarity between the comparison region and the segment region Si is obtained. That is, the difference between the pixel value of each pixel in the segment area Si and the pixel value of the pixel corresponding to the comparison area is obtained.
  • a value Rsad obtained by adding the obtained difference to all the pixels in the comparison region is acquired as a value indicating the similarity.
  • As shown in FIG. 6D, the value Rsad is obtained for all the comparison areas in the search range Ri for the segment area Si.
  • FIG. 6E is a graph schematically showing the magnitude of the value Rsad at each feed position in the search range Ri.
  • the minimum value Bt1 is referred to from the obtained value Rsad.
  • the second smallest value Bt2 is referred to from the obtained value Rsad. If the difference value Es between the minimum value Bt1 and the second smallest value Bt2 is less than the threshold value, the search for the segment area Si is regarded as an error.
  • the comparison area Ci corresponding to the minimum value Bt1 is determined as the movement area of the segment area Si.
  • For example, the comparison area Ci is detected at a position shifted in the X-axis positive direction from the pixel position Si0 on the actually measured image that corresponds to the pixel position of the segment area Si on the reference image. This is because the dot pattern of the DP light on the actually measured image is displaced in the X-axis positive direction from the segment area Si on the reference image by the detection target object (person) present at a position closer than the reference plane.
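  • The matching of one segment area against its search range Ri can be sketched as follows. The images are assumed to be binarized nested lists, and the search half-width and error threshold are illustrative assumptions; the Rsad value is the sum of absolute differences described above.

```python
def match_segment(measured, seg_pixels, seg_x, seg_y, search_x=30, err_threshold=10):
    """Slide the segment area over the search range Ri along the X axis and return the
    pixel deviation of the best-matching comparison area, or None on a matching error."""
    size = len(seg_pixels)
    width = len(measured[0])
    scores = []
    for dx in range(-search_x, search_x + 1):
        if seg_x + dx < 0 or seg_x + dx + size > width:
            continue  # the comparison area would protrude from the actually measured image
        # Rsad: sum of absolute pixel-value differences over the comparison area
        rsad = sum(abs(seg_pixels[j][i] - measured[seg_y + j][seg_x + dx + i])
                   for j in range(size) for i in range(size))
        scores.append((rsad, dx))
    scores.sort()
    (bt1, best_dx), (bt2, _) = scores[0], scores[1]
    if bt2 - bt1 < err_threshold:
        return None   # difference between the smallest and second-smallest Rsad is below the threshold
    return best_dx    # pixel deviation amount (positive = shifted in the X-axis positive direction)
```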
  • segment area search is performed for all the segment areas from segment area S1 to segment area Sn.
  • When the segment area Si is detected, the amount of deviation of its detected position with respect to the pixel position Si0 (hereinafter referred to as the "pixel deviation amount") is expressed as a gradation from white to black, and this value is stored in the memory 26 as the distance information of the segment area Si.
  • A gradation closer to white is assigned as the detection position of the segment area Si shifts in the X-axis positive direction within the search range Ri, and a gradation closer to black is assigned as it shifts in the X-axis negative direction.
  • When the search for the segment area Si results in an error, the gradation corresponding to the position most shifted in the X-axis negative direction in the search range Ri, that is, the blackest gradation, is stored in the memory 26.
  • an image in which the matching measurement result is expressed by white and black gradations is referred to as a “measurement image”.
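  • A sketch of how a pixel deviation amount could be mapped to the white-to-black gradation of the measurement image; the 8-bit output range and the search half-width are assumptions for illustration.

```python
def deviation_to_gray(shift_px, search_x=30):
    """Map a pixel deviation in [-search_x, +search_x] to an 8-bit gray level: the X-axis
    negative end of the search range maps to black (0) and the positive end to white (255)."""
    shift_px = max(-search_x, min(search_x, shift_px))
    return round((shift_px + search_x) * 255 / (2 * search_x))
```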
  • FIG. 7 is a diagram showing a distance measurement example when the above distance detection method is used in a state where segment areas are set at intervals of one pixel as in the above comparative example.
  • FIG. 7A is an actually measured image obtained by irradiating DP light and imaging the reflected DP light in a state where a flat screen is disposed at a distance of 1000 mm from the information acquisition device 1 and a person is standing at a distance of 800 mm from the information acquisition device 1. The person holds out a hand about 200 mm forward. That is, in the figure, the distance from the information acquisition device 1 increases in the order of the person's hand, the person's trunk, and the screen.
  • FIG. 7B is a measurement image obtained by performing the matching measurement on the actually measured image of FIG. 7A with the segment areas set at a size of 7 pixels × 7 pixels and at an interval of 1 pixel, as shown in FIG. 5C.
  • FIG. 7C is a measurement image obtained in the same way with the segment areas set at a size of 15 pixels × 15 pixels and at an interval of 1 pixel.
  • FIG. 7 uses a reference image obtained by irradiating DP light in a state where only the screen is arranged as a reference plane at the same position as the screen in the actually measured image of FIG. 7A and capturing the reflected DP light.
  • the screen portion is shown with gradation close to black, and the person portion is shown with gradation close to white.
  • the hand portion of the person is shown with a gradation closer to white compared to other body portions. In this way, it can be seen that a rough human shape can be recognized in each measurement result.
  • The error at the boundary between the person and the screen occurs because the dot pattern reflected by the screen and the dot pattern reflected by the person are both included in the comparison area corresponding to the segment area, so that, as described with reference to FIG. 6E, the difference value between the smallest Rsad and the second smallest Rsad does not exceed the threshold value.
  • the error in the belt position is caused by a decrease in the dot pattern received by the CMOS image sensor 240 due to the belt having a black color with a low reflectance.
  • As described above, when the size of the segment area is small, the uniqueness of the dot pattern included in the segment area decreases, and error areas occur frequently. Therefore, in order to acquire distance information appropriately, a certain segment area size, such as 15 pixels × 15 pixels in FIG. 7C, is required.
  • On the other hand, when the size of the segment area is increased, the apparent sharpness of the contour of the object in the measurement image deteriorates.
  • Hereinafter, the apparent sharpness of the contour of the object is referred to as the "sense of resolution".
  • The sense of resolution is a measure of the fineness of an image as perceived by a person, and is a subjective evaluation criterion.
  • FIG. 8 is a diagram for explaining the relationship between the size of the segment area and the resolution.
  • FIG. 8A is a diagram schematically showing the right-hand peripheral portion of the actually measured image shown in FIG. 7A, and FIG. 8B is a partially enlarged view of the part A1 of the right hand shown in FIG. 8A.
  • FIG. 8C is a measurement image of the right-hand periphery obtained as a result of the matching measurement with the segment areas set at a size of 7 pixels × 7 pixels and an interval of 1 pixel.
  • FIG. 8D is a measurement image of the right-hand periphery obtained as a result of the matching measurement with the segment areas set at a size of 21 pixels × 21 pixels and an interval of 1 pixel.
  • In FIG. 8B, the position S0 at which the pixel shift amount of the segment area S (size 7 pixels × 7 pixels) is 0 on the actually measured image is shown. Since the index finger is at a shorter distance than the reference surface, the dots reflected by the index finger are imaged by the light receiving optical system 200 while being shifted in the negative X-axis direction. The size of the segment area S is approximately the same as the size of the index finger, so the comparison area of the segment area S includes only the dots reflected by the index finger. Therefore, the segment area S matches the comparison area C1 shifted in the positive direction of the X axis, and the distance information of the index finger can be obtained appropriately. For example, in the area A′1 shown in FIG. 8C, it can be seen that the contour of the index finger can be recognized.
  • FIG. 8B also shows the position S′0 at which the pixel shift amount of the segment area S′ (size 21 pixels × 21 pixels) is 0 on the actually measured image.
  • In the comparison area of the segment area S′, dots reflected by the screen and dots reflected by the index finger are mixed, and far more of the dots included in the comparison area are reflected from the screen than from the index finger. Since the screen is at the same position as the reference plane, the positions of the dots reflected by the screen are not substantially deviated from their positions in the reference image. Therefore, the segment area S′ has a higher matching rate with the dots reflected by the screen than with the dots reflected by the index finger, and matches the comparison area C′, so that the distance information of the screen is obtained.
  • In this case, the index finger cannot be recognized, and only the distance information of the screen is obtained. Further, comparing FIG. 8C and FIG. 8D, it can be seen that in FIG. 8D the tip of the thumb, the gap between the middle finger and the ring finger, and so on cannot be recognized, and the sense of resolution is considerably inferior.
  • FIG. 9 is a diagram for explaining the relationship between the interval between the segment areas and the resolution of distance detection in the comparative example.
  • a segment area sx is set in an area where the segment area s0 is shifted by one pixel in the positive direction of the X axis.
  • the segment area sy is set in an area where the segment area s0 is shifted by one pixel in the positive Y-axis direction.
  • the size of the segment area is set to 7 pixels × 7 pixels.
  • the interval between the center p0 of the segment region s0, the center px of the segment region sx, and the center py of the segment region sy is an interval corresponding to one pixel corresponding to the shift width of the segment region.
  • the distance to the target object is detected with a fineness corresponding to one pixel vertically and horizontally. That is, as shown in FIG. 9B, the position conceptually indicated by a circle is one position where the distance can be detected, and this is the distance detection resolution.
  • the distance can be detected in units of one pixel, and the distance can be detected with substantially the same resolution as the resolution of the actual measurement image and the reference image output from the CMOS image sensor 240.
  • the inventors of the present application performed a simulation in which a plurality of distance detection resolution heights were set for each segment area size, and compared the degree of degradation of the resolution of the measurement image.
  • In the simulation, an image having a smaller size than the measurement image is generated by extracting the gradation at every predetermined number of pixels in the X-axis direction and the Y-axis direction from the measurement image obtained with the segment areas set at an interval of one pixel, and an image obtained by enlarging the generated image back to the same size as the measurement image is shown.
  • FIG. 10 is a diagram showing a simulation result.
  • The upper part of FIG. 10 shows the measurement image of the right-hand part when the size of the segment area is 7 pixels × 7 pixels, the middle part shows the measurement image when the size of the segment area is 15 pixels × 15 pixels, and the lower part shows the measurement image when the size of the segment area is 21 pixels × 21 pixels.
  • From such simulation results, the inventors of the present application subjectively selected the measurement images whose sense of resolution of the hand contour is not impaired compared with the case where the interval between the segment areas is one pixel.
  • When the size of the segment area is 7 pixels × 7 pixels, the sense of resolution of the hand contour is not substantially impaired until the interval between the segment areas reaches 2 pixels, but when the interval is 3 pixels, the sharpness (sense of resolution) of the finger contour is lost compared with the case of 1 pixel.
  • When the size of the segment area is 15 pixels × 15 pixels, the sense of resolution of the hand contour is not substantially impaired until the interval between the segment areas reaches 4 pixels, but when the interval is 8 pixels, the sharpness (sense of resolution) of the finger contour is largely lost compared with the case of 1 pixel.
  • Likewise, when the size of the segment area is 21 pixels × 21 pixels, the sense of resolution of the hand contour is not substantially impaired until the interval between the segment areas reaches 4 pixels, but when the interval is 8 pixels, the sharpness (sense of resolution) of the finger contour is largely lost compared with the case of 1 pixel.
  • When the size of the segment area is 7 pixels × 7 pixels, 2, which is the integer closest to the division value 1.75 obtained by dividing 7 by 4, is obtained as the upper limit number of pixels of the segment area interval. When the size of the segment area is 15 pixels × 15 pixels, 4, which is the integer closest to the division value 3.75 obtained by dividing 15 by 4, is obtained as the upper limit number of pixels of the segment area interval. When the size of the segment area is 21 pixels × 21 pixels, 5, which is the integer closest to the division value 5.25 obtained by dividing 21 by 4, is obtained as the upper limit number of pixels of the segment area interval. These results substantially coincide with the intervals of the measurement images selected in the above evaluation of the sense of resolution.
  • In the present embodiment, the integer closest to this division value is used as the number of pixels that defines the interval of the segment areas.
  • That is, the integer obtained by rounding the division value off to the nearest whole number is used as the interval of the segment areas.
  • Alternatively, the integer obtained by rounding the division value down (or up) may be used as the number of pixels defining the interval of the segment areas. For example, when the size of the segment area is 15 pixels × 15 pixels, the division value may be rounded down to set the upper limit of the segment area interval to 3 pixels.
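  • The upper limit of the segment area interval described above can be computed as in the following sketch, which reproduces the worked values 2, 4, and 5 for segment sides of 7, 15, and 21 pixels and also shows the round-down and round-up variants mentioned as alternatives.

```python
def segment_interval_upper_limit(side_px: int, mode: str = "nearest") -> int:
    """Upper limit of the segment-area interval: about 1/4 of the pixels on one side,
    rounded to the nearest integer (default), rounded down, or rounded up."""
    if mode == "down":
        return side_px // 4            # e.g. 15 -> 3
    if mode == "up":
        return -(-side_px // 4)        # e.g. 15 -> 4
    return int(side_px / 4 + 0.5)      # nearest: 7 -> 2, 15 -> 4, 21 -> 5

for side in (7, 15, 21):
    print(side, segment_interval_upper_limit(side))   # prints 2, 4 and 5 as in the text
```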
  • As described above, the upper limit of the segment area interval increases as the size of the segment area increases. This is because, the larger the segment area, the lower the sense of resolution already is when the interval between the segment areas is one pixel, and the fewer the matching errors.
  • When the size of the segment area is 7 pixels × 7 pixels, the resolution of the measurement image is originally high when the interval between the segment areas is 1 pixel, so the sense of resolution to be maintained is also high. Moreover, since the size of the segment area is small, matching errors are likely to occur. For this reason, the matching errors and the decrease in the sense of resolution caused by widening the interval between the segment areas combine, so that the sense of resolution deteriorates greatly even if the interval is increased only slightly. Therefore, in order to obtain, in the measurement image, a sense of resolution not inferior to that obtained when the interval between the segment areas is one pixel, the interval between the segment areas cannot be increased much.
  • In contrast, when the size of the segment area is 21 pixels × 21 pixels, the resolution of the measurement image is originally low when the interval between the segment areas is 1 pixel, so the sense of resolution to be maintained may also be low. In addition, since the size of the segment area is large, matching errors hardly occur. For this reason, even if the interval between the segment areas is widened, the sense of resolution is unlikely to deteriorate further. Therefore, even when the interval between the segment areas is widened considerably, a sense of resolution not much different from that obtained with a one-pixel interval can be obtained in the measurement image.
  • the interval between the segment areas can be set larger as the size of the segment area becomes larger.
  • the interval between the segment areas is set with an upper limit of about 1/4 of the number of pixels included in one side of the segment area.
  • the processing load on the CPU 21 can be greatly reduced without significantly impairing the resolution.
  • the interval between the segment areas is set to about 1/4 of the number of pixels included in one side of the segment area.
  • In order to reduce the processing burden on the CPU 21 compared with the case of a one-pixel interval, the interval between the segment areas needs to be set to 2 pixels or more and to no more than the above upper limit value.
  • As described above, the upper limit value is about 1/4 of the number of pixels included in one side of the segment area. For this reason, unless the size of the segment area is 5 pixels × 5 pixels or more, the interval between the segment areas cannot be set to 2 pixels or more.
  • For example, when the size of the segment area is 4 pixels × 4 pixels, the division value obtained by dividing 4 by 4 is 1, and the interval between the segment areas cannot be set larger than 1 pixel.
  • On the other hand, when the size of the segment area is 5 pixels × 5 pixels, the division value obtained by dividing 5 by 4 is 1.25, and the interval between the segment areas may be set to 2 pixels, an integer close to 1.25 (obtained by rounding its fraction up). Therefore, when the examination based on the above simulation results is applied, the size of the segment area needs to be 5 pixels × 5 pixels or more.
  • FIG. 11 is a diagram for explaining the relationship between the interval between segment areas and the resolution of distance detection in the present embodiment.
  • the interval between the segment areas is set to about 1/4 of the number of pixels included in one side of the segment area.
  • The segment area Sx is set in an area obtained by shifting the segment area S0 by 2 pixels in the X-axis positive direction, to the right of the segment area S0.
  • the segment area Sy is set in an area where the segment area S0 is shifted by two pixels in the positive Y-axis direction.
  • the interval between the center P0 of the segment area S0, the center Px of the segment area Sx, and the center py of the segment area Sy is a distance corresponding to two pixels corresponding to the shift width of the segment area.
  • In this case, the distance to the target object is detected with a fineness corresponding to two pixels in the vertical and horizontal directions. That is, as shown in FIG. 11B, each position conceptually indicated by a circle is a position at which the distance can be detected, and this defines the distance detection resolution. Therefore, compared with the comparative example, the resolution is halved and the amount of computation becomes 1/4. For example, four circles when the interval between the segment areas is 2 pixels correspond to 16 circles when the interval between the segment areas is 1 pixel.
  • Similarly, the segment area S′x is set in an area obtained by shifting the segment area S′0 by 4 pixels in the X-axis positive direction, to the right of the segment area S′0.
  • the segment area S′y is set in an area where the segment area S′0 is shifted by 4 pixels in the positive Y-axis direction.
  • The interval between the center P′0 of the segment area S′0, the center P′x of the segment area S′x, and the center P′y of the segment area S′y is a distance corresponding to four pixels, corresponding to the shift width of the segment areas.
  • In this case, the distance to the target object is detected with a fineness corresponding to four pixels in the vertical and horizontal directions. That is, as shown in FIG. 11C, each position conceptually indicated by a circle is a position at which the distance can be detected, and this defines the distance detection resolution. Therefore, compared with the case where the interval between the segment areas is one pixel, the resolution is reduced to 1/4 and the amount of computation becomes 1/16. For example, four circles when the segment area interval is 4 pixels correspond to 64 circles when the segment area interval is 1 pixel.
  • In general, the amount of calculation is reduced to 1/(the square of the number of pixels of the segment area interval) compared with the case where the interval between the segment areas is set to one pixel.
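  • The reduction in the amount of calculation can be checked with a simple count of segment areas, here assuming a VGA-sized reference pattern area and a 15-pixel segment side; the count, and hence the matching workload, scales roughly as 1/n² for an interval of n pixels.

```python
def segment_count(width=640, height=480, seg=15, interval=1):
    """Number of segment areas placed at the given interval over the reference pattern area."""
    nx = (width - seg) // interval + 1
    ny = (height - seg) // interval + 1
    return nx * ny

base = segment_count(interval=1)
for n in (2, 4):
    print(n, round(base / segment_count(interval=n), 1))  # -> roughly 4.0 and 15.9
```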
  • As described above, in order to acquire distance information appropriately, the segment area needs to have a certain size, such as 15 pixels × 15 pixels; even so, by widening the interval between the segment areas, the amount of calculation can be greatly reduced.
  • Furthermore, since the interval between the segment areas is kept to about 1/4 of the number of pixels included in one side of the segment area, the deterioration of the sense of resolution relative to the case of a one-pixel interval is small. Therefore, the amount of calculation can be reduced while maintaining the accuracy of object detection.
  • FIG. 12A is a diagram showing a flow of an outline of processing up to the setting of the segment area in the present embodiment.
  • FIG. 12B shows a flow of processing for setting the size and interval of the segment area in this embodiment, and
  • FIG. 12C shows a table T that holds the values of the size and interval of the segment areas.
  • These processes are performed by the setter using a setting device when setting up the information acquisition device 1.
  • the size of the segment area and the interval between the segment areas are set (S1). Specifically, as shown in FIG. 12B, the setter first sets the vertical and horizontal sizes of the segment area (S101).
  • the vertical and horizontal sizes of the set segment area are stored in the memory 26.
  • the same value is used for the vertical and horizontal sizes of the segment areas, and the segment areas are set to be square.
  • the setting device divides the number of pixels included in one side of the set segment area by 4 (S102). Then, the setting device rounds off the fractional value after the decimal point (S103). The setting device sets the value thus obtained in the memory 26 in association with the size of the segment area as the interval of the segment area as in the table T shown in FIG. 12C (S104). Then, the setting device determines whether an instruction to complete the setting of the size and interval of the segment area is input by the setter (S105). When an instruction to complete the setting is not input (S105: NO), the setting device returns the process to S101 and accepts the setting of the size and interval of another segment area.
  • When an instruction to complete the setting is input (S105: YES), the setting device ends the setting process of the segment area size and the segment area interval.
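  • A sketch of the flow of FIG. 12B as described above: for each segment area size entered by the setter, the number of pixels on one side is divided by 4 (S102), the fraction is rounded off (S103), and the result is recorded as the segment area interval in the table T (S104). The function name and the set of sizes are assumptions for this example.

```python
def build_interval_table(segment_sides):
    """Build the table T pairing each segment-area size with its interval (FIG. 12C)."""
    table = {}
    for side in segment_sides:
        interval = int(side / 4 + 0.5)   # divide by 4 (S102) and round off the fraction (S103)
        table[side] = interval           # store size and interval in the table T (S104)
    return table

print(build_interval_table([7, 15, 21]))   # -> {7: 2, 15: 4, 21: 5}
```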
  • an appropriate distance detection resolution that does not deteriorate the resolution of the measurement image with respect to the size of the segment area set in S101 is determined.
  • the setter causes the CPU 21 to irradiate the DP light through the setting device in a state where the reference plane is arranged in the target area, and obtain a reference image (S2). Then, the CPU 21 is caused to set the segment area based on the acquired reference image, the size of the segment area set in S1 and the interval between the segment areas (S3).
  • FIG. 13 is a diagram showing a flow of segment area setting processing.
  • First, the CPU 21 sets a reference pattern area on the reference image (S301), and stores in the memory 26 information on the position of the reference pattern area on the CMOS image sensor 240 and the pixel values of all the pixels included in the reference pattern area (S302).
  • the setter specifies the size of the segment area via the setting device.
  • the CPU 21 specifies the size of the vertical and horizontal widths of the segment area stored in the memory 26 (S303).
  • the segment area interval corresponding to the size of the segment area is read from the table generated in S104 and set to the variable n (S304).
  • a plurality of segment area sizes can be selected, and the interval between the segment areas is set according to the size of the segment area. For example, when the size of the segment area of 15 pixels × 15 pixels is selected, 2 pixels are set as the variable n as the interval between the segment areas.
  • Next, the CPU 21 determines whether or not the position of the segment area Si has reached the right end of the reference pattern area (S308). If the position of the segment area Si has not reached the right end (S308: NO), 1 is added to i (S309), and the next segment area Si is set by designating the area shifted by n pixels in the X-axis positive direction from the position of the previous segment area (S310). Thereafter, the CPU 21 returns the process to S307.
  • When the position of the segment area Si has reached the right end of the reference pattern area, it is determined whether the position of the segment area Si has reached the lower end of the reference pattern area (S311).
  • the segment area Si When the segment area Si is set from the left end at the upper end of the reference pattern area to the right end at the lower end, and the position information of the segment area Si is stored in the memory 26 (S311: YES), FIG. 11 (a) or FIG. 11 (b).
  • the segment areas are set so as to be arranged at intervals of n pixels in the X-axis direction and the Y-axis direction in accordance with the size of the segment area, and the processing ends. Note that the pattern of pixel values in each segment area differs for each segment area.
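The raster placement described above can be summarized by the following sketch, which lays out square segment areas over the reference pattern area at n-pixel steps, left to right and top to bottom. This is a simplified reading of the flow in FIG. 13, not the patent's own code; the function name, the coordinate convention (top-left origin, X to the right, Y downward) and the handling of the last row and column are assumptions.

```python
def place_segment_areas(pattern_width, pattern_height, side, n):
    """Return the top-left (x, y) corner of each segment area, laid out at
    n-pixel intervals over the reference pattern area (top-left origin)."""
    positions = []
    y = 0
    while y + side <= pattern_height:      # stop once the lower end is reached
        x = 0
        while x + side <= pattern_width:   # stop once the right end is reached
            positions.append((x, y))
            x += n                         # shift n pixels in the X-axis positive direction
        y += n                             # move down by n pixels for the next row
    return positions


# Example with the 15 x 15 / 2-pixel case of S304 on a hypothetical 640 x 480 area.
print(len(place_segment_areas(640, 480, side=15, n=2)))
```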
In this way, information on the position of the reference pattern area on the CMOS image sensor 240, the pixel values (reference pattern) of all pixels included in the reference pattern area, and information on the position of each segment area set for the reference pattern area are stored in the memory 26 as a reference template. Here, only the information on the position on the CMOS image sensor 240 is stored in the memory 26 as the segment area information, but the pixel values within each segment area may be stored as well.
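As a rough picture of what such a reference template could hold, the following sketch groups the three items listed above into one structure. The field names and types are purely illustrative; the patent only specifies what information is stored, not how it is laid out in the memory 26.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class ReferenceTemplate:
    # Position of the reference pattern area on the CMOS image sensor 240
    pattern_origin: Tuple[int, int]             # top-left pixel (x, y)
    pattern_size: Tuple[int, int]               # (width, height) in pixels
    # Pixel values (reference pattern) of all pixels in the reference pattern area
    reference_pattern: List[List[int]]          # row-major pixel values
    # Positions of the segment areas set on the reference pattern area
    segment_positions: List[Tuple[int, int]] = field(default_factory=list)
    # Optionally, the pixel values within each segment area may also be kept
    segment_pixels: List[List[List[int]]] = field(default_factory=list)
```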
The CPU 21 in FIG. 2 then calculates the distance to each part of the object based on the amount of displacement of the dot pattern in each segment area obtained from the reference template. As described above, the distance is calculated by triangulation using the displacement amount of each segment area.
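The patent does not reproduce the triangulation formula in this passage, so the following is only a generic sketch of how a pixel displacement relative to a reference plane is commonly converted into a distance for a projector and camera pair. The symbols (baseline, focal length, pixel pitch, reference distance) and the numbers used are assumptions for illustration, not values from the document.

```python
def distance_from_shift(shift_pixels, baseline_m, focal_m, pixel_pitch_m, ref_dist_m):
    """Generic active-triangulation relation: a dot imaged at distance Z is
    displaced, relative to its position on a reference plane at Zref, by
    d = f * B * (1/Z - 1/Zref) / p pixels; this function inverts that relation."""
    inv_z = shift_pixels * pixel_pitch_m / (focal_m * baseline_m) + 1.0 / ref_dist_m
    return 1.0 / inv_z


# Hypothetical numbers for illustration only.
z = distance_from_shift(shift_pixels=3.0,
                        baseline_m=0.025,       # projector-camera baseline
                        focal_m=0.003,          # imaging focal length
                        pixel_pitch_m=3e-6,     # sensor pixel pitch
                        ref_dist_m=1.5)         # distance of the reference plane
print(round(z, 3))                              # ~1.271 m for these assumed values
```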
As described above, the interval between the segment areas is set larger in accordance with the size of the segment area, so that an increase in the amount of calculation required for distance detection can be suppressed. In particular, by setting the interval between the segment areas to about 1/4 of the number of pixels on one side of the segment area, the increase in the amount of calculation for distance detection is suppressed while the object detection accuracy is maintained.
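A small back-of-the-envelope check of that claim, under assumed numbers (a 640 × 480 reference pattern area and 15 × 15 segment areas, neither of which is specified in this passage): the number of segment areas to be collated scales roughly with 1/n² in the interval n, so widening the interval from 1 pixel to 4 pixels cuts the matching work by roughly a factor of 16.

```python
def segment_count(width, height, side, n):
    """Approximate number of segment areas placed at an interval of n pixels."""
    return ((width - side) // n + 1) * ((height - side) // n + 1)


for n in (1, 2, 4):
    print(n, segment_count(640, 480, side=15, n=n))
# n=1 -> 291,716 areas; n=4 -> 18,369 areas: roughly a 16x reduction in collations
```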
In the above embodiment, the segment area interval corresponding to each segment area size is set in the table T in advance; however, instead of holding the interval in a table, the interval may be calculated directly from the segment area size when it is read. FIG. 14 is a diagram showing the flow of the segment area setting process in this case. Processing that is the same as in FIG. 13 of the above embodiment is given the same reference numerals, and its description is omitted.

In this case, the CPU 21 divides the number of pixels included in one side of the segment area by 4 (S321) and rounds off the fraction after the decimal point (S322). The value thus obtained is set in the variable n as the interval between the segment areas (S304). Thereafter, as in the above embodiment, the segment areas Si are set at n-pixel intervals (S305 to S312).
Also in this case, the interval between the segment areas is set to about 1/4 of the number of pixels included in one side of the segment area, thereby reducing the amount of distance calculation while maintaining the object detection accuracy. The interval between the segment areas may be made somewhat smaller or larger depending on the required detection accuracy.
In the above embodiment, the size of the segment area can be selected by the setter, but the size of the segment area may instead be determined in advance. Further, in the above embodiment, the interval between the segment areas is set to about 1/4 of the number of pixels included in one side of the segment area; however, as long as the interval between the segment areas can be increased according to the size of the segment area, other values may be set. For example, the interval of the segment areas may be set to any number of pixels from 2 pixels up to an upper limit value, the upper limit being approximately 1/4 of the number of pixels included in one side of the segment area. Alternatively, an optimum value of the segment area interval may be determined individually in advance from among the numbers of pixels between 2 pixels and the upper limit value, and the segment area interval may be decided by holding that value in a table.

In the above embodiment, the segment areas have the same vertical and horizontal sizes and are square, but one side may be set larger than the other; in that case, the interval between the segment areas is determined based on the number of pixels included in either side.
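To illustrate the range constraint just described (an interval of at least 2 pixels and at most roughly a quarter of the side length), a minimal sketch follows. The clamping function and its name are not from the patent; they simply show one way the lower and upper bounds could be combined, again assuming rounding to the nearest integer for the upper limit.

```python
def clamp_interval(requested_n: int, side_pixels: int) -> int:
    """Keep the segment area interval between 2 pixels and an upper limit of
    about 1/4 of the number of pixels on one side of the segment area."""
    upper = max(2, int(side_pixels / 4 + 0.5))
    return min(max(requested_n, 2), upper)


print(clamp_interval(1, 15))   # -> 2 (lower bound of 2 pixels)
print(clamp_interval(10, 15))  # -> 4 (upper limit, about 15 / 4)
```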
In the above embodiment, an error is determined based on whether the difference between the Rsad value with the highest matching rate and the Rsad value with the next highest matching rate exceeds a threshold; alternatively, an error may be determined based on whether the Rsad value with the highest matching rate exceeds a predetermined threshold. Further, in the above embodiment, the pixel values of the pixels included in the segment area and the comparison area are binarized before the matching rate between the segment area and the comparison area is calculated; however, the matching may be performed using the pixel values as they are. In addition, in the above embodiment, the pixel values obtained by the CMOS image sensor 240 are binarized as they are; however, the pixel values may be binarized or multi-valued after correction processing such as predetermined pixel weighting processing and background light removal processing is performed.
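The following sketch puts the collation and error checks described above together: pixel values are binarized with a simple threshold (the actual binarization rule is not given in this passage and is an assumption), Rsad is computed as a sum of absolute differences between the segment area and each comparison area along the search direction, and a match is rejected either when the best Rsad exceeds an absolute threshold or when it is not sufficiently better than the second-best Rsad. The patent presents these as alternative criteria; the sketch applies both for illustration. All threshold values, names and the one-dimensional search along X are illustrative, and the search window is assumed to stay inside the measured image.

```python
import numpy as np


def binarize(img, thresh=128):
    """Assumed binarization: 1 where the pixel value exceeds thresh, else 0."""
    return (np.asarray(img) > thresh).astype(np.uint8)


def match_segment(segment, measured, x0, y0, search, abs_thresh, margin_thresh):
    """Collate one binarized segment area against comparison areas shifted
    along X in the binarized measured image; return the pixel shift or None."""
    h, w = segment.shape
    rsad = []
    for dx in range(-search, search + 1):
        comp = measured[y0:y0 + h, x0 + dx:x0 + dx + w]
        rsad.append(np.abs(segment.astype(int) - comp.astype(int)).sum())
    order = np.argsort(rsad)
    best, second = rsad[order[0]], rsad[order[1]]
    if best > abs_thresh:               # even the best match is too poor
        return None
    if second - best < margin_thresh:   # best match not clearly better than runner-up
        return None
    return int(order[0]) - search       # displacement (pixel shift) in pixels
```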
In the above embodiment, the distance information is obtained by the triangulation method and stored in the memory 26; however, the displacement amount (pixel shift amount) of each segment area may be acquired as the distance information as it is, without calculating the distance by the triangulation method.
In the above embodiment, the FMD 150 is used in the projection optical system 100, but the FMD 150 may be omitted. Further, the filter 230 is arranged to remove light in wavelength bands other than the wavelength band of the laser light irradiated onto the target region; when little light other than the laser light irradiated onto the target region enters the light receiving optical system, the filter 230 can be omitted. The aperture 210 may also be arranged between any two of the imaging lenses.
In the above embodiment, the CMOS image sensor 240 is used as the light receiving element, but a CCD image sensor can be used instead. Furthermore, the configurations of the projection optical system 100 and the light receiving optical system 200 can be changed as appropriate. Further, the information acquisition device 1 and the information processing device 2 may be integrated, or the information acquisition device 1 and the information processing device 2 may be integrated with a television, a game machine, or a personal computer.
DESCRIPTION OF SYMBOLS
1 ... Information acquisition apparatus
21 ... CPU (distance acquisition unit)
21b ... Distance acquisition unit
24 ... Imaging signal processing circuit (distance acquisition unit)
26 ... Memory (storage unit)
100 ... Projection optical system
110 ... Laser light source
120 ... Collimator lens
140 ... DOE (diffractive optical element)
200 ... Light receiving optical system

Abstract

The present invention relates to an object detection device and an information acquisition device capable of suppressing an increase in the amount of calculation while maintaining object detection accuracy. The information acquisition device (1) comprises a projection optical system (100); a light receiving optical system (200) that images a target region by means of a CMOS image sensor (240); a memory (26) that stores a reference image captured when laser light is projected onto a reference surface; and a distance acquisition unit that acquires a distance by setting segment areas in the reference image and collating the dots in the segment areas with the actual measured image captured during distance measurement. The distance acquisition unit sets the size of the segment areas so as to form a square of at least five pixels vertically and at least five pixels horizontally, and sets the spacing between adjacent segment areas to at least two pixels and no more than an upper limit value that depends on the size of the segment areas. As a result, it is possible to suppress an increase in calculation without losing object detection accuracy.
PCT/JP2012/069125 2011-08-26 2012-07-27 Dispositif de détection d'objets et dispositif d'acquisition d'informations WO2013031447A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011-184307 2011-08-26
JP2011184307A JP2014211305A (ja) 2011-08-26 2011-08-26 物体検出装置および情報取得装置

Publications (1)

Publication Number Publication Date
WO2013031447A1 true WO2013031447A1 (fr) 2013-03-07

Family

ID=47755950

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/069125 WO2013031447A1 (fr) 2011-08-26 2012-07-27 Dispositif de détection d'objets et dispositif d'acquisition d'informations

Country Status (2)

Country Link
JP (1) JP2014211305A (fr)
WO (1) WO2013031447A1 (fr)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000002529A (ja) * 1998-06-16 2000-01-07 Honda Motor Co Ltd 車両用距離測定装置
JP2004191092A (ja) * 2002-12-09 2004-07-08 Ricoh Co Ltd 3次元情報取得システム
JP2005246033A (ja) * 2004-02-04 2005-09-15 Sumitomo Osaka Cement Co Ltd 状態解析装置
JP2006113832A (ja) * 2004-10-15 2006-04-27 Canon Inc ステレオ画像処理装置およびプログラム
JP2009204991A (ja) * 2008-02-28 2009-09-10 Funai Electric Co Ltd 複眼撮像装置
JP2010101683A (ja) * 2008-10-22 2010-05-06 Nissan Motor Co Ltd 距離計測装置および距離計測方法

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108235736A (zh) * 2017-12-25 2018-06-29 深圳前海达闼云端智能科技有限公司 一种定位方法、云端服务器、终端、系统、电子设备及计算机程序产品
CN108235736B (zh) * 2017-12-25 2021-11-16 达闼机器人有限公司 一种定位方法、云端服务器、终端、系统、电子设备及计算机程序产品
WO2022116675A1 (fr) * 2020-12-02 2022-06-09 达闼机器人股份有限公司 Procédé et appareil de détection d'objet, support de stockage et dispositif électronique

Also Published As

Publication number Publication date
JP2014211305A (ja) 2014-11-13

Similar Documents

Publication Publication Date Title
WO2012137674A1 (fr) Dispositif d'acquisition d'informations, dispositif de projection et dispositif de détection d'objets
JP5138116B2 (ja) 情報取得装置および物体検出装置
JP5138119B2 (ja) 物体検出装置および情報取得装置
US20180180248A1 (en) Pattern projection using microlenses
JP5143312B2 (ja) 情報取得装置、投射装置および物体検出装置
JPWO2012147495A1 (ja) 情報取得装置および物体検出装置
WO2013046927A1 (fr) Dispositif d'acquisition d'informations et dispositif détecteur d'objet
WO2014108976A1 (fr) Dispositif de détection d'objet
JP2014137762A (ja) 物体検出装置
JP2012237604A (ja) 情報取得装置、投射装置および物体検出装置
JP2014044113A (ja) 情報取得装置および物体検出装置
JP5138115B2 (ja) 情報取得装置及びその情報取得装置を有する物体検出装置
WO2013031447A1 (fr) Dispositif de détection d'objets et dispositif d'acquisition d'informations
WO2012144340A1 (fr) Dispositif d'acquisition d'informations, et dispositif de détection d'objet
WO2013015146A1 (fr) Dispositif de détection d'objet et dispositif d'acquisition d'informations
JPWO2013015145A1 (ja) 情報取得装置および物体検出装置
JP2013246009A (ja) 物体検出装置
JP2014085257A (ja) 情報取得装置および物体検出装置
WO2013046928A1 (fr) Dispositif d'acquisition d'informations et dispositif de détection d'objet
EP3987248B1 (fr) Projection d'un motif lumineux structuré à partir d'un appareil doté d'un écran delo
JP2014035294A (ja) 情報取得装置および物体検出装置
JP2014035304A (ja) 情報取得装置および物体検出装置
WO2013031448A1 (fr) Dispositif de détection d'objets et dispositif d'acquisition d'informations
JP2013234956A (ja) 情報取得装置および物体検出装置
JP2013234957A (ja) 情報取得装置および物体検出装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12828358

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12828358

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP