WO2005045363A1 - Three-dimensional shape detection device, imaging device, and three-dimensional shape detection program - Google Patents
Three-dimensional shape detection device, imaging device, and three-dimensional shape detection program
- Publication number
- WO2005045363A1 PCT/JP2004/016632
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pattern light
- slit light
- detection
- dimensional shape
- trajectory
- Prior art date
Links
- 238000001514 detection method Methods 0.000 title claims abstract description 210
- 238000003384 imaging method Methods 0.000 title claims abstract description 82
- 238000004364 calculation method Methods 0.000 claims abstract description 69
- 238000000034 method Methods 0.000 claims description 56
- 238000000605 extraction Methods 0.000 claims description 24
- 238000003702 image correction Methods 0.000 claims description 3
- 238000012545 processing Methods 0.000 description 65
- 230000008569 process Effects 0.000 description 46
- 230000005484 gravity Effects 0.000 description 20
- 238000005286 illumination Methods 0.000 description 17
- 238000010586 diagram Methods 0.000 description 16
- 230000000694 effects Effects 0.000 description 13
- 238000006243 chemical reaction Methods 0.000 description 11
- 230000003287 optical effect Effects 0.000 description 10
- 238000012937 correction Methods 0.000 description 9
- 239000007787 solid Substances 0.000 description 5
- 230000006870 function Effects 0.000 description 4
- 230000004075 alteration Effects 0.000 description 3
- 238000006073 displacement reaction Methods 0.000 description 3
- 230000002411 adverse Effects 0.000 description 2
- 230000008859 change Effects 0.000 description 2
- 239000011248 coating agent Substances 0.000 description 2
- 238000000576 coating method Methods 0.000 description 2
- 239000006185 dispersion Substances 0.000 description 2
- 239000000284 extract Substances 0.000 description 2
- 239000004973 liquid crystal related substance Substances 0.000 description 2
- 239000000463 material Substances 0.000 description 2
- 238000003825 pressing Methods 0.000 description 2
- 230000004044 response Effects 0.000 description 2
- 230000000007 visual effect Effects 0.000 description 2
- 239000006117 anti-reflective coating Substances 0.000 description 1
- 238000005452 bending Methods 0.000 description 1
- 239000003086 colorant Substances 0.000 description 1
- 239000000470 constituent Substances 0.000 description 1
- 239000011521 glass Substances 0.000 description 1
- 239000011159 matrix material Substances 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 238000001465 metallisation Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000000059 patterning Methods 0.000 description 1
- 230000002093 peripheral effect Effects 0.000 description 1
- 230000003595 spectral effect Effects 0.000 description 1
- 239000000758 substrate Substances 0.000 description 1
- 230000007704 transition Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B11/25—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/521—Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
Definitions
- 3D shape detection device, imaging device, and 3D shape detection program
- the present invention relates to a three-dimensional shape detection device that detects a three-dimensional shape of a target object using pattern light, an imaging device, and a three-dimensional shape detection program.
- a whiteboard, a book, or the like is imaged as a target object, and the three-dimensional shape of the target object is detected from the captured image; there is known
- an imaging apparatus including a correcting unit that corrects the captured image so that, even when the whiteboard, the book, or the like is imaged from a position oblique to its front surface, the image appears as if captured from the front.
- FIG. 1 and the like in Japanese Patent Application Laid-Open No. Hei 9-289611 (hereinafter referred to as Document 1) disclose a portable digital camera provided with such a correction unit.
- there is also known a stationary three-dimensional shape measuring apparatus that captures an image with slit light (an example of pattern light) projected and an image without the slit light projected, extracts the slit light by subtracting the latter image from the former, and detects the three-dimensional shape of the target object based on the extracted slit light.
- the above-described three-dimensional shape measuring apparatus is a stationary type, and the degree of freedom during imaging is limited, which is inconvenient. Therefore, it is desirable that the three-dimensional shape measuring apparatus be portable.
- if such an apparatus is made portable, however, the capturing position of the image with slit light projected and that of the image without slit light projected may shift due to "camera shake". In that case, naturally, a deviation arises between the image with slit light projected and the image without slit light projected, and the subtraction between the two images can no longer isolate the slit light accurately.
- the present invention has been made to solve the above-described problem, and performs a detection process for detecting pixels constituting a locus of pattern light from a pattern light projection image at high speed and with high accuracy.
- An object of the present invention is to provide a three-dimensional shape detection device, an imaging device, and a three-dimensional shape detection program that can perform the above-described operations.
- according to one aspect of the present invention, there is provided a three-dimensional shape detection device comprising: light projecting means for projecting pattern light; imaging means for capturing a pattern light projection image of a target object in a state where the pattern light is projected; and
- three-dimensional shape calculating means for calculating a three-dimensional shape of the target object based on a trajectory of the pattern light extracted from the pattern light projection image captured by the imaging means.
- the three-dimensional shape detection device includes: first area setting means for setting, from within the pattern light projection image, a first area including a part of the trajectory of the pattern light extending in the X-axis direction; first detection means for detecting, from within the first area set by the first area setting means, a plurality of pixels constituting that part of the trajectory; approximate curve calculation means for calculating an approximate curve related to the trajectory of the pattern light based on the plurality of pixels detected by the first detection means; and second area setting means for setting, based on the calculated approximate curve, a second area for detecting the pixels constituting the remaining portion of the trajectory. The trajectory of the pattern light is extracted based on the plurality of pixels detected by the first detection means and the pixels detected from within the second area set by the second area setting means.
- the second area setting means sets the second area, used for detecting the pixels constituting the remaining portion of the trajectory of the pattern light, based on the approximate curve related to that trajectory. The second area can therefore be limited to a region in which those pixels are very likely to be included, and adverse effects such as detecting pixels other than those constituting the pattern light are suppressed. As a result, the detection processing for detecting the pixels forming the trajectory of the pattern light from the pattern light projection image can be performed at high speed and with high accuracy.
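A minimal sketch of this two-stage detection, assuming the pattern light projection image has already been reduced to a per-pixel score array (higher where a pixel looks like slit light), a NumPy environment, and hypothetical names throughout:

```python
import numpy as np

def extract_trajectory(score, x_first, half_h=10, deg=2, thresh=50.0):
    """Two-stage trajectory extraction sketch.

    score   : 2D array (H x W) of per-pixel pattern-light scores.
    x_first : columns belonging to the first area (part of the trajectory).
    Returns a dict mapping column x -> detected row y.
    """
    h, w = score.shape
    found = {}

    # Stage 1: detect trajectory pixels inside the first area by taking
    # the strongest-scoring row in each of its columns.
    for x in x_first:
        y = int(np.argmax(score[:, x]))
        if score[y, x] > thresh:
            found[x] = y

    # Fit an approximate curve y = f(x) to the stage-1 detections.
    xs = np.array(sorted(found))
    ys = np.array([found[x] for x in xs])
    coeffs = np.polyfit(xs, ys, deg)

    # Stage 2: for the remaining columns, search only a narrow band
    # (the "second area") around the predicted curve position.
    for x in set(range(w)) - set(x_first):
        y_pred = int(np.polyval(coeffs, x))
        lo, hi = max(0, y_pred - half_h), min(h, y_pred + half_h)
        if lo >= hi:
            continue
        y = lo + int(np.argmax(score[lo:hi, x]))
        if score[y, x] > thresh:
            found[x] = y
    return found
```

Restricting stage 2 to the band around the fitted curve is what suppresses false detections from bright non-pattern pixels elsewhere in the column.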
- there is also provided a three-dimensional shape detection program including a three-dimensional shape calculation step of calculating a three-dimensional shape of a target object based on the trajectory of the pattern light extracted from a pattern light projection image obtained by imaging the target object in a state where the pattern light is projected, the program being configured as follows.
- that is, the program includes: a first area setting step of setting, from within the pattern light projection image, a first area including a part of the trajectory of the pattern light; a first detection step of detecting, from within the first area set in the first area setting step, pixels constituting that part of the trajectory; an approximate curve calculating step of calculating an approximate curve related to the trajectory of the pattern light based on the pixels detected in the first detection step; and a second area setting step of setting, based on the calculated approximate curve, a second area for detecting, from the pattern light projection image, the pixels constituting the remaining portion of the trajectory.
- the trajectory of the pattern light is extracted based on a plurality of pixels detected in the first detection step and pixels detected from within the second area set in the second area setting step.
- FIG. 1 (a) is an external perspective view of an imaging device
- FIG. 1 (b) is a schematic sectional view of the imaging device 1.
- FIG. 2 is a diagram showing a configuration of a slit light projecting unit.
- FIG. 3 (a) and FIG. 3 (b) are diagrams for explaining the angular width of slit light.
- FIG. 4 is a block diagram showing an electrical configuration of the imaging device.
- FIG. 5 is a flowchart showing a processing procedure in a processor.
- FIG. 6 (a) and FIG. 6 (b) are diagrams for explaining the principle of slit light trajectory extraction processing.
- FIG. 7 is a flowchart showing a slit light locus extraction process.
- FIG. 8 is a flowchart showing a slit light barycenter position calculation process.
- FIG. 9 is a flowchart showing an approximate curve calculation process.
- FIG. 10 is a flowchart showing a process when updating an approximate curve.
- FIG. 11 is a diagram for explaining a method of setting a search range when searching for a corresponding pixel in a slit-light-absent image.
- FIG. 12 (a) shows a captured image of a document P in a state where slit light is irradiated.
- FIG. 12 (b) is an enlarged view schematically showing peripheral pixels at the slit light detection position cX.
- FIG. 13 is a diagram for explaining a method of setting a search range in a slit-light-free image in consideration of a camera shake amount.
- FIG. 14 is an enlarged view schematically showing a portion A in FIG. 12 (a).
- FIG. 15 is an enlarged view schematically showing a portion B in FIG. 12 (a).
- FIG. 16 is a diagram showing a captured image of a document in a filed state.
- FIGS. 17 (a) and 17 (b) show images with slit light.
- FIGS. 18 (a) and 18 (b) are diagrams for explaining a three-dimensional space position calculation method.
- FIGS. 19 (a) to 19 (c) are diagrams for explaining a coordinate system at the time of document orientation calculation.
- FIG. 20 is a flowchart showing a plane conversion process.
- Imaging device including a three-dimensional shape detection device
- Slit light projecting unit (pattern light projecting means)
- FIG. 1A is an external perspective view of an imaging device 1 of the present invention,
- and FIG. 1B is a schematic sectional view of the imaging device 1.
- the three-dimensional shape detection device according to the embodiment of the present invention is a device included in the imaging device 1.
- the three-dimensional shape detection program as an embodiment of the present invention is a program executed in the imaging device 1 under the control of the CPU 41 (see FIG. 4).
- the imaging device 1 is provided with a rectangular box-shaped main body case 10, an imaging lens 31 provided in front of the main body case 10, and provided behind the imaging lens 31 (inside the imaging device 1). And a CCD image sensor 32.
- the imaging apparatus 1 further includes a slit light projecting unit 20 provided below the imaging lens 31, a processor 40 built into the main body case 10, a release button 52 provided on the upper part of the main body case 10, a mode switching switch 59, and a memory card 55 built into the main body case 10. These components are connected by signal lines as shown in FIG. 4.
- the imaging device 1 further includes an LCD (Liquid Crystal Display) 51 provided on the back of the main body case 10, used when the user determines the imaging range of the imaging device 1, and a finder 53 disposed so as to pass through the main body case 10 from its back to its front.
- the imaging lens 31 includes a plurality of lenses.
- the imaging apparatus 1 has an autofocus function, and drives the imaging lens 31 so that external light is focused on the CCD image sensor 32 by automatically adjusting the focal length and the aperture.
- the CCD image sensor 32 is configured by arranging photoelectric conversion elements such as CCD (Charge Coupled Device) elements in a matrix.
- the CCD image sensor 32 generates a signal corresponding to the color and intensity of the light of the image formed on the surface, converts the signal into digital data, and outputs the digital data to the processor 40.
- the data from one CCD element is the pixel data of one pixel of the image, and the image data is composed of as many pixel data as there are CCD elements.
- the release button 52 is configured by a push-button switch.
- the release button 52 is connected to the processor 40, and the processor 40 detects a user's pressing operation of the release button 52.
- the mode switching switch 59 includes a slide switch that can be switched between two positions.
- the processor 40 is assigned such that one switch position of the mode switching switch 59 is detected as “normal mode” and the other switch position is detected as “corrected imaging mode”.
- "Normal mode” refers to the original document In this mode, P itself is used as image data.
- the “corrected imaging mode” is a mode in which, when the document P is imaged from an oblique direction, the image data is corrected as if the document P was imaged from the front.
- the memory card 55 is configured by a nonvolatile and rewritable memory, and is detachable from the main body case 10.
- the LCD 51 is configured by a liquid crystal display or the like, and displays an image upon receiving an image signal from the processor 40. Depending on the situation, the processor 40 sends the LCD 51 an image signal for displaying the real-time image received by the CCD image sensor 32, an image stored in the memory card 55, characters showing the device settings, and so on.
- the finder 53 is constituted by an optical lens.
- the viewfinder 53 is configured such that, when the user looks into it from the rear side of the imaging apparatus 1, a range substantially matching the range that the imaging lens 31 forms as an image on the CCD image sensor 32 can be seen.
- FIG. 2 is a diagram showing a configuration of the slit light projecting unit 20.
- FIG. 3 is a diagram for explaining the angular width of the slit light.
- the slit light projecting unit 20 includes a laser diode 21, a collimating lens 22, an aperture 23, a transparent flat plate 24, a cylindrical lens 25, a reflecting mirror 26, and a rod lens 27.
- the laser diode 21 emits a red laser beam.
- emission and stopping of the laser beam from the laser diode 21 are switched in response to commands from the processor 40.
- the output of the laser diode 21 is adjusted, relative to its maximum rated output (for example, 5 mW), so that a constant output (for example, 1 mW) is obtained at the point of passage through the aperture 23, taking into account individual variation in the spread angle of the laser beam.
- the collimating lens 22 focuses the laser beam from the laser diode 21 so as to focus on a reference distance VP (for example, 330 mm) from the slit light projecting unit 20.
- the aperture 23 is formed of a plate having a rectangular opening, and transmits the laser beam from the collimating lens 22 through the opening to be shaped into a rectangle.
- the transparent flat plate 24 is made of a solid transparent material such as glass, and has an AR (anti-reflective) coating on its back surface.
- the transparent flat plate 24 is disposed at a predetermined angle with respect to the laser beam from the aperture 23.
- the transparent flat plate 24 reflects about 5% (about 50 µW) of the power of the laser beam incident from the aperture 23 at its surface and transmits about 95% (about 950 µW). Note that the direction in which the laser beam is reflected by the transparent flat plate 24 (the direction in front of the imaging device 1 and upward by 33 degrees with respect to the horizontal plane) is referred to as the second direction.
- owing to the AR coating, reflection of the laser beam when exiting the transparent flat plate 24 is reduced, and the loss of the laser beam inside the transparent flat plate 24 is reduced.
- moreover, since the beam is split by surface reflection, the process of forming a metal deposition film on a substrate, which would be required to realize the same function with an ordinary half mirror, can be omitted.
- the reflection mirror 26 is made of a member such as a mirror that totally reflects the laser beam.
- the reflection mirror 26 is arranged at a 45-degree angle in front of the main body case 10 downstream of the laser beam transmitted through the transparent flat plate 24, and totally reflects the laser beam to change the direction of the optical path by 90 degrees.
- the direction in which the laser beam is reflected by the reflection mirror 26 (the direction of 0 ° with respect to the horizontal plane in front of the imaging device 1) is referred to as a first direction.
- the rod lens 27 is a cylindrical lens having a short positive focal length, and is disposed downstream of the laser beam reflected by the reflection mirror 26 so that the axis of its cylindrical shape is vertical.
- because the focal length of the rod lens 27 is short, the laser beam that has passed through it immediately begins to spread from the focal position near the lens, as shown in FIG. 3 (a), and is emitted in the first direction as slit light with a predetermined spread angle ε (for example, 48 degrees). The slit light emitted from the rod lens 27 is referred to as the first slit light 71.
- the cylindrical lens 25 is a lens having a concave shape in one direction so as to have a negative focal length.
- the cylindrical lens 25 is disposed downstream of the laser beam reflected by the transparent flat plate 24 so that the lens surface is orthogonal to the second direction.
- the cylindrical lens 25 emits the laser beam incident from the transparent flat plate 24 as slit light which spreads at a spread angle ⁇ .
- the slit light emitted from the cylindrical lens 25 is referred to as a second slit light 72.
- the spread angle β of the cylindrical lens 25 is chosen so that the ratio of the spread angle β of the second slit light 72 to the spread angle ε of the first slit light 71 equals the ratio of the powers into which the laser beam is split by the transparent flat plate 24. That is, the spread angle β of the second slit light 72 is 5% of the spread angle ε of the first slit light 71, i.e., 2.4 degrees.
- the slit light projecting unit 20 emits a laser beam from the laser diode 21 in response to a command from the processor 40, and outputs the first slit light 71 in the first direction, and The second slit light 72 is emitted in the second direction from the window 29 provided below the imaging lens 31 of the main body case 10.
- since the first slit light 71 and the second slit light 72 are generated from the red laser beam, among the R, G, and B values in RGB space their spectral component consists mainly of the R value.
- the power of the first slit light 71 split off by the transparent flat plate 24 is 95% of the power output by the laser diode 21, while
- the power of the second slit light 72 is as small as about 5%. Looking at the power per angular width, however, the first slit light 71, with its spread angle of 48 degrees, has a power per unit angle of about 20 µW/degree, and the second slit light 72, with its spread angle of 2.4 degrees, about 21 µW/degree; the power per unit angle of the two slit lights is almost the same.
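The power-per-unit-angle figures quoted above can be checked directly from the stated numbers (about 1 mW through the aperture, a 95%/5% split, and spread angles of 48 and 2.4 degrees):

```python
# Verify the power-per-unit-angle figures quoted in the description.
total_uW = 1000.0          # ~1 mW passing through the aperture 23

p1 = 0.95 * total_uW       # first slit light 71: ~950 uW over 48 degrees
p2 = 0.05 * total_uW       # second slit light 72: ~50 uW over 2.4 degrees

per_deg_1 = p1 / 48.0      # ~19.8 uW/degree ("about 20")
per_deg_2 = p2 / 2.4       # ~20.8 uW/degree ("about 21")

print(round(per_deg_1, 1), round(per_deg_2, 1))
```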
- the illuminance of the first slit light 71 and the second slit light 72 is thus about 1260 lux, so that even at a location with a typical indoor brightness of 500-1000 lux there is a sufficient luminance difference between the locus of the slit light and the document P. The slit light trajectory can therefore be reliably extracted by the slit light trajectory extraction program 422 described later.
- FIG. 4 is a block diagram showing an electrical configuration of the imaging device 1.
- the processor 40 mounted on the imaging device 1 includes a CPU 41, a ROM 42, and a RAM 43.
- the CPU 41, using the RAM 43 and in accordance with programs stored in the ROM 42, performs various processing such as detecting a pressing operation of the release button 52, capturing image data from the CCD image sensor 32, writing the image data to the memory card 55, detecting the state of the mode switching switch 59, and switching the emission of slit light by the slit light projecting unit 20.
- the ROM 42 includes a camera control program 421, a slit light locus extraction program 422, a triangulation measurement calculation program 423, a document attitude calculation program 424, a plane conversion program 425, a luminance dispersion calculation program 426, a cross-correlation coefficient calculation program 427, A corresponding pixel search program 428 and an approximate curve calculation program 429 are stored.
- the camera control program 421 is a program related to the control of the entire imaging apparatus 1 including the processing of the flowchart shown in FIG. 5 (details will be described later).
- the slit light locus extraction program 422 is a program for extracting the locus of the slit light from the image of the document P on which the slit light is projected.
- the triangulation calculation program 423 is a program for calculating a three-dimensional spatial position of each of the pixels of the locus of the slit light extracted by the slit light locus extraction program 422.
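The patent does not spell out the triangulation formulas here, but the principle can be illustrated with a hypothetical 2D sketch: a pinhole camera at the origin looking along +Z, and a slit projector offset a known baseline below it emitting a tilted plane of light. All names and the geometry are illustrative assumptions, not the patent's formulation:

```python
import math

def slit_depth(v, f, D, theta_deg):
    """Hypothetical 2D laser-triangulation sketch.

    v         : vertical image coordinate in pixels (positive = up)
    f         : focal length in pixels
    D         : baseline; projector sits D units below the lens
    theta_deg : tilt of the projected light plane above the horizontal
    Returns (Y, Z) of the illuminated surface point: the intersection of
    the camera ray Y = (v/f)*Z with the light plane Y = -D + Z*tan(theta).
    """
    t = math.tan(math.radians(theta_deg))
    z = D / (t - v / f)   # solve (v/f)*Z = -D + Z*tan(theta) for Z
    y = v * z / f
    return y, z
```

The per-pixel positions computed this way are what a triangulation result store like storage unit 434 would hold.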
- the document posture calculation program 424 is a program for estimating and obtaining the three-dimensional shape of the document P from the three-dimensional spatial positions of the trajectory 71a of the first slit light and the trajectory 72a of the second slit light.
- the plane conversion program 425 is a program for converting the image data stored in the slit-light-absent image storage unit 432, given the position and orientation of the document P, into an image as if the document P were captured from the front.
- the luminance variance calculation program 426 is a program for calculating a standard deviation of the color values for each small region of the slit-light-absent image.
- the cross-correlation coefficient calculation program 427 is a program for calculating the amount of deviation between the image with slit light and the image without slit light.
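The description does not detail how the cross-correlation coefficient is computed; a common approach, sketched here as an assumption rather than the patent's method, exhaustively scores candidate shifts with normalized cross-correlation and keeps the best one:

```python
import numpy as np

def estimate_shift(img_a, img_b, max_shift=5):
    """Estimate the (dy, dx) offset of img_b relative to img_a by
    maximizing normalized cross-correlation over a small window of
    candidate shifts. Hypothetical sketch; images are equal-shape 2D
    float arrays (e.g. luminance values)."""
    best, best_shift = -2.0, (0, 0)
    h, w = img_a.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # overlapping region of the two images under this shift
            a = img_a[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            b = img_b[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            if denom == 0:
                continue
            ncc = (a * b).sum() / denom
            if ncc > best:
                best, best_shift = ncc, (dy, dx)
    return best_shift
```

The winning offset plays the role of the camera-shake amount stored in the camera shake amount storage unit 437.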
- the corresponding pixel search program 428 is a program for searching the slit-light-absent image for a pixel corresponding to a pixel detected in the slit-light-present image.
- the approximate curve calculation program 429 is a program for calculating an approximate curve of the locus 71a of the first slit light from the pixels constituting a part of the locus 71a.
- the RAM 43 includes an image storage unit 431 with slit light, an image storage unit 432 without slit light, a pixel value temporary storage unit 433 to be detected, a triangulation calculation result storage unit 434, a document posture calculation result storage unit 435, A slit light locus information storage unit 436, a camera shake amount storage unit 437, an approximate curve storage unit 438, and a working area 439 are allocated as storage areas.
- the slit light image storage unit 431 and the slit light non-image storage unit 432 store image data of the slit light image and the slit light non-image from the CCD image sensor 32, respectively.
- the detected pixel value temporary storage unit 433 stores, for each pixel included in the search range in the slit-light-present image, the value (Rd·Y value) obtained by multiplying the red difference value Rd, which is the red value R minus the average of the green value G and the blue value B, by the luminance value Y.
- the triangulation calculation result storage unit 434 stores the calculated position of each point of the slit light locus.
- the document posture calculation result storage unit 435 stores the calculation result of the position and the posture of the document P.
- the slit light locus information storage unit 436 stores the barycentric position calculated in a slit light barycentric position calculation process described later.
- the camera shake amount storage unit 437 stores a shift amount between the image with slit light and the image without slit light calculated by the cross-correlation coefficient calculation program 427.
- the approximate curve storage unit 438 stores the approximate curve calculated by the approximate curve calculation program 429.
- the working area 439 temporarily stores data necessary for the calculation in the CPU 41.
- FIG. 5 is a flowchart showing a processing procedure in the processor 40 of the imaging device 1.
- the details of the slit light trajectory extraction process (S140), the triangulation calculation process (S160), the document posture calculation process (S170), and the plane conversion process (S180) will be described later.
- when the release button 52 is pressed by the user, first, the position of the mode switching switch 59 is detected, and it is determined whether the switch is at the "corrected imaging mode" position (S110). If, as a result, the switch is at the "corrected imaging mode" position (S110: Yes), the slit light projecting unit 20 is instructed to cause the laser diode 21 to emit light, and with the first slit light 71 and the second slit light 72 emitted, image data represented by RGB values is acquired from the CCD image sensor 32 as the slit-light-present image (S120). Further, in S120, the read image data is stored in the slit light image storage unit 431 of the RAM 43.
- in step S130, the slit light projecting unit 20 is instructed to stop emission of the laser diode 21, and with the first slit light 71 and the second slit light 72 not emitted, image data represented by RGB values is read from the CCD image sensor 32 as the slit-light-absent image. Further, in S130, the read image data is stored in the slit-light-absent image storage unit 432.
- the process then proceeds to step S140, where a slit light trajectory extraction process is executed. That is, in step S140, the pixels constituting the trajectories 71a and 72a of each slit light are detected by the slit light trajectory extraction program 422 from the image data of the slit-light-present image read into the slit light image storage unit 431, and data relating to those pixels is stored in the detection target pixel value temporary storage unit 433.
- the aberration correction processing (S150) is next performed. This aberration correction process corrects image distortion depending on the angle from the optical axis.
- next, a document posture calculation process (S170) is executed.
- in the document posture calculation process (S170), the position and orientation of the document P are calculated by the document posture calculation program 424 using the three-dimensional spatial positions of the slit light trajectories of the first slit light 71 and the second slit light 72 stored in the triangulation calculation result storage unit 434. The calculation result is stored in the document posture calculation result storage unit 435.
- next, a plane conversion process (S180) is executed. In the plane conversion process (S180), the image data stored in the slit-light-absent image storage unit 432 is converted by the plane conversion program 425, based on the position and orientation of the document P, into image data of an image as viewed from the front.
- on the other hand, when the switch is at the "normal mode" position, with the laser diode 21 of the slit light projecting unit 20 not emitting light and the first slit light 71 and the second slit light 72 not emitted, an image without slit light is read from the CCD image sensor 32 (S200). Next, the image data is written to the memory card 55 (S210). After step S210, the process ends.
- FIG. 6A shows a captured image of the original P in a state where the slit light has been irradiated.
- in the captured image are a character portion M arranged in a plurality of rows in the width direction of the document, an illumination reflection portion S shown as a rectangle, a print portion I whose main color component is the red (R) component, shown as a circle, and the trajectories 71a and 72a of the first and second slit lights extending in the width direction of the document P.
- a dashed line extending in the direction orthogonal to the width direction of the document P indicates the slit light detection position, and the intersection of the slit light detection position with the locus 71a of the first slit light is defined as the slit light detection pixel K.
- FIG. 6 (b) is a graph showing predetermined parameter values at the slit light detection position (see a dashed line in the figure).
- extension lines are drawn from the slit light detection position to each graph, and each graph shows the corresponding predetermined parameter value at the slit light detection position. That is, the position on the vertical axis of each graph in FIG. 6 (b) corresponds to the vertical position in FIG. 6 (a).
- As the predetermined parameter, graph A1 adopts the red value R, graph A2 the red difference value Rd, graph A3 the luminance value Y, and graph A4 the product value Rd·Y of the red difference value Rd and the luminance value Y.
- The red difference value Rd is calculated by subtracting the average of the green value G and the blue value B from the red value R (Rd = R − (G + B)/2). The red difference value Rd thus emphasizes the red value R, corresponding to the R component that is the main component of the slit light, over the other components (G value, B value) at the slit light detection position: a pixel whose red value R is close to its green value G and blue value B has a low red difference value Rd, while a pixel whose red value R is higher than its green value G and blue value B has a high red difference value Rd.
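As a minimal illustration (the function name is ours, not the patent's), the red difference value can be computed as:

```python
def red_difference(r, g, b):
    # Rd = R - (G + B) / 2: emphasizes red over the average of green and blue.
    return r - (g + b) / 2.0
```

A strongly red pixel (for example, one lit by the red slit light) yields a high Rd, while a white highlight, where R, G, and B are all high, yields an Rd near zero.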
- The luminance value Y indicates the luminance of each pixel at the slit light detection position. The luminance value Y is the Y value in the YCbCr space, and RGB values are converted to the YCbCr space by the following equations.
- Y = 0.2899·R + 0.5866·G + 0.1145·B
- Cb = −0.1687·R − 0.3312·G + 0.5000·B
- Cr = 0.5000·R − 0.4183·G − 0.0816·B
- Graph A1 shows that the red value R is high at the slit light detection pixel K, at the printed portion I having the R component, and at the illumination reflection portion S.
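The conversion above can be written directly in code (a sketch using the coefficients as printed in this document; they differ slightly from the exact ITU-R BT.601 values):

```python
def rgb_to_ycbcr(r, g, b):
    """RGB -> YCbCr using the coefficients given in the text."""
    y  =  0.2899 * r + 0.5866 * g + 0.1145 * b
    cb = -0.1687 * r - 0.3312 * g + 0.5000 * b
    cr =  0.5000 * r - 0.4183 * g - 0.0816 * b
    return y, cb, cr
```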
- Therefore, when it is attempted to detect the slit light detection pixel K based on the level of the red value R, if the slit light detection pixel K lies within the printed portion I having the R component or within the illumination reflection portion S, there is no clear difference in the red value R between them, so the slit light detection pixel K cannot be accurately distinguished from the printed portion I having the R component or from the illumination reflection portion S.
- The illumination reflection portion S has a lower red difference value Rd than the slit light detection pixel K and the printed portion I having the R component. Therefore, if the slit light detection pixel K is detected based on the level of the red difference value Rd, even if the slit light detection pixel K lies within the illumination reflection portion S, the difference in the red difference value Rd between the two is clear, so the slit light detection pixel K can be accurately distinguished from the illumination reflection portion S. However, when the slit light detection pixel K lies within the printed portion I having the R component, there is no clear difference in the red difference value Rd between the two, so the slit light detection pixel K cannot be detected accurately.
- The printed portion I having the R component has a lower luminance value Y than the slit light detection pixel K and the illumination reflection portion S. Therefore, if the slit light detection pixel K is detected based on the level of the luminance value Y, even if the slit light detection pixel K lies within the printed portion I having the R component, the difference in the luminance value Y between the two is clear, so the slit light detection pixel K can be distinguished from the printed portion having the R component. However, when the slit light detection pixel K lies within the illumination reflection portion S, there is no clear difference in the luminance value Y between the two, so the slit light detection pixel K cannot be accurately distinguished from the illumination reflection portion S.
- Focusing on the fact that the slit light detection pixel K has both a red difference value Rd and a luminance value Y higher than those of the illumination reflection portion S and of the printed portion I having the R component, pixels containing slit light are therefore detected based on the level of the product value Rd·Y (hereinafter, the Rd·Y value) of the red difference value Rd and the luminance value Y.
- Since the Rd·Y value of the slit light detection pixel K is higher than the Rd·Y value of the illumination reflection portion S and the Rd·Y value of the printed portion I having the R component, even if the slit light detection pixel K lies within the illumination reflection portion S or within the printed portion I having the R component, the difference in the Rd·Y value between the two is clear, so the slit light detection pixel K can be accurately distinguished from the illumination reflection portion S and from the printed portion I having the R component.
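A sketch of this detection rule along one detection column (the names and the threshold handling are ours; the patent's actual flow is described with reference to FIGS. 7 and 8):

```python
def detect_slit_pixel(column, v_th):
    """Return the index of the pixel whose Rd*Y value is maximal along one
    detection column, or None if no pixel exceeds the threshold vTh.
    `column` is a list of (R, G, B) tuples."""
    best_i, best_v = None, v_th
    for i, (r, g, b) in enumerate(column):
        rd = r - (g + b) / 2.0                       # red difference value
        y = 0.2899 * r + 0.5866 * g + 0.1145 * b     # luminance value
        v = rd * y                                   # Rd*Y value
        if v > best_v:
            best_i, best_v = i, v
    return best_i
```

Here a white illumination highlight (Rd near zero) and a dark red print pixel (low Y) both score lower than a pixel lit by the slit light, which is high in both factors.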
- FIG. 7 is a flowchart of the slit light trajectory extraction process.
- a shift amount between an image with slit light and an image without slit light is calculated (S701).
- This shift between the two images occurs because of "camera shake" by the user during the interval between capturing the image with slit light and the image without slit light, which are not captured at the same time.
- The shift amount between the two images can be obtained by calculating the cross-correlation coefficient cc between the two images using the cross-correlation coefficient calculation program 427. The cross-correlation coefficient cc takes a value from −1 to 1, and the position at which it takes its maximum value gives the shift amount.
- The cross-correlation coefficient cc is preferably calculated at a characteristic (feature-rich) portion of the image. This is because even if the cross-correlation coefficient cc is calculated in a solid black portion, a solid white portion, a solid portion of a specific color, or the like, no clear difference appears in the cross-correlation coefficient cc. Therefore, before calculating the cross-correlation coefficient cc, a search process for finding such a characteristic portion is performed.
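A normalized cross-correlation coefficient in the −1 to 1 range can be sketched as follows (plain Python over flat luminance patches; the patent's program 427 presumably operates on image windows around the chosen center coordinates):

```python
def cross_correlation(a, b):
    """Normalized cross-correlation of two equal-length luminance patches."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    if da == 0.0 or db == 0.0:
        return 0.0   # flat (solid-color) patch: correlation carries no information
    return num / (da * db)
```

The flat-patch guard mirrors the remark above: in a solid region the coefficient yields no usable difference.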
- In this search process, the image without slit light is divided into four large areas 1 to 4, and a small area is moved within each large area, starting from the end of each large area (from the upper right in area 1, the upper left in area 2, the lower left in area 3, and the lower right in area 4).
- For each small area, the standard deviation σ of the luminance Y is calculated by the luminance dispersion calculation program 426 using the following formulas 1 and 2.
- The center coordinates of the small area having the maximum standard deviation in each of the large areas 1 to 4 are taken as the center position (xc, yc) at which the cross-correlation coefficient cc is determined. With the difference between the pixel positions of the image with slit light and the image without slit light near these center coordinates denoted (xd, yd), the cross-correlation coefficient cc(xd, yd) is calculated for each (xd, yd), and the (xd, yd) giving the maximum value is taken as the shift amount.
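The feature search can be sketched as a scan for the block with the largest luminance standard deviation (a simplified stand-in for the per-area search using formulas 1 and 2; the block layout and names are ours):

```python
def most_textured_block(image, block):
    """Return the top-left corner of the block with the largest luminance
    standard deviation -- a proxy for a 'characteristic portion' of the image.
    `image` is a list of rows of luminance values."""
    h, w = len(image), len(image[0])
    best, best_sd = (0, 0), -1.0
    for y0 in range(0, h - block + 1, block):
        for x0 in range(0, w - block + 1, block):
            vals = [image[y][x] for y in range(y0, y0 + block)
                                for x in range(x0, x0 + block)]
            m = sum(vals) / len(vals)
            sd = (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5
            if sd > best_sd:
                best, best_sd = (x0, y0), sd
    return best
```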
- Here, the difference between the pixel positions of the image with slit light and the image without slit light is (xd, yd), the luminance of a pixel of the image with slit light is Y1, and the luminance of the corresponding pixel of the image without slit light is Y2.
- The image size is about 1200 pixels × 1600 pixels.
- FIG. 12A is a diagram schematically illustrating a captured image of the document P in a state where the slit light is irradiated.
- The search parameters are set as cX2 in the ccdx direction on the trajectory 72a of the second slit light, and as the range from yMin2 to yMax2 in the ccdy direction.
- Since the image size is 1200 pixels (width W) × 1600 pixels (height H), the range from 0 to 799, which is the upper half area of the captured image, is set.
- cX2 is set to a single point in the ccdx direction. This is because it is only necessary to extract the trajectory coordinate on the ccdy axis used to calculate the rotation angle about the X axis in real space.
- Next, the slit light barycenter position calculation process described later is executed (S703). Then, it is searched whether a pixel corresponding to the pixel at the barycentric position calculated in the slit light barycenter position calculation process (S703) exists in the image without slit light (S704).
- Since the camera shake amount is about several tens of pixels, Rs may be set to about several tens of pixels.
- If such a corresponding pixel is found, the pixel exists in both the image with slit light and the image without slit light. That is, since the pixel detected in the image with slit light is not recognized as a pixel containing slit light, the pixels around the calculated center of gravity are excluded from the extraction targets (S706), and the processing from S703 to S705 is repeated.
- If no corresponding pixel is found, the pixel does not exist in the image without slit light and is unique to the image with slit light. That is, the pixel is determined to be a pixel containing slit light, and the calculated barycentric position is stored in the slit trajectory information storage unit 436 (S707).
- a search parameter for specifying a search range for extracting the trajectory 71a of the first slit light is set (S708).
- This search parameter is set as the range from yMin1 to yMax1 in the ccdy direction, as shown in the figure. Specifically, it is set to the range from 950 to 1599, in the lower half area of the captured image.
- The reason why the entire lower half area is not set is that, in this embodiment, the first slit light 71 is emitted parallel to the optical axis of the imaging lens 31 and from below the imaging lens 31. The range in which the first slit light 71 can exist when imaging the document P can therefore be calculated backward from the distance between the document P and the imaging lens 31, so the search range is narrowed in advance to increase the processing speed. In the ccdx direction, the range from cXmin to cXmax is set.
- Next, the variable cX representing the detection position in the ccdx direction is set to cXcmin (S709), and a pixel containing slit light is searched for at that position within the range from yMin1 to yMax1 in the ccdy direction.
- Then, the approximate curve calculation process (S717) is executed. In this process, the barycentric position of the pixel containing slit light is obtained for each detection position in the remaining range in the ccdx direction. After step S717, the process ends.
- FIG. 8 is a flowchart of the slit light barycentric position calculation processing included in the slit light trajectory extraction processing.
- Because of the characteristics of the laser light constituting the slit light and the fineness of the surface of the object being imaged, the pixel position detected as a pixel containing slit light may not coincide with the luminance center position of the slit light. This process therefore finds the center of gravity of the Rd·Y values within a certain range around the detected pixel, and uses that center of gravity as the pixel position containing the slit light.
- When this process is executed for the second slit light, cX2 is given as the variable cX; when it is executed for the first slit light, cXcmin is given as the variable cX.
- First, a search range in the ccdx direction is set (S801), and the red difference value Rd and the luminance value Y are calculated for each pixel in the range xMin ≤ ccdx ≤ xMax and yMin ≤ ccdy ≤ yMax (S802, S803).
- Next, the Rd·Y value is calculated for each pixel by multiplying the red difference value Rd calculated for each pixel by the luminance value Y, and the result is stored in the detection target pixel value temporary storage unit 433 (S804).
- The reason for this processing is that, as described above, the pixel having the maximum Rd·Y value within the search range is extremely likely to be a pixel containing slit light.
- The condition that the pixel exceed the threshold vTh is imposed because even the pixel having the maximum Rd·Y value may not belong to the imaging target if it does not exceed the threshold vTh; for example, it may be a pixel where the slit light strikes a distant object off the imaging target (in this case the reflected light is very weak even though its relative brightness is high). Such pixels are removed from the candidates for pixels containing slit light, so that pixels containing slit light are detected with higher accuracy.
- Next, the barycentric position is obtained for each ccdx in the range of cX ± xRange, and using these barycentric positions and the Rd·Y values, the barycentric position in the ccdy direction is further obtained; this is used as the ccdy value (Yg) of the slit light trajectory at cX (S811). After step S811, the process ends.
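The center-of-gravity step can be sketched as a weighted mean of candidate coordinates, with the Rd·Y values as weights (a simplification of S805 to S811; the names are ours):

```python
def weighted_centroid(positions, weights):
    """Center of gravity of coordinate values weighted by their Rd*Y values."""
    total = sum(weights)
    if total == 0:
        return None          # no pixel carried any Rd*Y weight
    return sum(p * w for p, w in zip(positions, weights)) / total
```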
- Since the red difference value Rd is low in the illumination reflection portion S, the Rd·Y value is also low there. The difference between the illumination reflection portion S and the pixel containing slit light having the maximum Rd·Y value therefore becomes clear, and the pixel containing slit light can be detected with high accuracy.
- Similarly, even if the pixel containing slit light lies within the printed portion I having the R component, the luminance value Y is low in the printed portion I having the R component, so its Rd·Y value is also low. The difference between the printed portion I having the R component and the pixel containing slit light having the maximum Rd·Y value therefore becomes clear, and the pixel containing slit light can be detected with high accuracy.
- FIG. 14 is an enlarged view showing part A of FIG. 12 (a).
- In this embodiment, nine detection positions from cXcmin to cXcmax in the ccdx direction are set: one at the center of the image, four to the left of the center, and four to the right of the center. The barycentric position of the pixel containing slit light is calculated at each detection position.
- FIG. 9 is a flowchart of an approximate curve calculation process included in the slit light locus extraction process.
- The approximate curve calculation process calculates an approximate curve for the trajectory 71a of the first slit light from the barycentric positions of the pixels containing slit light obtained at the detection positions from cXcmin to cXcmax in the ccdx direction, and then, based on this approximate curve, calculates the barycentric position of the pixel containing slit light at each detection position in the remaining range in the ccdx direction.
- First, the order N of the approximate curve to be calculated is set to "1" (S901). The order N is set to "1" because the barycentric positions of the pixels detected in S711 to S715 are few in number (9 points in this embodiment); if a high order were used with so few data points, the calculated approximate curve would oscillate and diverge, and this is prevented.
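With order N = 1 the approximate curve is a least-squares straight line through the nine barycenter positions; a minimal sketch:

```python
def fit_line(xs, ys):
    """Least-squares straight line (order N = 1) through the detected
    barycenter positions; returns (slope, intercept)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def predict(a, b, x):
    # Predicted ccdy value of the trajectory at detection position x.
    return a * x + b
```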
- The barycentric position data on which the approximate curve is based are obtained by the processing from S709 to S716. Since they are calculated from pixels detected in the range from cXcmin to cXcmax, which includes the central part of the slit light trajectory 71a, an approximate curve along the trajectory 71a of the first slit light on the document P can be calculated, compared with the case where the approximate curve is calculated from barycentric positions of pixels detected at the edges of the image range from cXMin to cXMax in the ccdx direction.
- Further, a detected pixel separated from the approximate curve by the predetermined threshold cTh or more (the third pixel from the left in the figure) is excluded, so that a more accurate approximate curve can be calculated for the trajectory 71a of the first slit light.
- Next, a variable UndetectNo (lUndetectNo for the left side, rUndetectNo for the right side) indicating the number of consecutive detection positions at which no pixel containing slit light was found is used. This UndetectNo is incremented by "1" in S919 and S913 each time no pixel containing slit light is found at a detection position; when it is determined that UndetectNo has exceeded its limit, it is determined that the trajectory of the slit light has changed abruptly, and subsequent detection is stopped.
- The trajectory 71a of the first slit light curves smoothly on the document P from roughly the center of the image toward the right and left, but at both ends of the image, as shown in part C of the figure, it follows a trajectory that changes irregularly depending on the state of the document edges.
- Since what the operator requires is that at least the portion of the document P be converted into an erect image, it is not necessary to detect the trajectory at both ends of the image; by detecting only the necessary part (the part on the document P), the slit light trajectory extraction process is made faster.
- In the repetition processing, detection is started from the first detection position on the left side; after detection at the first detection position on the left side is completed, detection is performed at the first detection position on the right side. The next detection position is then set a predetermined interval dx further to the left of the first detection position on the left. This repetition is executed until the left detection position clx reaches cXmin and the right detection position crx reaches cXmax (S907).
- In this way the detection position in the ccdx direction is set, and when it is confirmed in S907 that the detection position is within the image range (S907: Yes), it is first determined whether the set left detection position clx is greater than or equal to the minimum value cXmin of the image range and whether lUndetectNo is smaller than the threshold udLimit (S908).
- When detection at every detection position in the range from cXcmin − dx to cXmin (left side) and in the range from cXcmax + dx to cXmax (right side) in the ccdx direction is completed, the processing ends.
- A value of 2σ, which is twice the standard deviation, is used as the pixel-converted value.
- If the standard deviation σa is smaller than the threshold aTh (S1003: No), the approximate curve is fixed at the currently set order N, without increasing the order N further. Otherwise, it is determined whether the set order N is equal to or larger than the predetermined threshold nTh (S1004).
- The predetermined threshold nTh is the maximum order assumed for the surface shape of one side portion of a two-page spread document (third order or higher), and is set to the fifth order in this embodiment.
- If the order N is equal to or larger than the predetermined threshold nTh (S1004: Yes), the approximate curve is fixed at the current order N without increasing it further; if the order N is smaller than the predetermined threshold nTh (S1004: No), "1" is added to the order N (S1005), and the processing from S1001 to S1004 is repeated with the incremented order N.
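The order-raising loop of S1001 to S1005 can be sketched as follows (using numpy's polynomial fit; the aTh and nTh defaults here are illustrative, not the patent's values):

```python
import numpy as np

def fit_trajectory(xs, ys, a_th=1.0, n_th=5):
    """Raise the polynomial order from N = 1 until the residual standard
    deviation falls below aTh or the order reaches nTh."""
    n = 1
    while True:
        coeffs = np.polyfit(xs, ys, n)
        sigma = float(np.std(ys - np.polyval(coeffs, xs)))
        if sigma < a_th or n >= n_th:
            return n, coeffs
        n += 1
```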
- Next, the above-described triangulation calculation process (S160 in FIG. 5) will be specifically described with reference to FIG. 17 and FIG. 18.
- The vertical peaks of the trajectory 71a of the first slit light and the trajectory 72a of the second slit light are obtained for each horizontal coordinate of the image data by center-of-gravity calculation based on the pixel data read into the detection target pixel value temporary storage unit 433, and the three-dimensional spatial position for each peak extraction coordinate is obtained as follows.
- FIG. 17 is a diagram for explaining an image with slit light.
- FIG. 18 is a diagram for explaining a method of calculating the three-dimensional spatial position of the slit light.
- As shown in FIG. 18, the coordinate system of the imaging device 1 with respect to the document P, which is curved in the horizontal direction, is defined such that the optical axis direction of the imaging lens 31 is the Z axis, and the position separated from the imaging device 1 by the reference distance VP is the origin of the X, Y, and Z axes.
- the number of pixels in the X-axis direction of the CCD image sensor 32 is called ResX, and the number of pixels in the Y-axis direction is called ResY.
- The upper end of the position where the CCD image sensor 32 is projected onto the X-Y plane through the imaging lens 31 is called Yftop, the lower end Yfbottom, the left end Xfstart, and the right end Xfend.
- The distance from the optical axis of the imaging lens 31 to the optical axis of the first slit light 71 emitted from the slit light projecting unit 20 is D, the position in the Y-axis direction at which the first slit light 71 intersects the X-Y plane is las1, and the position in the Y-axis direction at which the second slit light 72 intersects the X-Y plane is las2.
- The three-dimensional spatial position (X1, Y1, Z1) corresponding to the coordinates (ccdx1, ccdy1) on the CCD image sensor 32 of attention point 1, one of the pixels of the image of the first slit light trajectory 71a, is obtained by solving the triangle formed by the corresponding point on the image plane of the CCD image sensor 32, the emission point of the first slit light 71 and second slit light 72, and the point of intersection with the X-Y plane.
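As a geometric sketch of the triangulation (our simplified reading of FIG. 18: the lens center at (0, 0, −VP), the CCD pixel projected to (x_target, y_target) on the X-Y plane, and the first slit light travelling parallel to the optical axis in the plane Y = las1):

```python
def triangulate_point(x_target, y_target, vp, las1):
    """Intersect the viewing ray with the plane of the first slit light.

    Assumptions (ours, not the patent's exact formulation): the viewing ray
    runs from the lens center (0, 0, -VP) through (x_target, y_target, 0);
    every point lit by the first slit light lies in the plane Y = las1.
    """
    t = las1 / y_target          # ray parameter where Y reaches the slit plane
    x = t * x_target
    z = vp * (t - 1.0)           # Z measured from the X-Y plane origin
    return x, las1, z
```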
- FIG. 19 is a diagram for explaining a coordinate system at the time of document orientation calculation.
- First, a regression curve approximating the points in three-dimensional space corresponding to the trajectory 71a of the first slit light is found, and a straight line is assumed connecting the point of this curve whose X-axis position is "0" and the three-dimensional position on the second slit light trajectory 72a whose X-axis position is "0".
- The point at which this straight line intersects the Z axis, that is, the point at which the optical axis intersects the document P, is defined as the three-dimensional spatial position (0, 0, L) of the document P (see FIG. 19(a)).
- the angle between the straight line and the X-Y plane is defined as the inclination ⁇ of the document P about the X axis.
- Next, the line obtained by the regression-curve approximation of the trajectory 71a of the first slit light is rotationally transformed about the X axis in the opposite direction by the previously obtained inclination θ; that is, the document P is considered to be parallel to the X-Y plane.
- the cross-sectional shape of the document P in the X-axis direction is calculated for the cross-section of the document P in the X-Z plane, and the displacement in the Z-axis direction is obtained at a plurality of points in the X-axis direction.
- Then, a curvature φ(X), which is a function of the tilt in the X-axis direction with the position in the X-axis direction as a variable, is obtained, and the process ends.
- FIG. 20 is a flowchart showing the plane conversion process.
- First, the image without slit light stored in the storage unit 432 for the image without slit light is read.
- Next, the four corner points of the image are each rotated about the Y axis by the curvature φ(X), rotated about the X axis by θ, and moved by the position L in the Z-axis direction.
- A rectangular area, that is, an area in which the surface of the document P on which characters and the like are written is viewed from a substantially orthogonal direction, is thereby set, and the number of pixels a contained in this rectangular area is obtained (S1301).
- Next, the coordinates on the image without slit light corresponding to each pixel constituting the set rectangular area are obtained, and the pixel information of each pixel of the planar image is set from the pixel information around those coordinates. That is, first, it is determined whether the counter b has reached the number of pixels a (S1302). If the counter b has not reached the number of pixels a (S1302: No), one pixel constituting the rectangular area is rotated about the Y axis by the curvature φ(X) (S1304).
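The per-pixel transform of S1304 and the following steps can be sketched as two rotations and a shift (the rotation order and sign conventions are our reading of the flowchart, not verified against the patent figures):

```python
import math

def to_document_plane(x, y, z, phi, theta, L):
    """Rotate about the Y axis by phi, about the X axis by theta,
    then move by L along the Z axis."""
    x1 = x * math.cos(phi) + z * math.sin(phi)        # rotation about Y
    z1 = -x * math.sin(phi) + z * math.cos(phi)
    y2 = y * math.cos(theta) - z1 * math.sin(theta)   # rotation about X
    z2 = y * math.sin(theta) + z1 * math.cos(theta)
    return x1, y2, z2 + L                             # shift along Z
```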
- As described above, the imaging device 1 in the "correction imaging mode" projects two rows of slit light, the first slit light 71 and the second slit light 72, onto the document P, and captures an image of the document P by forming an image on the CCD image sensor 32 through the imaging lens 31; subsequently, an image of the document P is captured without projecting the slit light.
- When the trajectory of the slit light is extracted, a part of the pixels constituting the trajectory is first detected, and an approximate curve for the trajectory of the slit light is calculated from the detected pixels. Since the detection range in the ccdy direction for detecting the remaining pixels constituting the trajectory of the slit light is limited based on this approximate curve, the processing for detecting the pixels constituting the slit light as a whole can be performed at high speed. Moreover, since only the range in which a pixel containing slit light is extremely likely to be present is searched, the possibility of detecting pixels other than those constituting the slit light is reduced, and the trajectory of the slit light can be extracted with high accuracy.
- Having extracted the trajectory of the slit light in this way, the imaging device 1 calculates the three-dimensional spatial position of each part of the trajectory of the slit light based on the principle of triangulation, obtains the position, inclination, and bending state (three-dimensional shape data) of the document P, and writes the three-dimensional shape data and the image data of the image without slit light into the memory card 55.
- Thus, when the user switches the mode switching switch 59 to the "correction imaging mode" side and captures the image while confirming with the viewfinder 53 or the LCD 51 that the desired range of the document P is within the imaging range, image data can be stored in the memory card 55 as if the flat document P had been captured from the front.
- The image data stored in the memory card 55 can be used by displaying it on the LCD 51 to confirm the captured content, or by removing the memory card 55 from the imaging device 1 and displaying or printing it on an external personal computer or the like.
- The main color component of the slit light is not limited to the red component; the green component or the blue component may be used. In that case, a green difference value Gd (or the corresponding difference value for the blue component), defined analogously to the red difference value Rd, is used in place of the red difference value Rd.
- The target object imaged by the imaging device 1 may be, in addition to the sheet-shaped document P, a smooth surface of a solid block or, in some cases, the surface of an object having a ridge line; in these cases, the effect of detecting the three-dimensional shape of the target object can be obtained in the same way.
- When the target object is a sheet-shaped document P as in this embodiment, the entire shape of the document P is estimated by regarding the trajectory 71a of the first slit light as the cross-sectional shape of the document P.
- Further, when the target object has a three-dimensional shape that is substantially uniform in the direction orthogonal to the longitudinal direction of the slit light, deviations of the detected posture caused by a shape peculiar to the position where the slit light is projected, such as a projection on the target object, need not be considered, and it becomes unnecessary to pay special attention to where the slit light is projected.
- In the present embodiment, the slit light projecting unit 20 is configured to emit two rows of slit light, the first slit light 71 and the second slit light 72; however, the slit light is not limited to two rows, and three or more rows may be emitted.
- For example, the slit light projecting unit 20 may be configured so that a third slit light similar to the second slit light 72 is projected onto the document P above the second slit light 72.
- In the present embodiment, a laser diode 21 that emits a red laser beam is used as the light source, but any other device that can output a light beam, such as a surface-emitting laser, an LED, or an EL element, may be used.
- Further, a transparent flat plate having, on one surface, a diffraction grating that diffracts a predetermined proportion of the power of the incident laser beam in a predetermined direction may be used. In this case, the first-order laser beam diffracted by the diffraction grating can be used as the second slit light 72, and the zero-order laser beam transmitted as-is can be used as the first slit light 71.
- The slit light emitted from the slit light projecting unit 20 may be, instead of a fine line sharply narrowed in the direction orthogonal to the longitudinal direction, a striped light pattern having a certain width.
- various geometric patterns such as a stripe pattern, a block-shaped pattern, and a dot pattern can be adopted as the pattern light.
- The positional relationship between the first slit light 71 and the second slit light 72 may be reversed vertically; that is, the optical elements forming each slit light may be arranged so that the second slit light 72 is projected on the lower side as viewed from the imaging device 1 and the first slit light 71 on the upper side.
- In the present embodiment, the imaging device 1 is configured to capture both the image with slit light and the image without slit light using the same imaging lens 31 and CCD image sensor 32. Alternatively, an imaging lens and a CCD image sensor dedicated to capturing the image with slit light may be provided separately in the imaging device. With this configuration, the time lapse between capturing the image with slit light and the image without slit light (the time for transferring the image data of the CCD image sensor 32, etc.) can be eliminated, so there is no deviation between the imaging ranges of the two images, and the accuracy of the detected three-dimensional shape of the target object can be improved.
- On the other hand, with the configuration of the present embodiment, the imaging device 1 can be made smaller and less expensive with fewer components.
- The first area setting means may be configured to set, as the first area including a part of the trajectory of the pattern light, an area including the central part of the trajectory of the pattern light in the pattern light projection image.
- According to this configuration, the first area setting means sets an area including the central part of the trajectory of the pattern light in the pattern light projection image as the first area. It is common for the operator to capture an image so that the target object is located at the center of the captured image; conversely, backgrounds other than the target object, unnecessary objects, and the like are likely to appear at both ends of the image. Therefore, if both ends of the image were set as the first area and the approximate curve were calculated based on pixels extracted from such a first area, an approximate curve along the trajectory of the pattern light on the target object would be unlikely to be obtained.
- The three-dimensional shape detection device may further comprise: detection position setting means for setting a plurality of detection positions, for detecting the pixels constituting the remaining portion of the trajectory of the pattern light, from outside the first area toward the ends of the pattern light projection image in the X-axis direction; second detection means for detecting the pixels constituting the remaining portion of the trajectory of the pattern light at each detection position set by the detection position setting means; and first updating means for updating, for each pixel detected by the second detection means, the approximate curve calculated by the approximate curve calculation means, based on that pixel and the plurality of pixels detected by the first detection means.
- The second area setting means may be configured to limit the area in the Y-axis direction of the pattern light projection image for each detection position set by the detection position setting means, such that for the detection position set first by the detection position setting means, the area in the Y-axis direction is limited based on the approximate curve calculated by the approximate curve calculation means, and for each detection position set thereafter, it is limited based on the approximate curve updated by the first updating means.
- In the three-dimensional shape detection device having such a configuration, the approximate curve calculated by the approximate curve calculation means is updated by the first updating means, for each pixel detected by the second detection means, based on that pixel and the plurality of pixels detected by the first detection means. Therefore, an approximate curve relating to the trajectory of the pattern light can be calculated with high accuracy. Further, after the approximate curve is updated, the second area setting means limits the Y-axis area for detecting the pixels constituting the pattern light based on the updated approximate curve, so the effect is obtained that the detection processing for detecting the pixels containing the pattern light can be performed at high speed.
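The Y-axis limitation described above can be sketched as follows: the current approximate curve predicts where the stripe should be in a column, and only a small window around that prediction is searched. The window size and the brightest-pixel criterion are illustrative assumptions.

```python
import numpy as np

def detect_in_window(image, x, coeffs, half_width=5):
    """Detect the stripe pixel in column x, searching only a small
    Y-window centered on the current approximate curve (a sketch of
    the 'second area' limitation)."""
    h = image.shape[0]
    yc = int(round(np.polyval(coeffs, x)))   # predicted stripe row
    lo = max(0, yc - half_width)
    hi = min(h, yc + half_width + 1)
    window = image[lo:hi, x]
    if window.size == 0 or window.max() <= 0:
        return None                          # no pattern light found here
    return lo + int(np.argmax(window))       # brightest pixel in window
```

Restricting the search to a few rows per column is what makes the per-column detection fast, at the cost of relying on the curve being reasonably accurate.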
- The detection position setting means may be configured to alternately set detection positions on both sides sandwiching the first area.
- In the three-dimensional shape detection device having such a configuration, the detection position setting means alternately sets detection positions on both sides sandwiching the first area. Therefore, an approximate curve relating to the trajectory of the pattern light extending to both sides of the first area can be calculated with high accuracy.
- In contrast, if detection were completed through the entire region on one side of the first area before detection was performed in the other region, the detection of the pixel containing the pattern light at the first detection position in the other region would be dominated by the trajectory detected in the one region; the pixel could then lie a predetermined threshold or more away from the approximate curve, so that the pixel containing the pattern light could not be accurately detected. Alternating the detection positions prevents this adverse effect.
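The alternating order of detection positions can be sketched as a simple generator that walks outward from the first area, one column on the left, then one on the right. The function name and the column-step parameter are illustrative.

```python
def alternating_positions(x_left, x_right, width, step=1):
    """Yield detection columns alternately on the left and right of the
    first area [x_left, x_right), walking outward toward both image
    edges (a sketch of the alternating detection-position setting)."""
    left, right = x_left - step, x_right
    while left >= 0 or right < width:
        if left >= 0:
            yield left
            left -= step
        if right < width:
            yield right
            right += step
```

For a 10-column image with the first area in columns 4-5, this visits columns 3, 6, 2, 7, 1, 8, 0, 9 so that the curve is refined symmetrically on both sides.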
- The three-dimensional shape detection device may further include: counting means for counting, for each detection position, the number of times a pixel is not detected by the second detection means; judgment means for judging whether the number counted by the counting means exceeds a predetermined number at consecutive detection positions; and stopping means for stopping the detection by the second detection means at the subsequent detection positions when the judgment means judges that the number counted by the counting means exceeds the predetermined number at consecutive detection positions.
- In the three-dimensional shape detection device having such a configuration, when it is judged that the number counted by the counting means exceeds the predetermined number at consecutive detection positions, the detection by the second detection means at the subsequent detection positions is stopped by the stopping means. Therefore, unnecessary processing is omitted, and the effect is obtained that the processing for extracting the trajectory of the pattern light can be performed at high speed.
- The three-dimensional shape detection device may further include extraction means for extracting, from among the plurality of pixels detected by the first detection means, pixels separated from the approximate curve calculated by the approximate curve calculation means by a predetermined distance or more in the Y-axis direction of the pattern-light projection image.
- The trajectory of the pattern light may be extracted based on the remaining pixels obtained by removing the pixels extracted by the extraction means from the plurality of pixels detected by the first detection means, and on the pixels detected from within the second area set by the second area setting means.
- In the three-dimensional shape detection device having such a configuration, the trajectory of the pattern light is extracted based on the remaining pixels obtained by removing the pixels extracted by the extraction means from the plurality of pixels detected by the first detection means, and on the pixels detected from within the second area set by the second area setting means. Therefore, the effect is obtained that the trajectory of the pattern light can be extracted with higher precision than when all of the plurality of pixels detected by the first detection means are treated as pixels constituting the trajectory of the pattern light.
- The three-dimensional shape detection device may further include second updating means for updating the approximate curve calculated by the approximate curve calculation means based on the remaining pixels obtained by removing the pixels extracted by the extraction means from the plurality of pixels detected by the first detection means.
- The second area setting means may be configured to limit the Y-axis area at the detection position initially set by the detection position setting means based on the approximate curve updated by the second updating means.
- In the three-dimensional shape detection device having such a configuration, the approximate curve calculated by the approximate curve calculation means is updated based on the remaining pixels obtained by removing the pixels extracted by the extraction means from the plurality of pixels detected by the first detection means. Therefore, the accuracy of the approximate curve can be improved compared with an approximate curve calculated based on all of the plurality of pixels detected by the first detection means. Further, since the Y-axis area at the detection position initially set by the detection position setting means is limited based on the updated approximate curve, the effect is obtained that the Y-axis area can be limited to a more appropriate range.
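The outlier extraction and second update described above can be sketched as a residual test against the current curve followed by a refit on the surviving pixels. The threshold value, degree, and function name are illustrative assumptions.

```python
import numpy as np

def refit_without_outliers(xs, ys, coeffs, threshold=4.0, degree=2):
    """Remove pixels whose Y-distance from the current approximate
    curve exceeds `threshold`, then refit the curve on the remaining
    pixels (a sketch of the extraction and second-updating means).
    Returns (new coefficients, boolean mask of kept pixels)."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    resid = np.abs(ys - np.polyval(coeffs, xs))   # Y-axis distance to curve
    keep = resid < threshold
    return np.polyfit(xs[keep], ys[keep], degree), keep
```

A single stray bright pixel (reflection, dust) far from the stripe is discarded by the mask, so the refitted curve tracks the true trajectory rather than being pulled toward the outlier.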
- The first updating means or the second updating means may be configured to calculate higher-order approximate curves in a stepwise manner.
- In the three-dimensional shape detection device having such a configuration, the first updating means or the second updating means calculates higher-order approximate curves in a stepwise manner. Therefore, even when the pattern light is projected onto a target object having irregularities, the effect is obtained that the trajectory of the pattern light can be extracted with high accuracy.
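One way to read the stepwise higher-order update is to raise the polynomial degree only while a lower degree fails to fit the detected pixels well. The residual criterion, tolerance, and degree cap below are illustrative assumptions, not the patent's concrete procedure.

```python
import numpy as np

def stepwise_fit(xs, ys, max_degree=4, tol=1.0):
    """Raise the polynomial degree one step at a time, stopping as soon
    as the RMS residual of the fit is small enough (a sketch of the
    stepwise higher-order curve update)."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    for degree in range(1, max_degree + 1):
        coeffs = np.polyfit(xs, ys, degree)
        rms = np.sqrt(np.mean((ys - np.polyval(coeffs, xs)) ** 2))
        if rms < tol:
            break                # lowest degree that fits well enough
    return coeffs, degree
```

Starting low and escalating keeps the curve stable on flat documents while still allowing curvature for pages that bulge or sag.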
- An imaging device may include the three-dimensional shape detection device configured as described above, and plane image correcting means for correcting, based on the three-dimensional shape of the target object calculated by the three-dimensional shape calculation means of the three-dimensional shape detection device, the pattern-light non-projection image of the target object captured by the imaging means of the three-dimensional shape detection device into a flat image as observed from a direction substantially perpendicular to a predetermined surface of the target object. With such a configuration, the captured image can be corrected into an accurate planar image based on the trajectory of the pattern light extracted with high accuracy by the three-dimensional shape detection device described above.
- The processing of S709 and S710 in the flowchart of FIG. 7 corresponds to the first area setting means or the first area setting step.
- The processing from S711 to S715 in the flowchart of FIG. 7 corresponds to the first detection means or the first detection step.
- The processing of S902 in the flowchart of FIG. 9 corresponds to the approximate curve calculation means or the approximate curve calculation step.
- The processing of S909 and S910 in the flowchart of FIG. 9 corresponds to the second area setting means or the second area setting step.
- The processing of S906 in the flowchart of FIG. 9 corresponds to the detection position setting means.
- The processing of S911, S914, S923, and S926 in the flowchart of FIG. 9 corresponds to the second detection means.
- The processing of S915 and S927 in the flowchart of FIG. 9 corresponds to the first updating means.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/431,033 US7365301B2 (en) | 2003-11-11 | 2006-05-10 | Three-dimensional shape detecting device, image capturing device, and three-dimensional shape detecting program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003-381104 | 2003-11-11 | ||
JP2003381104A JP2005148813A (ja) | 2003-11-11 | 2003-11-11 | 3次元形状検出装置、撮像装置、及び、3次元形状検出プログラム |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/431,033 Continuation-In-Part US7365301B2 (en) | 2003-11-11 | 2006-05-10 | Three-dimensional shape detecting device, image capturing device, and three-dimensional shape detecting program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005045363A1 true WO2005045363A1 (ja) | 2005-05-19 |
Family
ID=34567268
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/016632 WO2005045363A1 (ja) | 2003-11-11 | 2004-11-10 | 3次元形状検出装置、撮像装置、及び、3次元形状検出プログラム |
Country Status (3)
Country | Link |
---|---|
US (1) | US7365301B2 (ja) |
JP (1) | JP2005148813A (ja) |
WO (1) | WO2005045363A1 (ja) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2005029408A1 (ja) * | 2003-09-18 | 2005-03-31 | Brother Kogyo Kabushiki Kaisha | 画像処理装置、及び、撮像装置 |
US8494304B2 (en) * | 2007-05-11 | 2013-07-23 | Xerox Corporation | Punched hole detection and removal |
DE102008054985B4 (de) | 2008-12-19 | 2012-02-02 | Sirona Dental Systems Gmbh | Verfahren und Vorrichtung zur optischen Vermessung von dreidimensionalen Objekten mittels einer dentalen 3D-Kamera unter Verwendung eines Triangulationsverfahrens |
JP5864950B2 (ja) * | 2011-08-15 | 2016-02-17 | キヤノン株式会社 | 三次元計測装置、三次元計測方法およびプログラム |
KR101625320B1 (ko) * | 2012-10-12 | 2016-05-27 | 가부시기가이샤니레꼬 | 형상 계측 방법 및 형상 계측 장치 |
US9881235B1 (en) | 2014-11-21 | 2018-01-30 | Mahmoud Narimanzadeh | System, apparatus, and method for determining physical dimensions in digital images |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003504947A (ja) * | 1999-07-09 | 2003-02-04 | ヒューレット・パッカード・カンパニー | 文書撮像システム |
JP2003078725A (ja) * | 2001-02-14 | 2003-03-14 | Ricoh Co Ltd | 画像入力装置 |
JP2003106826A (ja) * | 2001-09-28 | 2003-04-09 | Ricoh Co Ltd | 形状測定装置、原稿画像補正装置及び投影画像補正装置 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3282331B2 (ja) | 1993-12-20 | 2002-05-13 | ミノルタ株式会社 | 3次元形状測定装置 |
US5668631A (en) * | 1993-12-20 | 1997-09-16 | Minolta Co., Ltd. | Measuring system with improved method of reading image data of an object |
US6407817B1 (en) | 1993-12-20 | 2002-06-18 | Minolta Co., Ltd. | Measuring system with improved method of reading image data of an object |
JP3493886B2 (ja) | 1996-04-23 | 2004-02-03 | ミノルタ株式会社 | デジタルカメラ |
US6449004B1 (en) | 1996-04-23 | 2002-09-10 | Minolta Co., Ltd. | Electronic camera with oblique view correction |
Also Published As
Publication number | Publication date |
---|---|
US20060219869A1 (en) | 2006-10-05 |
US7365301B2 (en) | 2008-04-29 |
JP2005148813A (ja) | 2005-06-09 |