WO2010133921A1 - Surface scanning system - Google Patents

Surface scanning system

Info

Publication number
WO2010133921A1
WO2010133921A1 (PCT/IB2009/052130)
Authority
WO
WIPO (PCT)
Prior art keywords
color
camera
light source
scanned
scanning system
Prior art date
Application number
PCT/IB2009/052130
Other languages
French (fr)
Inventor
Cem Unsalan
Rıfat BENVENISTE
Original Assignee
Yeditepe Universitesi
Priority date
Filing date
Publication date
Application filed by Yeditepe Universitesi filed Critical Yeditepe Universitesi
Priority to PCT/IB2009/052130 priority Critical patent/WO2010133921A1/en
Priority to TR2010/11109T priority patent/TR201011109T2/en
Priority to EP09786403A priority patent/EP2433089A1/en
Priority to US13/145,337 priority patent/US20110279656A1/en
Publication of WO2010133921A1 publication Critical patent/WO2010133921A1/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2509Color coding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/521Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)
  • Microscopes, Condenser (AREA)

Abstract

The present invention relates to a surface scanning system, which enables obtaining three dimensional models of the geometries of the objects particularly having shiny or specular surfaces, and which comprises at least one light source, a moving mechanism that enables the light source to move relative to the object to be scanned, at least one camera, a moving mechanism that enables the camera to move relative to the object to be scanned, a moving mechanism that enables the object to be scanned to move in order for it to be viewed from different angles, at least one controller that controls the light source, camera and the moving mechanisms.

Description

SURFACE SCANNING SYSTEM
Field of the Invention
The present invention relates to a surface scanning system which enables obtaining three dimensional models of the geometries of the objects particularly having shiny or specular surfaces.
Background of the Invention
3D scanners are devices used to extract the surface coordinates of a three dimensional object. These devices are used in various areas such as reverse engineering, computer graphics applications, archeological finding scanning and medical imaging.
There are various methods used to construct 3D scanners. Two main categories of 3D scanners are 1-) non-contact and 2-) contact. 3D scanners based on touch sensors are considered as contact scanners. These devices are not of general use since they are slow and some objects can not be touched either due to their characteristic properties or due to their positions. The 3D scanners in the non- contact category are divided into two main categories: 1-) triangulation based structured light and 2-) other optical property based. Triangulation based structured light 3D scanners take different methods as basis: laser based, projection based and patterned structured light based. Patterned structured light based scanners use different pattern coding strategies such as color and line coding.
In triangulation based structured light 3D scanners, white or colored stripes are projected on the object from a monochromatic or multi spectral light source. These stripes are then reflected, and the image of the object onto which a stripe is projected is captured by one or more cameras. From the image captured, the bending of the stripe on the object according to the shape of the object is determined, and the shape information is obtained by means of triangulation. If the stripe is moved along the object surface, a three dimensional model of the object can be obtained.
In patterned structured light scanners, a plurality of stripes is projected on the object at the same time. For this reason, there are problems of stripe correspondence in this type of scanners. In laser or projection stripe based systems, only one stripe is projected on the object; therefore the above mentioned correspondence problem is not experienced in this type of scanners.
Impacts of the surface properties of the object and the lighting conditions of the medium on the quality of the acquired image pose a problem for structured light 3D scanners. As a result of these problems, the scanner cannot be used in certain media. For example, problems are encountered when objects with shiny or specular surfaces are scanned. In addition to the light stripes projected on the object, other light rays coming from the outer environment cause noise on the images due to the specularity of the object. Although some filtering techniques are used to address this problem, the said techniques do not eliminate it entirely. In the applications carried out, most shiny surfaced objects are covered with an opaque material such as powder to suppress the specularity of the surface. There are two major problems with this solution. First, covering the entire surface with powder slows down the scanning process. Second, objects such as archeological findings, or objects that can be affected by powders, cannot be covered with such powder-like opaque materials.
The United States patent document US20050116952, known in the art, discloses producing a structured-light pattern, wherein high-resolution real-time three-dimensional coordinates can be obtained by using single frame or double frame imaging methods. In the above mentioned United States patent document, the system has become complex due to the fact that double frame imaging methods are used. In the method provided in the said document, a stripe of changing color is projected on the object. Additionally, in the method disclosed in the said document, an additional image is used after the projector serving as the light source is turned off. This prolongs the scanning process.
The Great Britain patent document GB2078944 discloses measurement of the surface profile by scanning method upon projection of a color band comprising at least two wavelength bands onto the surface by means of an optic transmitter.
In order for the system disclosed in the above mentioned Great Britain patent document to function, there is a need of one visible and one invisible wavelength for the light sources. This renders the system complicated due to the fact that the system needs a reflector and a sensor of two different structures.
Summary of the Invention
The objective of the present invention is to provide a surface scanning system which enables performing three dimensional modeling of objects with shiny or specular surfaces without having any difficulty.
Detailed Description of the Invention
The surface scanning system developed to fulfill the objectives of the present invention is illustrated in the accompanying figures, in which,
Figure 1 is the schematic view of a three dimensional surface scanning system. Figure 2 is the flowchart of the surface scanning process in the three dimensional surface scanning system. Figure 3 is the drawings which show the stripe taking the shape of the object on which it is projected in three dimensional surface scanning system.
The surface scanning system (1) comprises at least one light source (2), a moving mechanism (3) which enables the light source (2) to move relative to the object to be scanned, at least one camera (4), a moving mechanism (5) which enables the camera (4) to move relative to the object to be scanned, a moving mechanism (6) which enables the object to be scanned to move in order for it to be viewed from different angles, at least one controller (7) which controls the light source (2), camera (4) and the moving mechanisms (3, 5, 6).
The moving mechanisms (3, 5, 6) provided in the inventive surface scanning system (1) move in all directions and can turn to any direction.
The camera (4) used in the inventive surface scanning system (1) is preferably a color camera.
In the inventive surface scanning system (1), the surface scanning process (100) begins with the start command given to the controller (7) (101). The controller (7) activates the light source (2), and a light stripe is projected from the light source (2) onto the object to be surface scanned (102). Images of the surfaces on which the light is projected are recorded by the camera (4) (103). Then the color invariant that distinguishes the color of the light source in the image received from the camera is found and applied to that image, and the threshold value of the color invariant applied image is calculated according to its pixel intensity distribution (histogram) (104). The image to which the color invariant is applied is thresholded according to the threshold value calculated in step 104, whereby the information regarding the stripe projected from the light source onto the object is obtained (105). The bent stripe acquired on the object is processed by the triangulation method, whereby information regarding the depth of the object is obtained (106). It is checked whether the entire object is scanned or not (107). If the entire object is scanned, the scanning process is finalized (108). If after step 107 the entire object is not scanned, the scanning process restarts from step 101.
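The scanning loop described above (steps 101-108) can be sketched in outline as follows. This is an illustrative sketch only: the function names, the synthetic 8x8 test image, and the 95th-percentile cut are assumptions chosen for demonstration, not details from the patent.

```python
import numpy as np

def capture_image(pos, h=8, w=8):
    """Stand-in for steps 102-103: a synthetic RGB image with a red
    stripe one column wide at `pos`, over uniform ambient light."""
    img = np.full((h, w, 3), 60.0)
    img[:, pos, 0] = 255.0  # red channel of the stripe
    img[:, pos, 1] = 40.0   # green channel of the stripe
    return img

def color_invariant(img):
    """Step 104: (R - G) / (R + G), insensitive to overall luminosity."""
    r, g = img[..., 0], img[..., 1]
    return (r - g) / (r + g + 1e-9)

def adaptive_threshold(inv, pct=95):
    """Step 104: take a high percentile of this image's own histogram."""
    return np.percentile(inv, pct)

def scan(num_positions=4):
    """Steps 101-108: loop over stripe positions until the object is covered."""
    stripe_cols = []
    for pos in range(num_positions):
        inv = color_invariant(capture_image(pos))   # 102-104
        mask = inv >= adaptive_threshold(inv)       # 105: stripe pixels only
        # step 106 would triangulate the bent stripe; here we only record
        # which image column the stripe was detected in
        stripe_cols.append(int(np.argmax(mask.any(axis=0))))
    return stripe_cols                              # 107-108

print(scan())  # -> [0, 1, 2, 3]
```

In a real system, `capture_image` would drive the moving mechanisms (3, 5, 6) through the controller (7), and step 106 would convert each detected stripe into depth values.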
When light is projected on any object, depending on the surface properties of the object, a certain amount of light is absorbed and a certain amount is reflected back at different angles. Here, the light reflected from an object can be defined by its two basic properties, namely luminosity and chromaticity. Luminosity on the object varies depending on the luminous intensity of each source in the medium. Chromaticity varies only depending on the light source that provides that color and on the color of the object. For this reason, parameters which are not influenced by luminosity dependent changes in the image of the object but return data depending only on chromaticity are called color invariants. The image of an object is composed of three main color channels (Red, Green and Blue). Chromaticity in these channels is separated from luminosity by various transformations. Each method distinguishing chromaticity is considered a color invariant.
Different types of examples can be given for color invariants. Where R is the red color value and G is the green color value coming from each pixel of the camera sensor, the following equation is a color invariant that may be used in distinguishing the red color:

Φ = (R - G) / (R + G)
A similar color invariant is obtained by the YCbCr color transformation. Here Y is the light intensity value independent of the color in the image. In connection with this, Cr can also be used as a color invariant in distinguishing the red color. Cr is obtained as follows:

Cr = 128 + 112 · (R - Y) / ((1 - 0.299) · 255)
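As a quick numerical check (not from the patent), the invariant Φ = (R - G)/(R + G) given above depends only on chromaticity: scaling both channels by the same illumination factor leaves Φ unchanged, even though the raw R value doubles.

```python
def phi(r, g):
    """The color invariant given above for distinguishing red."""
    return (r - g) / (r + g)

dim    = (100.0, 40.0)   # (R, G) of a red surface under dim light
bright = (200.0, 80.0)   # the same surface under twice the illumination

# raw R doubles (100 -> 200), but the invariant is identical (3/7 in both cases)
print(phi(*dim) == phi(*bright))  # -> True
```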
There are different sensing cells on the sensor of a color camera which are sensitive to the intensity of each color channel (Red, Green and Blue). In single sensor cameras, these pixels are arranged according to a certain rule. In cameras with a plurality of sensors, the light is first passed through a prism and measured by sensors which are sensitive to different color channels (e.g. 3 CCD cameras).
Therefore, the intensity of red and the intensity of green in the light projected on a point are measured by a sensor sensitive to red and a sensor sensitive to green, respectively (the same applies for blue). These measurements are expressed by the sensor as a voltage level. When this voltage level is transferred to the digital medium, the pixel values showing the color intensity for all three main colors are obtained. In a system digitized by sampling with 8 bits, an intensity value in the range 0 - 255 is obtained for each pixel.
Example: For pure red -> red: 255 green: 0 blue: 0
For pure yellow -> red: 255 green: 255 blue: 0
For light purple -> red: 120 green: 50 blue: 140
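The 8-bit digitization described above can be sketched as follows; the full-scale voltage `vmax` is an assumed parameter, since the text does not specify sensor voltage levels.

```python
def quantize(voltage, vmax=5.0, bits=8):
    """Map a sensor voltage in [0, vmax] to an integer code in [0, 2^bits - 1]."""
    levels = 2 ** bits
    code = int(voltage / vmax * (levels - 1))
    return max(0, min(levels - 1, code))  # clamp out-of-range voltages

print(quantize(5.0))   # full scale -> 255
print(quantize(2.5))   # half scale -> 127
print(quantize(0.0))   # no light   -> 0
```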
In the inventive surface scanning system (1), the threshold value is derived from the image obtained according to the color invariants. In order to calculate the threshold value, the number of pixels at each color invariant value is arranged in a chart, yielding a color invariant intensity distribution (histogram). Since this distribution changes from image to image, a certain percentage of the distribution is selected as the threshold for each image in the inventive system. The said percentage is preferably above 90%. This way the system can perform adaptive thresholding. In the inventive surface scanning system (1), the light stripe projected on the object is provided by a projector or a laser whose position is changed by the controller (7). The light emitted by the said laser or projector can be of any color.
The calculated threshold value captures only the beam projected on the object by the light source. Since color invariants are used in calculating the threshold value, the received image is not affected by the reflection luminance dependent on the other light sources in the medium. The color information in the image obtained by using color invariants becomes dominant relative to luminosity, and thresholding is performed accordingly. The color of the light stripe reflected on the object is known by the nature of the system (1). After thresholding, the locations whose color matches the color of the reflected stripe bear the stripe information. This way, noise and shiny parts originating from the lighting conditions are not present in the thresholded image.
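The adaptive thresholding described above can be illustrated on a small synthetic invariant image; the 97% cut and the image contents are assumptions chosen for demonstration.

```python
import numpy as np

# synthetic color invariant image: background pixels at 0, one stripe
# column (5% of the pixels) at a high invariant value
inv = np.zeros((20, 20))
inv[:, 0] = 0.8

# adaptive threshold: a percentile of this image's own distribution,
# chosen above the 90% suggested in the text
thresh = np.percentile(inv, 97)
mask = inv >= thresh
print(int(mask.sum()))  # -> 20, exactly the stripe column
```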
The light stripe projected on the object in step 105 bends on the object according to the shape of the object during scanning. Depth information is obtained by processing the said bends. This is carried out by applying the triangulation method, which finds the distance of a point to the image plane by means of trigonometric identities.
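A minimal sketch of the triangulation in step 106, under simplifying assumptions not stated in the patent: a pinhole camera at the origin looking along +Z, and a laser at X = b whose beam makes angle theta with the Z axis. The detected stripe pixel fixes a viewing ray, and the depth is where that ray meets the laser beam.

```python
import math

def depth_from_pixel(x_pix, f, b, theta):
    """Return the depth Z of the surface point imaged at x_pix.

    x_pix : horizontal pixel offset from the principal point
    f     : focal length in pixel units
    b     : camera-to-laser baseline
    theta : laser angle measured from the Z axis
    """
    # camera ray:  X = Z * (x_pix / f)
    # laser beam:  X = b - Z * tan(theta)
    # equating the two and solving for Z:
    return b / (x_pix / f + math.tan(theta))

# laser fired parallel to Z (theta = 0), stripe seen 100 px off-center:
print(depth_from_pixel(x_pix=100, f=500, b=0.2, theta=0.0))  # -> 1.0
```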
In Figure 3 there are provided pictures showing the bending of the light stripe upon taking the shape of the object on which it is projected. Among these pictures, (a) corresponds to the red color band in the color image and (b) corresponds to the green color band. In all of the pictures, light colors mean high values. In addition to the laser line on the teapot, the effect of the light coming from the external environment is also visible. (c) is the image obtained by a color invariant. (d) shows the points which are obtained as a result of thresholding the color invariant and which comprise only the reflected laser line information (here the white points correspond to the laser line). If the normal light intensity were thresholded (as most other depth scanners do) rather than the color invariant, other light effects coming from the external environment on the teapot would also be obtained. As these points are redundant, they would arise as noise in the step of finding depth.
The reflected laser line would be straight if there were no object, but it bends when projected on the object and thus acquires the shape of the object. This way, the three dimensional coordinates of the points on the line can be found by the triangulation method.
In the inventive surface scanning system (1), color invariants are used to obtain the shape and depth information regarding the object to be scanned. Scanning process starts with the start command given to the controller, and the process is performed automatically. The depth information of the object is obtained by the linear movement of the light beam(s) projected on the object.
Most of the scanners in the state of the art are operated in dark environments in order for the scanning process not to be affected by the lighting conditions of the environment. In the inventive method, color invariants are used whereby surface of the object is scanned under any lighting condition and the scanning process is not affected by the ambient light.
It is possible to develop a wide variety of embodiments of the inventive surface scanning system. The invention cannot be limited to the examples described herein; its scope is essentially as defined in the claims.

Claims

1. A surface scanning system (1) comprising at least one light source (2), a moving mechanism (3) which enables the light source (2) to move relative to the object to be scanned, at least one camera (4), a moving mechanism (5) which enables the camera (4) to move relative to the object to be scanned, a moving mechanism (6) which enables the object to be scanned to move in order for it to be viewed from different angles, at least one controller (7) which controls the light source (2), camera (4) and the moving mechanisms (3, 5, 6), and characterized in that the surface scanning process (100) is carried out by the steps of, giving start command to the controller (7) (101), the controller (7) activating the light source (2) that is used, and a light stripe being projected from the light source (2) onto the object which will be surface scanned (102), the images of the surfaces on which light is projected being recorded by the camera (4) (103), the color invariant, which will distinguish the color of the light source from the image received from the camera, being found and the color invariant being applied to the image received from the camera, and the threshold value of the color invariant applied image being calculated according to the pixel density distribution (histogram) thereof (104), the image to which the color invariants are applied being thresholded according to the threshold value calculated in step 104 whereby the information regarding the stripe projected from the light source on the object being obtained (105), information regarding the depth of the object being obtained upon processing the distortions on the stripes (106), checking whether the entire object is scanned or not (107), finalizing the scanning process if the entire object is scanned (108).
2. A surface scanning system (1) according to Claim 1, characterized in that information regarding the depth of the object is obtained upon processing the distortions on the stripes by means of triangulation method in step 106.
3. A surface scanning system (1) according to Claim 1 or 2, characterized in that a color invariant depending on the color of the stripe projected on the object is found and the said invariant is applied to the image received from the camera.
4. A surface scanning system (1) according to Claim 1, 2 or 3, characterized in that a predetermined value of the color invariant intensity distribution (histogram) is selected as the threshold for each image.
5. A surface scanning system (1) according to Claim 4, characterized in that in order for the color invariant intensity distribution percentage to be selected as the threshold value, it should be above 90%.
PCT/IB2009/052130 2009-05-21 2009-05-21 Surface scanning system WO2010133921A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/IB2009/052130 WO2010133921A1 (en) 2009-05-21 2009-05-21 Surface scanning system
TR2010/11109T TR201011109T2 (en) 2009-05-21 2009-05-21 A surface scanning system.
EP09786403A EP2433089A1 (en) 2009-05-21 2009-05-21 Surface scanning system
US13/145,337 US20110279656A1 (en) 2009-05-21 2009-05-21 Surface Scanning System

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2009/052130 WO2010133921A1 (en) 2009-05-21 2009-05-21 Surface scanning system

Publications (1)

Publication Number Publication Date
WO2010133921A1 (en) 2010-11-25

Family

ID=41559656

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2009/052130 WO2010133921A1 (en) 2009-05-21 2009-05-21 Surface scanning system

Country Status (4)

Country Link
US (1) US20110279656A1 (en)
EP (1) EP2433089A1 (en)
TR (1) TR201011109T2 (en)
WO (1) WO2010133921A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103868472B (en) * 2013-12-23 2016-09-07 黑龙江科技大学 A kind of area-structure light three-dimensional measuring apparatus for high reflectance part and method
DE102018101995B8 (en) 2018-01-30 2019-08-14 Willi Gerndt Device for measuring according to the light-section triangulation method
CN108490000A (en) * 2018-03-13 2018-09-04 北京科技大学 A kind of Bar Wire Product surface defect on-line measuring device and method
CN117073577A (en) * 2022-05-09 2023-11-17 苏州佳世达光电有限公司 Structured light scanning device and method

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US6754370B1 (en) * 2000-08-14 2004-06-22 The Board Of Trustees Of The Leland Stanford Junior University Real-time structured light range scanning of moving scenes
CN101198964A (en) * 2005-01-07 2008-06-11 格斯图尔泰克股份有限公司 Creating 3D images of objects by illuminating with infrared patterns
US8487991B2 (en) * 2008-04-24 2013-07-16 GM Global Technology Operations LLC Clear path detection using a vanishing point

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
WO2008044096A1 (en) * 2006-10-13 2008-04-17 Yeditepe Üniversitesi Method for three-dimensionally structured light scanning of shiny or specular objects

Non-Patent Citations (2)

Title
GEERTS H ET AL: "Color invariance", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, IEEE SERVICE CENTER, LOS ALAMITOS, CA, US, vol. 23, no. 12, 1 December 2001 (2001-12-01), pages 1338 - 1350, XP011094034, ISSN: 0162-8828 *
GEVERS T ET AL: "Color-based object recognition", PATTERN RECOGNITION, ELSEVIER, GB, vol. 32, no. 3, 1 March 1999 (1999-03-01), pages 453 - 464, XP004157212, ISSN: 0031-3203 *

Also Published As

Publication number Publication date
TR201011109T2 (en) 2011-08-22
US20110279656A1 (en) 2011-11-17
EP2433089A1 (en) 2012-03-28

Similar Documents

Publication Publication Date Title
CN107607040B Three-dimensional scanning measurement device and method for strongly reflective surfaces
CN208672539U Laminated glass edge defect detection device based on image acquisition
US20230154105A1 (en) System and method for three-dimensional scanning and for capturing a bidirectional reflectance distribution function
US7711182B2 (en) Method and system for sensing 3D shapes of objects with specular and hybrid specular-diffuse surfaces
US9858682B2 (en) Device for optically scanning and measuring an environment
US9392262B2 (en) System and method for 3D reconstruction using multiple multi-channel cameras
Tarini et al. 3D acquisition of mirroring objects using striped patterns
US6147760A (en) High speed three dimensional imaging method
US8107721B2 (en) Method and system for determining poses of semi-specular objects
US6455835B1 (en) System, method, and program product for acquiring accurate object silhouettes for shape recovery
JP2015021862A (en) Three-dimensional measurement instrument and three-dimensional measurement method
Benveniste et al. A color invariant for line stripe-based range scanners
EP2433089A1 (en) Surface scanning system
Chiang et al. Active stereo vision system with rotated structured light patterns and two-step denoising process for improved spatial resolution
Rantoson et al. 3D reconstruction of transparent objects exploiting surface fluorescence caused by UV irradiation
JP5633719B2 (en) 3D information measuring apparatus and 3D information measuring method
JP4379626B2 (en) Three-dimensional shape measuring method and apparatus
JP6591332B2 (en) Radiation intensity distribution measuring system and method
CN107392955B (en) Depth of field estimation device and method based on brightness
EP3062516B1 (en) Parallax image generation system, picking system, parallax image generation method, and computer-readable recording medium
WO2008044096A1 (en) Method for three-dimensionally structured light scanning of shiny or specular objects
JP6745936B2 (en) Measuring device and control method thereof
Seulin et al. Dynamic lighting system for specular surface inspection
JP7079218B2 (en) Imaging device
Kim et al. Accurate 3D reconstruction of highly reflective mechanical part surfaces using a structured light method with combined phase shift and gray code

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 2010/11109

Country of ref document: TR

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09786403

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2009786403

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 13145337

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE