GB2270155A - Three-dimentional information from shadows of concurrent projections of multiple light sources - Google Patents

Three-dimentional information from shadows of concurrent projections of multiple light sources

Info

Publication number
GB2270155A
Authority
GB
United Kingdom
Prior art keywords
light sources
power spectral
shadow
shadows
spectral distribution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB9217462A
Other versions
GB9217462D0 (en)
Inventor
Wee Soon Ching
Peng Seng Toh
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to GB9217462A priority Critical patent/GB2270155A/en
Publication of GB9217462D0 publication Critical patent/GB9217462D0/en
Publication of GB2270155A publication Critical patent/GB2270155A/en
Withdrawn legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G06T7/55 - Depth or shape recovery from multiple images
    • G06T7/586 - Depth or shape recovery from multiple images from multiple light sources, e.g. photometric stereo

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Input (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Three-dimensional information of an object (2) is obtained by acquiring and processing multiple shadows (5) cast by multiple light sources of different power spectral distribution. The use of light sources of different spectral distribution and illumination direction increases the acquisition and processing speed and permits the determination of object height and shape, which is essential to automated object recognition, classification, measurement and inspection. The method is robust against highly specular and shiny surfaces and the accuracy can be varied by changing the tilt angle of the multiple light sources. The number of light sources and their spectral distribution may be changed to optimise identification of the respective shadow regions (5).

Description

THREE-DIMENSIONAL INFORMATION FROM SHADOWS OF CONCURRENT PROJECTIONS OF MULTIPLE LIGHT SOURCES

Field of Invention
This invention relates to a method of recovering the three-dimensional information of objects by acquiring and processing multiple shadows cast by the projection of multiple light sources of different power spectral distribution.
Background of the Present Invention
Shadow is an important cue for three-dimensional (3D) vision. A shadow is caused by the blocking of illumination, and the boundary of a shadow region can be easily detected because it is a discontinuity in illumination. Once the boundary of a shadow has been detected, and given that the position of the light source is known, the height of the object that casts the shadow can be determined by applying a simple geometric rule.
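As an illustration of that geometric rule, the following Python sketch assumes a distant light source whose rays make a known tilt angle with a flat background plane, so that the object height follows directly from the measured shadow length; the function name and values are illustrative only and not taken from the patent.

import math

def object_height_from_shadow(shadow_length, tilt_angle_deg):
    # Assumes parallel rays inclined tilt_angle_deg above a flat background:
    # tan(tilt angle) = object height / shadow length.
    return shadow_length * math.tan(math.radians(tilt_angle_deg))

# Example: a 20 mm shadow under a source tilted 30 degrees above the plane
print(object_height_from_shadow(20.0, 30.0))  # about 11.5 mm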
The existing method of utilizing shadow information is to project light from a single light source that casts shadows either on the background or on the object itself (Daniel Raviv, Yoh-Han Pao and Kenneth A. Loparo, "Reconstruction of Three-Dimensional Surfaces from Two-Dimensional Binary Images", IEEE Transactions on Robotics and Automation, Vol. 5, No. 5, October 1989). The single light source is then moved to another position or tilt angle so as to cast further shadows. This is repeated for a number of light source positions or directions sufficient for recovering the 3D shape of the object. This method is slow and hence not suitable for high-speed applications.
The use of multiple light sources located at different positions is a possible way of increasing the processing speed. However, the recovery of shadow information cast by multiple light sources would be very difficult, if not impossible. In order to harness the advantage of multiple light sources and yet still be able to utilize the shadow information, the present invention employs multiple light sources with different power spectral distributions, or colours. As a result, the shadows cast by the different light sources have different colours and can be distinguished easily using a multi-spectral video source.
Summary of the Present Invention
One of the objectives of the present invention is to recover three-dimensional information of an object using the shadows of the object cast by the simultaneous projection of multiple light sources of different power spectral distribution from different positions. The present invention employs a multi-spectral video source to capture the image of the object and its shadows.
An image processing system is employed in conjunction with the video source. According to another objective of the present invention, the number of light sources can be varied according to the shape of the object. It is further provided by the present invention that the power spectral distribution of the light sources can be changed so that the identification of their respective shadow regions can be optimized. The image processing system is employed to extract and process the boundaries of the object and the shadows. Yet another objective is to change the tilt angles of the light sources to obtain different resolution and accuracy of measurement and/or classification.
Changing the tilt angle of the light sources changes the length of the shadows.
A Brief Description of the Accompanying Drawings
Fig. 1(a) A block diagram showing the preferred setup of the present invention.
Fig. l(b) shows the block diagram of the active illumination system.
Fig. 2(a) A 3-dimensional view of the invention using 4 light sources for illuminating the object.
Fig. 2(b) shows the tilt angle of the light source in relation to the object.
Fig. 3 Plan view of the shadow boundaries showing the relationship between the shadows and the state of the light sources.
Fig. 4 Illustration of the second constraint on the power spectral distribution of light sources.
Fig. 5 Procedure for optimizing the power spectral distribution of each light source.
Fig. 6 Mismatch of shadow patterns due to changes in object height. (a) shows an object of height h. (b) shows an object of height h + Δh; the shadows of (a) and (b) are different.
Detailed Descriptions of the Preferred Embodiments
The functional block diagram of the present invention is shown in Fig. 1(a). A multi-spectral video source 1 is used to capture the image of the object 2 and the shadow information 5. The object 2 is placed on a uniformly coloured background 3. A host computer 7 coordinates the activities of the video source 1, the image processing system 6, the active illumination system 8 and the x-y table 9. The active illumination system 8, which illuminates the object 2, comprises multiple light sources 4. It further comprises means to control the state, position, tilt angle and power spectral distribution of the light sources. Fig. 1(b) shows the tilt angle control 10, state control 11, tunable power spectral distribution control 12 and position control 13 of the active illumination system 8. The various controllers in the active illumination system 8 can be readily controlled by the host computer 7. Fig. 2(a) shows a 3-dimensional view of the invention using 4 light sources for illuminating the object. Fig. 2(b) shows the tilt angle of the light source in relation to the object.
The operation of the present invention consists of two phases, namely the setup phase and the processing phase. In the setup phase, two stages are carried out, corresponding to the determination of the respective shadow boundaries 5 created by each of the light sources 4 and the optimization of the allocation of the power spectral distribution among the multiple light sources 4.
Setup Phase Stage 1: each of the multiple light sources 4 is sequentially switched on so that the shadow 5 cast by each individual light source 4 can be determined. A method of detecting the shadow boundaries 5 projected by the multiple light sources 4 is described below (a sketch of the capture loop is given after the steps):
i) Fix the power spectral distribution of each light source 4 such that each light source 4 has a dominant wavelength. The dominant wavelengths of the light sources 4 should, as far as possible, be equally distributed over the whole visible range. Fix all the light sources 4 at a specific tilt angle (α0) from the object 2. Select an initial lighting state of the light sources 4 and capture the view of the object 2 and its associated shadow 5 with the multi-spectral video source 1.
ii) Change the lighting state (i.e. which sources are turned on or off) of these light sources 4 to another state and capture the corresponding view with the same video source 1. Repeat this step, sequentially switching on one light source at a time.
iii) The shadow boundaries 5 cast by the simultaneous projections of the light sources 4 at a specific tilt angle (α0) are determined by accumulating the boundary information determined in step (ii) above. An example of the plan view of the shadow boundaries showing the relationship between the shadows and the state of the light sources is given in Fig. 3. Store the corresponding images of each shadow 5.
iv) Step (i) to step (iii) are repeated to determine all the shadow boundaries 5 when these light sources 4 are positioned at a new tilt angle, αi, from the object 2. Repeat this step for all the desired tilt angles of light projections.
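A minimal Python sketch of this capture loop is given below. The illumination, camera and extract_shadow_boundary objects are hypothetical stand-ins for the active illumination system 8, the multi-spectral video source 1 and the image processing system 6; none of their names or methods come from the patent.

def setup_stage_1(illumination, camera, extract_shadow_boundary, tilt_angles):
    # Record the shadow boundary cast by each individual light source at each tilt angle.
    # Returns {(tilt_angle, source_index): boundary}.
    boundaries = {}
    n = illumination.num_sources
    for alpha in tilt_angles:                              # step (iv): repeat for each tilt angle
        illumination.set_tilt_angle(alpha)                 # step (i): fix the tilt angle
        for k in range(n):                                 # steps (i)-(ii): one source on at a time
            illumination.set_states([i == k for i in range(n)])
            image = camera.capture()                       # capture with only source k switched on
            boundaries[(alpha, k)] = extract_shadow_boundary(image)  # step (iii): store the boundary
    return boundaries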
The total number of possible shadows 5, Ns, cast on the surfaces of the object 2 and the background 3 is given by equation (1):

Ns = No + Nb   (1)

where
n denotes the number of light sources 4 used in the system,
No denotes the number of self shadows 5 of the object 2, and
Nb denotes the number of shadows 5 cast on the background 3.

The total possible combinations of shadows 5 when three light sources 4 are used is therefore
Setup Phase Stage 2: optimization of the power spectral distribution of the light sources. This process is required when a new object 2 is introduced or when the reflectance characteristics of the object 2 or background 3 are expected to drift significantly over time.
The scene irradiance captured by the video source 1 is a function of a few parameters, as given in equation (2):

I(λ) = f(α, ρ(λ), S(λ), L, θ, E(λ))   (2)

where
I(λ) denotes the irradiance captured by the video source 1,
λ denotes the wavelength,
α denotes the direction of the light sources 4,
ρ(λ) denotes the reflectance function of the object 2,
S(λ) denotes the power spectral response of the video source 1,
L denotes the state of the light sources 4,
θ denotes the viewing direction of the video source 1, and
E(λ) denotes the power spectral distributions of the light sources 4.

Given the same projection direction of the light sources 4, the same state of the light sources 4 and the same viewing direction of the video source 1 on the same object 2, the irradiance captured by the video source 1 depends only on the power spectral distribution of the light sources 4.
As a result, the shadow boundaries 5 determined in stage 1 should remain the same under such conditions.
Two constraints have to be taken into consideration in optimizing the power spectral distribution of each of the light sources 4. The first constraint is that the irradiance of adjacent shadows 5 must have distinguishable power spectral distributions. This enables shadow boundaries 5 to be clearly defined when all the light sources 4 are projected simultaneously onto the object 2 and the background 3.
The Euclidean distance Dab is selected as the distance measure between two shadows, a and b, as given in equation (3). The multi-spectral video source employs the three primary colour components corresponding to Red, Green and Blue (R, G, B).
Dab = (|Ra| - |Rb|)² + (|Ga| - |Gb|)² + (|Ba| - |Bb|)²   (3)

where
Ra denotes the red colour component for shadow a,
Rb denotes the red colour component for shadow b,
Ga denotes the green colour component for shadow a,
Gb denotes the green colour component for shadow b,
Ba denotes the blue colour component for shadow a, and
Bb denotes the blue colour component for shadow b.

The average distance AD and the minimum distance Dmin between any two adjacent shadows 5 are given in equation (4a) and equation (4b) respectively.
AD = (1/Ns) Σi [ (1/Nni) Σj Dij ]   (4a)

where
i denotes the shadow number,
j denotes the shadows that are in contact with shadow i,
Ns denotes the number of shadows, and
Nni denotes the number of shadows that are in contact with shadow i.

Dmin = Min(Dab)   (4b)   for a, b ∈ Us

where Us denotes the set of all possible shadows.

As seen in Fig. 4, if shadow C is different from A and B, then it is possible to deduce the relevant changes in the object's dimensions or shape that cause the reduction in area of shadow A. Similarly, if C is the same as B, then it is possible to deduce the relevant changes in the object 2 dimensions or shape from the increase in the area of shadow B and also from the reduction in the area of shadow A. However, it is not possible to determine the changes in the object 2 shape or dimensions accurately if C is the same as A. This will happen if the difference in the power spectral distributions of the light sources 4 that contribute to shadow A and shadow B, E1 and E2, occurs only in the power spectral region where the reflectivity of the object 2 at region C is very small. To prevent this from happening, E1 and E2 must have substantially different magnitudes for each of their power spectral components. Therefore, the second constraint in optimizing the power spectral distributions of the light sources 4 is that the total power spectral distributions of the light sources 4 that contribute to each of the neighboring shadows 5 must have substantially different magnitudes in each of their power spectral components, namely Red, Green and Blue respectively.
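A minimal sketch of equations (3), (4a) and (4b) follows, assuming each shadow region is summarised by its mean (R, G, B) irradiance and that the adjacency between shadow regions is known; the data structures, and the nested averaging used for (4a), are one plausible reading rather than definitions taken verbatim from the patent.

def shadow_distance(rgb_a, rgb_b):
    # Equation (3): sum of squared differences of the R, G, B magnitudes.
    return sum((abs(ca) - abs(cb)) ** 2 for ca, cb in zip(rgb_a, rgb_b))

def distance_statistics(shadow_rgb, adjacency):
    # shadow_rgb: {shadow id: (R, G, B)}; adjacency: {shadow id: list of neighbouring shadow ids}.
    per_shadow_means = []
    d_min = float("inf")
    for i, neighbours in adjacency.items():
        dists = [shadow_distance(shadow_rgb[i], shadow_rgb[j]) for j in neighbours]
        if dists:
            per_shadow_means.append(sum(dists) / len(dists))   # inner average of (4a)
            d_min = min(d_min, min(dists))                      # running minimum for (4b)
    ad = sum(per_shadow_means) / len(per_shadow_means)          # outer average of (4a)
    return ad, d_min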
The optimization procedure to ensure robust vision inspection and measurement, as given in Fig. 5, is formulated as follows (a sketch of this search is given after the steps):
i) Select an initial set of power spectral distributions for the light sources 4 which satisfies constraints 1 and 2. Fix all the light sources 4 at a specific tilt angle (α0) from the object 2. Project all light sources 4 simultaneously onto the object 2 and background 3.
Capture the irradiance and area of each shadow 5 with the same video source 1.
ii) Evaluate the Euclidean distance between each pair of neighboring shadows 5. Find the minimum Euclidean distance (Dmin) and the average Euclidean distance (AD) over all the Euclidean distances determined above.
iii) Vary the power spectral distributions of the light sources 4 to a new set of values. Proceed to step (iv) if all possible combinations of power spectral distributions have been tried. Go to step (ii) if the selected power spectral distributions satisfy constraints 1 and 2. If not, repeat step (iii).
iv) The optimum power spectral distributions of the light sources 4 are the ones that give the best performance index as given in equation (5):

Index = k*Dmin + (1 - k)*AD   (5)

where
k denotes the adaptive weight for Dmin, and
1 - k denotes the adaptive weight for AD.

The weights assigned to the minimum Euclidean distance and the average Euclidean distance are set to 0.99 and 0.01 respectively. This enables the system to select the optimum power spectral distributions of the light sources 4 based on the highest value of the minimum Euclidean distance, Dmin. When several values of Dmin are about the same, the decision for the best power spectral distributions of the light sources 4 is based on the one that gives the maximum average Euclidean distance, AD.

v) Similarly, the optimum power spectral distributions of the light sources 4 when the tilt angle of light projections is at a new angle, αi, with respect to the object 2 are determined by repeating step (i) to step (iv). Repeat these steps until all the desired tilt angles of light projections have been covered.
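The following sketch shows the search of steps (i) to (v) for one tilt angle, reusing the distance_statistics helper from the sketch above. The candidate set, constraint check and capture routine are placeholders for the hardware and image-processing operations the patent describes, and k = 0.99 follows the weighting stated above.

def optimise_spectra(candidates, satisfies_constraints, measure_shadows, k=0.99):
    # candidates: iterable of candidate power spectral distribution settings.
    # measure_shadows(setting) -> (shadow_rgb, adjacency) captured under that setting.
    best_setting, best_index = None, float("-inf")
    for setting in candidates:                             # step (iii): try each combination
        if not satisfies_constraints(setting):             # constraints 1 and 2
            continue
        shadow_rgb, adjacency = measure_shadows(setting)   # steps (i)-(ii): project and capture
        ad, d_min = distance_statistics(shadow_rgb, adjacency)
        index = k * d_min + (1 - k) * ad                   # equation (5): favours worst-case separation
        if index > best_index:
            best_setting, best_index = setting, index
    return best_setting, best_index                        # step (iv): best-scoring setting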
Processing Phase: An Adaptive Procedure for Real-time Vision Inspection and Measurement
The shadow boundaries 5 for each tilt angle of the light source 4 projections, αi, were found in the system setup stage 1. For each αi, the optimum power spectral distributions of the light sources 4 and the irradiance (Ireference) and area of each shadow 5 were determined in the system setup stage 2. An adaptive procedure for real-time vision inspection and measurement is then formulated as follows (a sketch of this loop is given after the steps):
i) Set the tilt angle of the light source 4 projections, αi, to a specific angle according to the resolution required. Tune the power spectral distributions of the light sources 4 to the optimum ones that correspond to the selected tilt angle. Project all these light sources 4 onto the object 2 and background 3 simultaneously.
ii) Capture the respective irradiance pattern (Iactual) of all the shadows 5 and compare it against the irradiance pattern (Ireference) determined during the system setup stage 2. Proceed to step (iv) if more than half of the shadows 5 are matched, as this implies that there is no registration problem between the two shadow patterns.
iii) Shift the x-y table 9 that carries the object 2 and background 3 slightly and repeat step (ii). Note that this step is not necessary if the placement accuracy of the object 2 on the x-y table 9 is high when compared to the resolution required in step (i). The placement accuracy depends very much on the tolerances of the fixture used to hold the object 2.
iv) Any mismatch on any of the shadows 5 implies that there are changes in the object 2 shape or dimensions. The exact change is determined from the mismatched shadow number and the degree of mismatch of the shadow 5. Fig. 6 illustrates the concept.
v) Active control of the tilt angle of the light source 4 projections is then used to resolve any ambiguity caused by multiple changes in the object 2 shape or dimensions.
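The adaptive loop above might be organised as in the following sketch. Here system is a hypothetical object bundling the illumination control, the video source 1, the x-y table 9 and the interpretation routines; all method names are illustrative rather than defined by the patent, and the bound on table shifts is a safeguard added for the sketch.

def inspect_object(tilt_angle, optimum_spectra, reference_pattern, system, max_shifts=3):
    # reference_pattern: {shadow id: reference irradiance} recorded in setup stage 2.
    system.set_illumination(tilt_angle, optimum_spectra)           # step (i)
    actual = system.capture_shadow_pattern()                        # step (ii): {shadow id: irradiance}
    matched = {s for s in reference_pattern
               if system.matches(actual.get(s), reference_pattern[s])}
    shifts = 0
    # step (iii): if half or fewer shadows match, assume mis-registration and nudge the x-y table
    while len(matched) <= len(reference_pattern) / 2 and shifts < max_shifts:
        system.shift_table()
        actual = system.capture_shadow_pattern()
        matched = {s for s in reference_pattern
                   if system.matches(actual.get(s), reference_pattern[s])}
        shifts += 1
    # step (iv): remaining mismatches indicate changes in the object's shape or dimensions
    mismatched = [s for s in reference_pattern if s not in matched]
    return [system.infer_change(s, actual.get(s), reference_pattern[s]) for s in mismatched]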

Claims (4)

1. This invention is related to a method of recovering the three-dimensional information of objects by simultaneous acquisition and processing of multiple shadows cast by the concurrent projection of multiple light sources of different power spectral distribution.
2. The concurrent projections of multiple light sources of different power spectral distribution and illumination direction as claimed in claim 1 increase the acquisition and processing speed. The power spectral distribution of the light sources can be changed so that the identification of their respective shadow regions can be optimized. The number of light sources can be varied according to the shape of the object.
3. The operation of the invention as claimed in claim 1 consists of two phases, namely the setup phase and the processing phase. The setup phase consists of two stages corresponding to the determination of the respective shadow boundary 5 created by each of the light sources 4 and the optimization of the allocation of the power spectral distribution for the multiple light sources 4.
4. The method of claim 3 includes two constraints. The first constraint is that the irradiance of adjacent shadows 5 must have distinguishable power spectral distributions. This enables shadow boundaries 5 to be clearly defined when all the light sources 4 are projected simultaneously onto the object 2 and the background 3. The second constraint is that the total power spectral distribution of the light sources 4 that contribute to each of the neighboring shadows 5 must have substantially different magnitudes in each of its power spectral components.
GB9217462A 1992-08-17 1992-08-17 Three-dimentional information from shadows of concurrent projections of multiple light sources Withdrawn GB2270155A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB9217462A GB2270155A (en) 1992-08-17 1992-08-17 Three-dimentional information from shadows of concurrent projections of multiple light sources

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB9217462A GB2270155A (en) 1992-08-17 1992-08-17 Three-dimentional information from shadows of concurrent projections of multiple light sources

Publications (2)

Publication Number Publication Date
GB9217462D0 GB9217462D0 (en) 1992-09-30
GB2270155A true GB2270155A (en) 1994-03-02

Family

ID=10720484

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9217462A Withdrawn GB2270155A (en) 1992-08-17 1992-08-17 Three-dimentional information from shadows of concurrent projections of multiple light sources

Country Status (1)

Country Link
GB (1) GB2270155A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999033022A1 (en) * 1997-12-19 1999-07-01 Alfa Laval Agri Ab An apparatus for monitoring an animal related area
WO2002086815A1 (en) * 2001-04-23 2002-10-31 Koninklijke Philips Electronics N.V. Three-dimensional reconstruction from shadows
GB2464453A (en) * 2008-10-10 2010-04-21 Toshiba Res Europ Ltd Determining Surface Normals from Three Images
WO2010130962A1 (en) * 2009-05-14 2010-11-18 Airbus Operations (S.A.S.) Method and system for the remote inspection of a structure

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0172663A2 (en) * 1984-07-23 1986-02-26 Mutual Corporation Method and apparatus for inspecting tablets automatically

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0172663A2 (en) * 1984-07-23 1986-02-26 Mutual Corporation Method and apparatus for inspecting tablets automatically

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999033022A1 (en) * 1997-12-19 1999-07-01 Alfa Laval Agri Ab An apparatus for monitoring an animal related area
WO2002086815A1 (en) * 2001-04-23 2002-10-31 Koninklijke Philips Electronics N.V. Three-dimensional reconstruction from shadows
GB2464453A (en) * 2008-10-10 2010-04-21 Toshiba Res Europ Ltd Determining Surface Normals from Three Images
GB2464453B (en) * 2008-10-10 2010-09-08 Toshiba Res Europ Ltd An imaging system and method
WO2010130962A1 (en) * 2009-05-14 2010-11-18 Airbus Operations (S.A.S.) Method and system for the remote inspection of a structure
FR2945630A1 (en) * 2009-05-14 2010-11-19 Airbus France METHOD AND SYSTEM FOR REMOTELY INSPECTING A STRUCTURE
CN102439394A (en) * 2009-05-14 2012-05-02 空中客车营运有限公司 Method and system for the remote inspection of a structure
US9310189B2 (en) 2009-05-14 2016-04-12 Airbus Operations S.A.S. Method and system for the remote inspection of a structure

Also Published As

Publication number Publication date
GB9217462D0 (en) 1992-09-30

Similar Documents

Publication Publication Date Title
Pietikainen et al. Accurate color discrimination with classification based on feature distributions
US9518931B2 (en) Image inspection apparatus, image inspection method, image inspection program, computer-readable recording medium and recording device
CN1727983B (en) Strobe illumination
Ikeuchi et al. Determining grasp configurations using photometric stereo and the prism binocular stereo system
US8913825B2 (en) Specular edge extraction using multi-flash imaging
JP6403446B2 (en) Image inspection apparatus, image inspection method, image inspection program, computer-readable recording medium, and recorded apparatus
Tan et al. Illumination chromaticity estimation using inverse-intensity chromaticity space
EP0523152A1 (en) Real time three dimensional sensing system.
Angelopoulou et al. Spectral gradient: a material descriptor invariant to geometry and incident illumination
US7555159B2 (en) Image highlight correction using illumination specific HSV color coordinate
US20130223679A1 (en) Movement analysis and/or tracking system
US5369430A (en) Patter correlation type focus detecting method and focus detecting apparatus
JP2016170122A (en) Measurement device
EP1204069A3 (en) Object recognition using linear subspaces
JPH0521403B2 (en)
GB2270155A (en) Three-dimentional information from shadows of concurrent projections of multiple light sources
TWI503537B (en) Method of measuring measurement target
JP6568991B2 (en) Image inspection apparatus, image inspection method, image inspection program, computer-readable recording medium, and recorded apparatus
US10909713B2 (en) System and method for item location, delineation, and measurement
JPH06307833A (en) Surface unevenness shape recognizing device
JP2023043178A (en) Workpiece inspection and defect detection system utilizing color channels
El-Hakim A hierarchical approach to stereo vision
Slater et al. The illumination-invariant matching of deterministic local structure in color images
Ching et al. Concurrent acquisition and processing of multi-spectral shadow information for 3D computer vision
US20230252637A1 (en) System and method for improving image segmentation

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)