GB2452944A - Imaging system and method for generating a depth map using three light sources having different frequencies - Google Patents

Imaging system and method for generating a depth map using three light sources having different frequencies

Info

Publication number
GB2452944A
GB2452944A GB0718316A
Authority
GB
United Kingdom
Prior art keywords
data
imaging system
radiation
frame
depth map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB0718316A
Other versions
GB2452944A8 (en
GB0718316D0 (en
GB2452944B (en
GB2452944B8 (en
Inventor
Carlos Hernandez
Gabriel Julian Brostow
Roberto Cipolla
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Europe Ltd
Original Assignee
Toshiba Research Europe Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Research Europe Ltd filed Critical Toshiba Research Europe Ltd
Priority to GB0718316.3A priority Critical patent/GB2452944B8/en
Publication of GB0718316D0 publication Critical patent/GB0718316D0/en
Priority to JP2008240104A priority patent/JP2009081853A/en
Priority to US12/233,967 priority patent/US20090073259A1/en
Publication of GB2452944A publication Critical patent/GB2452944A/en
Publication of GB2452944B publication Critical patent/GB2452944B/en
Application granted granted Critical
Publication of GB2452944A8 publication Critical patent/GB2452944A8/en
Publication of GB2452944B8 publication Critical patent/GB2452944B8/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/586Depth or shape recovery from multiple images from multiple light sources, e.g. photometric stereo
    • G06T7/0073
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/254Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H04N5/247
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image

Abstract

A system for imaging a moving three dimensional (3D) object 1 comprises: at least three light sources 3, 5, 7, irradiating the object 1 from three different angles and each emitting radiation at a different frequency (wavelength); a video camera 9 to collect radiation from the three radiation sources, as reflected from the object; and an image processor to distinguish between the reflected signal from the three different radiation sources and generate a depth map of the 3D object. A corresponding depth imaging method is also independently claimed. Stored calibration data from a sample with the same surface characteristic as the imaged object may be used to determine the orientation (specifically, the surface normals) of the object; a calibration board (21, Figure 2) may be used to provide such data. The processor can determine shadow positions arising from object movement. The system is particularly directed to imaging non-rigid materials such as cloth, and to generating animation data.

Description

An Imaging System and Method

The present invention is concerned with the field of imaging systems which may be used to collect and display data for production of 3D images. The present invention may also be used to generate data for 2D and 3D animation of complex objects.
The field of 3D image production has largely been hampered by the time it takes to capture the data needed to produce a 3D film. Previously, 3D films have generally been perceived as a novelty as opposed to a serious recording format. Now, 3D image generation is seen as an important tool in the production of CG images.
One established method of producing 3D image data has been photometric stereo (see for example R. Woodham, "Photometric method for determining surface orientation from multiple images", Optical Eng., No. 1, pages 139-144, 1980), where photographs are taken of an object from different illumination directions. A single photograph is taken for each illumination direction. Thus, this is not a technique which can be used for capturing video of a moving object in real time.
The present invention addresses the above problem and in a first aspect provides an imaging system for imaging a moving three dimensional object, the system comprising: at least three light sources, irradiating the object from three different angles; a video camera provided to collect radiation from said three radiation sources which has been reflected from said object; and an image processor configured to generate a depth map of the three dimensional object, wherein each radiation source emits radiation at a different frequency and said image processor is configured to distinguish between the reflected signal from the three different radiation sources.
A. Petrov, "Light Color and Shape", Cognitive Processes and their Simulation, pages 350-358, 1987, discusses the use of colour for computing surface normals.
However, there has been no realisation that colour could be used to address the issue of recording 3D video in real time.
Further, the technique can be applied to recording data for complex objects such as cloth, clothing, knitted or woven objects, sheets etc. When recording data from a moving object, self-shadowing will occur and this will affect the data. Therefore, preferably, said processor is configured to determine the position of shadows arising as said object moves. The position of shadows is determined by locating sharp changes in the intensity of the signal measured from each of said radiation sources.
In a preferred embodiment, the processor is configured to determine the position of shadows before determining the position of surface normals for said object.
In a preferred embodiment, the apparatus further comprises a memory configured to store calibration data, said calibration data comprising data from a sample with a same surface characteristic as the object stored with information indicating the orientation of the surface of the sample. The processor may then be configured to determine the depth map for the object from the collected radiation using the calibration data.
The above may be achieved by using a calibration board and means to mount said calibration board, said calibration board having a part of its surface with the same surface characteristics as the object and said means to mount comprising means for determining the orientation of the surface of the calibration board.
Although the data gathering apparatus can stand alone, it may be incorporated as part of a 3D image generation apparatus further comprising means for displaying a three dimensional moving image from said depth map.
The system may also be used in 2D or 3D animation where the system comprises means for moving said generated depth map.
The system may also further comprise means for applying pattern to the depth map, the means configured to form a 3D template of the object from a frame of the depth map and determine the position of the pattern on said object of said frame and to deform said template with said pattern to match subsequent frames. The template may be deformed using a constraint that the deformations of the template must be compatible with the frame to frame optical flow of the original captured data. Preferably the template is deformed using the further constraint that the deformations be as rigid as the data will allow.
In a second aspect, the present invention provides a method for imaging a moving three dimensional object, the method comprising: irradiating said object with at least three light sources from three different angles, wherein each radiation source emits radiation at a different frequency; using a video camera to collect radiation from said three radiation sources which has been reflected from said object; distinguishing between the reflected signal from the three different radiation sources; and generating a depth map of the three dimensional object from the output of the video camera.
The method may be applied to animating cloth or other flexible materials.
The present invention will now be described with reference to the following non-limiting embodiments in which:

Figure 1 is a schematic of an apparatus in accordance with an embodiment of the present invention;

Figure 2 is a calibration board used to calibrate the apparatus of the present invention;

Figures 3A, 3B and 3C are a frame from a video of a moving object which is collected using a video camera and three different colour light sources illuminating the object from different positions; figure 3A shows the frame with the component of the image collected from the first light source, figure 3B shows the frame with the component of the image collected from the second light source and figure 3C shows the frame with the component of the image collected from the third light source; figure 3D shows the edges of the image determined by a Laplacian filter and figure 3E shows where the lights cast their shadows;

Figure 4A is an image of the model shown in figure 3 illuminated by all three lights and figure 4B shows the generated image;

Figures 5A, 5B and 5C are three frames of a jacket captured using the prior art technique of photometric stereo, where each frame A, B and C is individually captured using illumination from a different illumination direction; figure 5D is a 3D image generated from the data of figures 5A, B and C; Figure 5D is a frame from each of the light sources respectively described in the apparatus of figure 1; figure 5E is a frame captured by the apparatus of figure 1 and figure 5F is a 3D image generated from the frame of figure 5E;

Figure 6A is a frame from a video of a moving object wearing a jumper with texture being collected using a video camera and three different colour light sources and figure 6B is a 3D image generated from the data collected from the object shown in figure 6A;

Figure 7A is a series of frames of a dancer; figure 7B is a series of frames of a 3D image generated of the dancer of figure 7A with a colour pattern superimposed on the jumper of the dancer; figure 7C is a series of frames of the dancer of figure 7A showing an enhanced method of superimposing a colour image onto the dancer where the pattern uses a registration scheme with advected optical flow and figure 7D is a series of frames of the dancer of figure 7A using the advected optical flow of figure 7C with a rigidity constraint;

Figure 8 shows a 3D image viewed from 5 different angles; and

Figure 9 shows an articulated skeleton with a dress modelled in accordance with an embodiment of the present invention.
Figure 1 is a schematic of a system in accordance with an embodiment of the present invention used to image object 1. The object is illuminated by three different light sources 3, 5 and 7.
In this particular example, first light source 3 is a source of red (R) light, second light source 5 is a source of green (G) light and third light source 7 is a source of blue (B) light.
In this embodiment, the system is either provided indoors or outside in the dark to minimise background radiation affecting the data. The three lights 3, 5 and 7 are arranged laterally around the object 1 and are vertically positioned at levels between floor level and the height of the object 1. The lights are directed towards the object 1.
The angular separation between the three light sources 3, 5 and 7 is approximately 30 degrees in the plane of rotation about the object 1. Greater angular separation can make orientation dependent colour changes more apparent. However, if the light sources are too far apart, concave shapes in the object 1 are more difficult to distinguish since shadows cast by such shapes will extend over larger portions of the object, making data analysis more difficult. In a preferred arrangement each part of the object 1 is illuminated by all three light sources 3, 5 and 7.
Camera 9, which is positioned vertically below second light source 5, is used to record the object as it moves while being illuminated by the three lights 3, 5 and 7.
To calibrate the system, a calibration board of the type shown in figure 2 may be used.
The calibration board 21 comprises a square of cloth 23 and a pattern of circles 25.
Movement of the board 21 allows the homography between the camera 9 and the light sources 3, 5 and 7 to be calculated. Calculating the homography means calculating the light source directions relative to the camera. Once this has been done, zoom and focus can change during filming as these do not affect the colours or light directions. The cloth 23 also allows the association between colour and orientation to be measured.
To determine the shape, it is first necessary to determine the orientation of the normals to the surface for all points on the surface of the object to be imaged. This embodiment assumes that the three light sources 3, 5 and 7 induce a colour cue on every surface point which is dependent on the orientation of that surface point.
Thus, there is a one-to-one mapping M between the surface colour I and the orientation n:

I = M(n) or n = M^{-1}(I)

To determine M, photometric stereo techniques assume that the surface is a Lambertian surface and that the camera sensor response is linear:

I = [I_R, I_G, I_B]^T = L^T n + [b_R, b_G, b_B]^T = [L^T b] [n^T, 1]^T

where I is the RGB colour observed on the image, b is a constant vector that accounts for ambient light, n is the unit normal at the surface location and L is a 3x3 matrix where every column represents a 3D vector directed towards the light source and scaled by the light source intensity times the object albedo. The object albedo is the ratio of the reflected to incident light.
To simplify this example, it is assumed that the ratios of the colours are constant, i.e. the ratios R/B and B/G should be the same for each pixel in the image. This will allow the mapping between colours and surface orientation to be determined by estimating the 3x4 matrix [L^T b] up to a scale factor.
For many practical situations, it will be more difficult to calculate the mapping since the camera response is non-linear and the surface will not be a Lambertian reflector.
However, it is possible to use a calibration tool of the type shown in figure 2 to measure the mapping. If the surface material of the object which is to be imaged is placed in square 23 of the calibration board 21, it is possible to measure an image signal for each possible surface normal as part of a calibration sequence. Thus, the correspondence between surface normals n and material colour values I can be determined even for non-linear conditions and surfaces which do not have perfectly Lambertian reflectance characteristics.
The initial calibration routine, in which an image is captured for various known orientations of the board, does not need to be performed for every possible board orientation, as nearest-neighbour interpolation can be used to determine suitable data for all orientations. It is possible to capture data from just 4 orientations in order to provide calibration data for the 3x4 matrix. Good calibration data is achieved from around 50 orientations. However, since calibration data is easily collected, it is possible to obtain data from thousands of orientations.
Although the technique of using the calibration board can be used to determine complex mappings for non-Lambertian reflectors and cameras with non-linear response functions, it is still necessary to assume that the object albedo has constant chromaticity.
If this is not assumed, the mapping M is non-invertible and there will be several valid surface orientations for the same surface colour.
The object may also shadow itself during filming. Figure 3A is an image of a dancer wearing a spandex bodysuit taken using the system of figure 1. Figure 3A shows the image data from the red light source, figure 3B from the green light source and figure 3C from the blue light source. In this particular example, the red light source is to the dancer's right hand side, the green light source is in front of the dancer and the blue light source is to the dancer's left hand side. In the pose shown, the dancer turns to her right.
The shadow caused by her left leg on her right leg is more pronounced in figure 3C.
In the absence of a shadow, the reflected illumination from one channel, i.e. either red, green or blue, would be expected to vary smoothly. A sharp variation indicates the presence of an edge; these edges are determined for each channel by using a Laplace filter. The results of this analysis, which is carried out per channel, are shown in figure 3D.
The pixels which are determined to be edge pixels are then further analysed to determine gradient orientation. The pixels are analysed along each of the eight cardinal directions (i.e. north, south, east, west, north-west, south-west, north-east, south-east).
Pixels whose gradient magnitude falls below a threshold T are rejected. Adjoining pixels whose gradient directions agree are grouped into connected components.
The algorithm could also be used to determine the difference between boundary edges of the object and shadows. This is shown in figure 3E. In figure 3E, the boundary edges of the object occur where all three channels (RGB) show a sharp change in the intensity of the reflected signal.
From the above, a look-up shadow mask can be determined. The surface may then be reconstructed by first determining the position of the shadows using the above technique and then estimating the normal for all pixels where there is a good signal from all three lights, i.e. there is no shadow. The normal is estimated as described above.
If the signal from only two lights can be used, then the data can still be processed but constant albedo must be presumed, i.e. constant chromaticity and constant luminance.
Once the 2D grid of surface normals is produced, each frame of normals is integrated using a 2D Poisson solver or the like, for example a Successive Over-Relaxation (SOR) solver, to produce a video of depth maps or a surface mesh for each frame.
The generation of the surface mesh for each frame is subject to the boundary conditions of the shadow mask, which is used as the boundary condition for the Poisson solver.
Frame to frame coherency of silhouettes is also taken as a boundary condition.
To verify the accuracy of the technique a MacBeth colour chart was used. The chart was illuminated with each of the coloured lights in turn.
It was found that the technique compensated for impurities in the colours of the lights e.g. the red light produced small amounts of blue and green light in addition to the red light. Also, the technique compensated for colour balance functions that are often used in modern video cameras.
Figure 4a shows the dancer of figure 3 illuminated by all three RGB lights and figure 4b shows the reconstructed image. The dancer is wearing spandex, which is not a perfectly Lambertian material. Details can be seen on the reconstructions such as the seam 31 and the hip bones 33 of the dancer. Thus a moving image of the type shown in figure 4B can be produced in real time from the data taken in figure 4A.
Figure 5 is a comparison of the results between a conventional method and those of an embodiment of the present invention.
Figures 5A, 5B and 5C show three frames captured individually using the technique of photometric stereo. In photometric stereo, individual images are captured using a digital still camera. The data from the three images is then processed to form the 3D image of figure 5D according to a known method (see for example, R. Woodham, "Photometric method for determining surface orientation from multiple images", Optical Eng., No. 1, pages 139-144, 1980).
This can be compared with the method of the present invention as shown in figure 1 where three lights of different colours are used to illuminate the jacket as shown in figure 5E. The 3D image generated using the apparatus of figure 1 is shown in figure 5F.
Although the images of figures 5D and 5F are similar, only the image of figure 5F can be used as a frame in a real-time 3D video construction. Previously we have discussed issues which may affect the quality of the 3D image, namely impurity of the monochromatic sources and colour balance functions provided in the camera itself.
However, it was found that the error between the 3D image of figure 5D and that of figure 5F was only 1.4%; this error was calculated using the bounding box diagonal.
Figure 6 shows the reconstruction of a complicated textile material. Figure 6A shows a model wearing a jumper with a complicated texture pattern. The model is illuminated using three light sources as explained with reference to figure 1.
Figure 6B shows the image generated as explained with reference to figure 3 for the model of figure 6A. The complicated surface texture of the knit of the jumper can be clearly seen in the generated image.
However, clothing will often have a pattern which is provided by colour on the surface, either in addition to or instead of texture.
Figure 7A is a series of images of a dancing model ((i) to (vii)) taken using the apparatus described with reference to figure 1. The dancing model is wearing the same jumper which was reconstructed in figures 6A and 6B. However, figure 7 will be used to illustrate how a method in accordance with an embodiment can be used to apply a colour pattern to cloth.
In the results shown in figure 7 a colour video camera was used with a resolution of 1280 x 720. Computation times were of the order of 20 seconds per frame for the depth map recovery and a further 20 seconds per frame for the superposition of the pattern.
The computations were carried out using a 2.8GHz Pentium 4 processor with 2Gb of RAM.
Figure 7B shows a series of 3D images generated of the dancer of figure 7A. Each image of figure 7B corresponds to the frame ((i) to (vii)) of figure 7A shown directly above. Frames (i) to (vii) are selected frames from a sequence of frames:

Frame (i) - Frame no. 0
Frame (ii) - Frame no. 250
Frame (iii) - Frame no. 340
Frame (iv) - Frame no. 380
Frame (v) - Frame no. 427
Frame (vi) - Frame no. 463
Frame (vii) - Frame no. 508

In the first method of superimposing a colour pattern onto the dancer, the colour image, which is the words ICCV 07 and a green and yellow flag, is generated using the depth map data as described above. This can be seen to work well for frames (i) to (iii). However, in frame (iv) the flag is seen to deform well with the dancer's jumper, but the pattern stays on the same vertical level even though the dancer is moving down. Thus the pattern appears to be moving upwards relative to the dancer's jumper. This problem continues in frames (v) to (vii).
Figure 7C illustrates the results of an enhanced method for superposing a pattern onto the jumper. Here, the first depth map of the sequence (i) is used as a template which is deformed to match all subsequent depth maps.
This is done by letting z^t(u,v) be the depth map at frame t. A deformable template is set which corresponds to the depth map at frame 0; the template is a triangular mesh with vertices x_i, i = 1, ..., N, and a set of edges E. At frame t, the mesh is deformed to fit the t-th depth map by applying a translation T_i^t to each vertex x_i, so the i-th vertex at frame t moves to x_i + T_i^t. The images generated in figure 7C were generated using the constraint that the deformations of the template must be compatible with the frame-to-frame 2D optical flow of the original video sequence.
Frame-to-frame optical flow is first computed using a video of normal maps. A standard optical flow algorithm is then used (see for example M. Black and P. Anandan, "The robust estimation of multiple motions: parametric and piecewise smooth flow fields", Computer Vision and Image Understanding, volume 63(1), pages 75 to 104, January 1996) which, for every pixel location (u,v) in frame t, predicts the displacement d^t(u,v) of that pixel in frame t+1. Let (u^t, v^t) denote the position in frame t of a pixel which in frame 0 was at (u^0, v^0). (u^t, v^t) can be estimated by advecting d^t(u,v) using:

(u^j, v^j) = (u^{j-1}, v^{j-1}) + d^{j-1}(u^{j-1}, v^{j-1}), for j = 1, ..., t

If there was no error in the optical flow and the template from frame zero was deformed to match frame t, then vertex x_i in frame t would be displaced to the point:

y_i^t = (u_i^t, v_i^t, z^t(u_i^t, v_i^t))

This constraint can be formulated as an energy term comprising the sum of squared differences between the displaced vertex locations x_i + T_i^t and the positions predicted by the advected optical flow y_i^t at frame t:

E_D(T_1^t, ..., T_N^t) = Σ_i || x_i + T_i^t - y_i^t ||^2

The results of the above process are seen in figure 7C. Here it can be seen that the pattern deforms with the jumper and also remains at the same position relative to the jumper. However, looking at the top of the jumper it can be seen that stretching and other geometric artefacts are starting to occur. This is seen from frame (ii) and by frame (vii) the whole top of the jumper is seen to be distorted. These artefacts are caused by errors in the optical flow due to image noise or occlusions.
To address this issue a further constraint is added to bring rigidity into the picture. To regularise the deformation of the template mesh, translations applied to nearby vertices need to be kept as similar as possible. This is achieved by adding an energy term E_R:

E_R(T_1^t, ..., T_N^t) = Σ_{(i,j) ∈ E} || T_i^t - T_j^t ||^2

The above two terms are then combined:

E_TOT(T_1^t, ..., T_N^t) = a E_D + (1 - a) E_R

which is optimised with respect to T_1^t, ..., T_N^t for every frame t. For the optimisation an iterated scheme is used where each T_i^t is replaced with the optimal translation given that every other translation is held constant. This leads to:

T_i^t = a (y_i^t - x_i) + (1 - a) (1 / |N(i)|) Σ_{j ∈ N(i)} T_j^t

where N(i) is the set of neighbours of vertex i and a is a parameter indicating the degree of rigidity of the mesh. The results of this calculation are shown in figure 7D, where the pattern can be seen to move and deform with the dancer's jumper and no artefacts are seen in the jumper as the frames progress. In the experiment shown the pattern tracked the jumper for more than 500 frames.
Figure 8 shows 5 views from different angles of the 3D image of the dancer of figures 6 and 7 (frame (iv) of figure 7). The images are shown without the colour pattern. The details of the jumper can be seen in all five views. The mesh contains approximately 180,000 vertices.
The data described with reference to figures 6, 7 and 8 shows how an embodiment of the present invention can be used for modelling cloth and cloth with both complex texture patterns and complex colour patterns.
Figure 9 shows how an embodiment of the present invention can be used for modelling cloth for animation. In figure 9, the moving mesh of figures 6 and 7 is attached to an articulated skeleton.
Skinning algorithms are well known in the art of computer animation. To generate the character of figure 9, a smooth skinning algorithm is used in which each vertex v_k is attached to one or more skeleton joints and the link to each joint j is weighted by w_{j,k}. The weights control how much the movement of each joint affects the transformation of a vertex.
v_k^t = Σ_j w_{j,k} S_j^t v_k, where Σ_j w_{j,k} = 1

The matrix S_j^t represents the transformation from joint j's local space to world space at time instant t.
The mesh was attached to the skeleton by first aligning a depth map of the fixed dress with a fixed skeleton and finding, for each mesh vertex, a set of nearest neighbours on the skeleton. The weights are set inversely proportional to these distances. The skeleton is then animated using publicly available mocap data (Carnegie Mellon mocap database, http://mocap.cs.cmu.edu). The mesh is animated by playing back one of the captured cloth sequences.

Claims (20)

  1. An imaging system for imaging a moving three dimensional object, the system comprising: at least three light sources, irradiating the object from three different angles; a video camera provided to collect radiation from said three radiation sources which has been reflected from said object; and an image processor configured to generate a depth map of the three dimensional object, wherein each radiation source emits radiation at a different frequency and said image processor is configured to distinguish between the reflected signal from the three different radiation sources.
  2. 2. An imaging system according to any preceding claim, further comprising a memory configured to store calibration data, said calibration data comprising data from a sample with a same surface characteristic as the object stored with information indicating the orientation of the surface of the sample.
  3. 3. An imaging system according to claim 2, wherein said processor is configured to determine a plurality of surface normals for the object from the collected radiation using the calibration data.
  4. 4. An imaging system according to either of claims 2 or 3, further comprising a calibration board and means to mount said calibration board, said calibration board having a part of its surface with the same surface characteristics as the object and said means to mount comprising means for determining the orientation of the surface of the calibration board.
  5. 5. An imaging system according to any preceding claim, wherein said processor is configured to determine the position of shadows arising as said object moves.
  6. 6. An imaging system according to claim 5, wherein the position of shadows is determined by locating sharp changes in the intensity of the signal measured from each of said radiation sources.
  7. 7. An imaging system according either of claims 5 or 6, wherein said processor is configured to determine the position of shadows before determining the position of surface normals for said object.
  8. 8. An imaging system according to any preceding claim, wherein said object comprises a non-rigid material.
  9. 9. An imaging system according to any preceding claim, wherein said object is cloth.
  10. 10. A system for generating three dimensional images comprising an imaging system according to any preceding claim and means for displaying a three dimensional moving image from said depth map.
  11. 11. A system for generating animation data, said system comprising an imaging system according to any of claims 1 to 9 and means for moving said generated depth map.
  12. 12. A system according to either of claims 10 or 11, further comprising means for applying pattern to the depth map, the means configured to form a 3D template of the object from a frame of the depth map and determine the position of the pattern on said object of said frame and to deform said template with said pattern to match subsequent frames.
  13. 13. A system according to claim 12, wherein said template is deformed using a constraint that the deformations of the template must be compatible with the frame to frame optical flow of the original captured data.
  14. 14. A system according to claim 13, wherein the template is deformed using the further constraint that the deformations be as rigid as the data will allow.
  15. 15. A method for imaging a moving three dimensional object, the method comprising: irradiating said object with at least three light sources from three different angles, wherein each radiation source emits radiation at a different frequency; using a video camera to collect radiation from said three radiation sources which has been reflected from said object; distinguishing between the reflected signal from the three different radiation sources; and generating a depth map of the three dimensional object from the output of the video camera.
  16. 16. A method according to claim 15, further comprising storing calibration data, said calibration data comprising data from a sample with a same surface characteristic as the object stored with information indicating the orientation of the surface of the sample.
  17. 17. A method according to either of claims 15 or 16, further comprising determining the position of shadows arising as said object moves.
  18. 18. A method according to claim 17, wherein the position of shadows is determined by locating sharp changes in the intensity of the signal measured from each of said radiation sources.
  19. 19. A method according to either of claims 17 or 18, wherein the position of shadows is determined before determining the position of surface normals for said object.
  20. 20. A method of animating cloth, the method comprising: imaging cloth according to the method of any of claims 15 to 19 and animating said generated depth map.
GB0718316.3A 2007-09-19 2007-09-19 An imaging system and method Active GB2452944B8 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB0718316.3A GB2452944B8 (en) 2007-09-19 2007-09-19 An imaging system and method
JP2008240104A JP2009081853A (en) 2007-09-19 2008-09-19 Imaging system and method
US12/233,967 US20090073259A1 (en) 2007-09-19 2008-09-19 Imaging system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0718316.3A GB2452944B8 (en) 2007-09-19 2007-09-19 An imaging system and method

Publications (5)

Publication Number Publication Date
GB0718316D0 GB0718316D0 (en) 2007-10-31
GB2452944A true GB2452944A (en) 2009-03-25
GB2452944B GB2452944B (en) 2010-08-11
GB2452944A8 GB2452944A8 (en) 2016-09-14
GB2452944B8 GB2452944B8 (en) 2016-09-14

Family

ID=38670193

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0718316.3A Active GB2452944B8 (en) 2007-09-19 2007-09-19 An imaging system and method

Country Status (3)

Country Link
US (1) US20090073259A1 (en)
JP (1) JP2009081853A (en)
GB (1) GB2452944B8 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2490872A (en) * 2011-05-09 2012-11-21 Toshiba Res Europ Ltd Capturing 3D image data by combining an image normal field derived from multiple light source illumination with depth map data

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11157977B1 (en) * 2007-10-26 2021-10-26 Zazzle Inc. Sales system using apparel modeling system and method
US8217993B2 (en) * 2009-03-20 2012-07-10 Cranial Technologies, Inc. Three-dimensional image capture system for subjects
JP5423544B2 (en) * 2010-04-02 2014-02-19 セイコーエプソン株式会社 Optical position detector
JP5423543B2 (en) * 2010-04-02 2014-02-19 セイコーエプソン株式会社 Optical position detector
JP5423542B2 (en) * 2010-04-02 2014-02-19 セイコーエプソン株式会社 Optical position detector
JP5423545B2 (en) * 2010-04-02 2014-02-19 セイコーエプソン株式会社 Optical position detector
US8922547B2 (en) * 2010-12-22 2014-12-30 Electronics And Telecommunications Research Institute 3D model shape transformation method and apparatus
EP2671383B1 (en) 2011-02-04 2017-03-15 Koninklijke Philips N.V. Method of recording an image and obtaining 3d information from the image, camera system
MA41117A (en) 2014-12-05 2017-10-10 Myfiziq Ltd IMAGING OF A BODY
EP3460753A1 (en) * 2017-09-21 2019-03-27 Infaimon, SL Photometric stereo system and method for inspecting objects with a one-shot camera and a computer program
GB201715876D0 (en) * 2017-09-29 2017-11-15 Univ Strathclyde Wireless optical communication and imaging systems and methods
TWI680436B (en) * 2018-12-07 2019-12-21 財團法人工業技術研究院 Depth camera calibration device and method thereof
JP7193425B2 (en) * 2019-07-18 2022-12-20 株式会社ミマキエンジニアリング 3D data generation device, 3D data generation method, and molding system
GB2586157B (en) * 2019-08-08 2022-01-12 Toshiba Kk System and method for performing 3D imaging of an object
CN111340949B (en) * 2020-05-21 2020-09-18 超参数科技(深圳)有限公司 Modeling method, computer device and storage medium for 3D virtual environment
FR3133095B1 (en) * 2022-02-25 2024-02-23 Psa Automobiles Sa Method for calculating three-dimensional surfaces for a vehicle equipped with a driving assistance system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6064478A (en) * 1995-03-29 2000-05-16 Fraunhofer-Gesellschaft Zur Forderung Der Angewandten Forschung E.V. Method of and apparatus for automatic detection of three-dimensional defects in moving surfaces by means of color vision systems
EP1126412A2 (en) * 2000-02-16 2001-08-22 Fuji Photo Film Co., Ltd. Image capturing apparatus and distance measuring method
US20020146153A1 (en) * 2001-02-08 2002-10-10 Jinlian Hu Three dimensional measurement, evaluation and grading system for fabric/textile structure/garment appearance
US20040201586A1 (en) * 2000-08-30 2004-10-14 Microsoft Corporation Facial image processing methods and systems
EP1604333A1 (en) * 2003-03-20 2005-12-14 Agilent Technologies, Inc. Optical inspection system, apparatus and method for reconstructing three-dimensional images for printed circuit board and electronics manufacturing inspection

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6064478A (en) * 1995-03-29 2000-05-16 Fraunhofer-Gesellschaft Zur Forderung Der Angewandten Forschung E.V. Method of and apparatus for automatic detection of three-dimensional defects in moving surfaces by means of color vision systems
EP1126412A2 (en) * 2000-02-16 2001-08-22 Fuji Photo Film Co., Ltd. Image capturing apparatus and distance measuring method
US20040201586A1 (en) * 2000-08-30 2004-10-14 Microsoft Corporation Facial image processing methods and systems
US20020146153A1 (en) * 2001-02-08 2002-10-10 Jinlian Hu Three dimensional measurement, evaluation and grading system for fabric/textile structure/garment appearance
EP1604333A1 (en) * 2003-03-20 2005-12-14 Agilent Technologies, Inc. Optical inspection system, apparatus and method for reconstructing three-dimensional images for printed circuit board and electronics manufacturing inspection

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2490872A (en) * 2011-05-09 2012-11-21 Toshiba Res Europ Ltd Capturing 3D image data by combining an image normal field derived from multiple light source illumination with depth map data
GB2490872B (en) * 2011-05-09 2015-07-29 Toshiba Res Europ Ltd Methods and systems for capturing 3d surface geometry
US10097813B2 (en) 2011-05-09 2018-10-09 Kabushiki Kaisha Toshiba Methods and systems for capturing 3D surface geometry

Also Published As

Publication number Publication date
GB2452944A8 (en) 2016-09-14
JP2009081853A (en) 2009-04-16
GB0718316D0 (en) 2007-10-31
GB2452944B (en) 2010-08-11
US20090073259A1 (en) 2009-03-19
GB2452944B8 (en) 2016-09-14

Similar Documents

Publication Publication Date Title
US20090073259A1 (en) Imaging system and method
CN104335005B (en) 3D is scanned and alignment system
CN106643699B (en) Space positioning device and positioning method in virtual reality system
JP4245963B2 (en) Method and system for calibrating multiple cameras using a calibration object
US9734609B2 (en) Transprojection of geometry data
JP6507730B2 (en) Coordinate transformation parameter determination device, coordinate transformation parameter determination method, and computer program for coordinate transformation parameter determination
Hernández et al. Non-rigid photometric stereo with colored lights
JP4401727B2 (en) Image display apparatus and method
JP5586594B2 (en) Imaging system and method
JP5236219B2 (en) Distortion correction and integration method using divided imaging, mapping function generation method therefor, distortion correction and integration device using divided imaging, and mapping function generation apparatus therefor
CN105869160A (en) Method and system for implementing 3D modeling and holographic display by using Kinect
KR20130138247A (en) Rapid 3d modeling
CN112016570B (en) Three-dimensional model generation method for background plate synchronous rotation acquisition
JP2013171523A (en) Ar image processing device and method
KR100834157B1 (en) Method for Light Environment Reconstruction for Image Synthesis and Storage medium storing program therefor.
JP2002071315A (en) Projection planar measuring system
CN111340959B (en) Three-dimensional model seamless texture mapping method based on histogram matching
Gard et al. Projection distortion-based object tracking in shader lamp scenarios
TWI501193B (en) Computer graphics using AR technology. Image processing systems and methods
CN111445528B (en) Multi-camera common calibration method in 3D modeling
CN104732586B (en) A kind of dynamic body of 3 D human body and three-dimensional motion light stream fast reconstructing method
RU2735066C1 (en) Method for displaying augmented reality wide-format object
JP2006221599A (en) Method and apparatus for generating mapping function, and compound picture develop method, and its device
CN113643436A (en) Depth data splicing and fusing method and device
JP2005063041A (en) Three-dimensional modeling apparatus, method, and program

Legal Events

Date Code Title Description
S117 Correction of errors in patents and applications (sect. 117/patents act 1977)

Free format text: REQUEST FILED; REQUEST FOR CORRECTION UNDER SECTION 117 FILED ON 20 JULY 2016.

S117 Correction of errors in patents and applications (sect. 117/patents act 1977)

Free format text: CORRECTIONS ALLOWED; REQUEST FOR CORRECTION UNDER SECTION 117 FILED ON 20 JULY 2016, ALLOWED ON 06 SEPTEMBER 2016.