GB2443004A - Multi camera and waveband imaging apparatus - Google Patents


Info

Publication number
GB2443004A
GB2443004A (Application GB0620380A)
Authority
GB
United Kingdom
Prior art keywords
image
gathering means
image gathering
waveband
imaging system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0620380A
Other versions
GB0620380D0 (en)
Inventor
Tom Heseltine
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AURORA COMP SERVICES Ltd
Original Assignee
AURORA COMP SERVICES Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by AURORA COMP SERVICES Ltd filed Critical AURORA COMP SERVICES Ltd
Priority to GB0620380A priority Critical patent/GB2443004A/en
Publication of GB0620380D0 publication Critical patent/GB0620380D0/en
Priority to PCT/GB2007/050607 priority patent/WO2008047157A1/en
Publication of GB2443004A publication Critical patent/GB2443004A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G06K9/00255
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/40Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N25/41Extracting pixel data from a plurality of image sensors simultaneously picking up an image, e.g. for increasing the field of view by combining the outputs of a plurality of sensors
    • H04N3/1593

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

There is disclosed an imaging system comprising at least first 1 and second 2 image gathering means angularly disposed relative to each other and provided with beam splitter means 5 such that both image gathering means are arranged so as to be effectively coaxial and to gather first and second substantially coincident images of a given object 3. The first image gathering means is configured to gather image information in a first waveband, and the second image gathering means is configured to gather image information in a second waveband different to the first waveband. There is further provided means for projecting 4 a predetermined pattern onto the object which pattern is detected by the second image gathering means and not by the first image gathering means. Embodiments of the present invention are particularly useful in human face recognition applications, especially in relation to identity verification where fast throughput is required, for example at airport check-in desks.

Description

DUAL WAVEBAND SINGLE AXIS CAMERA FOR MACHINE VISION
BACKGROUND
Significant effort and interest has been applied to automatic recognition of objects for a wide range of applications. The present application relates to the use of human face recognition as an example of recognition systems, but the techniques detailed within the application have application in a wide range of machine vision applications.
Recognition systems mainly but not exclusively work on the basis of 2-dimensional images or 3-dimensional images. 3-dimensional images are built by techniques such as stereoscopic vision or laser line scanning of a rotating object. An alternative is the use of 2-dimensional images with the addition of depth information. An academic paper which explains these techniques in detail is 'Capturing 2½D Depth and Texture of Time-Varying Scenes Using Structured Infrared Light', Christian Frueh and Avideh Zakhor, Department of Computer Science and Electrical Engineering, University of California, Berkeley. There is also known patent application GB 2410794 A, 'Apparatus and methods for three dimensional scanning', Marcos A Rodrigues, Alan Robinson and Lyuba Alboul, Sheffield Hallam University. This technique produces two 2-d images: a normal image from a standard camera system operating in the visible spectrum, and an image with the projected line structure superimposed on it. The normal image may be monochrome or colour, but in face recognition systems is usually colour. Systems for product or object recognition may use monochrome where colour information is less important. A line projection system is used to project a line structure onto the target object. The projector is placed on a different axis to the camera such that distortion in the pattern as seen by the camera provides depth information (Fig 1). This is explained in depth by GB 2410794. The depth information then provides a framework on which the 2-d image may be rendered, providing a limited 3-d model of the subject. The main limitation of this modelling method is that the model contains only information available from the 2-d camera, and no data on the obscured parts of the subject is available from a single picture. Multiple pictures taken of the object rotated in the viewing region may be used to build a more detailed 3-d model.
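The off-axis depth recovery just described reduces to simple triangulation. The sketch below is illustrative only (a simplified pinhole model with the baseline perpendicular to the optical axis, not the specific geometry of GB 2410794): the depth of a surface point follows from the observed lateral displacement of a projected line.

```python
def depth_from_displacement(baseline_mm: float, focal_px: float,
                            displacement_px: float) -> float:
    """Triangulated depth for an off-axis structured-light setup.

    Simplified pinhole model: a projected line observed shifted by
    `displacement_px` pixels corresponds to a depth of roughly
    baseline * focal / displacement (the standard disparity relation).
    All parameter values used below are illustrative.
    """
    if displacement_px <= 0:
        raise ValueError("displacement must be positive")
    return baseline_mm * focal_px / displacement_px

# e.g. a 100 mm projector-camera baseline and an 800 px focal length:
# an 80 px line shift places the surface at about 1 metre.
print(depth_from_displacement(100, 800, 80))  # -> 1000.0
```

Note the inverse relation: nearby surfaces displace the line further, which is why the distortion of the pattern encodes depth at all.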
The system described in 'Capturing 2½D Depth and Texture of Time-Varying Scenes Using Structured Infrared Light' shows how an infrared projection system may be used, with a second camera being sensitive to infrared. The visible image may then be used without a visible projected structure being present. In facial recognition systems, the visible image may then be used for 2-d recognition and operator monitoring. The two images from the two cameras do, however, have distortion problems that become significant as the distance of the object from the camera changes. The present application describes embodiments of an invention that removes this image distortion, such that the pattern-generated framework remains essentially spatially identical to the normal visible image, irrespective of the distance of the object from the cameras.
Fig 1 shows a typical configuration where two cameras (1, 2) are mounted close to each other and a pattern projector (3), or laser pattern generator (3), is placed off axis. It can clearly be seen (Figures 2, 3) that differences will exist within the images from the two cameras, simply because of the different camera positions. This can be improved at a fixed distance by rotating the cameras as in Figure 4.
This configuration gives a better result at a fixed distance, however the object is viewed from two different locations and results in some features observed by camera (1) not being seen by camera (2). In addition, image alignment only occurs at a fixed distance.
If the object moves towards or away from the camera system the two images will separate out.
Figure 5 shows that a displacement error of 12.5mm occurs when the object moves 250mm either side of a nominal distance of 1 metre from two cameras that are separated by 50mm.
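The 12.5mm figure follows from similar triangles: if the two optical axes are separated by baseline b at the cameras and converge at distance z0, their lateral separation at depth z is b·|z − z0|/z0. A quick check with the numbers from the text (a sketch of the geometry, not part of the disclosed apparatus):

```python
def image_separation_mm(baseline_mm: float, converge_mm: float,
                        object_mm: float) -> float:
    """Lateral separation of the two camera axes at the object's depth.

    The axes are `baseline_mm` apart at the cameras and cross at
    `converge_mm`; by similar triangles, their separation at depth z is
    baseline * |z - converge| / converge.
    """
    return baseline_mm * abs(object_mm - converge_mm) / converge_mm

# 50 mm baseline, converged at 1 m, object moved 250 mm either way:
print(image_separation_mm(50, 1000, 1250))  # -> 12.5
print(image_separation_mm(50, 1000, 750))   # -> 12.5
```

The error grows linearly with the object's departure from the convergence distance, which is why a fixed toe-in can never align the images over a working range.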
This means that when an object such as a face is at a 1 metre distance from the cameras, the visible colour image can be overlaid onto the IR pattern derived depth data with a high degree of accuracy. This is particularly important when considering the effect of rendering key features, such as the eyes, onto the depth data.
If the object moves forwards or backwards by 250mm, the two images separate out by 12.5mm. In facial terms this is the equivalent of moving the eye and placing it on the bridge of the nose. This causes significant problems in achieving accurate rendering of the colour image onto the depth data. Software algorithms may be employed to manipulate the two images to correct for this error, but this requires knowing the distance of the object to the camera in order to work out the correction.
In machine vision applications this may become more of a problem if the object has large depth variability with respect to distance from the camera. In these instances, significant errors in both directions may occur on the same image.
BRIEF SUMMARY OF THE DISCLOSURE
According to a first aspect of the present invention, there is provided an imaging system comprising at least first and second image gathering means angularly disposed relative to each other and provided with beam splitter means such that both image gathering means are arranged so as to be effectively coaxial and to gather first and second substantially coincident images of a given object; wherein the first image gathering means is configured to gather image information in a first waveband, wherein the second image gathering means is configured to gather image information in a second waveband different to the first waveband, and wherein there is further provided means for projecting a predetermined pattern onto the object which pattern is detected by the second image gathering means and not by the first image gathering means.
According to a second aspect of the present invention, there is provided a method of generating image data, wherein at least first and second image gathering means angularly disposed relative to each other and provided with beam splitter means are arranged so as to be effectively coaxial and to gather first and second substantially coincident images of a given object; wherein the first image gathering means gathers image information in a first waveband, and the second image gathering means gathers image information in a second waveband different to the first waveband, and wherein a predetermined pattern is projected onto the object which pattern is detected by the second image gathering means and not by the first image gathering means.
The first and second images may be mirror images of each other, but in all cases will effectively be coaxial and have substantially identical perspectives. This means that there is substantially no distortion of one image relative to the other, regardless of the distance of the object from the image gathering means.
The image gathering means may be cameras or video cameras or CCD devices or the like.
In other words, embodiments of the present invention use a beam splitter to combine two cameras, such that they appear on the same axis, removing image errors irrespective of distance.
The use of a beam splitter combines the viewing region of the two cameras so that they are coaxial and with correct adjustment of the lens/magnification of the image. Two images are produced that are essentially identical with the exception that one may be a mirror image of the other. This mirror image may be removed either in software or by camera hardware.
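Removing the mirror image in software amounts to a single left-right flip. A minimal sketch using NumPy (the array contents are illustrative):

```python
import numpy as np

def unmirror(image: np.ndarray) -> np.ndarray:
    """Undo the left-right reversal introduced by the beam splitter."""
    return np.fliplr(image)

# After the flip, the pattern camera's image is pixel-aligned with the
# colour camera's image, since the two share one effective optical axis.
ir_image = np.array([[1, 2, 3],
                     [4, 5, 6]])
aligned = unmirror(ir_image)
print(aligned.tolist())  # -> [[3, 2, 1], [6, 5, 4]]
```

The same operation can equally be performed in camera hardware, as the text notes; the software flip is simply the cheaper retrofit.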
The first image gathering means may be a colour camera giving a normal 2-d image, and the second image gathering means may be an infrared camera providing an image of the projected pattern structure. In this instance, the depth information provided by the second image gathering means provides an accurate frame onto which the 2-d image may be rendered, irrespective of the distance of the object from the cameras.
An embodiment of this technique allows the use of low cost CCTV-style cameras, where one camera is a colour camera and the other camera is a monochrome camera with a response into the IR or near-IR region. In this embodiment, a waveband selective beam splitter may be employed to separate the visible image from the infrared.
In an alternative embodiment, the IR or near-IR pattern image is removed from the visible colour camera and the visible image removed from the monochrome camera. This is done with minimal loss of light to the camera(s), as a very high proportion of both the visible and IR/near-IR light is passed to the appropriate camera. This is in contrast to the first embodiment, where all wavebands are shared between the cameras, resulting in lower light intensity availability at each camera.
In both the first and second embodiments, each camera may be provided with its own imaging lens. Alternatively, the beam splitter may be placed behind an imaging lens or compound lens that acts as a lens for both cameras. This embodiment allows easy control of image magnification, as a zoom lens may be employed. This creates an imaging system that behaves as a single camera but generates both visible and infrared pattern information.
This allows the use of a wide range of lens options such as zoom, auto iris, auto focus etc, without creating images that separate out under different lens configurations. In systems that require depth information only, the pattern-generated image is all that is needed. However, positioning the target object correctly within the viewing region may be important in some applications. In these applications, visual feedback may be achieved by viewing the image seen by the camera(s) on a display system. The use of the pattern image in facial recognition systems for visual feedback may prove to be quite disconcerting, particularly to those using the system for the first time. As infrequent use may be the norm in security applications, the use of this image may reduce the acceptability of the system. The use of the dual camera system, whereby a plain monochrome or colour image is used as the visual feedback, significantly improves the public's acceptance of the system. If the configuration described in this connection is employed, the improved feedback performance is achieved without reducing system accuracy.
In an embodiment combining a normal image with a non-encoded pattern structure for 3-d data as described above, problems with occlusions may arise. This problem occurs when features on the object cast shadows seen by the camera(s). Step changes within the object may also cause patterns to be displaced such that incorrect connections are made between patterns.
In order to demonstrate this problem, one may consider a flat object placed in front of a larger flat object. The distance between the two objects is such that the vertical displacement of the patterns on the nearest object is one line with respect to the far object. As there is no coding on the lines, line-tracing software may assume that a particular given line is a continuous line across the image. In fact, the given line becomes displaced and becomes an adjacent or near-adjacent line on the nearest object. Line tracing will therefore assume the image is a flat single object, not a flat object in front of a second flat object. Techniques to resolve this problem have been developed in such patent applications as GB2410794 by Sheffield Hallam University. Alternatively, many other systems of encoding the lines have been developed.
Coded patterns make each line unique by colour, shape or time displacement. In the case of colour, multi-coloured stripes are projected, allowing unique identification of lines. However, these produce visible patterns on the image. In order to achieve both a normal colour image and depth data, two time-displaced images are captured: a normal colour image followed by a pattern-projected image. The time displacement causes motion-related problems, with the subject being in two different locations in the two images.
The use of the beam splitter allows the use of colour-encoded patterns to be used at the same instant in time as capturing a normal colour image seen without the pattern projection.
Accordingly, further embodiments of the present invention may employ a plurality of beam splitters, each adapted to separate out a predetermined waveband of incident image light. For example, a first beam splitter (closest to the object) may be configured to separate out light above a first given wavelength and to allow passage of light below the first given wavelength. One or more subsequent beam splitters may be configured to separate out light above progressively longer wavelengths, in each case allowing passage of light above a respective given wavelength. Alternatively, the beam splitters may be configured to allow passage of respectively shorter wavelengths, and to separate out light below progressively shorter wavelength thresholds.
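The cascade can be modelled as a chain of splitters, each diverting light below (or above) its cut-off wavelength and passing the remainder on. The sketch below, with purely illustrative cut-off values, routes a wavelength to the index of the camera that would receive it:

```python
def route_wavelength(cutoffs_nm, wavelength_nm):
    """Index of the camera a cascaded splitter chain delivers light to.

    `cutoffs_nm` holds the splitters' cut-off wavelengths in ascending
    order; splitter i diverts light below cutoffs_nm[i] to camera i and
    passes the rest down the chain, so n cut-offs define n+1 cameras
    and n+1 wavebands. All values here are illustrative assumptions.
    """
    for i, cutoff in enumerate(sorted(cutoffs_nm)):
        if wavelength_nm < cutoff:
            return i
    return len(cutoffs_nm)            # longest-wavelength camera

cutoffs = [500, 600, 700]             # three splitters -> four bands
print(route_wavelength(cutoffs, 450))  # -> 0 (blue band)
print(route_wavelength(cutoffs, 650))  # -> 2 (red band)
print(route_wavelength(cutoffs, 850))  # -> 3 (near-IR pattern camera)
```

Reversing the comparison models the alternative arrangement in which each splitter passes shorter wavelengths and diverts the longer ones.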
Therefore, this technique has the ability to recover patterns, projected in the corresponding wavelengths in the IR, near-IR or other spectra, simultaneously together with a visible image. The number of cameras, wavelengths and pattern colours is determined by the total spectral bandwidth employed, and the bandwidth of the individual patterns/beam splitters. The above discusses the spectrum in terms of IR and visible, however this distinction is arbitrary in nature. The visible spectrum may be divided in the same way as the IR.
Moreover, multiple projecting means may be provided to project multiple patterns at different wavelengths or wavebands corresponding to multiple cameras or beam splitters. This allows a distribution of projectors to remove the effect of the occlusions whilst avoiding the problem of time-multiplexed systems, where motion of the subject creates distortion of the final image.
Throughout the description and claims of this specification, the words "comprise" and "contain" and variations of the words, for example "comprising" and "comprises", mean "including but not limited to", and are not intended to (and do not) exclude other moieties, additives, components, integers or steps.
Throughout the description and claims of this specification, the singular encompasses the plural unless the context otherwise requires. In particular, where the indefinite article is used, the specification is to be understood as contemplating plurality as well as singularity, unless the context requires otherwise.
Features, integers, characteristics, compounds, chemical moieties or groups described in conjunction with a particular aspect, embodiment or example of the invention are to be understood to be applicable to any other aspect, embodiment or example described herein unless incompatible therewith.
BRIEF DESCRIPTION OF THE DRAWINGS
For a better understanding of the present invention, and to show how it may be carried into effect, reference shall now be made to the accompanying drawings, in which:
FIGURE 1 shows a prior art camera system;
FIGURE 2 shows an image gathered by a first camera of the Figure 1 system;
FIGURE 3 shows an image gathered by a second camera of the Figure 1 system;
FIGURE 4 shows a modification of the Figure 1 system;
FIGURE 5 illustrates how displacement errors occur in the Figure 1 system;
FIGURE 6 shows a first embodiment of the present invention;
FIGURE 7 shows a second embodiment of the present invention;
FIGURE 8 shows a third embodiment of the present invention;
FIGURE 9 illustrates how occlusion problems may occur;
FIGURE 10 shows a fourth embodiment of the present invention; and
FIGURE 11 shows a fifth embodiment of the present invention.
DETAILED DESCRIPTION
Figures 1 to 5 have been discussed in the introduction to the present application.
Figure 6 shows a first embodiment in which the two cameras 1, 2 are mounted at right angles to each other, and a beam splitter 5 is provided so that the cameras 1, 2 gather coincident images of the object 3 despite not being spatially co-located. In this embodiment, camera 1 is a normal colour camera, camera 2 is a near-IR monochrome camera, and the beam splitter 5 splits light across all wavelengths. Due to the beam splitter 5, the image gathered by camera 2 may be a mirror reflection of the image gathered by camera 1, but this can be corrected in hardware or software, resulting in the two images being substantially identical and coincident.
Figure 7 shows a second embodiment in which the beam splitter 5 is adapted to separate visible from near-IR light, passing visible light to camera 1 and near-IR light to camera 2. In this way, the near-IR pattern image is removed from camera 1, and the visible light image is removed from camera 2. This is done with very low loss of light to the cameras 1, 2, as a very high proportion of both the visible and the near-IR light is passed to the appropriate camera.
In Figures 6 and 7, each camera 1, 2 is provided with its own lens system (not shown).
In a third embodiment, shown in Figure 8, a single lens system 6 is provided, and the beam splitter 5 is located behind the lens and arranged to split light between two CCDs 7, 8. This embodiment allows easy control of the image magnification, since a zoom lens may be employed, as well as auto-iris, auto-focus etc. In systems that require depth information only, the pattern-generated image is all that is needed. However, positioning the target object correctly within the viewing region may be important in some applications. In these applications, visual feedback may be achieved by viewing the image seen by the camera on a display system. The use of the pattern image in facial recognition systems for visual feedback may prove to be quite disconcerting, particularly to those using the system for the first time. As infrequent use may be the norm in security applications, the use of this image may reduce the acceptability of the system. The use of the dual camera system, whereby a plain monochrome or colour image is used as the visual feedback, significantly improves the public's acceptance of the system. If the configuration described in this application is employed, the improved feedback performance is achieved without reducing system accuracy.
In the embodiment combining a normal image with a non-encoded pattern structure for 3D data as described above, problems with occlusions arise. This problem occurs when features on the object cast shadows seen by the camera. Step changes within the object may also cause patterns to be displaced such that incorrect connections are made between patterns.
In Figure 9 a flat object 9 is placed in front of a larger flat object 10. The distance between the two objects 9, 10 is such that the vertical displacement of the line patterns A to O on the nearest object 9 is one line with respect to the far object 10. As there is no coding on the lines, line-tracing software may assume that line D, for example, is a continuous line across the image. In fact line D becomes displaced and becomes E on the nearest object 9. Line tracing will therefore assume the image is a flat single object, not a flat object 9 in front of a second flat object 10. Techniques to resolve this problem have been developed in such patent applications as GB2410794, the full disclosure of which is hereby incorporated into the present application by reference. Alternatively, many systems of encoding the lines have been developed.
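The mis-labelling just described can be reproduced in a few lines: when a foreground step shifts the uncoded stripes by exactly one period, a naive nearest-line matcher measures zero displacement and wrongly concludes the scene is flat (stripe spacing and counts here are illustrative):

```python
# Uncoded stripes projected at a fixed period; a foreground step shifts
# them by exactly one full period, so each stripe lands where the next
# projector line would have fallen.
period_px = 10
background_rows = [i * period_px for i in range(5)]         # lines 0..4
foreground_rows = [r + period_px for r in background_rows]  # one period over

# A naive tracer labels each observed stripe with the nearest expected line:
labels = [round(r / period_px) for r in foreground_rows]

# Every stripe is mis-labelled by one index, so the measured displacement
# (and hence the inferred depth step) vanishes everywhere.
residuals = [r - m * period_px for r, m in zip(foreground_rows, labels)]
print(labels)     # -> [1, 2, 3, 4, 5]
print(residuals)  # -> [0, 0, 0, 0, 0]
```

Coding the lines (by colour, shape or time, as discussed above) breaks exactly this degeneracy, because each observed stripe then carries its true projector index.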
Figure 10 shows an embodiment with multiple beam splitters 5, 5′, 5″ and multiple cameras C1 to Cn. The beam splitters are each configured to allow passage of light only above or below a predetermined wavelength, effectively acting as high or low pass filters for light of progressively decreasing or increasing wavelength. The cameras may be configured to be sensitive only to the appropriate wavelengths of light. This allows recovery of patterns, projected in the corresponding wavelengths in, say, the IR spectrum, together with a visible image.
Figure 11 shows a generic form in which a projector or projectors P1 project a spectrum of light LT with pattern structures in the bands L1, L2 … Ln+1. The projected light reflected off the subject or object 3 is optionally band-pass filtered to remove light outside the wavelengths of interest. Each wavelength of interest is separated from the combined projected light such that each camera receives light in its respective range L1, L2 … Ln+1. Beam splitters BS1 to BSn operate as low-pass filters, reflecting light above the cut-off frequency for each bandwidth of interest. The cascade nature results in band-pass filtered light of the specific band of interest at each camera respectively. Where multiple projectors are used, each projector may project a pattern of a specific wavelength corresponding to the response of a specific camera. This allows a distribution of projectors to remove the effect of the occlusions whilst avoiding the problem of time-multiplexed systems, where motion of the subject creates distortion of the final image.

Claims (14)

1. An imaging system comprising at least first and second image gathering means angularly disposed relative to each other and provided with beam splitter means such that both image gathering means are arranged so as to be effectively coaxial and to gather first and second substantially coincident images of a given object; wherein the first image gathering means is configured to gather image information in a first waveband, wherein the second image gathering means is configured to gather image information in a second waveband different to the first waveband, and wherein there is further provided means for projecting a predetermined pattern onto the object which pattern is detected by the second image gathering means and not by the first image gathering means.
2. An imaging system as claimed in claim 1, wherein the first and second images are mirror images of each other, and in which there is further provided means for reflecting one image.
3. An imaging system as claimed in any preceding claim, wherein the beam splitter is adapted to split light across all wavebands of interest.
4. An imaging system as claimed in claim 1 or 2, wherein the beam splitter is adapted to reflect light in the second waveband and to allow passage of light in the first waveband.
5. An imaging system as claimed in any preceding claim, wherein the first image gathering means collects visible colour light, and the second image gathering means collects light in a non-visible waveband, e.g. infrared or near-infrared.
6. An imaging system as claimed in any preceding claim, wherein each image gathering means is a video camera with its own lens system.
7. An imaging system as claimed in any one of claims 1 to 5, wherein each image gathering means is a CCD or CMOS or other photosensitive array, and in which the beam splitter is located between the photosensitive arrays and a single, common lens system.
8. An imaging system as claimed in any preceding claim, wherein there are provided more than two image gathering means and more than one beam splitter.
9. An imaging system as claimed in claim 8, comprising n beam splitters and n+1 image gathering means, where n is a positive integer.
10. An imaging system as claimed in claim 9, wherein the n beam splitters are arranged so as to split light into n+1 different wavebands, and wherein each image gathering means is arranged to image light in a different one of the n+1 wavebands.
11. A method of generating image data, wherein at least first and second image gathering means angularly disposed relative to each other and provided with beam splitter means are arranged so as to be effectively coaxial and to gather first and second substantially coincident images of a given object; wherein the first image gathering means gathers image information in a first waveband, and the second image gathering means gathers image information in a second waveband different to the first waveband, and wherein a predetermined pattern is projected onto the object which pattern is detected by the second image gathering means and not by the first image gathering means.
12. A method as claimed in claim 11, using an imaging system as claimed in any one of claims 1 to 10.
13. An imaging system substantially as hereinbefore described with reference to or as shown in Figures 6 to 11 of the accompanying drawings.
14. A method of generating image data substantially as hereinbefore described with reference to or as shown in Figures 6 to 11 of the accompanying drawings.
GB0620380A 2006-10-16 2006-10-16 Multi camera and waveband imaging apparatus Withdrawn GB2443004A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB0620380A GB2443004A (en) 2006-10-16 2006-10-16 Multi camera and waveband imaging apparatus
PCT/GB2007/050607 WO2008047157A1 (en) 2006-10-16 2007-10-03 Dual waveband single axis camera for machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0620380A GB2443004A (en) 2006-10-16 2006-10-16 Multi camera and waveband imaging apparatus

Publications (2)

Publication Number Publication Date
GB0620380D0 GB0620380D0 (en) 2006-11-22
GB2443004A true GB2443004A (en) 2008-04-23

Family

ID=37491496

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0620380A Withdrawn GB2443004A (en) 2006-10-16 2006-10-16 Multi camera and waveband imaging apparatus

Country Status (2)

Country Link
GB (1) GB2443004A (en)
WO (1) WO2008047157A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101145249B1 (en) * 2008-11-24 2012-05-25 한국전자통신연구원 Apparatus for validating face image of human being and method thereof

Citations (1)

Publication number Priority date Publication date Assignee Title
WO2006070099A2 (en) * 2004-12-23 2006-07-06 Sagem Defense Securite Method for identifying a person from the person's features, with fraud detection

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US6411918B1 (en) * 1998-12-08 2002-06-25 Minolta Co., Ltd. Method and apparatus for inputting three-dimensional data
JP2000283721A (en) * 1999-03-30 2000-10-13 Minolta Co Ltd Three-dimensional input device
US6754370B1 (en) * 2000-08-14 2004-06-22 The Board Of Trustees Of The Leland Stanford Junior University Real-time structured light range scanning of moving scenes
US20030067538A1 (en) * 2001-10-04 2003-04-10 Myers Kenneth J. System and method for three-dimensional data acquisition
DE102005014525B4 (en) * 2005-03-30 2009-04-16 Siemens Ag Device for determining spatial coordinates of object surfaces

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
WO2006070099A2 (en) * 2004-12-23 2006-07-06 Sagem Defense Securite Method for identifying a person from the person's features, with fraud detection

Also Published As

Publication number Publication date
WO2008047157A1 (en) 2008-04-24
GB0620380D0 (en) 2006-11-22

Similar Documents

Publication Publication Date Title
US7837330B2 (en) Panoramic three-dimensional adapter for an optical instrument and a combination of such an adapter and such an optical instrument
JP5426174B2 (en) Monocular 3D imaging
CN106412426B (en) Total focus camera and method
US8334893B2 (en) Method and apparatus for combining range information with an optical image
US20200081530A1 (en) Method and system for registering between an external scene and a virtual image
US20020140835A1 (en) Single sensor chip digital stereo camera
KR101737085B1 (en) 3D camera
US20080158345A1 (en) 3d augmentation of traditional photography
KR20150068299A (en) Method and system of generating images for multi-surface display
WO2012029299A1 (en) Image capture device, playback device, and image-processing method
US9019603B2 (en) Two-parallel-channel reflector with focal length and disparity control
WO2011134215A1 (en) Stereoscopic camera device
CN106709894B (en) Image real-time splicing method and system
JP2010181826A (en) Three-dimensional image forming apparatus
JP2015188251A (en) Image processing system, imaging apparatus, image processing method, and program
US9565421B2 (en) Device for creating and enhancing three-dimensional image effects
JP3818028B2 (en) 3D image capturing apparatus and 3D image capturing method
CN104469340A (en) Stereoscopic video co-optical-center imaging system and imaging method thereof
JP2011215545A (en) Parallax image acquisition device
GB2443004A (en) Multi camera and waveband imaging apparatus
JP6367803B2 (en) Method for the description of object points in object space and combinations for its implementation
EP2716053B1 (en) Grid modulated single lens 3-d camera
JP3564383B2 (en) 3D video input device
KR20160031869A (en) System for projecting stereoscopy using one projector
JP2010268097A (en) Three-dimensional display device and three-dimensional display method

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)