GB2226923A - Automatic convergent stereoscopic machine vision - Google Patents

Automatic convergent stereoscopic machine vision

Info

Publication number
GB2226923A
GB2226923A (application GB8900279A)
Authority
GB
United Kingdom
Prior art keywords
machine vision
vision system
automatic
views
automatic convergent
Prior art date
Legal status
Granted
Application number
GB8900279A
Other versions
GB8900279D0 (en)
GB2226923B (en)
Inventor
Peng Seng Toh
Aik Meng Fong
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to GB8900279A priority Critical patent/GB2226923B/en
Publication of GB8900279D0 publication Critical patent/GB8900279D0/en
Publication of GB2226923A publication Critical patent/GB2226923A/en
Application granted granted Critical
Publication of GB2226923B publication Critical patent/GB2226923B/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20: Image signal generators
    • H04N 13/204: Image signal generators using stereoscopic image cameras
    • H04N 13/239: Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H04N 13/243: Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H04N 13/296: Synchronisation thereof; Control thereof
    • H04N 2013/0074: Stereoscopic image analysis
    • H04N 2013/0081: Depth or disparity estimation from stereoscopic image signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

An automatic convergent stereoscopic machine vision system is a dynamic system capable of automatic focusing, automatic convergence and of obtaining accurate three dimensional information through matching within the foveae of the viewing system. The dynamic property comes from a closed-loop servo system that integrates depth cues from convergence, focusing and disparity. It comprises three views, of which the centre view is the directed attention view. It assigns a small area around the centre of each view as the fovea, to provide reliable three dimensional data. It uses correlation matching within overlapping sections of the views and a master-slave focusing technique. Rotational control is provided for the two side views to produce the automatic convergence.

Description

Automatic Convergent Stereoscopic Machine Vision

This invention relates to a method of and an apparatus for automatic convergent stereoscopic machine vision using only passive means.
Human vision is stereoscopic and possesses the ability to integrate different physiological and psychological depth cues. It adapts rapidly to the great variation of world scenes and is able to converge onto the object of interest for effective stereoscopic processing. Through eye convergence human vision acquires highly accurate stereoscopic depth information. Hitherto, the process of automatically and dynamically converging onto the object of interest has often been ignored in machine stereo vision, although its importance is well known in human vision studies.
Convergence is the process that makes the optical axes of our eyes intersect at a common point on an object, as illustrated in Fig 1. The major difficulty in machine vision implementation is due to the strong interactions between eye (camera) convergence and other depth cues such as accommodation (focus) and disparity (by matching). For example, convergence will be difficult if the eyes cannot accommodate. The subconscious muscular tension that rotates the eyes is initially derived from gross matching and subsequently, convergence enables more precise matching. The complexity due to the interaction of multiple cues can only be realised by a closed-loop system with intelligent controls. This invention is therefore motivated by the manner in which human vision uses collective control of eye convergence, accommodation, matching and foveal specialization to accomplish highly accurate vision tasks.
In human binocular vision, convergence ensures a known and relatively large overlapped region in both eyes. It is equally important for machine stereopsis to have a known large overlapped region so that disparity matching can proceed to derive the three dimensional information about the object(s) or scene.
Generally, the existing vision systems are not constructed to capture the desired view automatically. Laborious manual operations for camera and lighting adjustment are often required prior to processing an image. Vision or image processing often assumes that the view has already been taken with a sharp focus and correct aperture, ignoring the fact that human intervention is substantial. Furthermore, there is rarely any passive stereo vision system that is equipped with convergence ability. The concept of foveal specialization is not adopted and hence there is no attention mechanism. The overlapping region of two or more views taken for stereoscopic processing is largely unknown. As a result, the correspondence matching problem is ill-posed and provides no unique solution. The correspondence matching problem remains intractable due to the factors below: 1. Photometric distortion; 2. Geometric distortion; 3. Many-to-many mapping; 4. Non-deterministic non-overlapped region.
This invention of automatic convergent stereoscopic machine vision (ACONS) adopts a concept similar to human stereo vision and it provides some solutions to the inadequacies of the existing stereo vision systems.
The block diagram of ACONS is shown in Fig 4 and it has the following salient features: 1. A symmetrical and meridionally aligned configuration (Fig 5) to ensure that the optical axes will intersect in space. This arrangement also reduces the search space for correspondence to one dimension without the need to calculate the epipolar line. The optical axes (Fig 5.1) and the lines linking the view centres (Fig 5.4) are coplanar and form the horizontal meridian (Fig 5.3).
2. Coupled cameras rotational mechanism as illustrated in Fig 4 and Fig 6.1 for converging onto the area of interest.
3. A small area (with high resolution) that simulates the fovea is processed to ensure that the problem is well-posed with a unique solution. Photometric and geometric distortions are largely reduced. The equivalent of machine vision fovea is the small area surrounding the optical centre of the camera system or other types of photoreceptor array as illustrated by Fig 2.3. Fig 2.1 shows the area on the photoreceptor array corresponding to the periphery vision and Fig 2.2 points to the view centre.
4. A master-slave focusing technique. The focus of the centre view is adjusted first and is assigned as the master focus. The foci of the two side views are slaves and dictated by the master focus.
5. Provision for a choice of attentive focusing or relaxed focusing. Attentive focusing brings only the foveal image into focus; relaxed focusing selects a mean focus to accommodate the whole visual field, which is usually of varied depths.
6. A parallel stereo algorithm is more readily applicable to each of the small overlapped areas.
7. Reduction in the volume of data and parallel processing render real-time application possible.
8. Operation in normal lighting conditions.
9. The emphasis on using 'passive means', as opposed to 'active means', means that no special transmitters or transducers such as ultrasonic, infrared or laser sources are needed to illuminate the object of interest. The use of passive means therefore allows this invention to perform certain stereoscopic functions not currently easily possible otherwise.
Although this invention uses some properties of the human stereo vision arrangement, it has several features that are superior to human stereoscopic vision, two of which are: 1. Larger operational range due to the adjustable interocular distance or baseline as depicted in Fig 3, which is the plan view of Fig 5.
2. Improved accuracy provided by zoom adjustment leading to very fine resolution. The zoom controls are ganged if used.
As the ACONS is a servo-controlled opto-electronic system that realises auto-convergence and auto-focusing, it is equipped with additional mechanisms and constraints that are distinct from other vision systems.
1. Incorporation of an attention mechanism, which is the centre view (Fig 7.3) with foveal field of view as in Fig 7.2. This is important, or else convergence will not be possible.
2. Restriction of the locus of convergence to lie along the optical axis of the centre view, or the direction of attention (Fig 7.1), as provided by the attention mechanism.
3. Rotations of the two side cameras are identical. This feature is a deviation from human stereopsis which largely reduces computation and actuation time. The depth of the convergence point can be read off directly from the angle of rotation as a form of extraretinal information.
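For the symmetric arrangement above, this extraretinal reading follows from elementary geometry. As a minimal worked relation (assuming a baseline of length b between the two side view centres, each side camera rotated inward by an angle φ from the initially parallel position, and depth Z measured from the baseline along the centre optical axis): tan(φ) = (b/2)/Z, hence Z = b/(2 tan(φ)).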
Facing the critical issue of the correspondence matching problem, ACONS addresses it principally by convergence and foveal specialization. As the two or more views for stereoscopic vision are made to converge onto the surface of the object of interest by the automatic convergence process, these views are largely overlapped, thereby reducing the photometric and geometric distortion. The concept of foveal specialization is to restrict correspondence matching to a small area like a biological visual fovea. The disparity cue is especially accurate and reliable within the fovea but becomes less reliable away from the view centre. Within the fovea, matching is close to one-to-one mapping.
Although the principal use of this invention is to provide range data, many machine vision applications such as object recognition, motion detection and robot sensing are also made more amenable by this invention. Compared with other ranging systems, which are mostly active types, it does not suffer from the following problems: 1. Susceptibility to the reflectance type of the object surface. Laser, sonar and active optical ranging systems are each effective for a certain type of surface only. For example, the laser method is difficult for specular surfaces, where the transmitted laser beam may not be reflected back to the receiver.
2. Interference from the active ranging system with the object under view, and in turn vulnerability to interference arising from the environment of operation.
Preferred features of the various aspects of the invention are set out in the claims.
There follows a description by way of specific embodiments of the present invention, reference being made to the accompanying drawings in which: Fig 1 is an illustration of convergence by human eyes.
Fig 2 is an illustration of the division of the photoreceptor field into periphery, fovea and view centre.
Fig 3 is a horizontal meridional plane whereby the three view centres are located.
Fig 4 is a block diagram of the invention.
Fig 5 is a perspective drawing of the meridian and the three views.
Fig 6 is an illustration of ganged rotation for converging onto the object of interest.
Fig 7 is an illustration of the centre view and its foveal field of view.
The block diagram of this invention (ACONS) is illustrated in Fig 4. All the essential blocks are referenced by numbers. The three views are provided by three video cameras 10, 13 and 15 with adjustable focus lenses 12, 14 and 17 respectively.
The necessary frame grabbers are installed. The outputs from the video cameras to the multiplexer 20 and matching unit 23 are digitized video signals. The multiplexers 20 and 22 are controlled by the view selector 21 which selects one of the three views. The letters L, C and R indicated on the multiplexers refer to the left, centre and right views respectively. The essential blocks 18, 23, 25 and 26 are described separately in detail.
Focus detection unit 18: The principle of passive automatic focusing of a monocular image makes use of the comparative power of the Fourier spectrum of the intensity gradient between a clear and a blurred image. The power spectrum of the intensity gradient is defined as P(u) = |G(u)|², where P(u) is the power spectrum of the Fourier transform G(u) of g(x), and
g(x) is the gradient of the intensity function f(x), g(x) = df(x)/dx. Extension to two dimensions is straightforward.
The total power within the spectrum is the summation of all the discrete power components, Pt = Σ P(u) over all u.
Equivalently, since the Fourier transform of the first derivative of a function is G(u) = u F(u), with F(u) the Fourier transform of f(x),
P(u) = u² |F(u)|². The higher-frequency components of the spectrum therefore contribute more weight to the total spectral power. The zero-frequency component is completely eliminated, so a shift in brightness alone will not affect the total power. An in-focus image has well-defined edges of large intensity variation; in other words, a larger intensity gradient increases the higher-frequency components. As such, the clearer image contains more total spectral power, which peaks at the in-focus image. While the peak power has not been reached, an error signal is sent to the focus controller 19 so that it moves the focusing lens to a new position. Upon detection of the in-focus image, no further error signal is sent to the focus controller 19, and its focal length is thus correctly set. The multiplexer 22 then switches to L and R, setting their focal lengths 12 and 17 respectively to that of the master focus 14.
A point to note is that the area to be focused is limited to the fovea so that other unintended objects and the background will not influence the focusing control.
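The focusing measure just described lends itself to a short sketch. Below is a minimal illustration in Python/NumPy (the function name, the fovea size and the use of a 2-D FFT are assumptions made for this example, not specifics taken from the patent): it crops the foveal window, computes the power spectrum of the intensity gradient, suppresses the zero-frequency term and sums the remainder, giving the quantity that peaks at the in-focus lens position.

    import numpy as np

    def foveal_spectral_power(image, fovea_size=64):
        # Total spectral power of the intensity gradient inside the fovea.
        # A sharper image has stronger high-frequency content, so this value
        # peaks at the in-focus lens position. The fovea size is illustrative.
        h, w = image.shape
        half = fovea_size // 2
        fovea = image[h // 2 - half:h // 2 + half,
                      w // 2 - half:w // 2 + half].astype(float)

        # Intensity gradient components of f(x, y).
        gy, gx = np.gradient(fovea)

        # P(u, v) = |G(u, v)|^2 for each gradient component.
        power = np.abs(np.fft.fft2(gx)) ** 2 + np.abs(np.fft.fft2(gy)) ** 2

        # Suppress any residual zero-frequency term so that a uniform shift
        # in brightness cannot affect the total power.
        power[0, 0] = 0.0
        return power.sum()

The focus controller 19 would then step the lens and retain the position at which this value stops increasing; the error signal is simply the change in total power between successive lens positions.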
Side Views Rotation Controller 24: This control ensures that convergence will be located along the optical axis of the center view. The distance from convergence to both view centers is the same thus forming an isosceles triangle. Three constraints are imposed to ensure that convergence is formed at the right place: 1. Convergence can be formed only along the optical axis of the center view corresponding to the direction of attention.
2. The distances from convergence to all three views are approximately the same for objects further than the standoff distance, taking into account the effect of depth of field. As such, all three images are focused with the master focal length determined by the centre view.
3. Approximately the same image characteristics (intensity, contrast) should appear in all foveae. Matching the frequency or power spectrum extracted in the focus detection unit can be used as a further quick check for this constraint. The constraints integrator 25 performs the task of verification.
The sequence of operation of 24 is described as follows: (1) The initial angular position (θ0) is noted regardless of where the two side views are viewing. It can be initialised such that the optical axes of the two side views are parallel to one another and hence parallel to the direction of attention. This has the advantage of having a known converging direction, which is the inward rotation.
(2) The total spectral power of either foveal view is computed. For instance, the total spectral power of the left image Ptl(θ0) is computed. The computation of the total spectral power is carried out in the focus detection unit 18 with the multiplexer 20 switched to L (left).
(3) The controller 24 activates a large positive rotation corresponding to an inward movement, regardless of the signal arriving from the focus detection unit 18 through the constraints integrator 25:
Δθ = +α; the new angular position is θn = θ0 + α. The total spectral power for the new position, Ptl(θn), is then computed.
(4) The difference in power between the current and the previous view is computed, ΔPt = Ptl(θn) - Ptl(θ0). (5) If ΔPt is positive, it shows that the previous direction of rotation is correct, and a further inward movement is taken: Δθ = +α; θn+1 = θn + Δθ.
A negative ΔPt implies that a reversal in rotation is necessary, such that Δθ = -α; θn+1 = θn - α.
(6) Again, the difference in power is computed, ΔPtl = Ptl(θn+1) - Ptl(θn); step (5) is repeated to verify the direction of rotation. If ΔPt changes sign, equivalent to the detection of a zero-crossing, it indicates that the image is coming into focus. The focal conditions of both images are to be monitored upon the detection of the change of sign. In this case, the change in total spectral power in the right view is used to confirm the result obtained by the left view:
ΔPtr = Ptr(θn+1) - Ptr(θn), and if ΔPtr also changes sign simultaneously, the two views are about to converge.
Consequently, the direction of rotation will have to be reversed with a more refined step: Δθ = -β.
This action is performed until the next change of sign occurs and this indicates that the three views are very likely to have converged onto the fixation point now. Further verification of convergence is carried out by matching.
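A compact sketch of this search is given below, reusing the foveal_spectral_power measure from the earlier sketch; the camera-control callables (grab_left, grab_right, rotate_side_views), the step sizes alpha and beta and the stopping logic are illustrative assumptions, an outline of the hill-climb rather than the exact controller.

    def converge(grab_left, grab_right, rotate_side_views, alpha=2.0, beta=0.5,
                 max_steps=200):
        # Ganged rotation of the side views until both foveal images come
        # into focus, following steps (1)-(6) above in simplified form.
        # grab_left/grab_right return the current left/right foveal images;
        # rotate_side_views(delta) turns both side cameras by delta degrees
        # (positive = inward), equal in speed and opposite in direction.
        theta = 0.0                                  # (1) axes start parallel
        p_left = foveal_spectral_power(grab_left())  # (2) reference power
        p_right = foveal_spectral_power(grab_right())
        step, refined = alpha, False                 # (3) large inward step

        for _ in range(max_steps):
            rotate_side_views(step)
            theta += step
            new_left = foveal_spectral_power(grab_left())
            new_right = foveal_spectral_power(grab_right())
            d_left = new_left - p_left               # (4) power difference
            d_right = new_right - p_right
            p_left, p_right = new_left, new_right

            if d_left < 0:                           # (5)/(6) sign change
                if refined:
                    return theta                     # second change: converged
                if d_right < 0:                      # right view confirms
                    step, refined = -beta, True      # reverse, refined step
                else:
                    step = -step                     # wrong direction: reverse
        return theta

The returned angle is the extraretinal reading from which the depth of the fixation point can be recovered, subject to the confirmation by matching described next.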
Matching unit 23: Under an ideal convergent situation, the same point on the object should appear in the centre of all three views. Two-view or three-view stereo matching algorithms utilizing feature or area correlation methods are applicable. The video signals from all three video cameras are connected to this unit as inputs. This matching unit 23 produces a disparity map from which the depth of the object surface visible within the foveae of all three views can be calculated accurately using triangulation by 26. The depth in the periphery can also be calculated if so desired, but the reliability of the result is usually poor. The disparity map generated by this unit is passed on to the constraints integrator and further on to the range data calculator 26.
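As an illustration of area-correlation matching confined to the fovea, the sketch below performs a dense one-dimensional search along the rows, which is all that the meridionally aligned configuration requires; the window size, the disparity range, the zero-mean correlation score and the sign convention are assumptions made for the example, not the patent's prescribed algorithm.

    import numpy as np

    def foveal_disparity(left, right, window=11, max_disp=32):
        # Dense horizontal disparity inside the fovea by area correlation.
        half = window // 2
        h, w = left.shape
        disparity = np.zeros((h, w))
        for y in range(half, h - half):
            for x in range(half, w - half):
                patch = left[y - half:y + half + 1, x - half:x + half + 1]
                best_score, best_d = -np.inf, 0
                for d in range(0, min(max_disp, x - half) + 1):
                    cand = right[y - half:y + half + 1,
                                 x - d - half:x - d + half + 1]
                    # Zero-mean correlation, a simple stand-in for any
                    # feature- or area-based matching cost.
                    score = np.sum((patch - patch.mean()) * (cand - cand.mean()))
                    if score > best_score:
                        best_score, best_d = score, d
                disparity[y, x] = best_d
        return disparity

The resulting map is what unit 26 would triangulate into range values; disparities computed outside the fovea would be retained only as low-confidence estimates.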
Constraints integrator 25: This unit integrates different types of cues so that depth perception accuracy can be improved.
Depth within the foveae is accurately determined by disparity matching. Depth of objects seen in the periphery is determined by both disparity and focusing information. The two inputs to this unit are the disparities and the focusing error signal.
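One plausible, purely illustrative way of combining these two inputs (the weighting below is an assumption for the sake of example, not a scheme specified in the patent) is to trust disparity-derived depth near the view centre and blend in focus-derived depth towards the periphery.

    def fuse_depth(depth_disparity, depth_focus, distance_from_centre, fovea_radius):
        # Hypothetical cue integration: weight falls from 1 at the view
        # centre to 0 well outside the fovea, so foveal depth relies on
        # disparity while peripheral depth leans on the focusing cue.
        w = max(0.0, 1.0 - distance_from_centre / (2.0 * fovea_radius))
        return w * depth_disparity + (1.0 - w) * depth_focus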
With all the individual units of ACONS described, the sequence of operation for ACONS is as follows: When the ACONS is positioned to face the object of interest frontally, its image falls onto the photoreceptor field of the centre view 13 unambiguously. Attentive focusing is then carried out, meaning that the focus of the centre view 14 is adjusted entirely according to the foveal image. The focus of the centre view is the master focus, which is used to set the focal length for the other two views 12 and 17. Initially, due to the ganged focusing system 19, the two side views are unlikely to be directed to the point seen by the centre view and so will appear blurred. After having set their foci, the two ganged views begin to rotate as controlled by 24 until both images are focused simultaneously. All three views are now converged onto the same spot, the fixation point. The end results are well focused views with a large overlap of the scene for all views, which render calculation of accurate range data possible.
Generally, the ACONS can be used as the front-end imaging system for most passive machine vision applications, with the added advantage of providing stereoscopic views. There are some applications where it excels: 1. Robot eyes-on-hand systems with emphasis on stereo vision. A miniature ACONS unit can be built with a short baseline for an industrial robot with a restricted range of operation.
2. Range data sensors with large dynamic range.
3. Motion detector and moving target tracker. An ACONS bound to a six-axis robot can track in all directions.
4. Robot manipulator trajectory control, by providing real-time range and motion information in a three dimensional environment.
5. Rapid object recognition using integrated cues which are hitherto not easily available, especially accurate three dimensional information with motion cues.
ACONS is modular and highly versatile; therefore multiple ACONS units can be cascaded to cover a wider area. More sample points for range data are possible with this extension.
The following describes an example of autonomous operation using this invention.
1. The ACONS is set to focus the general scene by the relaxed focusing technique.
2. If the object of interest does not appear in this view, the system scans the environment and obtains a sequence of images.
3. Each image frame is analyzed for the object of interest using an object recognition method.
4. Upon locating the object of interest, the ACONS stops scanning, i.e. stops adjusting its direction of attention.
5. An interest operator is applied to the object of interest to mark the prominent points (interest points).
6. The vector (direction and magnitude) from the view centre to the nearest interest point is computed.
7. Fine attention adjustment is then initiated to bring the view centre to coincide with the interest point. This is the desired direction of attention.
8. The automatic focusing and converging processes proceed from this direction of attention.
9. Either two or three dimensional information about the scene is obtainable by utilizing appropriate algorithms.
The sequence of autonomous operation is again illustrated with a flow chart as shown:
START
Initialise ACONS (conditions: focusing = relaxed, centre view = on, side views focusing = ignored, side views rotation = ignored)
Grab frames
Object of interest detected? (No: grab further frames; Yes: continue)
Interest points (features) extraction
Select the nearest interest point on the object of interest; align the direction of attention
Move the view centre to coincide with the interest point
Activate the convergence process to obtain overlapped stereo views (as per the operation of ACONS)
Determine range data or other desired information
END
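For completeness, a hypothetical top-level sketch of this sequence follows; every method on the assumed acons object is a placeholder for a unit already described, not an interface defined by the patent.

    def autonomous_operation(acons):
        # Outline of steps 1-9: scan, attend, converge, then extract range data.
        acons.set_focusing(mode="relaxed")           # 1. focus the general scene
        while True:
            frame = acons.grab_centre_frame()        # 2. scan and grab images
            target = acons.recognise_object(frame)   # 3. object recognition
            if target is not None:
                break                                # 4. stop scanning
            acons.scan_step()                        # keep adjusting attention
        points = acons.interest_points(target)       # 5. interest operator
        nearest = min(points,                        # 6. nearest interest point
                      key=lambda p: acons.distance_from_view_centre(p))
        acons.align_attention(nearest)               # 7. fine attention adjustment
        acons.set_focusing(mode="attentive")
        acons.focus_and_converge()                   # 8. master-slave focus and ganged rotation
        return acons.range_data()                    # 9. range data by triangulation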

Claims (29)

1. An automatic convergent stereoscopic machine vision system that employs a centre view, two symmetrical side views, focusing control and side-view rotation control.
2. An automatic convergent stereoscopic machine vision system as claimed in claim 1 wherein the optical axes of the centre view and the two side views are able to intersect at a point on the object surface, providing convergence.
3. An automatic convergent stereoscopic machine vision system as claimed in claim 2 wherein the optical axes, and hence the optical centres, of the centre view and the two side views are coplanar on the horizontal meridian.
4. An automatic convergent stereoscopic machine vision system as claimed in claim 3 wherein movement of the optical centres of all three views is restricted to the meridian.
5. An automatic convergent stereoscopic machine vision system as claimed in claim 4 wherein the locus of convergence lies along the optical axis of the centre view.
6. An automatic convergent stereoscopic machine vision system as claimed in claim 5 wherein the centre view is directed to the object of interest.
7. An automatic convergent stereoscopic machine vision system as claimed in claim 6 wherein all three views have adjustable focus control.
8. An automatic convergent stereoscopic machine vision system as claimed in claim 7 wherein the focus of the centre view is successively adjusted by the focus detection unit until it is in focus.
9. An automatic convergent stereoscopic machine vision system as claimed in claim 8 wherein the focus of the centre view is determined by the foveal image alone, achieving a means of directed attention.
10. An automatic convergent stereoscopic machine vision system as claimed in claim 9 wherein the focus is controlled by a focus detection unit producing an error signal when it is not in focus.
11. An automatic convergent stereoscopic machine vision system as claimed in claim 10 wherein the corrected focus of the centre view is used to set the focal lengths of the two side views to an equal amount regardless of their states of focus.
12. An automatic convergent stereoscopic machine vision system as claimed in claim 11 wherein the focus detection unit is connected to the two side views and produces an out-of-focus error signal when the two side views are not in focus, implying that the views are not in convergence.
13. An automatic convergent stereoscopic machine vision system as claimed in claim 12 wherein the error signal is used to activate the mechanism that rotates the two side views.
14. An automatic convergent stereoscopic machine vision system as claimed in claim 13 which requires that the rotations of the two side views are synchronised in speed but counter in direction to one another.
15. An automatic convergent stereoscopic machine vision system as claimed in claim 14 wherein the maximum permissible range of rotation is from 0 to 90 degrees as shown in Fig 6.
16. An automatic convergent stereoscopic machine vision system as claimed in claim 15 wherein the rotation stops when both side views are in focus.
17. An automatic convergent stereoscopic machine vision system as claimed in claim 16 wherein the stopped position is confirmed by matching.
18. An automatic convergent stereoscopic machine vision system as claimed in claim 17 wherein feature or area matching techniques are applied.
19. An automatic convergent stereoscopic machine vision system as claimed in claim 18 wherein the matching is confined to the foveae of the views only.
20. An automatic convergent stereoscopic machine vision system as claimed in claim 19 wherein a very small tolerance is allowed for matching deviation, thus serving as a confirmation of the stopped position first defined by the focusing control.
21. An automatic convergent stereoscopic machine vision system as claimed in any preceding claim wherein a state of convergence is reached for all three views.
22. An automatic convergent stereoscopic machine vision system as claimed in claim 21 wherein the same object point appears on the view centres of the three views.
23. An automatic convergent stereoscopic machine vision system as claimed in claim 22 wherein the image of the same object point is characterised by having almost equal intensity, contrast and features in all the foveal views.
24. An automatic convergent stereoscopic machine vision system as claimed in claim 23 wherein maximum overlapping imagery is found within the foveae of all views.
25. An automatic convergent stereoscopic machine vision system as claimed in claim 24 wherein either binocular or trinocular stereoscopic matching algorithms are carried out to obtain a disparity map.
26. An automatic convergent stereoscopic machine vision system as claimed in claim 25 wherein the disparity values are obtained more accurately closer to the view centre.
27. An automatic convergent stereoscopic machine vision system as claimed in claim 26 wherein the disparity map is triangulated to provide calculations of the range of the object as first attended to by the centre view.
28. An automatic convergent stereoscopic machine vision system as claimed in claim 27 wherein the range of the point of convergence is related to the angle of rotation of the two side views.
29. An automatic convergent stereoscopic machine vision system as described herein with reference to Figs 1-7 of the accompanying drawings.
GB8900279A 1989-01-06 1989-01-06 Automatic convergent stereoscopic machine vision Expired - Fee Related GB2226923B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB8900279A GB2226923B (en) 1989-01-06 1989-01-06 Automatic convergent stereoscopic machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB8900279A GB2226923B (en) 1989-01-06 1989-01-06 Automatic convergent stereoscopic machine vision

Publications (3)

Publication Number Publication Date
GB8900279D0 GB8900279D0 (en) 1989-03-08
GB2226923A true GB2226923A (en) 1990-07-11
GB2226923B GB2226923B (en) 1993-03-17

Family

ID=10649698

Family Applications (1)

Application Number Title Priority Date Filing Date
GB8900279A Expired - Fee Related GB2226923B (en) 1989-01-06 1989-01-06 Automatic convergent stereoscopic machine vision

Country Status (1)

Country Link
GB (1) GB2226923B (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0563737A1 (en) * 1992-03-23 1993-10-06 Canon Kabushiki Kaisha Multilens imaging apparatus with correction of misregistration
EP0564885A1 (en) * 1992-03-23 1993-10-13 Canon Kabushiki Kaisha Multilens imaging apparatus with inclined image sensors
FR2699296A1 (en) * 1992-12-14 1994-06-17 Boute Olivier Video camera arrangement for forming three=dimensional images - has three cameras mounted on arm and spaced apart by variable distance and angle of convergence, with automatic objective regulation for depth, centre camera being fixed
EP0811876A1 (en) * 1996-06-05 1997-12-10 Pieter O. Zanen Method and apparatus for three-dimensional measurement and imaging having focus-related convergence compensation
WO1998047294A1 (en) * 1997-04-15 1998-10-22 Aea Technology Plc Camera control apparatus and method
WO1999030508A1 (en) * 1997-12-05 1999-06-17 Mcgill University Stereoscopic gaze controller
WO1999063378A1 (en) * 1998-06-01 1999-12-09 Dan Mugur Diaconu Motion picture camera focus indicator using the parallax geometry of two video cameras
US7108378B1 (en) 2001-01-29 2006-09-19 Maguire Jr Francis J Method and devices for displaying images for viewing with varying accommodation
EP1916838A1 (en) * 2006-10-26 2008-04-30 Fluke Corporation Integrated multiple imaging device
FR2909778A1 (en) * 2006-12-06 2008-06-13 Intersigne Soc Par Actions Sim Image capturing method for diffusing fixed/animated images, involves distributing sensors on arc of circle with specific radius, where length of cords separating optical centers is maintained constant irrespective of length of radius
EP2026589A1 (en) * 2007-08-10 2009-02-18 Honda Research Institute Europe GmbH Online calibration of stereo camera systems including fine vergence movements
EP2255540A2 (en) * 2008-02-19 2010-12-01 Bae Systems Information And Electronic Systems Focus actuated vergence
EP2328341A1 (en) * 2008-08-20 2011-06-01 Tokyo Institute of Technology Long-distance target detection camera system
EP2400765A1 (en) * 2010-06-22 2011-12-28 Sony Ericsson Mobile Communications AB Stereoscopic image capturing apparatus, method and computer program
CN101702076B (en) * 2009-10-30 2012-08-22 深圳市掌网立体时代视讯技术有限公司 Stereoscopic shooting auto convergence tracking method and system

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0563737A1 (en) * 1992-03-23 1993-10-06 Canon Kabushiki Kaisha Multilens imaging apparatus with correction of misregistration
EP0564885A1 (en) * 1992-03-23 1993-10-13 Canon Kabushiki Kaisha Multilens imaging apparatus with inclined image sensors
US5499051A (en) * 1992-03-23 1996-03-12 Canon Kabushiki Kaisha Multi-lens image pickup apparatus having unsharpness correcting mechanism
US5668595A (en) * 1992-03-23 1997-09-16 Canon Kabushiki Kaisha Multi-lens imaging apparatus having a mechanism for combining a plurality of images without displacement of registration
FR2699296A1 (en) * 1992-12-14 1994-06-17 Boute Olivier Video camera arrangement for forming three=dimensional images - has three cameras mounted on arm and spaced apart by variable distance and angle of convergence, with automatic objective regulation for depth, centre camera being fixed
EP0811876A1 (en) * 1996-06-05 1997-12-10 Pieter O. Zanen Method and apparatus for three-dimensional measurement and imaging having focus-related convergence compensation
EP1400842A1 (en) * 1996-06-05 2004-03-24 Pieter O. Zanen Method for three-dimensional measurement and imaging having focus-related convergence compensation
WO1998047294A1 (en) * 1997-04-15 1998-10-22 Aea Technology Plc Camera control apparatus and method
WO1999030508A1 (en) * 1997-12-05 1999-06-17 Mcgill University Stereoscopic gaze controller
US5984475A (en) * 1997-12-05 1999-11-16 Mcgill University Stereoscopic gaze controller
WO1999063378A1 (en) * 1998-06-01 1999-12-09 Dan Mugur Diaconu Motion picture camera focus indicator using the parallax geometry of two video cameras
US7108378B1 (en) 2001-01-29 2006-09-19 Maguire Jr Francis J Method and devices for displaying images for viewing with varying accommodation
EP1916838A1 (en) * 2006-10-26 2008-04-30 Fluke Corporation Integrated multiple imaging device
FR2909778A1 (en) * 2006-12-06 2008-06-13 Intersigne Soc Par Actions Sim Image capturing method for diffusing fixed/animated images, involves distributing sensors on arc of circle with specific radius, where length of cords separating optical centers is maintained constant irrespective of length of radius
WO2008081115A2 (en) * 2006-12-06 2008-07-10 Telerelief Method and device for taking shots using a plurality of optoelectronic sensors
WO2008081115A3 (en) * 2006-12-06 2008-09-18 Telerelief Method and device for taking shots using a plurality of optoelectronic sensors
EP2026589A1 (en) * 2007-08-10 2009-02-18 Honda Research Institute Europe GmbH Online calibration of stereo camera systems including fine vergence movements
EP2255540A2 (en) * 2008-02-19 2010-12-01 Bae Systems Information And Electronic Systems Focus actuated vergence
EP2255540A4 (en) * 2008-02-19 2011-07-06 Bae Systems Information Focus actuated vergence
US8970677B2 (en) 2008-02-19 2015-03-03 Bae Systems Information And Electronic Systems Integration Inc. Focus actuated vergence
EP2328341A1 (en) * 2008-08-20 2011-06-01 Tokyo Institute of Technology Long-distance target detection camera system
EP2328341A4 (en) * 2008-08-20 2011-09-14 Tokyo Inst Tech Long-distance target detection camera system
CN101702076B (en) * 2009-10-30 2012-08-22 深圳市掌网立体时代视讯技术有限公司 Stereoscopic shooting auto convergence tracking method and system
EP2400765A1 (en) * 2010-06-22 2011-12-28 Sony Ericsson Mobile Communications AB Stereoscopic image capturing apparatus, method and computer program

Also Published As

Publication number Publication date
GB8900279D0 (en) 1989-03-08
GB2226923B (en) 1993-03-17

Similar Documents

Publication Publication Date Title
GB2226923A (en) Automatic convergent stereoscopic machine vision
Kuniyoshi et al. Active stereo vision system with foveated wide angle lenses
US4751570A (en) Generation of apparently three-dimensional images
US5883662A (en) Apparatus for three-dimensional measurement and imaging having focus-related convergance compensation
EP0888017A2 (en) Stereoscopic image display apparatus and related system
Coombs et al. Real-time smooth pursuit tracking for a moving binocular robot
EP3753246B1 (en) Imaging system and method for producing stereoscopic images using more than two cameras and gaze direction
EP0466277A1 (en) Passive ranging and rapid autofocusing
Krishnan et al. Range estimation from focus using a non-frontal imaging camera
JPH11508057A (en) Imaging device with three-dimensional measurement and focus-related convergence correction and method of use
US20120113231A1 (en) 3d camera
WO2011123155A1 (en) Frame linked 2d/3d camera system
WO2010087794A1 (en) Single camera for stereoscopic 3-d image capture
JPH06339155A (en) Three-dimensional image pickup system
Cortes et al. Increasing optical tracking workspace of VR applications using controlled cameras
Viéville et al. Experimenting with 3D vision on a robotic head
KR20020042917A (en) The Apparatus and Method for Vergence Control of Stereo Camera
EP0779535A1 (en) Camera with variable deflection
GB2250604A (en) Small standoff one-camera-stereo adaptor
JP2791092B2 (en) 3D camera device
GB2147762A (en) An artifical binocular vision system
Tanaka et al. 3-D tracking of a moving object by an active stereo vision system
JP4235291B2 (en) 3D image system
JPH0777659A (en) Multiple-lens optical device
KR200180439Y1 (en) Automatic controlling apparatus for main visual angle in three-dimensional image camera

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 19940106