WO2014130584A1 - Binocular fixation imaging method and apparatus - Google Patents

Binocular fixation imaging method and apparatus

Info

Publication number
WO2014130584A1
Authority
WO
WIPO (PCT)
Prior art keywords
binocular
image
region
fixation
scene
Prior art date
Application number
PCT/US2014/017214
Other languages
French (fr)
Inventor
Nicolas S. Holliman
Graham J. Woodgate
Original Assignee
Reald Inc.
Priority date
Filing date
Publication date
Application filed by Reald Inc. filed Critical Reald Inc.
Priority to CN201480022341.7A priority Critical patent/CN105432078B/en
Priority to KR1020157025997A priority patent/KR20150121127A/en
Priority to US14/768,824 priority patent/US10129538B2/en
Priority to EP14754036.3A priority patent/EP2959685A4/en
Publication of WO2014130584A1 publication Critical patent/WO2014130584A1/en
Priority to US16/162,545 priority patent/US20190166360A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/398 Synchronisation thereof; Control thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/128 Adjusting depth or disparity
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/366 Image reproducers using viewer tracking
    • H04N13/383 Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes

Definitions

  • the present disclosure generally relates to image processing, and more specifically, to depth budget and image processing methods and technologies.
  • Depth budget has become an important concept in binocular image creation. It may create a limit on the total binocular effect in a three dimensional image. This limit in practice is determined by considering many factors including the limits of the human visual system and the parameters of the image display device being used to present the image to the viewer. For stereoscopic images presented in an image plane the depth budget is often discussed in terms of depth behind and in front of the image plane.
  • FIGURE 15 illustrates a reference U.S. Patent No. 6,798,406, which generally provides a method for producing a stereoscopic image using at least one real or simulated camera wherein the depth of a scene is mapped to a predetermined depth budget in the perceived stereoscopic image.
  • FIGURE 21 illustrates a reference U.S. Patent No. 7,983,477, which generally discusses variable depth mapping from scene to perceived stereoscopic image.
  • a method such as that disclosed in U.S. Patent No. 8,300,089 can also be used for variable depth mapping in the depth (Z) dimension.
  • the eye's binocular fixation may be determined using a range of eye tracking devices, either by tracking both eyes or by tracking a single eye and inferring the other from this information.
  • a binocular fixation tracking system is the Eyelink 1000, by SR Research Ltd., Mississauga, Ontario, Canada, which tracks both eyes at high speed.
  • An aspect of the present disclosure provides a controller that implements variation of the content of binocular images depending upon which region of a binocular image a viewer is fixating.
  • An aspect of the present disclosure includes locally controlling the viewer's perceived depth impression depending on where in perceived depth in an image the viewer is fixating. This has the benefit of enabling the perceived depth to be optimized across the image for quality and performance reasons.
  • a binocular imaging system may include a display for presenting a left eye image and a right eye image perceptually simultaneously, in which the left eye image has an associated left eye field of view of the display and the right eye image has an associated right eye field of view of the display.
  • a gaze tracking element may also be included that may identify at least one or both gaze directions of the left eye and the right eye.
  • the binocular imaging system may further include an image controller that may calculate a binocular region of fixation for the left and right eye, and that alters the displayed left and right eye images.
  • the image controller may alter a subsequently displayed binocular image in response to a change in the region of binocular fixation between a currently displayed binocular image and the subsequently displayed binocular image. Altering the displayed left and right eye images may affect the local image depth content in the binocular region of fixation and surrounding the binocular region of fixation.
  • the binocular region of fixation may include a three dimensional region in which the location varies with the gaze direction of one or both of the left and right eyes.
  • a method for varying binocular image content may include displaying a current binocular image, and using input from the current binocular image, information from a gaze tracker and scene depth measurement information to calculate a region of binocular interest (RBI) in a scene.
  • the method may also include determining whether the region of binocular interest has changed and calculating the scene depth range for mapping to the depth budget when the region of binocular interest has changed.
  • the method may include using a camera control algorithm to generate a subsequently displayed binocular image using the scene depth range, and making the subsequently displayed binocular image the currently displayed image.
  • the method for varying binocular image content may further include receiving a second input from the gaze tracker and scene depth measure and using the second input from the current binocular image, the gaze tracker and the scene depth measure to calculate the region of binocular interest in the scene when the region of binocular interest has not substantially changed.
  • the method may also include determining a region of binocular fixation in display space (RBFd) by using gaze tracking information from a viewer watching a displayed binocular image, and calculating the equivalent region of binocular fixation in a scene space (RBFs) by using the region of binocular fixation in display space (RBFd) provided to an image controller.
  • the method may include using the region of binocular fixation in display space (RBFd) and the equivalent region of binocular fixation in the scene space (RBFs).
  • the method may further include changing the region of binocular interest based on scene changes while the region of binocular fixation in display space does not substantially change.
  • a method for varying binocular image content may include displaying a current binocular image, using input from the current binocular image and a gaze tracker to calculate a subsequent region of binocular fixation, and determining any change in binocular fixation between a current region of binocular fixation and the subsequent region of binocular fixation. In the case of a change in binocular fixation between the current region of binocular fixation and the subsequent region of binocular fixation, the method may include calculating a disparity range of the subsequent region of binocular fixation. The method may also include determining whether the disparity range is substantially zero and creating a subsequently displayed image when the disparity range is not substantially zero. The method may also include making the subsequently displayed binocular image the currently displayed image.
  • the method may include receiving a second input from the gaze tracker and using the second input from the current binocular image and the gaze tracker to calculate a subsequent region of binocular fixation when the subsequent region of binocular fixation has not substantially changed.
  • the method may include receiving a third input from the gaze tracker and using the third input from the current binocular image and the gaze tracker to calculate a subsequent region of binocular fixation when the disparity range is approximately zero.
  • the gaze tracker may determine the disparity within the fixated region, in which the gaze tracker determines the plane of fixation from the difference between left eye and right eye screen fixation points.
  • the method may include comparing the image disparity of the subsequent object with zero, in which the subsequent object is being imaged where it is the closest object to a viewer in the region of binocular fixation.
  • the method may also include altering a subsequently displayed image in response to a change in the region of binocular fixation between the currently displayed binocular image and the subsequently displayed binocular image and also may form a currently displayed binocular image.
  • Forming a currently displayed binocular image may include estimating a 3D region of fixation and projecting the 3D region of fixation into an image plane to form a binocular region of fixation.
  • the currently displayed binocular image is formed as a left image and a right image and may be selected from a larger source image.
  • FIGURE 1 is a schematic diagram illustrating one embodiment of a binocular imaging apparatus, in accordance with the present disclosure
  • FIGURE 2 is a schematic diagram illustrating one embodiment of images for the left and right eye, in accordance with the present disclosure
  • FIGURE 3 is a schematic diagram illustrating one embodiment of a currently displayed binocular image pair in accordance with the present disclosure
  • FIGURE 4 is a schematic diagram illustrating one embodiment of the viewer's region of binocular fixation, in accordance with the present disclosure
  • FIGURE 5 is a schematic diagram illustrating one embodiment of a displayed image with little to no disparity, in accordance with the present disclosure
  • FIGURE 6 is a schematic diagram illustrating a flow chart, in accordance with the present disclosure
  • FIGURE 7 is a schematic diagram illustrating one embodiment of a binocular image pair, in accordance with the present disclosure
  • FIGURE 8 is a schematic diagram illustrating one embodiment of a displayed image, in accordance with the present disclosure.
  • FIGURE 9 is a schematic diagram illustrating one embodiment of a displayed image, in accordance with the present disclosure.
  • FIGURE 10 is a schematic diagram illustrating one embodiment of a displayed image, in accordance with the present disclosure.
  • FIGURE 11 is a schematic diagram illustrating a flow chart, in accordance with the present disclosure.
  • FIGURE 12 is a schematic diagram illustrating one embodiment of a gaze tracking system, in accordance with the present disclosure.
  • FIGURE 13 is a schematic diagram illustrating one embodiment of a scene space, in accordance with the present disclosure.
  • FIGURE 14 is a schematic diagram illustrating a flow chart, in accordance with the present disclosure.
  • FIGURE 15 is a schematic diagram illustrating one embodiment of a binocular image, in accordance with the present disclosure.
  • FIGURE 16 is a schematic diagram illustrating one embodiment of a scene depth range and depth budgets, in accordance with the present disclosure
  • FIGURE 17 is a schematic diagram illustrating one embodiment of an image controller's response, in accordance with the present disclosure.
  • FIGURE 18 is a schematic diagram illustrating one embodiment of an image controller's response, in accordance with the present disclosure.
  • FIGURE 19 is a schematic diagram illustrating one embodiment of an image controller's response, in accordance with the present disclosure.
  • FIGURE 20 is a schematic diagram illustrating a flow chart, in accordance with the present disclosure.
  • FIGURE 21 is a schematic diagram illustrating one embodiment of scene depth range and perceived depth range, in accordance with the present disclosure
  • FIGURE 22 is a schematic diagram illustrating one embodiment of scene depth range and perceived depth range, in accordance with the present disclosure
  • FIGURE 23 is a schematic diagram illustrating one embodiment of scene depth range and perceived depth range, in accordance with the present disclosure
  • FIGURE 24 is a schematic diagram illustrating one embodiment of scene depth range and perceived depth range, in accordance with the present disclosure
  • FIGURE 25 is a schematic diagram illustrating one embodiment of scene depth range and perceived depth range, in accordance with the present disclosure
  • FIGURE 26 is a schematic diagram illustrating a flow chart, in accordance with the present disclosure.
  • FIGURE 27 is a schematic diagram illustrating one embodiment of scene depth range and perceived depth range, in accordance with the present disclosure
  • FIGURE 28 is a schematic diagram illustrating one embodiment of a scene depth range and perceived depth budget.
  • FIGURE 29 is a schematic diagram illustrating one embodiment of an image system, in accordance with the present disclosure.
  • FIGURE 1 illustrates a binocular imaging system which may include a binocular image display 5 for presenting different images perceptually and substantially simultaneously to the left and right eyes.
  • the viewer sees the binocular images presented by image display 5, in which left eye images are seen by the left eye 1 with a field of view 3 of the display, and right eye images are seen by the right eye 2 with a field of view 4 of the display.
  • the binocular imaging system may also include a gaze tracking element 6 that may identify one or both gaze directions of the left eye and right eye 7, 8, respectively, and may include a way to calculate the viewer's binocular region of fixation 9.
  • the gaze tracking element 6 may calculate the viewer's binocular region of fixation 9 by way of any appropriate processing or computing system that may include a computer readable medium.
  • the binocular region of fixation 9 may be a three dimensional region in which the location varies with the direction of gaze.
  • the binocular imaging system may also include a way to alter the displayed images in such a way as to affect the local image depth content both in and surrounding the binocular region of fixation.
  • a method may discuss a fixated region which has substantially zero disparity.
  • This embodiment may provide a method in the image controller 10 that may alter the subsequently displayed images in response to the change of the viewer's region of binocular fixation 9 between the currently displayed binocular image and the subsequently displayed binocular image, as generally illustrated in FIGURES 6, 2, and 1.
  • step S60 forms the currently displayed images as a left image 22 and right image 23.
  • the currently displayed images may be selected from larger source images 20 and 21, as shown in FIGURE 2.
  • step S61 of FIGURE 6 the current binocular images are displayed by the image display 5 of FIGURE 1.
  • the left and right images 22 and 23 of FIGURE 2 may contain images of objects for example 24, 25, 26 whose horizontal location may differ. This horizontal difference between images of the same object in different locations in left and right eye views is known as image disparity and its magnitude and sign controls the depth perceived by the viewer when they binocularly fuse the left and right image.
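  • For reference (this relation is not stated in the original text but follows from standard stereoscopic viewing geometry), the perceived depth of a fused point can be written in terms of the screen disparity d, the eye separation e, and the viewing distance z, with d taken positive for uncrossed (behind-screen) disparity:

```latex
% Standard stereoscopic geometry (not from the patent text): a point displayed
% with screen disparity d is fused at distance Z from the viewer, where e is the
% eye separation and z the viewing distance; d = 0 places the point on the screen.
\[
  Z = \frac{z\,e}{e - d},
  \qquad
  P = Z - z = \frac{z\,d}{e - d}
  \quad \text{(perceived depth relative to the image plane).}
\]
```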
  • the image controller 10 of FIGURE 1 receives input from the gaze tracker 6 and uses this input to calculate the subsequent binocular region of fixation.
  • step S63 the controller determines any change in binocular fixation between the current and subsequent fixations. If there is no change in the binocular fixation between the current and subsequent fixations, the controller continues at step S62.
  • step S64 when there is a change in binocular fixation, the image controller 10 calculates which subsequent object in the scene is being imaged where it is the closest object to the viewer in the region of binocular fixation.
  • step S65 the controller compares the image disparity of the subsequent object with zero. If the disparity is zero, the controller continues at step S62.
  • step S66 the controller uses the image disparity of the subsequent object to adjust the subsequently displayed images so that the image disparity of the subsequent object becomes substantially zero.
  • One method is illustrated in FIGURES 3, 4 and 5.
  • FIGURE 3 illustrates the currently displayed binocular image pair 22, 23 where the region of binocular fixation is aligned with the object 25 in the image.
  • FIGURE 3 also includes an illustrative line showing an object with zero disparity in the displayed image. The horizontal disparity between the left and right images for object 25 is zero, as is indicated by the illustrative line 30.
  • the viewer's region of binocular fixation has moved from object 25 to object 24 and the image controller 10 reacts by creating the subsequently displayed images 40 and 41.
  • Region 40 illustrates a newly selected region to be displayed in the left eye view
  • region 41 illustrates a newly selected region to be displayed in the right eye view.
  • the image controller achieves this by finding the disparity for object 24 and, in this case, then slides the right image window to the right by that number of pixels so the disparity of subsequently fixated object 24 is zero.
  • The resulting subsequently displayed images are shown in FIGURE 5, where the horizontal disparity for object 24 is now zero, as shown by the illustrative line 50.
  • FIGURE 5 includes an illustrative line to show the new object with zero disparity in the displayed image.
  • step S66 of FIGURE 6 the subsequently displayed images 40 and 41 are now made the currently displayed images and control returns to step S61 where the currently displayed images are displayed by the image display 5.
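  • A minimal sketch of the control loop described in steps S60-S66 above is given below. It is illustrative only, with hypothetical types and stubbed gaze-tracker and scene inputs; it is not the reference implementation of the disclosure.

```cpp
// Illustrative sketch of the FIGURE 6 style loop: when the region of binocular
// fixation changes, shift the right-eye crop window so that the closest fixated
// object has substantially zero disparity. All names and stub inputs are hypothetical.
#include <cmath>
#include <cstdio>

struct Region { float x, y, z; };                       // centre of the region of binocular fixation
struct Object { int id; float disparityPx; };           // closest object and its current image disparity

// Stand-ins for the gaze tracker (6) and the scene model.
static Region readFixation(int frame) { return { 0.1f * frame, 0.0f, 1.5f }; }
static bool fixationChanged(const Region& a, const Region& b) {
    return std::fabs(a.x - b.x) + std::fabs(a.y - b.y) + std::fabs(a.z - b.z) > 0.05f;
}
static Object closestObjectInRegion(const Region& r) { return { 24, 6.0f * r.x }; }   // step S64 stand-in

int main() {
    Region current = readFixation(0);
    int rightWindowOffsetPx = 0;                         // horizontal offset of the right-eye window

    for (int frame = 1; frame <= 3; ++frame) {           // display loop, steps S61-S62
        Region subsequent = readFixation(frame);
        if (fixationChanged(current, subsequent)) {      // step S63
            Object obj = closestObjectInRegion(subsequent);            // step S64
            if (std::fabs(obj.disparityPx) > 0.0f) {                   // step S65
                // Step S66: slide the right-eye window by the object's disparity
                // so the newly fixated object lands at substantially zero disparity.
                rightWindowOffsetPx += static_cast<int>(std::lround(obj.disparityPx));
            }
            current = subsequent;                        // subsequent images become the current images
        }
        std::printf("frame %d: right window offset = %d px\n", frame, rightWindowOffsetPx);
    }
    return 0;
}
```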
  • Another related embodiment may include a fixated region which may have substantially zero disparity.
  • This embodiment adjusts the imagery in a similar manner to that of the previous embodiment but uses the gaze tracker to determine the disparity within the fixated region.
  • the gaze detector can determine the plane of fixation from the difference between left and right eye screen fixation points. If the plane of fixation is in front of the display screen, for example when the left eye's fixation point on the display is to the right of the right eye's fixation point, it can be inferred with little to no calculation that the imagery in the fixated region has negative disparity. Shifting the imagery relative to each other can remove this negative disparity and provide for substantially zero disparity as in the previous embodiment.
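  • The inference described above can be expressed compactly; the sketch below (hypothetical names, not from the disclosure) derives the sign of the fixated disparity from the two on-screen fixation points reported by a gaze tracker.

```cpp
// Sketch: infer the disparity of the fixated imagery from the left- and
// right-eye fixation points on the screen. A negative value (left fixation to
// the right of the right fixation) implies the plane of fixation is in front
// of the display, i.e. crossed disparity. Names are illustrative.
#include <cstdio>

struct ScreenPoint { float x; float y; };                // on-screen fixation point, in pixels

float fixationDisparityPx(ScreenPoint leftEye, ScreenPoint rightEye) {
    return rightEye.x - leftEye.x;                       // sign convention: negative = in front of screen
}

int main() {
    float d = fixationDisparityPx({512.0f, 300.0f}, {498.0f, 300.0f});
    std::printf("fixated disparity: %.1f px (%s the screen plane)\n",
                d, d < 0.0f ? "in front of" : "at or behind");
    // Shifting the left and right images relative to each other by -d pixels would
    // bring the fixated region to substantially zero disparity, as described above.
    return 0;
}
```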
  • Yet another embodiment may have a fixated region with disparity and the surrounding region may have substantially no disparity.
  • This embodiment may provide a method in the image controller 10 that may alter the subsequently displayed images in response to the change in the viewer's region of binocular fixation 9 between the currently displayed binocular image and the subsequently displayed binocular image. Discussion is provided in FIGURES 11, 7, 8, 9, and 10.
  • step S110 forms the currently displayed images as in FIGURE 7 for left eye 70 and right eye 71, respectively.
  • FIGURE 7 includes a left image 70 of a currently displayed binocular image pair, a right image 71 of a currently displayed binocular image pair, a binocular region of fixation 72 projected into the image plane, an object 73 seen in the left and right image, another object 74 seen in the left and right image, and yet another object 75 seen in the left and right image.
  • a 3D region of fixation is measured or estimated and is projected into the image plane 72 to form the binocular region of fixation. Any image information outside this region 72 in the images 70 and 71 is the same in each image.
  • FIGURE 8 includes an illustrative line 80 showing there is zero image disparity for object 75, an illustrative line 81 showing there is zero image disparity for object 73, an illustrative line 82 showing the horizontal position of object 74 in left image 70, an illustrative line 83 showing the horizontal position of object 74 in right image 71, and a horizontal image disparity 84 for object 74.
  • the objects 75 and 73 included in the scene may be outside region 72 and may have no binocular disparity shown by illustrative lines 80 and 81 in FIGURE 8. Meanwhile, the object 74 inside region 72 has binocular disparity 84 and is shown by illustrative lines 82 and 83.
  • step S111 of FIGURE 11 the current binocular image pair 70 and 71 is displayed on the image display 5 of FIGURE 1.
  • step S112 the image controller 10 of FIGURE 1 receives input from the gaze tracker and calculates the subsequent region of binocular fixation 92 in FIGURE 9.
  • step S113 if the region of binocular fixation has not changed, then control returns to step S112 of FIGURE 11.
  • step S114 of FIGURE 11 the region of binocular fixation has changed and the image controller calculates the depth range of the subsequent region of binocular fixation in the scene.
  • the depth range information may be used to create a formed binocular image for the region of binocular fixation 92 which may be combined with a monocular image to form the subsequently displayed images 90 and 91 as shown in FIGURE 9.
  • FIGURE 9 includes a left image 90 of a binocular image pair, a right image 91 of a binocular image pair, and a binocular region of fixation 92 in the projected image plane.
  • FIGURE 10 includes an illustrative line 100 showing there is zero image disparity for object 75, an illustrative line 101 showing there is zero image disparity for object 73, an illustrative line 102 showing the horizontal position of object 74 in left image 70, an illustrative line 103 showing the horizontal position of object 74 in right image 71, and a horizontal image disparity 104 for object 74. Additionally, objects 74 and 75 outside the region of binocular fixation no longer have any disparity, as illustrated by lines 100 and 101. Finally, in step S116 the subsequently displayed binocular image is made the currently displayed binocular image and control returns to step S111.
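  • A compact illustration of the composition step implied by this embodiment is sketched below, assuming hypothetical image and rectangle types: only the region of binocular fixation carries per-eye content, while the surround is identical in both eye images.

```cpp
// Illustrative sketch (not the patent's code): build a binocular pair in which
// disparity is confined to the region of binocular fixation and the surrounding
// image is monocular, i.e. the same pattern of light in both eye images.
#include <cstddef>
#include <vector>

struct Image {
    int width = 0, height = 0;
    std::vector<float> pixels;                           // grayscale, row-major, for simplicity
    float&       at(int x, int y)       { return pixels[static_cast<std::size_t>(y) * width + x]; }
    const float& at(int x, int y) const { return pixels[static_cast<std::size_t>(y) * width + x]; }
};

struct Rect { int x0, y0, x1, y1; };                     // region of binocular fixation in the image plane

void composeFoveatedPair(const Image& monocular,
                         const Image& leftBinocular, const Image& rightBinocular,
                         const Rect& rbf, Image& outLeft, Image& outRight) {
    outLeft = monocular;                                 // surround is identical for both eyes
    outRight = monocular;
    for (int y = rbf.y0; y < rbf.y1; ++y)
        for (int x = rbf.x0; x < rbf.x1; ++x) {
            outLeft.at(x, y)  = leftBinocular.at(x, y);  // per-eye content only inside the region
            outRight.at(x, y) = rightBinocular.at(x, y);
        }
}
```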
  • FIGURE 12 is a schematic diagram illustrating one embodiment of a gaze tracking system.
  • FIGURE 12 provides an example of a viewer and a display 153 and the different elements in display space.
  • a viewer's eyes 155 are looking at a displayed binocular image.
  • the left eye of the viewer may be looking in a direction referred to as a left eye gaze direction 120 and the right eye of the viewer may be looking in a direction referred to as a right eye gaze direction 121.
  • the viewer's eyes 155 may be tracked by a gaze tracking system 6.
  • scene 122 depicts a scene as perceived in a fused binocular image and region 160, RBFd, may be a region of binocular fixation in display space.
  • the gaze tracking system 6 may provide gaze tracking information which may be used to calculate a viewer's subsequent region of binocular fixation, among other things.
  • FIGURE 13 is a schematic diagram illustrating one embodiment of a scene space.
  • FIGURE 13 provides an example of cameras and a scene space.
  • cameras 154 may be located in a position for capturing images of a scene.
  • Located by the cameras 154 may be a depth measurement system 156.
  • although the depth measurement system 156 is illustrated as centrally located between the cameras 154, this is for discussion purposes only and not of limitation, as the depth measurement system may be located in other positions with respect to the scene space as appropriate.
  • the range 150 may be the total scene depth range and the scene depth range 163 may represent the scene depth range to map to a depth budget in display space.
  • the region 162 may be the region of binocular interest, RBI, and the region 161 may be the region of binocular fixation projected into scene space, RBFs.
  • depth mapping from scene to display space may be determined by the region of binocular fixation in the display space. This embodiment is described referring to FIGURES 20, 15, 16, 17, 18, and 19.
  • FIGURE 15 includes a scene depth range 150, a depth measurement element 156, cameras 154, a virtual display 152, a physical display 153, and a viewer's eyes 155.
  • the specific mapping of depth from the scene space being imaged 150 to the display space perceived depth budget 151 can be calculated using a pre-existing camera control algorithm such as in reference U.S. Patent No. 6,798,406, given a depth measurement element 156 to determine the range of depth in the scene.
  • the depth range 150 in the scene can, for example, be computed from a depth map in synthetic scenes or an optical or laser range finder in real scenes.
  • FIGURE 16 includes a calculated region corresponding to RBFs in scene space 161, a calculated region of binocular interest RBI in scene space 162, a scene depth range to map to depth budget 163, a perceived depth budget 151, and a measured region of binocular fixation RBFd in display space 160.
  • the RBFd is used by the image controller 10 of FIGURE 1 to calculate the equivalent region of binocular fixation in the scene space RBFs 161.
  • RBFs 161 may then be used to calculate the region of binocular interest in scene space RBI 162.
  • RBI encompasses any objects that fall in a volume of space that is a super-set of the RBFs.
  • the RBI may be any convenient three-dimensional shape including, but not limited to, a parallelepiped, cylinder, ellipse, frustum, and so forth.
  • step S204 of FIGURE 20 once the RBI is calculated, the scene depth range that is to be mapped to the perceived depth budget can be found by calculating the depth extent of the RBI, illustrated in FIGURE 16 as 163. This allows the application of any depth mapping camera control algorithm as generally discussed in U.S. Patent No. 6,798,406 to generate a subsequent binocular image in step S205 and set this for display in step S206.
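  • The pipeline just described (RBFd from the gaze tracker, RBFs in scene space, an enclosing RBI, and the depth extent of the RBI) can be sketched as follows. This is an illustrative reduction with hypothetical types, using a padded axis-aligned box as the RBI; it is not the cited camera control algorithm.

```cpp
// Sketch with hypothetical types: grow the region of binocular fixation in scene
// space into a region of binocular interest (a superset volume, here a padded
// axis-aligned box) and take the depth extent of intersecting objects as the
// scene depth range to hand to a depth-mapping camera control step.
#include <algorithm>
#include <vector>

struct Box { float xMin, xMax, yMin, yMax, zMin, zMax; };        // volume in scene space
struct DepthRange { float zNear, zFar; };

Box regionOfBinocularInterest(const Box& rbfScene, float padding) {
    return { rbfScene.xMin - padding, rbfScene.xMax + padding,
             rbfScene.yMin - padding, rbfScene.yMax + padding,
             rbfScene.zMin - padding, rbfScene.zMax + padding };
}

DepthRange sceneDepthRangeForRBI(const Box& rbi, const std::vector<Box>& objectBounds) {
    DepthRange range{ rbi.zMax, rbi.zMin };                      // start inverted, then expand
    for (const Box& b : objectBounds) {
        bool intersects = b.xMax >= rbi.xMin && b.xMin <= rbi.xMax &&
                          b.yMax >= rbi.yMin && b.yMin <= rbi.yMax &&
                          b.zMax >= rbi.zMin && b.zMin <= rbi.zMax;
        if (intersects) {                                        // object falls inside the RBI volume
            range.zNear = std::min(range.zNear, b.zMin);
            range.zFar  = std::max(range.zFar,  b.zMax);
        }
    }
    return range;                                                // range mapped to the depth budget next
}
```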
  • FIGURES 17, 18 and 19 illustrate the image controller's response to a real time change in the viewer's region of binocular fixation RBFd.
  • FIGURE 17 includes scene depth range to map to a depth budget 163, a perceived depth budget 151, and a changed location of RBFd in display space 170.
  • in FIGURE 17 the RBFd has changed to a different position in the display space 170 as detected by the gaze-tracking element 6 and calculated by the image controller 10.
  • the image controller 10 calculates a new RBFs 180 as illustrated in FIGURE 18 and additionally calculates a new RBI 181 that forms a volume of space that is a superset of the RBFs 180.
  • FIGURE 18 includes a calculated changed location of RBFs in scene space 180, a calculated changed location of RBI in scene space 181, a scene depth range to map to a depth budget 182, and a perceived depth budget.
  • the new RBI may be larger or smaller than the current value.
  • the scene depth range 182 to be mapped to the depth budget 151 will then also change. Once the scene depth range 182 is known, the application of any depth mapping camera control algorithm as generally discussed in U.S. Patent No. 6,798,406, can map the newly calculated scene depth range 182 to the display perceived depth budget 151.
  • FIGURE 19 shows the new mapping of scene depth to depth budget.
  • the technical benefit is that as the viewer's gaze moves around the scene, as displayed in the binocular image, the depth in the RBFd and corresponding RBI is continuously optimized to fit the available depth budget 151.
  • FIGURE 19 includes a scene depth range 182 to map to a depth budget, a perceived depth budget 151, and a perceived depth range of the entire image 190.
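  • For illustration only, a single-region depth mapping of the kind referred to above can be reduced to a linear map from the selected scene depth range onto a disparity budget; the sketch below is a simple stand-in, not the algorithm of the cited references.

```cpp
// Minimal stand-in for a single-region depth mapping: linearly map a scene depth
// value inside the chosen scene depth range onto a screen-disparity budget
// (pixels). Depths outside the range saturate at the budget limits.
#include <algorithm>

struct DisparityBudget { float nearPx; float farPx; };           // e.g. {-20.0f, +20.0f}

float sceneDepthToDisparityPx(float z, float zNear, float zFar, DisparityBudget budget) {
    float t = (z - zNear) / (zFar - zNear);                      // 0 at the nearest mapped depth, 1 at the farthest
    t = std::clamp(t, 0.0f, 1.0f);
    return budget.nearPx + t * (budget.farPx - budget.nearPx);
}
```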
  • a depth measure element 156, which in computer graphics may be a depth buffer or, in photography, may be a range finder such as an optical or laser device.
  • Yet another embodiment may include a fixated region which may have preferred disparity and the surrounding region may have a different disparity where the total disparity does not exceed a predetermined limit using variable z-region mapping.
  • This embodiment may provide a method in the image controller 10 that is able to alter the subsequently displayed images in response to the change in the viewer's region of binocular fixation 9, between the currently displayed binocular image and the subsequently displayed binocular image, with reference to FIGURES 26, 20, 21, 22, 23, 24, and 25.
  • FIGURE 21 includes a scene depth range 150, cameras 154, a depth measurement element 156, a near region 210, a far region 212, a region of interest 211, a viewer's eyes 155, and a perceived depth range 151.
  • step S260 a first binocular image is formed. This can be formed when a scene depth range 150, as shown in FIGURE 15, is mapped to a perceived depth range 151 using a method as disclosed in references such as U.S. Patent No. 6,798,406, U.S. Patent No. 7,983,477, or U.S. Patent No. 8,300,089, all of which are herein incorporated by reference in their entirety.
  • the first current binocular image is then displayed in step S261.
  • step S262 the image controller receives input from the gaze tracker 6, which allows identification of the region of binocular fixation RBFd 160 in display space. From this, the region of binocular fixation in scene space RBFs 161 can be found, and with additional input from the scene depth measurement element 156 the region of binocular interest RBI 162 in the scene can be calculated. Knowing the RBI or scene depth range 150, it is possible to calculate the scene depth range 163 to be mapped to the perceived depth budget 151 in display space. In this instance 163 is approximately the same as the scene depth range 150, for example because the RBI has not changed, and so no change in the depth mapping is required and step S263 can return to step S262.
  • FIGURE 22 includes a scene depth range 150, cameras 154, a depth measurement element 156, a near region 210, a far region 212, a region of interest 211, a scene depth range 163 to map to a depth budget, a viewer's eyes 155, and a perceived depth range 151.
  • in FIGURE 23 the viewer's gaze has changed and the input from the gaze tracker identifies a subsequent RBFd 230.
  • FIGURE 23, and similarly FIGURES 24 and 25, all include a scene depth range 150, cameras 154, a depth measurement element 156, a near region 210, a far region 212, a region of interest 211, a viewer's eyes 155, and a perceived depth range 151. Then, as illustrated in FIGURE 24, this allows a subsequent RBFs 240 to be calculated and from this the subsequent RBI 241. As the subsequent RBI 241 is now different from the current RBI 162 (as illustrated in FIGURE 16), execution continues at step S264 and the subsequent scene depth range 163 is calculated.
  • Step S265 then calculates a new mapping of depth from the scene to the display space.
  • FIGURE 25 illustrates one way to implement the mapping for step S265 using a multi- region depth mapping algorithm such as generally disclosed in U.S. Patent No. 7,983,477.
  • the RBI can be considered as a region of interest 211 dividing the scene into three regions including a nearer region 210 and a further region 212. These are then mapped to three corresponding regions in the display space, 213, 214, and 215. Because the regions 213, 214, and 215 may differ in the amount of perceived depth allocated to them, the region of interest 211, and hence the RBI, can be given a preferential amount of scene depth compared to the near and far regions. Additionally, it prevents any objects of the scene from appearing outside of the perceived depth range 151, unlike, for example, the single region mapping as illustrated in FIGURE 19. Once the subsequent image is formed it is set to be the current image in step S266 and control returns to step S261.
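  • A piecewise-linear, three-region mapping of the kind described above can be sketched as follows. It is an illustrative reduction only, not the method of U.S. Patent No. 7,983,477: the region of interest receives a preferential share of the disparity budget and nothing maps outside the budget.

```cpp
// Illustrative piecewise-linear, three-region depth mapping (a sketch, not the
// cited reference's method): the region of interest (the RBI) receives a
// preferential share of the perceived depth budget, and no depth maps outside it.
#include <algorithm>

struct ThreeRegionMap {
    float zNear, zRoiNear, zRoiFar, zFar;    // scene-space boundaries: near | RBI | far
    float budgetPx;                          // total disparity budget, e.g. 40 px
    float roiShare;                          // fraction of the budget given to the RBI, e.g. 0.6f
};

float mapDepthToDisparityPx(float z, const ThreeRegionMap& m) {
    float nearShare = (1.0f - m.roiShare) * 0.5f;          // remainder split between near and far regions
    float b0 = -0.5f * m.budgetPx;                         // disparity at the nearest scene depth
    float b1 = b0 + nearShare * m.budgetPx;                // near/RBI boundary disparity
    float b2 = b1 + m.roiShare * m.budgetPx;               // RBI/far boundary disparity
    float b3 = +0.5f * m.budgetPx;                         // disparity at the farthest scene depth
    z = std::clamp(z, m.zNear, m.zFar);
    if (z <= m.zRoiNear)                                   // near region: small slice of the budget
        return b0 + (z - m.zNear) / (m.zRoiNear - m.zNear) * (b1 - b0);
    if (z <= m.zRoiFar)                                    // RBI: preferential slice of the budget
        return b1 + (z - m.zRoiNear) / (m.zRoiFar - m.zRoiNear) * (b2 - b1);
    return b2 + (z - m.zRoiFar) / (m.zFar - m.zRoiFar) * (b3 - b2);   // far region
}
```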
  • Yet another embodiment may include a fixated region which may have preferred disparity and the surrounding region may have a different disparity where the total disparity does not exceed a predetermined limit using variable camera parameters in one or two dimensions.
  • This embodiment may provide a method in the image controller 10 that may alter the subsequently displayed images in response to changes in the viewer's region of binocular fixation 9 between the currently displayed binocular image and the subsequently displayed binocular image, with reference to FIGURES 27, 28, 29, and 30.
  • the flowchart step S300 forms a first binocular image.
  • This image may be formed when a scene depth range 150 is mapped to a perceived depth range 151 using a method as disclosed in references such as U.S. Patent No. 6,798,406, U.S. Patent No. 7,983,477, or U.S. Patent No. 8,300,089.
  • step S302 the image controller receives input from the gaze tracker 6 and this may allow identification of the region of binocular fixation RBFd 160 in display space. From this, the region of binocular fixation RBFs 161 in scene space can be found, and with additional input from the scene depth measurement element 156 the region of binocular interest RBI 162 in the scene can be calculated. Knowing the RBI, it is possible to calculate the scene depth range 163 to be mapped to the perceived depth range 151 in display space. If the RBI has not changed, no change in the depth mapping is required and step S303 returns to S302.
  • when an RBI has been identified, a locally varying depth mapping from scene space to display space can be calculated in step S304. This can vary the stereoscopic camera parameters used to capture the image. For example, a full stereoscopic 3D effect near the RBI may change to a simple 2D effect outside the RBI, as illustrated in FIGURE 27.
  • FIGURE 27 includes 270 and 271 which may be the locally varying perceived depth range in the image, objects 272 which may be objects outside the RBI (162) and are allocated no perceived depth using disparity, a scene depth range 163 to map to a depth budget, and a perceived depth budget 151.
  • FIGURE 28 includes 280 and 281 which may be the new locally varying perceived depth range in the image, objects 282 which may be objects outside the new RBI (181) and which are allocated no perceived depth, a scene depth range 182 to map to a depth budget, and a perceived depth budget.
  • FIGURE 29 illustrates how the camera parameters used for rendering in step S305 can vary in one dimension depending on where in the stereoscopic image different scene elements may appear.
  • FIGURE 29 includes a scene depth range 163.
  • elements away from the RBI may be rendered from a single central camera viewpoint C, while elements in the RBI are rendered with a stereoscopic camera setting A0, which may be calculated using methods as generally discussed in U.S. Patent No. 6,798,406.
  • the camera setup is linearly interpolated, with the interaxial separation Ai reducing until use of the single central camera C alone is appropriate.
  • a further embodiment of this approach is to vary the camera parameters with vertical as well as horizontal element position, so that the regions of the image that are horizontally and vertically close to the RBI, are rendered with full stereoscopic effect.
  • Listing 1 provides an outline of a GLSL vertex shader solution, as generally discussed in the OpenGL Reference Pages at http://www.opengl.org/documentation/glsl/, for interpolating the camera parameters appropriate for projecting and shading vertices in a real time computer graphics system, in order to produce a foveated stereoscopic rendering effect.
  • float fadeZone = 30.0; // Width of cross fade region.
  • weightY = (vertY - rABound) / fadeZone;
  • weightY = max(weightY, 0.0); // Calculate weight in the Y direction.
  • weight = max(weight, weightY); // Choose to use max of X and Y weights.
  • MVPMat = projectionMatrix * viewMatrix * modelMatrix;
  • cMVPMat = cProjectionMatrix * cViewMatrix * modelMatrix;
  • Lines 53-60 describe how the surface normal vectors are transformed using the weighted normal transformation matrix and then used to calculate a shaded color value for the vertex using for illustration a single light Lambertian shading model.
  • Lines 62-68 describe how the vertex position is transformed using the weighted model-view-projection matrix.
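  • A host-side C++ mirror of the weight interpolation sketched by the listing fragments above is given below; fadeZone and rABound follow the fragments, while the camera blend and the remaining names are hypothetical.

```cpp
// Host-side C++ mirror of the interpolation the GLSL fragments sketch
// (illustrative only): a cross-fade weight that is 0 inside the RBI and ramps
// to 1 over a fade zone, used to blend the full stereoscopic camera setting A0
// toward a single central camera C. The RBI is assumed centred at the origin.
#include <algorithm>
#include <cmath>

struct StereoCamera { float interaxial; float convergence; };

float foveationWeight(float vertX, float vertY, float rABoundX, float rABoundY) {
    const float fadeZone = 30.0f;                                  // width of the cross fade region
    float weightX = std::max((std::fabs(vertX) - rABoundX) / fadeZone, 0.0f);
    float weightY = std::max((std::fabs(vertY) - rABoundY) / fadeZone, 0.0f);
    return std::min(std::max(weightX, weightY), 1.0f);             // max of X and Y weights, clamped
}

// Blend from the stereoscopic setting A0 (inside the RBI, weight 0)
// toward a single central camera (outside the RBI, weight 1).
StereoCamera blendCamera(const StereoCamera& a0, float weight) {
    return { a0.interaxial * (1.0f - weight), a0.convergence };
}
```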
  • Yet another embodiment may include a fixated region which may have preferred disparity and the surrounding region may have a different disparity in which the total disparity does not exceed a predetermined limit using variable camera parameters in three dimensions.
  • a tri-linear interpolation model to calculate the weight value will allow the depth dimension to be foveated as well as the two image dimensions. This can be implemented using a camera model as described in U.S. Patent No. 7,983,477 or U.S. Patent No. 8,300,089 in which the mapping of the depth dimension is variable.
  • the benefit may include optimizing the depth presentation of the image seen in the foveated region while reducing the computational or depth budget demands for drawing the image regions representing the scene in-front and behind this region in depth. For example, in a driving game the best image quality is given to the region of the scene to which the driver is attending.
  • Another embodiment may have a fixated region which may have preferred disparity and the surrounding region may have a different disparity in which the total disparity does not exceed a predetermined limit by any of the above methods and multiple fixated regions are computed to allow multiple gaze-tracked viewers to watch the same screen and attend to different parts of the screen. For example, if there are multiple viewers of the same screen then the multiple viewers are generally unlikely to be fixating on the same region of the image.
  • This can be solved with multiple gaze tracking devices, for example with each viewer wearing a head mounted Eye Link II eye tracker from SR Research Ltd., Mississauga, Ontario, Canada. Using the eye tracking information, each viewer's RBI can be calculated and used to determine the individual weights used to control the camera parameters across the screen.
  • this embodiment enables multiple viewers to look at a gaze tracked image, and although temporally varying, the regions of interest are often similar enough between viewers, as generally disclosed in Active Vision, Findlay and Gilchrist, OUP, 2003; this may result in savings in image regions to which none of the viewers may attend.
  • a fixated region may have preferred disparity and the surrounding region may have a different disparity in which the total disparity does not exceed a predetermined limit by any of the above methods and the disparity is temporally modified to distinguish the region or a part of one of the regions.
  • in temporal disparity modification a viewer relocates their fixation to a region of interest and then, after a period, the disparity in the region of fixation is altered to introduce a noticeable depth alteration. In turn this introduces a detectable change in the convergence and/or divergence of each eye, allowing the eyes' mean fixation point to be calculated.
  • this can be used to provide an embodiment for user interfaces in which the icons on a desktop are given differential depth within a finite but coarse fixation region. One icon may then be temporally modified. If differential changes in convergence and/or divergence of the eyes are detected, it can then be inferred that the eyes are fixating the varying icon.
  • the icon could then be primed for selection. Once an icon is primed in this way the viewer can activate the primed icon by pressing a button, long fixation, blinking, any combination thereof, and so forth.
  • a further enhancement can allow the system to dynamically adapt to the user: if the system does not detect any change in eye convergence and/or divergence, it can alter which icon is varied in disparity and eventually prime a specific icon when it detects a temporal change in eye convergence.
  • the benefit of temporally altering the disparity here is to use the induced temporal changes in eye convergence and/or divergence to increase the overall system confidence regarding which icon is being attended to.
  • temporal disparity modification might not be continuous.
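  • The priming logic described above can be sketched as below; the thresholds, names, and the stubbed vergence measurement are hypothetical, and the vergence response is simulated rather than measured by a real tracker.

```cpp
// Sketch of the icon-priming idea: modulate one icon's disparity at a time and
// prime it only if the measured vergence (right minus left fixation x) changes
// in step with the modulation. All names, values, and stubs are hypothetical.
#include <cmath>
#include <cstdio>

struct Vergence { float px; };                           // right fixation x minus left fixation x

static const int kSimulatedFixatedIcon = 1;              // stand-in for where the viewer actually looks

// Stub for the gaze tracker: the eyes re-converge only when the icon being
// fixated is the one whose disparity is being modulated.
static Vergence measureVergence(int modulatedIcon, float appliedDisparityPx) {
    return { modulatedIcon == kSimulatedFixatedIcon ? appliedDisparityPx * 0.9f : 0.0f };
}

int main() {
    const float modulationPx = 4.0f;                     // temporal disparity modulation amplitude
    const float threshold = 2.0f;                        // minimum correlated vergence change to prime
    for (int icon = 0; icon < 3; ++icon) {               // modulate one icon at a time
        Vergence before = measureVergence(icon, 0.0f);
        Vergence after  = measureVergence(icon, modulationPx);
        bool primed = std::fabs(after.px - before.px) > threshold;
        std::printf("icon %d: vergence change %.1f px -> %s\n",
                    icon, after.px - before.px, primed ? "primed for selection" : "no response");
    }
    return 0;
}
```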
  • An alternative use of the temporal variation in disparity may be to attract attention to regions of a binocular image. In this case regions outside the region of attention can be varied to attract the viewer to attend to them, for example because of the looming effect as generally discussed in Basic Vision, Snowden, Thompson, Troscianko, OUP, 2006.
  • a direct benefit of this is in warning systems in which there is a need for the viewer's attention to be drawn to urgent information in a region outside their current region of fixation.
  • the disparity of a warning icon outside the region of interest is varied temporally.
  • One example of such an icon may be a low battery warning indicator. Although it is unlikely that it would be in the viewer's region of fixation, it is important to draw the viewer's attention to the icon when the battery is lower than a predetermined remaining capacity. It may be evident to those skilled in the art that there are many other icons for which this may be of benefit in many types of information presentation systems.
  • Yet another embodiment may include a fixated region which may have preferred disparity and the surrounding region may have a different disparity in which the total disparity does not exceed a predetermined limit by any of the above methods and one or more of the following image quality parameters are also varied.
  • image quality parameters are listed below.
  • the first image quality parameter that may be varied is color quality in terms of bits per pixel, or other color representation scheme. This may benefit the highlighting of certain areas using enhanced or reduced color representations and the benefit of reduced color computation time and/or energy saving in the GPU and/or reduced display bandwidth in those areas of reduced color representations.
  • Another image quality parameter that may be varied is grey level quality in terms of bits per pixel, or other grey level representation scheme. This may provide the benefit of highlighting certain areas using enhanced or reduced grey level representations and the benefit of reduced grey level computation time and/or reduced display bandwidth in those areas of reduced grey level representations.
  • Another image quality parameter that may be varied is image luminance, in terms of total light power, for example by using less of the display brightness range, or by using a high dynamic range display with an ability to boost brightness in a particular region of an image.
  • This has benefits including, but not limited to, reduced power usage in regions of the screen with lower brightness and lower visibility of high frequency image artifacts such as aliasing in the lower brightness regions of the image when lower resolution image content is used.
  • Another image quality parameter that may be varied is image contrast, for example by changing the gamma curve of the displayed image. This has the benefit of masking the visibility of other performance changes. For example, reduced resolution can result in blockiness in the image which can be masked with a low pass filter.
  • Another image quality parameter that may be varied is image spatial frequency content, for example using high, low or band pass filters.
  • regions can be blurred to reduce computation and reduce spatial resolution that may be appropriate in some regions of the image. This may contribute to reducing computational demands in regions of the screen with lower spatial frequency.
  • Another image quality parameter that may be varied is image temporal frequency using higher or lower image refresh rates in different areas of the screen. This may contribute to reducing computational and display bandwidth conditions in regions of the screen with lower temporal frequency.
  • Another image quality parameter that may be varied is scene geometry content in which the quality of the computer graphics model is varied by changing the quality of geometric model used to represent objects. This may contribute to reducing computational bandwidth conditions in regions of the screen with reduced quality geometric models, for example, lower number of triangles in geometry meshes.
  • Another image quality parameter that may be varied is scene texture image content in which the quality of the computer graphics model texture images is varied. This may contribute to reducing computational bandwidth conditions in regions of the screen with reduced quality texture images, for example lower resolution images.
  • Another image quality parameter that may be varied is computer graphics rendering parameters so that effects including specular highlights, reflection, refraction, transparency vary in quality between the image regions. This may contribute to reducing computational bandwidth conditions in regions of the screen with reduced graphics effects.
  • Another image quality parameter that may be varied is disparity gradient in terms of maximum gradient allowed in one region compared to another region. This may contribute to improving perceived image quality in image regions in which disparity gradient may otherwise be too high to fuse the images comfortably, or so high that it may be detrimental to task performance.
  • binocular fixation may be a volume in space around the point of intersection of the two optical axes of the eyes.
  • binocular image may be a pattern of light that generates separate stimulus for the two eyes. This may include multiple resolvable views in different directions over each pupil. It can, for example, be generated using discrete views or continuous wave fronts, technically produced using stereoscopic, auto-stereoscopic, multiscopic or holographic optical devices.
  • a binocularly fused image may be a perceptually single view (cyclopean view) of the world formed by fusing two images. This may provide a sensation of (perceived) depth in the scene.
  • capture may be a process that generates a binocular image from the real world or synthetic data.
  • the binocular image may be captured using optical functions such as still or motion cameras, or rendered using computer graphics or other image synthesis mechanisms.
  • depth budget may be a range of perceived depth, implying a range of binocular disparity that has been chosen as the total limit of perceived depth seen in a binocularly fused image.
  • the depth budget may be chosen for comfort or technical reasons.
  • depth mapping may be the process of capturing depth from a scene and reproducing it as perceived depth in a binocular image.
  • depth measurement or depth measurement element may be a mechanism, real or virtual, for measuring distance, depth, of a surface from a fixed point.
  • this may be a laser rangefinder, an optical range finder, and so forth.
  • this may be a depth map, or a geometric calculation that measures the distance from a fixed point.
  • the depth measurements may be relative to camera position and may be used to calculate a depth mapping from scene space to the perceived image space.
  • gaze tracking may include methods for following the eyes movements to determine the direction of gaze. These can be implemented with devices that employ direct contact with the eye or are remote measurement elements that, for example, follow reflections of light from the eye.
  • a foveated image may be an image that is perceived in the foveal region of the retina.
  • a foveated region may be a region in an image or a scene that is perceived in the foveal region of the retina.
  • an image may be a pattern of light that can be detected by the retina.
  • disparity may be a difference in the location of a point, normally horizontal, in which horizontal is taken to be defined by the line joining the two eyes and the disparity is measured on the retina.
  • a monoscopic image may be an image that is substantially the same when viewed from any direction. If presented to both eyes, both eyes receive substantially the same pattern of light. For example, a standard 2D TV presents a monoscopic stimulus; each pixel broadcasts substantially similar or the same light in all viewing directions.
  • a region of binocular fixation in display space may be RBFd or a volume in display space that corresponds to the region of overlap of the gaze zones of the two eyes.
  • a region of binocular fixation in scene space may be RBFs or a volume in scene space that corresponds to the region of overlap of the gaze zones of the two eyes.
  • a region of binocular interest may be an RBI or a volume of scene space that includes the region of binocular fixation and is extended to include the scene limited by the gaze zones of the two eyes.
  • scene depth range may be a range of depth measured in the scene, usually that may be mapped to a range of perceived depth in a fused binocular image.
  • a stereoscopic image may be an image that includes a pair of images that are presented separately to each eye. The implication is that the position of each of the viewer's eyes is important when viewing a stereoscopic image as a different pattern of light is received on the two retinas.
  • rendering may be the process of creating an image from a synthetic scene.
  • synthetic scenes may be scenes in a computer graphics, virtual world or depth-based image that may not be physically real, though they may represent physically real scenes.
  • a view may be a unique image visible in a single direction.
  • a scene may be a real world or synthetic scene which is being captured and then reproduced as a binocular image.
  • the terms “substantially” and “approximately” provide an industry-accepted tolerance for their corresponding terms and/or relativity between items. Such an industry-accepted tolerance ranges from less than one percent to ten percent and corresponds to, but is not limited to, component values, angles, et cetera. Such relativity between items ranges from less than one percent to ten percent.
  • embodiments of the present disclosure may be used in a variety of optical systems.
  • the embodiment may include or work with a variety of projectors, projection systems, optical components, computer systems, processors, self- contained projector systems, visual and/or audiovisual systems and electrical and/or optical devices.
  • aspects of the present disclosure may be used with practically any apparatus related to optical and electrical devices, optical systems, display systems, presentation systems or any apparatus that may contain any type of optical system.
  • embodiments of the present disclosure may be employed in optical systems, devices used in visual and/or optical presentations, visual peripherals and so on and in a number of computing environments including the Internet, intranets, local area networks, wide area networks and so on.

Abstract

A controller that may implement variation of the content of binocular images which may depend upon which region of a binocular image a viewer is fixating. An aspect of the present disclosure may include locally controlling the viewer's perceived depth impression which may depend on where in perceived depth in an image the viewer is fixating. This may enable the perceived depth to be optimized across the image for quality and performance reasons.

Description

Binocular fixation imaging method and apparatus
Cross-Reference to Related Applications
[0001] This application claims priority to U.S. Provisional Patent Application Ser. No. 61/766,599, filed February 19, 2013, entitled "Binocular fixation imaging method and apparatus," (Attorney Reference No. 95194936.358000), the entirety of which is herein incorporated by reference.
Technical Field
[0002] The present disclosure generally relates to image processing, and more specifically, to depth budget and image processing methods, and technologies.
Background
[0003] Depth budget has become an important concept in binocular image creation. It may create a limit on the total binocular effect in a three dimensional image. This limit in practice is determined by considering many factors including the limits of the human visual system and the parameters of the image display device being used to present the image to the viewer. For stereoscopic images presented in an image plane the depth budget is often discussed in terms of depth behind and in front of the image plane.
[0004] Techniques are known for controlling the perceived depth in stereoscopic images so that the total binocular effect remains within the depth budget by controlling the capture or synthesis of the image(s). FIGURE 15 illustrates a reference U.S. Patent No. 6,798,406, which generally provides a method for producing a stereoscopic image using at least one real or simulated camera wherein the depth of a scene is mapped to a predetermined depth budget in the perceived stereoscopic image. FIGURE 21 illustrates a reference U.S. Patent No. 7,983,477, which generally discusses variable depth mapping from scene to perceived stereoscopic image. In addition, a method such as that disclosed in U.S. Patent No. 8,300,089 can also be used for variable depth mapping in the depth (Z) dimension.
[0005] The eye's binocular fixation may be determined using a range of eye tracking devices, either by tracking both eyes or by tracking a single eye and inferring the other from this information. One example of a binocular fixation tracking system is the EyeLink 1000, by SR Research Ltd., Mississauga, Ontario, Canada, which tracks both eyes at high speed.
Brief Summary
[0006] An aspect of the present disclosure provides a controller that implements variation of the content of binocular images depending upon which region of a binocular image a viewer is fixating. An aspect of the present disclosure includes locally controlling the viewer's perceived depth impression depending on where in perceived depth in an image the viewer is fixating. This has the benefit of enabling the perceived depth to be optimized across the image for quality and performance reasons.
[0007] According to an aspect of the disclosure, a binocular imaging system may include a display for presenting a left eye image and a right eye image perceptually simultaneously, in which the left eye image has an associated left eye field of view of the display and the right eye image has an associated right eye field of view of the display. A gaze tracking element may also be included that may identify at least one or both gaze directions of the left eye and the right eye. The binocular imaging system may further include an image controller that may calculate a binocular region of fixation for the left and right eye, and that alters the displayed left and right eye images. The image controller may alter a subsequently displayed binocular image in response to a change in the region of binocular fixation between a currently displayed binocular image and the subsequently displayed binocular image. Altering the displayed left and right eye images may affect the local image depth content in the binocular region of fixation and surrounding the binocular region of fixation. The binocular region of fixation may include a three dimensional region in which the location varies with the gaze direction of one or both of the left and right eyes.
[0008] According to another aspect of the disclosure, a method for varying binocular image content may include displaying a current binocular image, and using input from the current binocular image, information from a gaze tracker and scene depth measurement information to calculate a region of binocular interest (RBI) in a scene. The method may also include determining whether the region of binocular interest has changed and calculating the scene depth range for mapping to the depth budget when the region of binocular interest has changed. The method may include using a camera control algorithm to generate a subsequently displayed binocular image using the scene depth range and making the subsequently displayed binocular image the currently displayed image.
[0009] The method for varying binocular image content may further include receiving a second input from the gaze tracker and scene depth measure and using the second input from the current binocular image, the gaze tracker and the scene depth measure to calculate the region of binocular interest in the scene when the region of binocular interest has not substantially changed. The method may also include determining a region of binocular fixation in display space (RBFd) by using gaze tracking information from a viewer watching a displayed binocular image and calculating the equivalent region of binocular fixation in a scene space (RBFs) by using the region of binocular fixation in display space (RBFd) provided to an image controller. In determining whether the region of binocular interest has changed, the method may include using the region of binocular fixation in display space (RBFd) and the equivalent region of binocular fixation in the scene space (RBFs). The method may further include changing the region of binocular interest based on scene changes while the region of binocular fixation in display space does not substantially change.
[0010] According to another aspect of the disclosure, a method for varying binocular image content may include displaying a current binocular image, using input from the current binocular image and a gaze tracker to calculate a subsequent region of binocular fixation, and determining any change in binocular fixation between a current region of binocular fixation and the subsequent region of binocular fixation. In the case of a change in binocular fixation between the current region of binocular fixation and the subsequent region of binocular fixation, the method may include calculating a disparity range of the subsequent region of binocular fixation. The method may also include determining whether the disparity range is substantially zero and creating a subsequently displayed image when the disparity range is not substantially zero. The method may also include making the subsequently displayed binocular image the currently displayed image.
[0011] Continuing the discussion, the method may include receiving a second input from the gaze tracker and using the second input from the current binocular image and the gaze tracker to calculate a subsequent region of binocular fixation when the subsequent region of binocular fixation has not substantially changed. The method may include receiving a third input from the gaze tracker and using the third input from the current binocular image and the gaze tracker to calculate a subsequent region of binocular fixation when the disparity range is approximately zero. The gaze tracker may determine the disparity within the fixated region, in which the gaze tracker determines the plane of fixation from the difference between left eye and right eye screen fixation points.
[0012] In the case the method determines whether the disparity range is substantially zero, the method may include comparing the image disparity of the subsequent object with zero, in which the subsequent object is being imaged where it is the closest object to a viewer in the region of binocular fixation. The method may also include altering a subsequently displayed image in response to a change in the region of binocular fixation between the currently displayed binocular image and the subsequently displayed binocular image and also may form a currently displayed binocular image. Forming a currently displayed binocular image may include estimating a 3D region of fixation and projecting the 3D region of fixation into an image plane to form a binocular region of fixation. The currently displayed binocular image is formed as a left image and a right image and may be selected from a larger source image.
Brief Description of the Drawings
[0013] Embodiments are illustrated by way of example in the accompanying figures, in which like reference numbers indicate similar parts, and in which:
[0014] FIGURE 1 is a schematic diagram illustrating one embodiment of a binocular imaging apparatus, in accordance with the present disclosure;
[0015] FIGURE 2 is a schematic diagram illustrating one embodiment of images for the left and right eye, in accordance with the present disclosure;
[0016] FIGURE 3 is a schematic diagram illustrating one embodiment of a currently displayed binocular image pair in accordance with the present disclosure;
[0017] FIGURE 4 is a schematic diagram illustrating one embodiment of the viewer's region of binocular fixation, in accordance with the present disclosure;
[0018] FIGURE 5 is a schematic diagram illustrating one embodiment of a displayed image with little to no disparity, in accordance with the present disclosure;
[0019] FIGURE 6 is a schematic diagram illustrating a flow chart, in accordance with the present disclosure;
[0020] FIGURE 7 is a schematic diagram illustrating one embodiment of a binocular image pair, in accordance with the present disclosure;
[0021] FIGURE 8 is a schematic diagram illustrating one embodiment of a displayed image, in accordance with the present disclosure;
[0022] FIGURE 9 is a schematic diagram illustrating one embodiment of a displayed image, in accordance with the present disclosure;
[0023] FIGURE 10 is a schematic diagram illustrating one embodiment of a displayed image, in accordance with the present disclosure;
[0024] FIGURE 11 is a schematic diagram illustrating a flow chart, in accordance with the present disclosure;
[0025] FIGURE 12 is a schematic diagram illustrating one embodiment of a gaze tracking system, in accordance with the present disclosure;
[0026] FIGURE 13 is a schematic diagram illustrating one embodiment of a scene space, in accordance with the present disclosure;
[0027] FIGURE 14 is a schematic diagram illustrating a flow chart, in accordance with the present disclosure;
[0028] FIGURE 15 is a schematic diagram illustrating one embodiment of a binocular image, in accordance with the present disclosure;
[0029] FIGURE 16 is a schematic diagram illustrating one embodiment of a scene depth range and depth budgets, in accordance with the present disclosure;
[0030] FIGURE 17 is a schematic diagram illustrating one embodiment of an image controller's response, in accordance with the present disclosure;
[0031] FIGURE 18 is a schematic diagram illustrating one embodiment of an image controller's response, in accordance with the present disclosure;
[0032] FIGURE 19 is a schematic diagram illustrating one embodiment of an image controller's response, in accordance with the present disclosure;
[0033] FIGURE 20 is a schematic diagram illustrating a flow chart, in accordance with the present disclosure;
[0034] FIGURE 21 is a schematic diagram illustrating one embodiment of scene depth range and perceived depth range, in accordance with the present disclosure;
[0035] FIGURE 22 is a schematic diagram illustrating one embodiment of scene depth range and perceived depth range, in accordance with the present disclosure;
[0036] FIGURE 23 is a schematic diagram illustrating one embodiment of scene depth range and perceived depth range, in accordance with the present disclosure;
[0037] FIGURE 24 is a schematic diagram illustrating one embodiment of scene depth range and perceived depth range, in accordance with the present disclosure;
[0038] FIGURE 25 is a schematic diagram illustrating one embodiment of scene depth range and perceived depth range, in accordance with the present disclosure;
[0039] FIGURE 26 is a schematic diagram illustrating a flow chart, in accordance with the present disclosure;
[0040] FIGURE 27 is a schematic diagram illustrating one embodiment of scene depth range and perceived depth range, in accordance with the present disclosure;
[0041] FIGURE 28 is a schematic diagram illustrating one embodiment of a scene depth range and perceived depth budget; and
[0042] FIGURE 29 is a schematic diagram illustrating one embodiment of an image system, in accordance with the present disclosure.
Detailed Description
[0043] Generally, according to one aspect of the disclosure, a binocular imaging system may include a display for presenting a left eye image and a right eye image perceptually simultaneously, in which the left eye image has an associated left eye field of view of the display and the right eye image has an associated right eye field of view of the display. A gaze tracking element may also be included that may identify at least one or both gaze directions of the left eye and the right eye. The binocular imaging system may further include an image controller that may calculate a binocular region of fixation for the left and right eye, and that alters the displayed left and right eye images. The image controller may alter a subsequently displayed binocular image in response to a change in the region of binocular fixation between a currently displayed binocular image and the subsequently displayed binocular image. Altering the displayed left and right eye images may affect the local image depth content in the binocular region of fixation and surrounding the binocular region of fixation. The binocular region of fixation may include a three dimensional region in which the location varies with the gaze direction of one or both of the left and right eyes.
[0044] According to another aspect of the disclosure, a method for varying binocular image content may include displaying a current binocular image, and using input from the current binocular image, information from a gaze tracker and scene depth measurement information to calculate a region of binocular interest (RBI) in a scene. The method may also include determining whether the region of binocular interest has changed and calculating the scene depth range for mapping to the depth budget when the region of binocular interest has changed. The method may include using a camera control algorithm to generate a subsequently displayed binocular image using the scene depth range and making the subsequently displayed binocular image the currently displayed image.
[0045] The method for varying binocular image content may further include receiving a second input from the gaze tracker and scene depth measure and using the second input from the current binocular image, the gaze tracker and the scene depth measure to calculate the region of binocular interest in the scene when the region of binocular interest has not substantially changed. The method may also include determining a region of binocular fixation in display space (RBFd) by using gaze tracking information from a viewer watching a displayed binocular image and calculating the equivalent region of binocular fixation in a scene space (RBFs) by using the region of binocular fixation in display space (RBFd) provided to an image controller. In determining whether the region of binocular interest has changed, the method may include using the region of binocular fixation in display space (RBFd) and the equivalent region of binocular fixation in the scene space (RBFs). The method may further include changing the region of binocular interest based on scene changes while the region of binocular fixation in display space does not substantially change.
[0046] According to another aspect of the disclosure, a method for varying binocular image content may include displaying a current binocular image, using input from the current binocular image and a gaze tracker to calculate a subsequent region of binocular fixation, and determining any change in binocular fixation between a current region of binocular fixation and the subsequent region of binocular fixation. In the case of a change in binocular fixation between the current region of binocular fixation and the subsequent region of binocular fixation, the method may include calculating a disparity range of the subsequent region of binocular fixation. The method may also include determining whether the disparity range is substantially zero and creating a subsequently displayed image when the disparity range is not substantially zero. The method may also include making the subsequently displayed binocular image the currently displayed image.
[0047] Continuing the discussion, the method may include receiving a second input from the gaze tracker and using the second input from the current binocular image and the gaze tracker to calculate a subsequent region of binocular fixation when the subsequent region of binocular fixation has not substantially changed. The method may include receiving a third input from the gaze tracker and using the third input from the current binocular image and the gaze tracker to calculate a subsequent region of binocular fixation when the disparity range is approximately zero. The gaze tracker may determine the disparity within the fixated region, in which the gaze tracker determines the plane of fixation from the difference between left eye and right eye screen fixation points.
[0048] In the case the method determines whether the disparity range is substantially zero, the method may include comparing the image disparity of the subsequent object with zero, in which the subsequent object is being imaged where it is the closest object to a viewer in the region of binocular fixation. The method may also include altering a subsequently displayed image in response to a change in the region of binocular fixation between the currently displayed binocular image and the subsequently displayed binocular image and also may form a currently displayed binocular image. Forming a currently displayed binocular image may include estimating a 3D region of fixation and projecting the 3D region of fixation into an image plane to form a binocular region of fixation. The currently displayed binocular image is formed as a left image and a right image and may be selected from a larger source image. There are systems disclosed, as generally discussed in U.S. Patent No. 4,634,384 and U.S. Patent Publication No. 2003/0067476, both of which are herein incorporated by reference in their entirety, which differentially vary two-dimensional image properties based on knowledge of a foveated region. These systems do not address binocular conditions.
[0049] There are systems disclosed, as generally discussed in U.S. Patent Publication No. 2012/0200676 that use eye tracking to adjust a computer graphics model in a scene so that a stereoscopic image rendered from the model is altered. These systems adjust the entire model and do not result in differential changes across the stereoscopic image in order to vary depth mapping from scene to image inside and outside the region of binocular interest.
[0050] Additionally, there are systems disclosed, as generally discussed in U.S. Patent No. 6,198,484, which is herein incorporated by reference in its entirety, that alter the stereoscopic image presentation based on head position and/or eye tracking to account for motion parallax. These systems do not make differential changes across the stereoscopic image in order to vary depth mapping from scene to image inside and outside the region of binocular interest.
[0051] FIGURE 1 illustrates a binocular imaging system which may include a binocular image display 5 for presenting different images perceptually and substantially simultaneously to the left and right eyes. A viewer sees the binocular images presented by image display 5, in which left eye images are seen by the left eye 1 with a field of view of the display 3, and right eye images are seen by the right eye 2 with a field of view of the display 4. The binocular imaging system may also include a gaze tracking element 6 that may identify one or both gaze directions of the left eye and right eye 7, 8, respectively, and may include a way to calculate the viewer's binocular region of fixation 9. The gaze tracking element 6 may calculate the viewer's binocular region of fixation 9 by way of any appropriate processing or computing system that may include a computer readable medium. The binocular region of fixation 9 may be a three dimensional region in which the location varies with the direction of gaze. The binocular imaging system may also include a way to alter the displayed images in such a way as to affect the local image depth content both in and surrounding the binocular region of fixation.
[0052] In one embodiment a method may discuss a fixated region which has substantially zero disparity. This embodiment may provide a method in the image controller 10 that may alter the subsequently displayed images in response to the change of the viewer's region of binocular fixation 9 between the currently displayed binocular image and the subsequently displayed binocular image, as generally illustrated in FIGURES 6, 2, and 1.
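To make the interaction between these elements concrete, a highly simplified control loop is sketched below in C++. The interfaces are hypothetical placeholders standing in for the image display 5, gaze tracking element 6, and image controller 10 of FIGURE 1; they are not part of the disclosure itself.

// Hypothetical interfaces standing in for the display 5, gaze tracking
// element 6, and image controller 10 of FIGURE 1.
struct BinocularImage { /* left-eye and right-eye image data */ };
struct GazeDirections { /* gaze directions of the left and right eyes */ };
struct FixationRegion { /* three dimensional region of binocular fixation */ };

class Display {
public:
    virtual ~Display() = default;
    virtual void show(const BinocularImage& image) = 0;
};

class GazeTracker {
public:
    virtual ~GazeTracker() = default;
    virtual GazeDirections sample() = 0;
};

class ImageController {
public:
    virtual ~ImageController() = default;
    virtual FixationRegion computeFixation(const GazeDirections& gaze) = 0;
    virtual bool fixationChanged(const FixationRegion& current,
                                 const FixationRegion& next) = 0;
    virtual BinocularImage alterForFixation(const BinocularImage& image,
                                            const FixationRegion& region) = 0;
};

// Top-level loop: display the current binocular image, track gaze, and let
// the image controller alter the subsequently displayed image whenever the
// region of binocular fixation changes.
void run(Display& display, GazeTracker& tracker, ImageController& controller,
         BinocularImage current, FixationRegion fixation)
{
    for (;;) {
        display.show(current);
        const GazeDirections gaze = tracker.sample();
        const FixationRegion next = controller.computeFixation(gaze);
        if (controller.fixationChanged(fixation, next)) {
            current = controller.alterForFixation(current, next);
            fixation = next;
        }
    }
}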
[0053] In FIGURE 6, step S60 forms the currently displayed images as a left image 22 and right image 23. The currently displayed images may be selected from larger source images 20 and 21, as shown in FIGURE 2. In step S61 of FIGURE 6, the current binocular images are displayed by the image display 5 of FIGURE 1.
[0054] The left and right images 22 and 23 of FIGURE 2 may contain images of objects, for example 24, 25, 26, whose horizontal location may differ. This horizontal difference between images of the same object in different locations in the left and right eye views is known as image disparity, and its magnitude and sign control the depth perceived by the viewer when they binocularly fuse the left and right images.
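As an informal illustration of how disparity magnitude and sign control perceived depth, the following C++ sketch applies the standard viewing geometry for a planar stereoscopic display. The function and variable names are illustrative only and not part of the disclosed apparatus.

#include <iostream>

// Perceived depth of a point displayed with screen disparity d (metres),
// for a viewer with eye separation e at viewing distance v from the screen.
// Positive (uncrossed) disparity is perceived behind the screen plane and
// negative (crossed) disparity in front of it; zero disparity lies on the
// screen. Assumes d < e, which holds for any fusible image.
double perceivedDepthFromScreen(double d, double e, double v)
{
    // Similar triangles between the eyes and the two screen points give
    // depth behind the screen = v * d / (e - d); a crossed (negative)
    // disparity yields a negative value, i.e. a point in front of the screen.
    return v * d / (e - d);
}

int main()
{
    const double e = 0.065; // typical eye separation, metres
    const double v = 0.70;  // viewing distance, metres
    std::cout << perceivedDepthFromScreen(0.002, e, v) << "\n";  // behind the screen
    std::cout << perceivedDepthFromScreen(-0.002, e, v) << "\n"; // in front of the screen
    return 0;
}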
[0055] Also in FIGURE 6, as shown in step S62, the image controller 10 of FIGURE 1 receives input from the gaze tracker 6 and uses this input to calculate the subsequent binocular region of fixation.
[0056] Continuing the discussion with respect to FIGURES 1, 2, and 6, in step S63 the controller determines any change in binocular fixation between the current and subsequent fixations. If there is no change in the binocular fixation between the current and subsequent fixations, the controller continues at step S62.
[0057] In step S64, when there is a change in binocular fixation, the image controller 10 calculates which subsequent object in the scene is being imaged where it is the closest object to the viewer in the region of binocular fixation. In step S65 the controller compares the image disparity of the subsequent object with zero; if the disparity is zero, the controller continues at step S62.
[0058] In step S66 the controller uses the image disparity of the subsequent object to adjust the subsequently displayed images so that the image disparity of the subsequent object becomes substantially zero. One method is illustrated in FIGURES 3, 4 and 5.
[0059] FIGURE 3 illustrates the currently displayed binocular image pair 22, 23 where the region of binocular fixation is aligned with the object 25 in the image. FIGURE 3 also includes an illustrative line showing an object with zero disparity in the displayed image. The horizontal disparity between the left and right images for object 25 is zero, as is indicated by the illustrative line 30.
[0060] In FIGURE 4, the viewer's region of binocular fixation has moved from object 25 to object 24 and the image controller 10 reacts by creating the subsequently displayed images 40 and 41. Region 40 illustrates a newly selected region to be displayed in the left eye view and region 41 illustrates a newly selected region to be displayed in the right eye view. The image controller achieves this by finding the disparity for object 24 and, in this case, sliding the right image window to the right by that number of pixels so that the disparity of the subsequently fixated object 24 is zero.
[0061] The resulting subsequently displayed images are shown in FIGURE 5 where the horizontal disparity for object 24 is now zero as shown by the illustrative line 50. FIGURE 5 includes an illustrative line to show the new object with zero disparity in the displayed image.
[0062] In step S67 of FIGURE 6, the subsequently displayed images 40 and 41 are now made the currently displayed images and control returns to step S61, where the currently displayed images are displayed by the image display 5.
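A minimal sketch of the window-sliding adjustment described in steps S64 to S66 is given below. It assumes the displayed left and right images are crop windows into the larger source images 20 and 21 and that the disparity of the newly fixated object is already known in pixels; the structure and names are illustrative only.

// A crop window selecting the displayed image from a larger source image.
struct CropWindow {
    int x;       // left edge of the window in the source image, pixels
    int y;       // top edge of the window in the source image, pixels
    int width;
    int height;
};

// Adjust the right-eye crop window so that the newly fixated object, whose
// current on-screen disparity is disparityPx (its right-image position minus
// its left-image position, in pixels), is displayed with zero disparity.
void zeroDisparityOnFixatedObject(CropWindow& rightWindow, int disparityPx)
{
    // Sliding the right-eye window by the object's disparity moves the whole
    // right image by the opposite amount on screen, bringing the fixated
    // object into horizontal alignment with its position in the left image.
    rightWindow.x += disparityPx;
}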
[0063] Another related embodiment may include a fixated region which may have substantially zero disparity. This embodiment adjusts the imagery in a similar manner to that of the previous embodiment but uses the gaze tracker to determine the disparity within the fixated region. When the left and right eye images are displayed on a stereoscopic device, the gaze detector can determine the plane of fixation from the difference between left and right eye screen fixation points. If the plane of fixation is in front of the display screen, for example when the left eye's fixation point on the display is to the right of the right eye's fixation point, it can be inferred with little to no calculation that the imagery in the fixated region has negative disparity. Shifting the imagery relative to each other can remove this negative disparity and provide for substantially zero disparity as in the previous embodiment.
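The inference described above can be expressed compactly: the horizontal difference between the two eyes' screen fixation points reported by the gaze tracker gives the sign and approximate magnitude of the disparity in the fixated region, which can then be removed by shifting the images as in the previous sketch. The names below are hypothetical and not the API of any particular tracker.

// Screen-space fixation points reported by a gaze tracker, in pixels.
struct ScreenFixation {
    double leftEyeX;
    double rightEyeX;
};

// Disparity of the fixated region, taken as the right-eye fixation point
// minus the left-eye fixation point. A negative value means the left eye's
// fixation point lies to the right of the right eye's, i.e. the plane of
// fixation is in front of the display screen (crossed disparity).
double fixationDisparityPx(const ScreenFixation& f)
{
    return f.rightEyeX - f.leftEyeX;
}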
[0064] Yet another embodiment may have a fixated region with disparity and the surrounding region may have substantially no disparity. This embodiment may provide a method in the image controller 10 that may alter the subsequently displayed images in response to the change in the viewer's region of binocular fixation 9 between the currently displayed binocular image and the subsequently displayed binocular image. Discussion is provided in FIGURES 11, 7, 8, 9, and 10.
[0065] In FIGURE 11, step S110 forms the currently displayed images as in FIGURE 7 for the left eye 70 and right eye 71, respectively. FIGURE 7 includes a left image 70 of a currently displayed binocular image pair, a right image 71 of a currently displayed binocular image pair, a binocular region of fixation projected into the image plane 72, an object 73 seen in the left and right image, another object 74 seen in the left and right image, and yet another object 75 seen in the left and right image. To form the currently displayed images, a 3D region of fixation is measured or estimated and is projected into the image plane 72 to form the binocular region of fixation. Any image information outside this region 72 in the images 70 and 71 is the same in each image. This may be a monocular image providing the same information to both of the viewer's eyes. Any image information inside region 72 is binocular. This image information may be rendered or captured or synthesized with binocular disparity information. The result is illustrated in FIGURE 8. FIGURE 8 includes an illustrative line 80 showing there is zero image disparity for object 75, an illustrative line 81 showing there is zero image disparity for object 73, an illustrative line 82 showing the horizontal position of object 74 in left image 70, an illustrative line 83 showing the horizontal position of object 74 in right image 71, and a horizontal image disparity 84 for object 74. The objects 75 and 73 included in the scene may be outside region 72 and may have no binocular disparity, as shown by illustrative lines 80 and 81 in FIGURE 8. Meanwhile, the object 74 inside region 72 has binocular disparity 84, as shown by illustrative lines 82 and 83.
[0066] In step S111 of FIGURE 11, the current binocular image pair 70 and 71 is displayed on the image display 5 of FIGURE 1. In step S112 the image controller 10 of FIGURE 1 receives input from the gaze tracker and calculates the subsequent region of binocular fixation 92 in FIGURE 9. In step S113, if the region of binocular fixation has not changed, then control returns to step S112 of FIGURE 11.
[0067] In step S114 of FIGURE 11, the region of binocular fixation has changed and the image controller calculates the depth range of the subsequent region of binocular fixation in the scene. Continuing the discussion, in step S115 the depth range information may be used to create a formed binocular image for the region of binocular fixation 92 which may be combined with a monocular image to form the subsequently displayed images 90 and 91 as shown in FIGURE 9. FIGURE 9 includes a left image 90 of a binocular image pair, a right image 91 of a binocular image pair, and a binocular region of fixation 92 in the projected image plane. The result is highlighted in FIGURE 10, in which the region with binocular disparity 92 includes object 73, which has binocular disparity 104. FIGURE 10 includes an illustrative line 100 showing there is zero image disparity for object 75, an illustrative line 101 showing there is zero image disparity for object 74, an illustrative line 102 showing the horizontal position of object 73 in left image 90, an illustrative line 103 showing the horizontal position of object 73 in right image 91, and a horizontal image disparity 104 for object 73. Additionally, objects 74 and 75 outside the region of binocular fixation no longer have any disparity, as illustrated by lines 100 and 101.
[0068] Finally, in step S116 the subsequently displayed binocular image becomes the currently displayed binocular image and control returns to step S111.
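One way to view the formation of the image pairs in FIGURES 7 to 10 is as a compositing step: outside the projected region of binocular fixation both eyes receive the same monocular image, and only the pixels inside the region are taken from a stereo pair carrying disparity. The C++ loop below is a simplified sketch under that assumption; the image type, the rectangular region, and the requirement that all images share the same dimensions are illustrative simplifications.

#include <cstdint>
#include <vector>

struct Image {
    int width = 0, height = 0;
    std::vector<uint32_t> pixels; // row-major packed RGBA
    uint32_t& at(int x, int y) { return pixels[y * width + x]; }
    uint32_t at(int x, int y) const { return pixels[y * width + x]; }
};

// Rectangular projection of the binocular region of fixation into the image plane.
struct Rect { int x0, y0, x1, y1; };

// Compose the displayed left and right images: monocular content everywhere
// except inside the projected region of fixation, where the stereo pair
// (which carries the binocular disparity) is used instead.
void composeBinocularPair(const Image& mono,
                          const Image& stereoLeft, const Image& stereoRight,
                          const Rect& fixationRegion,
                          Image& outLeft, Image& outRight)
{
    outLeft = mono;
    outRight = mono;
    for (int y = fixationRegion.y0; y < fixationRegion.y1; ++y) {
        for (int x = fixationRegion.x0; x < fixationRegion.x1; ++x) {
            outLeft.at(x, y) = stereoLeft.at(x, y);
            outRight.at(x, y) = stereoRight.at(x, y);
        }
    }
}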
[0069] FIGURE 12 is a schematic diagram illustrating one embodiment of a gaze tracking system. FIGURE 12 provides an example of a viewer and a display 153 and the different elements in display space. In FIGURE 12, a viewer's eyes 155 are looking at a displayed binocular image. The left eye of the viewer may be looking in a direction referred to as a left eye gaze direction 120 and the right eye of the viewer may be looking in a direction referred to as a right eye gaze direction 121. The viewer's eyes 155 may be tracked by a gaze tracking system 6. As illustrated in FIGURE 12, scene 122 depicts a scene as perceived in a fused binocular image and region 160, RBFd, may be a region of binocular fixation in display space. The gaze tracking system 6 may provide gaze tracking information which may be used to calculate a viewer's subsequent region of binocular fixation, among other things.
[0070] FIGURE 13 is a schematic diagram illustrating one embodiment of a scene space. FIGURE 13 provides an example of cameras and a scene space. In FIGURE 13, cameras 154 may be located in a position for capturing images of a scene. Located by the cameras 154 may be a depth measurement system 156. Although the depth measurement system 156 is illustrated as centrally located between the cameras 154, this is for discussion purposes only and not of limitation, as the depth measurement system may be located in other positions with respect to the scene space as appropriate. In FIGURE 13, the range 150 may be the total scene depth range and the scene depth range 163 may represent the scene depth range to map to a depth budget in display space. Additionally, as depicted in FIGURE 13, the region 162 may be the region of binocular interest, RBI, and the region 161 may be the region of binocular fixation projected into scene space, RBFs.
[0071] In yet another embodiment, depth mapping from scene to display space may be determined by the region of binocular fixation in the display space. This embodiment is described referring to FIGURES 20, 15, 16, 17, 18, and 19.
[0072] Referring to FIGURE 15, the viewer is looking at a first current binocular image and sees perceived depth in it, in this case, within some pre-determined perceived depth budget 151. FIGURE 15 includes a scene depth range 150, a depth measurement element 156, cameras 154, a virtual display 152, a physical display 153, and a viewer's eyes 155. The specific mapping of depth from the scene space being imaged 150 to the display space perceived depth budget 151 can be calculated using a pre-existing camera control algorithm such as in reference U.S. Patent No. 6,798,406, given a depth measurement element 156 to determine the range of depth in the scene. The depth range 150 in the scene can, for example, be computed from a depth map in synthetic scenes or an optical or laser range finder in real scenes.
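For synthetic scenes the scene depth range 150 can be obtained directly from a depth buffer; a minimal C++ sketch of that measurement, assuming a per-pixel camera-space depth value and a sentinel value for pixels containing no geometry, is shown below. The resulting range is what a camera control algorithm of the kind cited above would map to the perceived depth budget.

#include <algorithm>
#include <limits>
#include <vector>

struct SceneDepthRange { float nearZ; float farZ; };

// Scan a depth map (camera-space depth per pixel) and return the nearest and
// farthest depths present, skipping pixels marked with the no-geometry value.
SceneDepthRange measureSceneDepthRange(const std::vector<float>& depthMap,
                                       float noGeometryValue)
{
    SceneDepthRange range{ std::numeric_limits<float>::max(), 0.0f };
    for (float z : depthMap) {
        if (z == noGeometryValue) continue; // empty pixel, ignore
        range.nearZ = std::min(range.nearZ, z);
        range.farZ  = std::max(range.farZ, z);
    }
    return range;
}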
[0073] Referring to the flowchart in FIGURE 20, a first current binocular image is produced in step S200 and then displayed in step S201. Referring to FIGURE 16, while the viewer is looking at the displayed binocular image their gaze is being tracked using a gaze tracking element 6 of FIGURE 1 and this information is used to determine the region of binocular fixation in display space RBFd 160. FIGURE 16 includes a calculated region corresponding to RBFs in scene space 161, a calculated region of binocular interest RBI in scene space 162, a scene depth range to map to depth budget 163, a perceived depth budget 151, and a measured region of binocular fixation RBFd in display space 160.
[0074] The RBFd is used by the image controller 10 of FIGURE 1 to calculate the equivalent region of binocular fixation in the scene space, RBFs 161. RBFs 161 may then be used to calculate the region of binocular interest in scene space, RBI 162. The RBI encompasses any objects that fall in a volume of space that is a super-set of the RBFs. The RBI may be any convenient three-dimensional shape including, but not limited to, a parallelepiped, cylinder, ellipse, frustum, and so forth.
[0075] In step S204 of FIGURE 20, once the RBI is calculated, the scene depth range that is to be mapped to the perceived depth budget can be found by calculating the depth extent of the RBI, illustrated in FIGURE 16 as 163. This allows the application of any depth mapping camera control algorithm as generally discussed in U.S. Patent No. 6,798,406 to generate a subsequent binocular image in step S205 and set this for display in step S206.
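A simplified sketch of the RBFd to RBFs to RBI chain described above follows. It models the regions as axis-aligned boxes in scene space, grows the RBI as a padded super-set of the RBFs, and reports the RBI's depth extent as the scene depth range 163 to hand to a depth-mapping camera control algorithm. The box representation and the fixed padding are illustrative assumptions; as noted above, the RBI may be any convenient three-dimensional shape.

#include <algorithm>

// Axis-aligned box in scene space; the camera looks along the +z axis.
struct Box3 { float minX, minY, minZ, maxX, maxY, maxZ; };

// Expand the region of binocular fixation in scene space (RBFs) into a region
// of binocular interest (RBI): a super-set grown laterally and in depth so
// that objects near the fixated region are wholly included.
Box3 regionOfBinocularInterest(const Box3& rbfs, float lateralPad, float depthPad)
{
    Box3 rbi = rbfs;
    rbi.minX -= lateralPad; rbi.maxX += lateralPad;
    rbi.minY -= lateralPad; rbi.maxY += lateralPad;
    rbi.minZ = std::max(0.0f, rbi.minZ - depthPad);
    rbi.maxZ += depthPad;
    return rbi;
}

// The scene depth range to be mapped to the perceived depth budget is simply
// the depth extent of the RBI.
struct DepthExtent { float nearZ; float farZ; };

DepthExtent depthExtent(const Box3& rbi)
{
    return { rbi.minZ, rbi.maxZ };
}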
[0076] FIGURES 17, 18 and 19 illustrate the image controller's response to a real time change in the viewer's region of binocular fixation RBFd. FIGURE 17 includes scene depth range to map to a depth budget 163, a perceived depth budget 151, and a changed location of RBFd in display space 170.
[0077] In FIGURE 17 the RBFd has changed to a different position in the display space 170 as detected by the gaze-tracking element 6 and calculated by the image controller 10. The image controller 10 then calculates a new RBFs 180 as illustrated in FIGURE 18 and additionally calculates a new RBI 181 that forms a volume of space that is a superset of the RBFs 180. FIGURE 18 includes a calculated changed location of RBFs in scene space 180, a calculated changed location of RBI in scene space 181, a scene depth range to map to a depth budget 182, and a perceived depth budget. Depending on the contents of the scene the new RBI may be larger or smaller than the current value. The scene depth range 182 to be mapped to the depth budget 151 will then also change. Once the scene depth range 182 is known, the application of any depth mapping camera control algorithm as generally discussed in U.S. Patent No. 6,798,406 can map the newly calculated scene depth range 182 to the display perceived depth budget 151.
[0078] The result in FIGURE 19 shows the new mapping of scene depth to depth budget. The technical benefit is that as the viewer's gaze moves around the scene, as displayed in the binocular image, the depth in the RBFd and corresponding RBI is continuously optimized to fit the available depth budget 151. FIGURE 19 includes a scene depth range 182 to map to a depth budget, a perceived depth budget 151, and a perceived depth range of the entire image 190.
[0079] Of importance is that this embodiment will also operate in animated scenes where the RBI changes due to scene changes even when the RBFd region of binocular fixation does not change. This is measured by a depth measure element 156, which in computer graphics may be a depth buffer, or in photography, may be a range finder such as an optical or laser device.
[0080] Yet another embodiment may include a fixated region which may have preferred disparity and the surrounding region may have a different disparity where the total disparity does not exceed a predetermined limit using variable z-region mapping. This embodiment may provide a method in the image controller 10 that is able to alter the subsequently displayed images in response to the change in the viewer's region of binocular fixation 9, between the currently displayed binocular image and the subsequently displayed binocular image, with reference to FIGURES 26, 20, 21, 22, 23, 24, and 25.
[0081] FIGURE 21 includes a scene depth range 150, cameras 154, a depth measurement element 156, a near region 210, a far region 212, a region of interest 211, a viewer's eyes 155, and a perceived depth range 151.
[0082] Referring to the flowcharts in FIGURE 26 and FIGURE 20, in step S260 a first binocular image is formed. This can be formed when a scene depth range 150, as shown in FIGURE 15, is mapped to a perceived depth range 151 using a method as disclosed in references such as U.S. Patent No. 6,798,406, U.S. Patent No. 7,983,477, or U.S. Patent No. 8,300,089, each of which is herein incorporated by reference in its entirety. The first current binocular image is then displayed in step S261.
[0083] Referring to FIGURE 22, in step S262 the image controller receives input from the gaze tracker 6, which allows identification of the region of binocular fixation RBFd 160 in display space. From this, the region of binocular fixation in scene space RBFs 161 can be found and, with additional input from the scene depth measurement element 156, the region of binocular interest RBI 162 in the scene can be calculated. Knowing the RBI, it is possible to calculate 163, which is the portion of the scene depth range 150 to be mapped to the perceived depth budget 151 in display space. In this instance 163 is approximately the same as the scene depth range 150, that is, the RBI has not changed, and so no change in the depth mapping is required and step S263 can return to step S262. FIGURE 22 includes a scene depth range 150, cameras 154, a depth measurement element 156, a near region 210, a far region 212, a region of interest 211, a scene depth range 163 to map to a depth budget, a viewer's eyes 155, and a perceived depth range 151.
[0084] Alternatively, referring to FIGURE 23, the viewer's gaze has changed and the input from the gaze tracker identifies a subsequent RBFd 230. FIGURE 23 and similarly FIGURES 24 and 25 all include a scene depth range 150, cameras 154, a depth measurement element 156, a near region 210, a far region 212, a region of interest 211, a viewer's eyes 155, and a perceived depth range 151. Then, as illustrated in FIGURE 24, this allows a subsequent RBFs 240 to be calculated and from this the subsequent RBI 241. As the subsequent RBI 241 is now different from the current RBI 162 (as illustrated in FIGURE 16), execution continues at step S264 and the subsequent scene depth range 163 is calculated.
[0085] Step S265 then calculates a new mapping of depth from the scene to the display space. FIGURE 25 illustrates one way to implement the mapping for step S265 using a multi-region depth mapping algorithm such as generally disclosed in U.S. Patent No. 7,983,477. Here the RBI can be considered as a region of interest 211 dividing the scene into three regions including a nearer region 210 and a further region 212. These are then mapped to three corresponding regions in the display space, 213, 214, and 215. Because the regions 213, 214, and 215 may differ in the amount of perceived depth allocated to them, the region of interest 211, and hence the RBI, can be given a preferential amount of scene depth compared to the near and far regions. Additionally, this prevents any objects of the scene from appearing outside of the perceived depth range 151, unlike, for example, the single region mapping as illustrated in FIGURE 19. Once the subsequent image is formed it is set to be the current image in step S266 and control returns to step S261.
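The multi-region mapping of FIGURE 25 can be sketched as a piecewise-linear function of scene depth: the near region, the region of interest, and the far region each receive a share of the perceived depth budget, with the region of interest given the preferential share. The share values and the linear interpolation below are illustrative assumptions, not the specific algorithm of the cited reference.

// Piecewise-linear mapping of scene depth to perceived depth.
// Scene regions:      [sceneNear, roiNear) [roiNear, roiFar] (roiFar, sceneFar]
// Perceived regions:  each receives a fixed fraction of the total budget,
// with the region of interest (the RBI) given the largest fraction.
struct MultiRegionDepthMapping {
    float sceneNear, roiNear, roiFar, sceneFar; // scene-space boundaries
    float budget;                               // total perceived depth budget
    float nearShare = 0.15f, roiShare = 0.70f, farShare = 0.15f;

    // Returns perceived depth measured from the front of the depth budget.
    float map(float z) const
    {
        auto lerp = [](float a, float b, float t) { return a + (b - a) * t; };
        const float nearDepth = budget * nearShare;
        const float roiDepth  = budget * roiShare;
        const float farDepth  = budget * farShare;
        if (z <= roiNear)
            return lerp(0.0f, nearDepth, (z - sceneNear) / (roiNear - sceneNear));
        if (z <= roiFar)
            return lerp(nearDepth, nearDepth + roiDepth,
                        (z - roiNear) / (roiFar - roiNear));
        return lerp(nearDepth + roiDepth, nearDepth + roiDepth + farDepth,
                    (z - roiFar) / (sceneFar - roiFar));
    }
};

Because the three shares sum to one, no part of the scene can be mapped outside the perceived depth range 151, while the region of interest receives the preferential allocation.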
[0086] Yet another embodiment may include a fixated region which may have preferred disparity and the surrounding region may have a different disparity where the total disparity does not exceed a predetermined limit using variable camera parameters in one or two dimensions. This embodiment may provide a method in the image controller 10 that may alter the subsequently displayed images in response to changes in the viewer's region of binocular fixation 9 between the currently displayed binocular image and the subsequently displayed binocular image, with reference to FIGURES 14, 27, 28, and 29.
[0087] In FIGURE 14, the flowchart step S300 forms a first binocular image. This image may be formed when a scene depth range 150 is mapped to a perceived depth range 151 using a method as disclosed in references such as U.S. Patent No. 6,798,406, U.S. Patent No. 7,983,477, or U.S. Patent No. 8,300,089. The current binocular image 5 is displayed in step S301.
[0088] Referring to FIGURE 14, in step S302, the image controller receives input from the gaze tracker 6 and this may allow identification of the region of binocular fixation RBFd 160 in display space. From this, the region of binocular fixation RBFs 161 in scene space can be found and, with additional input from the scene depth measurement element 156, the region of binocular interest RBI 162 in the scene can be calculated. Knowing the RBI, it is possible to calculate 163, the scene depth range to be mapped to the perceived depth range 151 in display space. If the RBI has not changed, no change in the depth mapping is required and step S303 returns to S302.
[0089] Referring to FIGURE 27, when an RBI has been identified, a locally varying depth mapping from scene space to display space can be calculated in S304. This can vary the stereoscopic camera parameters used to capture the image. For example, a full stereoscopic 3D effect near the RBI may change to a simple 2D effect outside the RBI, as illustrated in FIGURE 27. FIGURE 27 includes 270 and 271 which may be the locally varying perceived depth range in the image, objects 272 which may be objects outside the RBI (162) and are allocated no perceived depth using disparity, a scene depth range 163 to map to a depth budget, and a perceived depth budget 151.
[0090] If the RBI changes, as shown in FIGURE 28, then the region of the image with full stereoscopic 3D effect can be changed too. The benefit is that any region of the displayed image away from the RBI may be rendered from a single camera viewpoint, saving the computational cost of rendering two images while keeping the foveated region at the highest possible quality. FIGURE 28 includes 280 and 281 which may be the new locally varying perceived depth range in the image, objects 282 which may be objects outside the new RBI (181) and which are allocated no perceived depth, a scene depth range 182 to map to a depth budget, and a perceived depth budget.
[0091] FIGURE 29 illustrates how the camera parameters used for rendering in step S305 can vary in one dimension depending on where in the stereoscopic image different scene elements may appear. FIGURE 29 includes a scene depth range 163. In this case elements away from the RBI may be rendered from a single central camera viewpoint C, while elements in the RBI are rendered with a stereoscopic camera setting A0, which may be calculated using methods as generally discussed in U.S. Patent No. 6,798,406. In the zone between the full stereoscopic and the two-dimensional image regions the camera setup is linearly interpolated, with the interaxial separation Ai reducing until use of the single central camera C alone is appropriate.
[0092] A further embodiment of this approach is to vary the camera parameters with vertical as well as horizontal element position, so that the regions of the image that are horizontally and vertically close to the RBI, are rendered with full stereoscopic effect.
[0093] One possible implementation of these embodiments is illustrated in Listing 1, which provides an outline of a GLSL vertex shader solution (GLSL is generally discussed in the OpenGL Reference Pages, at http://www.opengl.org/documentation/glsl/) for interpolating the camera parameters appropriate for projecting and shading vertices in a real time computer graphics system, in order to produce a foveated stereoscopic rendering effect.
Listing 1:
1 // Listing 1, Method for ID and 2D foveated stereoscopic cameras using GLSL.
2 // This method is called once to draw left picture and once for right picture.
3
4 in vec4 inPosition, inNormal; // Input information about each vertex.
5 out vec4 vsColor; // Output colour for per-vertex shading.
6
7 uniform mat4 modelMatrix; // Model transformation is common to all views.
8
9 uniform mat4 viewMatrix, projectionMatrix; // Left or right stereoscopic camera position.
10 uniform mat3 normalMatrix;
11
12 uniform mat4 cViewMatrix, cProjectionMatrix; // Centre view camera position.
13 uniform mat3 cNormalMatrix;
14
15 uniform float rABound; // Scene space boundary to start cross fading.
16
17 // The scene space origin of the foveated region.
18 uniform float originX, originY;
19
20 uniform vec3 lightDirection; // Used to calculate shaded colour at the vertex.
21 uniform vec4 lightColor, ambientColor;
22
23 void main(void)
24 {
25 vec3 normal, normLightDir;
26
27 float vertX, vertY;
28 vec4 vmPosition ;
29 float fadeZone = 30.0; // Width of cross fade region.
30
31 float weight, invWeight, weightY, invWeightY;
32
33 mat4 MVPMat;
34 mat4 cMVPMat;
35
36 mat4 weightedMVPMat;
37 mat3 weightedNormMat;
38
39 vmPosition = (cViewMatrix * modelMatrix) * inPosition;
40 vertX = abs( vmPosition.x + originX );
41 vertY = abs( vmPosition.y + originY );
42
43 weight = (vertX - rABound) / fadeZone ;
44 weightY = (vertY - rABound) / fadeZone ;
45
46 weight = max( weight, 0.0 ) ; // Calculate weight in X direction.
47 weight = min( weight, 1.0 ) ;
48 weightY = max( weightY, 0.0 ) ; // Calculate weight in the Y direction.
49 weightY = min( weightY, 1.0 ) ; //
50 weight = max( weight, weightY ); // Choose to use max of X and Y weights.
51 invWeight = 1.0 - weight; // NB 1.0 == (weight + invWeight)
52
53 // Calculate weighted transformation matrix for the surface normal.
54 weightedNormMat = (invWeight * normalMatrix) + (weight * cNormalMatrix);
55 normal = weightedNormMat * inNormal.xyz;
56 normal = normalize( normal );
57 normLightDir = normalize( lightDirection );
58
59 // Output vertex colour using weighted normal projection and a diffuse lighting model.
60 vsColor = ambientColor * 0.3 + lightColor * max(dot(normal, normLightDir), 0.0);
61
62 // Calculate weighted projection matrix for the vertex geometry.
63 MVPMat = projectionMatrix * viewMatrix * modelMatrix ;
64 cMVPMat = cProjectionMatrix * cViewMatrix * modelMatrix ;
65 weightedMVPMat = (invWeight * MVPMat) + (weight * cMVPMat) ;
66
67 // Output the projected vertex position using the weighted MVP matrix.
68 gl_Position = weightedMVPMat * inPosition;
69 }
[0094] The shader described in Listing 1 is called once for the left eye view and once for the right eye view. Lines 4 through 21 declare the variables that are set before the shader runs. Additionally, lines 9 and 10 describe the camera parameters needed for a left or right eye position. Lines 12 and 13 describe the camera parameters for a single central monoscopic view. Also, lines 25-37 declare the variables that may be used during the calculations of the foveated camera parameters.
[0095] Of note are the variables rABound on line 15 and the fadeZone on line 29, which in combination with the foveated region origin given by originX and originY on line 18, primarily determine the position and extent of the foveated region where stereoscopic rendering will be implemented. At the boundary of this region given by rABound the camera parameters will be interpolated over a scene distance primarily determined by fadeZone to become a monoscopic image.
[0096] The appropriate weighting to do this is calculated between lines 39 and 51. Note, if the weight value calculated has a value of 1.0 then the monoscopic zone has been reached and the calculations between lines 53 and 68 may be performed once for the left camera view and not the right camera view, resulting in substantial computational savings compared to performing the projection and shading calculations separately for both eyes.
[0097] Where these calculations are appropriate, for the foveated stereoscopic zone and for one camera's view in the monoscopic zone then:
a. Lines 53-60 describe how the surface normal vectors are transformed using the weighted normal transformation matrix and then used to calculate a shaded color value for the vertex using for illustration a single light Lambertian shading model.
b. Lines 62-68 describe how the vertex position is transformed using the weighted model-view-projection matrix.
[0098] The resulting shaded color vsColor and the transformed vertex position gl_Position are passed onto the next stage in the computer graphics rendering pipeline.
[0099] Yet another embodiment may include a fixated region which may have preferred disparity and the surrounding region may have a different disparity in which the total disparity does not exceed a predetermined limit using variable camera parameters in three dimensions. Using a tri-linear interpolation model to calculate the weight value will allow the depth dimension to be foveated as well as the two image dimensions. This can be implemented using a camera model as described in U.S. Patent No. 7,983,477 or U.S. Patent No. 8,300,089 in which the mapping of the depth dimension is variable.
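Under the same weighting scheme as Listing 1, the depth dimension can be included by computing a third per-axis weight from the vertex's scene-space depth distance to the foveated region and combining it with the X and Y weights. The sketch below is written as host-side C++ for clarity; the boundary and fade-zone names mirror Listing 1, but the depth-related parameters and the use of a per-axis maximum (rather than a literal tri-linear blend) are illustrative assumptions.

#include <algorithm>
#include <cmath>

// Blend weight between the stereoscopic camera pair (weight = 0) and the
// single central camera (weight = 1), foveated in X, Y and depth (Z).
float foveationWeight3D(float vertX, float vertY, float vertZ,
                        float originX, float originY, float originZ,
                        float rABound, float rABoundZ, float fadeZone)
{
    auto axisWeight = [&](float v, float origin, float bound) {
        float w = (std::fabs(v - origin) - bound) / fadeZone;
        return std::clamp(w, 0.0f, 1.0f);
    };
    const float wX = axisWeight(vertX, originX, rABound);
    const float wY = axisWeight(vertY, originY, rABound);
    const float wZ = axisWeight(vertZ, originZ, rABoundZ); // depth foveation
    // Full stereo is kept only where the vertex is close to the foveated
    // region in all three dimensions.
    return std::max({ wX, wY, wZ });
}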
[00100] The benefit may include optimizing the depth presentation of the image seen in the foveated region while reducing the computational or depth budget demands for drawing the image regions representing the scene in front of and behind this region in depth. For example, in a driving game the best image quality is given to the region of the scene to which the driver is attending.
[00101] Another embodiment may have a fixated region which may have preferred disparity and the surrounding region may have a different disparity in which the total disparity does not exceed a predetermined limit by any of the above methods and multiple fixated regions are computed to allow multiple gaze-tracked viewers to watch the same screen and attend to different parts of the screen. For example, if there are multiple viewers of the same screen then the multiple viewers are generally unlikely to be fixating on the same region of the image. This can be solved with multiple gaze tracking devices, for example with each viewer wearing a head mounted EyeLink II eye tracker from SR Research Ltd., Mississauga, Ontario, Canada. Using the eye tracking information, each viewer's RBI can be calculated and used to determine the individual weights used to control the camera parameters across the screen.
[00102] Continuing this discussion, this embodiment enables multiple viewers to look at a gaze tracked image and, although temporally varying, the regions of interest are often similar enough between viewers, as generally disclosed in Active Vision, Findlay and Gilchrist, OUP, 2003, that this may result in savings in image regions to which none of the viewers may attend.
[00103] Further, in yet another embodiment, a fixated region may have preferred disparity and the surrounding region may have a different disparity in which the total disparity does not exceed a predetermined limit by any of the above methods and the disparity is temporally modified to distinguish the region or a part of one of the regions. In the example utilizing temporal disparity modification, a viewer relocates their fixation to a region of interest and then, after a period, the disparity in the region of fixation is altered to introduce a noticeable depth alteration. In turn this introduces a detectable change in the convergence and/or divergence of each eye, allowing the eyes' mean fixation point to be calculated.
[00104] Continuing this discussion, this can be used to provide an embodiment for user interfaces in which the icons on a desktop are given differential depth within a finite but coarse fixation region. One icon may then be temporally modified. If differential changes in convergence and/or divergence of the eyes are detected, it can then be inferred that the eyes are fixating the varying icon.
[00105] When the eyes' differential change in convergence is detected, the icon could then be primed for selection. Once an icon is primed in this way the viewer can activate the primed icon by pressing a button, long fixation, blinking, any combination thereof, and so forth.
[00106] A further enhancement can allow the system to dynamically adapt to the user: if the system does not detect any change in eye convergence and/or divergence, it can alter which icon is varied in disparity and prime a specific icon when it eventually detects a temporal change in eye convergence.
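The icon-priming behaviour described above can be sketched as a simple control loop: one candidate icon's disparity is modulated over time, the vergence signal from the gaze tracker is checked for a correlated change, and the system either primes that icon or, after a timeout, moves on to another candidate. The interfaces and threshold test below are hypothetical placeholders, not the API of any particular tracker or toolkit.

#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

struct Icon       { int id; float disparity; };
struct GazeSample { float vergence; };       // measured convergence, arbitrary units

class IconPrimer {
public:
    explicit IconPrimer(std::vector<Icon> icons) : icons_(std::move(icons)) {}

    // Called once per frame with the current time and the latest gaze sample.
    // Returns the id of a primed icon, or -1 if no icon is primed yet.
    int update(float timeSec, const GazeSample& gaze)
    {
        Icon& candidate = icons_[current_];
        // Temporally modulate the candidate icon's disparity; the renderer is
        // assumed to apply this value when drawing the icon.
        const float modulation =
            0.5f * std::sin(2.0f * 3.14159265f * rateHz_ * timeSec);
        candidate.disparity = modulation;

        // If the measured vergence follows the modulation, infer that the
        // viewer is fixating this icon and prime it.
        if (std::fabs(gaze.vergence - modulation) < threshold_)
            return candidate.id;

        // Otherwise, after a timeout, modulate a different icon instead.
        if (timeSec - lastSwitch_ > switchAfterSec_) {
            current_ = (current_ + 1) % icons_.size();
            lastSwitch_ = timeSec;
        }
        return -1;
    }

private:
    std::vector<Icon> icons_;
    std::size_t current_ = 0;
    float rateHz_ = 0.5f;
    float switchAfterSec_ = 3.0f;
    float threshold_ = 0.05f;
    float lastSwitch_ = 0.0f;
};

In practice the correlation test would be performed over a short time window rather than a single sample, but the structure is the same.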
[00107] The benefit of temporally altering the disparity here is to use the induced temporal changes in eye convergence and/or divergence to increase the overall system confidence in regards to which icon is being attended to.
[00108] Further, temporal disparity modification might not be continuous. An alternative use of the temporal variation in disparity may be to attract attention to regions of a binocular image. In this case regions outside the region of attention can be varied to attract the viewer to attend to them, for example because of the looming effect as generally discussed in Basic Vision, Snowden, Thompson, Troscianko, OUP, 2006.
[00109] A direct benefit of this is in warning systems in which there is a need for the viewer's attention to be drawn to urgent information in a region outside their current region of fixation. In this case, the disparity of a warning icon outside the region of interest is varied temporally. One example of such an icon may be a low battery warning indicator. Although it is unlikely that it would be in the viewer's region of fixation, it is important to draw the viewer's attention to the icon when the battery is lower than a predetermined remaining capacity. It may be evident to those skilled in the art that there are many other icons for which this may provide benefit in many types of information presentation systems.
[00110] Yet another embodiment may include a fixated region which may have preferred disparity and the surrounding region may have a different disparity in which the total disparity does not exceed a predetermined limit by any of the above methods and one or more of the following image quality parameters are also varied. The varied image quality parameters are listed below.
[00111] The first image quality parameter that may be varied is color quality in terms of bits per pixel, or other color representation scheme. This may benefit the highlighting of certain areas using enhanced or reduced color representations and the benefit of reduced color computation time and/or energy saving in the GPU and/or reduced display bandwidth in those areas of reduced color representations.
[00112] Another image quality parameter that may be varied is grey level quality, in terms of bits per pixel or another grey level representation scheme. This may provide the benefit of highlighting certain areas using enhanced or reduced grey level representations, together with reduced grey level computation time and/or reduced display bandwidth in the areas of reduced grey level representation.
[00113] Another image quality parameter that may be varied is image luminance, in terms of total light power, for example by using less of the display brightness range, or by using a high dynamic range display with the ability to boost brightness in a particular region of an image. Benefits include, but are not limited to, reduced power usage in regions of the screen with lower brightness, and lower visibility of high frequency image artifacts, such as aliasing, in the lower brightness regions of the image when lower resolution image content is used.
[00114] Another image quality parameter that may be varied is image contrast, for example by changing the gamma curve of the displayed image. This has the benefit of masking the visibility of other performance changes; for example, reduced resolution can result in blockiness in the image, which can be masked with a low pass filter.
[00115] Another image quality parameter that may be varied is image spatial frequency content, for example using high, low or band pass filters. In one example, regions can be blurred to reduce computation and to reduce spatial resolution where that may be appropriate in some regions of the image. This may contribute to reducing computational demands in regions of the screen with lower spatial frequency content.
[00116] Another image quality parameter that may be varied is image temporal frequency, using higher or lower image refresh rates in different areas of the screen. This may contribute to reducing computational and display bandwidth requirements in regions of the screen with lower temporal frequency.
[00117] Another image quality parameter that may be varied is scene geometry content, in which the quality of the computer graphics model is varied by changing the quality of the geometric model used to represent objects. This may contribute to reducing computational requirements in regions of the screen with reduced quality geometric models, for example a lower number of triangles in geometry meshes.
[00118] Another image quality parameter that may be varied is scene texture image content, in which the quality of the computer graphics model texture images is varied. This may contribute to reducing computational requirements in regions of the screen with reduced quality texture images, for example lower resolution images.
[00119] Another image quality parameter that may be varied is the set of computer graphics rendering parameters, so that effects including specular highlights, reflection, refraction and transparency vary in quality between the image regions. This may contribute to reducing computational requirements in regions of the screen with reduced graphics effects.
[00120] Another image quality parameter that may be varied is disparity gradient in terms of maximum gradient allowed in one region compared to another region. This may contribute to improving perceived image quality in image regions in which disparity gradient may otherwise be too high to fuse the images comfortably, or so high that it may be detrimental to task performance.
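As a concrete illustration of varying image quality parameters between a fixated region and its surround, the sketch below reduces spatial frequency content (by low pass filtering) and grey level quality (by coarser quantization) everywhere except inside an assumed rectangular fixation region. The use of NumPy/SciPy, the rectangular region shape, and the parameter values are assumptions made for illustration and do not correspond to any particular implementation in this disclosure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vary_quality_outside_fixation(image, fixation_box, sigma=3.0, grey_bits=4):
    """Degrade spatial frequency content and grey level quality outside the
    fixation box (x0, y0, x1, y1), leaving the fixated region at full quality.

    `image` is a 2D float array with values in [0, 1]; box coordinates are pixels.
    """
    x0, y0, x1, y1 = fixation_box
    degraded = gaussian_filter(image, sigma=sigma)               # low pass filter the surround
    levels = 2 ** grey_bits
    degraded = np.round(degraded * (levels - 1)) / (levels - 1)  # coarser grey level representation
    out = degraded.copy()
    out[y0:y1, x0:x1] = image[y0:y1, x0:x1]                      # restore full quality in the fixated region
    return out
```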
[00121] As discussed herein and in at least one embodiment, binocular fixation may be a volume in space around the point of intersection of the two optical axes of the eyes.
[00122] As discussed herein and in at least one embodiment, a binocular image may be a pattern of light that generates a separate stimulus for each of the two eyes. This may include multiple resolvable views in different directions over each pupil. It can, for example, be generated using discrete views or continuous wave fronts, technically produced using stereoscopic, auto-stereoscopic, multiscopic or holographic optical devices.
[00123] As discussed herein and in at least one embodiment, a binocularly fused image may be a perceptually single view (cyclopean view) of the world formed by fusing two images. This may provide a sensation of (perceived) depth in the scene.
[00124] As discussed herein and in at least one embodiment, capture may be a process that generates a binocular image from the real world or from synthetic data. The binocular image may be captured using optical means such as still or motion cameras, or rendered using computer graphics or other image synthesis mechanisms.
[00125] As discussed herein and in at least one embodiment, depth budget may be a range of perceived depth, implying a range of binocular disparity that has been chosen as the total limit of perceived depth seen in a binocularly fused image. The depth budget may be chosen for comfort or technical reasons.
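For orientation only, the commonly used plane-screen geometry relates an on-screen disparity to the perceived depth it produces, and therefore relates a disparity range to a depth budget. The relations below are standard stereoscopic viewing geometry rather than formulas taken from this disclosure; here e is the interocular separation, z the viewing distance, and d the magnitude of the on-screen disparity.

```latex
% Standard plane-screen stereo geometry (illustrative, not specific to this disclosure):
% e = interocular separation, z = viewing distance, d = on-screen disparity magnitude.
\begin{align*}
  P_{\text{behind}} &= \frac{z\,d}{e - d} && \text{perceived depth behind the screen (uncrossed disparity, } 0 < d < e\text{)}\\[4pt]
  P_{\text{front}}  &= \frac{z\,d}{e + d} && \text{perceived depth in front of the screen (crossed disparity, } d > 0\text{)}
\end{align*}
```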
[00126] As discussed herein and in at least one embodiment, depth mapping may be the process of capturing depth from a scene and reproducing it as perceived depth in a binocular image.
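A minimal sketch of one possible mapping follows, assuming a simple linear map from a measured scene depth range onto a chosen screen disparity budget. The linear form and the names used are illustrative assumptions and are not the specific variable depth mapping methods referenced elsewhere in this disclosure.

```python
def map_scene_depth_to_disparity(z, z_near, z_far, d_min, d_max):
    """Linearly map a scene depth z in [z_near, z_far] to a screen disparity
    in [d_min, d_max], i.e. keep the mapped depth within the chosen depth budget."""
    t = (z - z_near) / (z_far - z_near)      # normalize the measured depth to [0, 1]
    t = min(max(t, 0.0), 1.0)                # clamp depths outside the measured range
    return d_min + t * (d_max - d_min)       # interpolate within the disparity budget
```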
[00127] As discussed herein and in at least one embodiment, depth measurement or a depth measurement element may be a mechanism, real or virtual, for measuring the distance, or depth, of a surface from a fixed point. In real world scenes this may be a laser rangefinder, an optical range finder, and so forth. In synthetic scenes this may be a depth map, or a geometric calculation that measures the distance from a fixed point. In most or all cases the depth measurements may be relative to the camera position and may be used to calculate a depth mapping from scene space to the perceived image space.
[00128] As discussed herein and in at least one embodiment, gaze tracking may include methods for following the eyes' movements to determine the direction of gaze. These can be implemented with devices that employ direct contact with the eye or with remote measurement elements that, for example, follow reflections of light from the eye.
[00129] As discussed herein and in at least one embodiment, a foveated image may be an image that is perceived in the foveal region of the retina.
[00130] As discussed herein and in at least one embodiment, a foveated region may be a region in an image or a scene that is perceived in the foveal region of the retina.
[00131] As discussed herein and in at least one embodiment, an image may be a pattern of light that can be detected by the retina.
[00132] As discussed herein and in at least one embodiment, disparity may be a difference in the location of a point between the two eyes' views, normally horizontal, where horizontal is taken to be defined by the line joining the two eyes and the disparity is measured on the retina.
[00133] As discussed herein and in at least one embodiment, a monoscopic image may be an image that is substantially the same when viewed from any direction. If presented to both eyes, both eyes receive substantially the same pattern of light. For example, a standard 2D TV presents a monoscopic stimulus: each pixel broadcasts substantially similar or the same light in all viewing directions.
[00134] As discussed herein and in at least one embodiment, a region of binocular fixation in display space, RBFa, may be a volume in display space that corresponds to the region of overlap of the gaze zones of the two eyes.
[00135] As discussed herein and in at least one embodiment, a region of binocular fixation in scene space, RBFS, may be a volume in scene space that corresponds to the region of overlap of the gaze zones of the two eyes.
[00136] As discussed herein and in at least one embodiment, a region of binocular interest, RBI, may be a volume of scene space that includes the region of binocular fixation and is extended to include the scene as limited by the gaze zones of the two eyes.
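To make the fixation region definitions above concrete, the following sketch estimates a binocular fixation point as the midpoint of the shortest segment between the two gaze rays and surrounds it with a tolerance radius to form a fixation volume. The ray representation, the spherical region shape, and the tolerance parameter are assumptions made for illustration; they are not definitions of RBFa, RBFS, or RBI.

```python
import numpy as np

def binocular_fixation_region(p_left, d_left, p_right, d_right, tolerance=0.01):
    """Estimate a binocular fixation volume from two gaze rays.

    Each ray is an eye position p and a gaze direction d (3-vectors as numpy
    arrays). Returns the midpoint of the shortest segment between the rays and
    a radius describing a spherical fixation region around it.
    """
    d_left = d_left / np.linalg.norm(d_left)
    d_right = d_right / np.linalg.norm(d_right)
    w = p_left - p_right
    a, b, c = d_left @ d_left, d_left @ d_right, d_right @ d_right
    du, dv = d_left @ w, d_right @ w
    denom = a * c - b * b                      # close to zero when the gaze rays are parallel
    if abs(denom) < 1e-9:
        t_l, t_r = 0.0, dv / c                 # degenerate case: project one eye position onto the other ray
    else:
        t_l = (b * dv - c * du) / denom
        t_r = (a * dv - b * du) / denom
    closest_l = p_left + t_l * d_left          # closest point on the left gaze ray
    closest_r = p_right + t_r * d_right        # closest point on the right gaze ray
    center = (closest_l + closest_r) / 2.0     # nominal binocular fixation point
    radius = max(np.linalg.norm(closest_l - closest_r) / 2.0, tolerance)
    return center, radius
```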
[00137] As discussed herein and in at least one embodiment, scene depth range may be a range of depth measured in the scene, usually the range that may be mapped to a range of perceived depth in a fused binocular image.
[00138] As discussed herein and in at least one embodiment, a stereoscopic image may be an image that includes a pair of images that are presented separately to each eye. The implication is that the position of each of the viewer's eyes is important when viewing a stereoscopic image as a different pattern of light is received on the two retinas.
[00139] As discussed herein and in at least one embodiment, rendering may be the process of creating an image from a synthetic scene.
[00140] As discussed herein and in at least one embodiment, synthetic scenes may be scenes in a computer graphics system, virtual world or depth-based image that may not be physically real, though they may represent physically real scenes.
[00141] As discussed herein and in at least one embodiment, a view may be a unique image visible in a single direction.
[00142] As discussed herein and in at least one embodiment, a scene may be a real world or synthetic scene which is being captured and then reproduced as a binocular image.
[00143] As may be used herein, the terms "substantially" and "approximately" provide an industry-accepted tolerance for their corresponding terms and/or relativity between items. Such an industry-accepted tolerance ranges from less than one percent to ten percent and corresponds to, but is not limited to, component values, angles, et cetera. Such relativity between items ranges from less than one percent to ten percent.
[00144] It should be noted that embodiments of the present disclosure may be used in a variety of optical systems. The embodiment may include or work with a variety of projectors, projection systems, optical components, computer systems, processors, self-contained projector systems, visual and/or audiovisual systems and electrical and/or optical devices. Aspects of the present disclosure may be used with practically any apparatus related to optical and electrical devices, optical systems, display systems, presentation systems or any apparatus that may contain any type of optical system. Accordingly, embodiments of the present disclosure may be employed in optical systems, devices used in visual and/or optical presentations, visual peripherals and so on and in a number of computing environments including the Internet, intranets, local area networks, wide area networks and so on.
[00145] Regarding the disclosed embodiments in detail, it should be understood that the embodiment is not limited in its application or creation to the details of the particular arrangements shown, because the embodiment is capable of other arrangements. Moreover, aspects of the embodiment may be set forth in different combinations and arrangements to define embodiments unique in their own right. Also, the terminology used herein is for the purpose of description and not of limitation.

[00146] While various embodiments in accordance with the principles disclosed herein have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of this disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with any claims and their equivalents issuing from this disclosure. Furthermore, the above advantages and features are provided in described embodiments, but shall not limit the application of such issued claims to processes and structures accomplishing any or all of the above advantages.
[00147] Additionally, the section headings herein are provided for consistency with the suggestions under 37 CFR 1.77 or otherwise to provide organizational cues. These headings shall not limit or characterize the invention(s) set out in any claims that may issue from this disclosure. Specifically and by way of example, although the headings refer to a "Technical Field," the claims should not be limited by the language chosen under this heading to describe the so-called field. Further, a description of a technology in the "Background" is not to be construed as an admission that certain technology is prior art to any embodiment(s) in this disclosure. Neither is the "Summary" to be considered as a characterization of the embodiment(s) set forth in issued claims. Furthermore, any reference in this disclosure to "invention" in the singular should not be used to argue that there is only a single point of novelty in this disclosure. Multiple embodiments may be set forth according to the limitations of the multiple claims issuing from this disclosure, and such claims accordingly define the embodiment(s), and their equivalents, that are protected thereby. In all instances, the scope of such claims shall be considered on their own merits in light of this disclosure, but should not be constrained by the headings set forth herein.

Claims

What is claimed is:
1. A binocular imaging system, comprising: a display for presenting a left eye image and a right eye image perceptually simultaneously, wherein the left eye image has an associated left eye field of view of the display and the right eye image has an associated right eye field of view of the display; a gaze tracking element that identifies at least one or both gaze directions of the left eye and the right eye; and an image controller that calculates a binocular region of fixation for the left and right eye, and that alters the displayed left and right eye images.
2. The binocular imaging system of claim 1, wherein altering the displayed left and right eye images further comprises affecting the local image depth content in the binocular region of fixation and surrounding the binocular region of fixation.
3. The binocular imaging system of claim 1, wherein the binocular region of fixation further comprises a three dimensional region in which the location varies with the gaze direction of one or both of the left and right eyes.
4. The binocular imaging system of claim 1, wherein the image controller alters a subsequently displayed binocular image in response to a change in the region of binocular fixation between a currently displayed binocular image and the subsequently displayed binocular image.
5. A method for varying binocular image content, comprising: displaying a current binocular image; using input from the current binocular image, information from a gaze tracker and scene depth measurement information to calculate a region of binocular interest (RBI) in a scene; determining whether the region of binocular interest has changed; calculating the scene depth range for mapping to the depth budget when the region of binocular interest has changed; using a camera control algorithm to generate a subsequently displayed binocular image using the scene depth range; and making the currently displayed image, the subsequently displayed binocular image.
6. The method for varying binocular image content of claim 5, further comprising receiving a second input from the gaze tracker and scene depth measure and using the second input from the current binocular image, the gaze tracker and the scene depth measure to calculate the region of binocular interest in the scene when the region of binocular interest has not substantially changed.
7. The method for varying binocular image content of claim 5, further comprising determining a region of binocular fixation in display space (RBFa) by using gaze tracking information from a viewer watching a displayed binocular image.
8. The method for varying binocular image content of claim 7, further comprising calculating the equivalent region of binocular fixation in a scene space (RBFS) by using the region of binocular fixation in display space (RBFa) provided to an image controller.
9. The method for varying binocular image content of claim 8, wherein determining whether the region of binocular interest has changed further comprises using the region of binocular fixation in display space (RBFa) and the equivalent region of binocular fixation in the scene space (RBFS).
10. The method for varying binocular image content of claim 5, further comprising changing the region of binocular interest based on scene changes while the region of binocular fixation in display space does not substantially change.
11. A method for varying binocular image content, comprising: displaying a current binocular image; using input from the current binocular image and a gaze tracker to calculate a subsequent region of binocular fixation; determining any change in binocular fixation between a current region of binocular fixation and the subsequent region of binocular fixation; calculating a disparity range of the subsequent region of binocular fixation when there is a change in binocular fixation between a current region of binocular fixation and the subsequent region of binocular fixation; determining whether the disparity range is substantially zero; creating a subsequently displayed image when the disparity range is not substantially zero; and making the currently displayed image, the subsequently displayed binocular image.
12. The method for varying binocular image content of claim 11, further comprising receiving a second input from the gaze tracker and using the second input from the current binocular image and the gaze tracker to calculate a subsequent region of binocular fixation when the subsequent region of binocular fixation has not substantially changed.
13. The method for varying binocular image content of claim 11, further comprising receiving a third input from the gaze tracker and using the third input from the current binocular image and the gaze tracker to calculate a subsequent region of binocular fixation when the disparity range is approximately zero.
14. The method for varying binocular image content of claim 11, further comprising the gaze tracker that determines the disparity within the fixated region, wherein the gaze tracker determines the plane of fixation from the difference between left eye and right eye screen fixation points.
15. The method for varying binocular image content of claim 11, wherein determining whether the disparity range is substantially zero further comprises comparing the image disparity of the subsequent object with zero, wherein the subsequent object is being imaged where it is the closest object to a viewer in the region of binocular fixation.
16. The method for varying binocular image content of claim 11, further comprising altering a subsequently displayed image in response to the change in the region of binocular fixation between the currently displayed binocular image and the subsequently displayed binocular image.
17. The method for varying binocular image content of claim 11, further comprising forming a currently displayed binocular image.
18. The method for varying binocular image content of claim 11, wherein forming a currently displayed binocular image further comprises estimating a 3D region of fixation and projecting the 3D region of fixation into an image plane to form a binocular region of fixation.
19. The method for varying binocular image content of claim 18, wherein the currently displayed binocular image is formed as a left image and a right image.
20. The method for varying binocular image content of claim 19, wherein the currently displayed binocular image is selected from a larger source image.
PCT/US2014/017214 2013-02-19 2014-02-19 Binocular fixation imaging method and apparatus WO2014130584A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN201480022341.7A CN105432078B (en) 2013-02-19 2014-02-19 Binocular gaze imaging method and equipment
KR1020157025997A KR20150121127A (en) 2013-02-19 2014-02-19 Binocular fixation imaging method and apparatus
US14/768,824 US10129538B2 (en) 2013-02-19 2014-02-19 Method and apparatus for displaying and varying binocular image content
EP14754036.3A EP2959685A4 (en) 2013-02-19 2014-02-19 Binocular fixation imaging method and apparatus
US16/162,545 US20190166360A1 (en) 2013-02-19 2018-10-17 Binocular fixation imaging method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361766599P 2013-02-19 2013-02-19
US61/766,599 2013-02-19

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US14/768,824 A-371-Of-International US10129538B2 (en) 2013-02-19 2014-02-19 Method and apparatus for displaying and varying binocular image content
US16/162,545 Continuation US20190166360A1 (en) 2013-02-19 2018-10-17 Binocular fixation imaging method and apparatus

Publications (1)

Publication Number Publication Date
WO2014130584A1 true WO2014130584A1 (en) 2014-08-28

Family

ID=51391774

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/017214 WO2014130584A1 (en) 2013-02-19 2014-02-19 Binocular fixation imaging method and apparatus

Country Status (5)

Country Link
US (2) US10129538B2 (en)
EP (1) EP2959685A4 (en)
KR (1) KR20150121127A (en)
CN (1) CN105432078B (en)
WO (1) WO2014130584A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107155102A (en) * 2016-03-04 2017-09-12 铜陵巨城科技有限责任公司 3D automatic focusing display method and system thereof

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104023221B (en) * 2014-06-23 2016-04-13 深圳超多维光电子有限公司 Stereo image parallax control method and device
US10212414B2 (en) 2016-08-01 2019-02-19 Microsoft Technology Licensing, Llc Dynamic realignment of stereoscopic digital consent
CN106406063A (en) * 2016-10-28 2017-02-15 京东方科技集团股份有限公司 Holographic display system and holographic display method
US10582184B2 (en) * 2016-12-04 2020-03-03 Juyang Weng Instantaneous 180-degree 3D recording and playback systems
KR102706803B1 (en) * 2016-12-29 2024-09-13 엘지디스플레이 주식회사 Virtual reality device
GB2565302B (en) * 2017-08-08 2022-04-13 Sony Interactive Entertainment Inc Head-mountable apparatus and methods
JP6897467B2 (en) * 2017-10-02 2021-06-30 富士通株式会社 Line-of-sight detection device, line-of-sight detection program, and line-of-sight detection method
CN109558012B (en) * 2018-12-26 2022-05-13 北京七鑫易维信息技术有限公司 Eyeball tracking method and device
US11756259B2 (en) * 2019-04-17 2023-09-12 Rakuten Group, Inc. Display controlling device, display controlling method, program, and non-transitory computer-readable information recording medium
CN109901290B (en) * 2019-04-24 2021-05-14 京东方科技集团股份有限公司 Method and device for determining gazing area and wearable device
CN115937291B (en) * 2022-09-14 2023-12-15 北京字跳网络技术有限公司 Binocular image generation method and device, electronic equipment and storage medium
CN117472316B (en) * 2023-12-13 2024-05-14 荣耀终端有限公司 Display control method, electronic equipment, storage medium and chip system

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4634384A (en) 1984-02-02 1987-01-06 General Electric Company Head and/or eye tracked optically blended display system
JP3478606B2 (en) * 1994-10-12 2003-12-15 キヤノン株式会社 Stereoscopic image display method and apparatus
US5583795A (en) * 1995-03-17 1996-12-10 The United States Of America As Represented By The Secretary Of The Army Apparatus for measuring eye gaze and fixation duration, and method therefor
EP0817123B1 (en) 1996-06-27 2001-09-12 Kabushiki Kaisha Toshiba Stereoscopic display system and method
GB2354389A (en) 1999-09-15 2001-03-21 Sharp Kk Stereo images with comfortable perceived depth
GB0329312D0 (en) * 2003-12-18 2004-01-21 Univ Durham Mapping perceived depth to regions of interest in stereoscopic images
US7675513B2 (en) * 2008-03-14 2010-03-09 Evans & Sutherland Computer Corp. System and method for displaying stereo images
AT10236U3 (en) 2008-07-10 2009-09-15 Avl List Gmbh MEASURING ARRANGEMENT AND METHOD FOR DETECTING MEASUREMENT DATA
JP2010045584A (en) * 2008-08-12 2010-02-25 Sony Corp Solid image correcting apparatus, solid image correcting method, solid image display, solid image reproducing apparatus, solid image presenting system, program, and recording medium
US8300089B2 (en) 2008-08-14 2012-10-30 Reald Inc. Stereoscopic depth mapping
RU2511706C2 (en) * 2009-02-05 2014-04-10 Хойа Корпорейшн Method of evaluating spectacle lenses, method of designing spectacle lenses, method of manufacturing spectacle lenses, system for manufacturing spectacle lenses and spectacle lens
EP2325618A1 (en) * 2009-11-18 2011-05-25 ESSILOR INTERNATIONAL (Compagnie Générale d'Optique) Method for determining binocular performance of a pair of spectacle lenses
KR101727899B1 (en) * 2010-11-26 2017-04-18 엘지전자 주식회사 Mobile terminal and operation control method thereof
US20140218488A1 (en) * 2011-05-17 2014-08-07 Max-Planck-Gesellschaft Zur Forderung Der Wissenschaften E.V Methods and device for processing digital stereo image content
CN102842301B (en) * 2012-08-21 2015-05-20 京东方科技集团股份有限公司 Display frame adjusting device, display device and display method

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0641132A1 (en) 1993-08-26 1995-03-01 Matsushita Electric Industrial Co., Ltd. Stereoscopic image pickup and display apparatus
US20060210111A1 (en) * 2005-03-16 2006-09-21 Dixon Cleveland Systems and methods for eye-operated three-dimensional object location
KR20070061091A (en) * 2005-12-08 2007-06-13 한국전자통신연구원 Method for deciding the position of an observer, and an observer tracking 3d display system and method using that
KR20090020892A (en) * 2007-08-24 2009-02-27 주식회사 나노박스 Apparatus and method for displaying three dimensional picture using display pixel varying
US20100171697A1 (en) * 2009-01-07 2010-07-08 Hyeonho Son Method of controlling view of stereoscopic image and stereoscopic image display using the same
US20110228051A1 (en) 2010-03-17 2011-09-22 Goksel Dedeoglu Stereoscopic Viewing Comfort Through Gaze Estimation
KR101046259B1 (en) * 2010-10-04 2011-07-04 최규호 Stereoscopic image display apparatus according to eye position
WO2013018004A1 (en) 2011-07-29 2013-02-07 Sony Mobile Communications Ab Gaze controlled focusing of stereoscopic content

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2959685A4

Also Published As

Publication number Publication date
US10129538B2 (en) 2018-11-13
US20160007016A1 (en) 2016-01-07
US20190166360A1 (en) 2019-05-30
CN105432078B (en) 2017-09-22
EP2959685A1 (en) 2015-12-30
EP2959685A4 (en) 2016-08-24
CN105432078A (en) 2016-03-23
KR20150121127A (en) 2015-10-28

Similar Documents

Publication Publication Date Title
US20190166360A1 (en) Binocular fixation imaging method and apparatus
RU2541936C2 (en) Three-dimensional display system
US9106906B2 (en) Image generation system, image generation method, and information storage medium
EP1328129A1 (en) Apparatus for generating computer generated stereoscopic images
US11659158B1 (en) Frustum change in projection stereo rendering
Blum et al. The effect of out-of-focus blur on visual discomfort when using stereo displays
US8866887B2 (en) Computer graphics video synthesizing device and method, and display device
JP2012253690A (en) Program, information storage medium, and image generation system
US20080238930A1 (en) Texture processing apparatus, method and program
US20200186775A1 (en) Dynamic convergence adjustment in augmented reality headsets
WO2018064287A1 (en) Predictive virtual reality display system with post rendering correction
CN109870820A (en) Pin hole reflection mirror array integration imaging augmented reality device and method
US11431955B1 (en) Systems and methods for temporal anti-aliasing
US20130342536A1 (en) Image processing apparatus, method of controlling the same and computer-readable medium
JP2012060345A (en) Multi-viewpoint image creation device, multi-viewpoint image creation method and multi-viewpoint image display system
US20130120360A1 (en) Method and System of Virtual Touch in a Steroscopic 3D Space
WO2017085803A1 (en) Video display device and video display method
KR101172507B1 (en) Apparatus and Method for Providing 3D Image Adjusted by Viewpoint
Ardouin et al. Design and evaluation of methods to prevent frame cancellation in real-time stereoscopic rendering
Richardt et al. Stereo coherence in watercolour rendering
CN113661514B (en) Apparatus and method for enhancing image
JP2009237310A (en) False three-dimensional display method and false three-dimensional display apparatus
Shen et al. 3-D perception enhancement in autostereoscopic TV by depth cue for 3-D model interaction
Kellnhofer et al. Improving perception of binocular stereo motion on 3D display devices
Chen et al. A View-Dependent Stereoscopic System Using Depth-Image-Based Tracking

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 201480022341.7

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14754036

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 14768824

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 2014754036

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 20157025997

Country of ref document: KR

Kind code of ref document: A