US20120200495A1 - Autostereoscopic Rendering and Display Apparatus - Google Patents
Autostereoscopic Rendering and Display Apparatus
- Publication number
- US20120200495A1 (application US 13/501,732, US 200913501732 A)
- Authority
- US
- United States
- Prior art keywords
- image
- user
- display
- dimensional object
- processor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/302—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
- H04N13/31—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using parallax barriers
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/156—Mixing image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/275—Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
- H04N13/279—Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/302—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/302—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
- H04N13/31—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using parallax barriers
- H04N13/315—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using parallax barriers the parallax barriers being time-variant
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/361—Reproducing mixed stereoscopic images; Reproducing mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/366—Image reproducers using viewer tracking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/366—Image reproducers using viewer tracking
- H04N13/371—Image reproducers using viewer tracking for tracking viewers with different interocular distances; for tracking rotational head movements around the vertical axis
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/398—Synchronisation thereof; Control thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/111—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
- H04N13/117—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/366—Image reproducers using viewer tracking
- H04N13/383—Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
Definitions
- the present application relates to a method and apparatus for auto-stereoscopic displays.
- the method and apparatus relate to auto-stereoscopic image displays and, in particular but not exclusively, some further embodiments relate to auto-stereoscopic displays for mobile apparatus.
- Stereoscopic image displays have the potential to significantly improve the user's experience of operating and interacting with modern electronic devices.
- So-called 3D display technology or stereoscopic display technology generates images to the left and right eye separately in order to fool the user into believing that they are viewing a three dimensional image.
- Traditional stereoscopic displays present images for the left and right eye and then use filters placed over each eye so that the left eye only views the image for the left eye and the right eye only views the image for the right eye.
- An example of such technology is polarization filtering where images for the left and right eye are modulated by a different polarisation. This technology is currently favoured in 3D cinemas.
- Such technology, although capable of presenting 3D images of objects, requires each user to be equipped with the required filters, typically in the form of a pair of over-spectacles, in order to view the image.
- Auto-stereoscopic displays which do not require the user to wear any device to filter the left and right images but instead filters or directs the images directly to the correct eye, are rapidly becoming commercially realisable. These auto-stereoscopic devices remove a significant barrier to the use of 3D displays for everyday use. Such displays use a range of optical elements in combination with a display to focus or direct the left and right view to the left and right eye respectively.
- Auto-stereoscopic displays typically have a limited range of viewing angles before the auto-stereoscopic view becomes poor. For example, the location of the user with reference to the display, or the movement of the user with reference to the display, will typically cause the user to experience a non-optimal image when the viewing angle range for displaying a three dimensional image is exceeded.
- Embodiments of the present application aim to address the above problems.
- a method comprising: detecting the position and orientation of a user viewpoint with respect to an auto-stereoscopic display; determining a surface viewable from the user viewpoint of at least one three dimensional object; and generating a left and right eye image for display on the auto-stereoscopic display dependent on the surface viewable from the user viewpoint.
- Detecting the position and orientation of a user may comprise: capturing at least one image of the user; determining the position and orientation of the user eyes with respect to the auto-stereoscopic display.
- Capturing at least one image of the user may comprise capturing at least one image of the user from each of at least two cameras, and detecting the position and orientation may comprise comparing the difference between the at least one image of the user from each of the at least two cameras.
- Determining a surface viewable from the user viewpoint may comprise: determining a model of the at least one three dimensional object; determining the distance and orientation from the at least one three dimensional object model to the user viewpoint; and generating a surface of the at least one three dimensional object dependent on the model of the at least one three dimensional object and the distance and orientation from the at least one three dimensional object model to the user viewpoint.
- the method may further comprise: detecting an inter-pupil distance of a user; and controlling a parallax barrier dependent on at least one of: the position of the user viewpoint; the orientation of the user viewpoint; and the inter-pupil distance of the user.
- the method may further comprise: determining a projection surface viewable from the user viewpoint of at least one three dimensional object on a second display; and generating a projection image for display on the second display dependent on the projection surface viewable from the user viewpoint.
- the method may further comprise determining the projection surface viewable from the user viewpoint dependent on at least one of: at least one three dimensional object lighting angle and position; a second display surface model; and at least one three dimensional object surface model.
- the projection surface may comprise at least one of: a partial shadow of the at least one three dimensional object; a total shadow of the at least one three dimensional object; and a reflection of the at least one three dimensional object.
- the method may further comprise: detecting an object position with respect to either the auto-stereoscopic display and/or a second display; and determining an interaction by the detected object.
- Detecting an object position may comprise at least one of: detecting a capacitance value in a capacitance sensor of the object; and detecting a visual image of the object.
- Determining an interaction by the detected object may comprise determining an intersection between the detected image and a displayed image.
- the displayed image may comprise at least one of: the virtual image of the at least one three dimensional object; and a two dimensional image displayed on the second display.
- an apparatus comprising at least one processor and at least one memory including computer program code the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: detecting the position and orientation of a user viewpoint with respect to an auto-stereoscopic display; determining a surface viewable from the user viewpoint of at least one three dimensional object; and generating a left and right eye image for display on the auto-stereoscopic display dependent on the surface viewable from the user viewpoint.
- Detecting the position and orientation of a user may cause the apparatus at least to perform: capturing at least one image of the user; determining the position and orientation of the user eyes with respect to the auto-stereoscopic display.
- Capturing at least one image of the user may cause the apparatus at least to perform capturing at least one image of the user from each of at least two cameras, and detecting the position and orientation may cause the apparatus at least to perform comparing the difference between the at least one image of the user from each of the at least two cameras.
- Determining a surface viewable from the user viewpoint may cause the apparatus at least to perform: determining a model of the at least one three dimensional object; determining the distance and orientation from the at least one three dimensional object model to the user viewpoint; and generating a surface of the at least one three dimensional object dependent on the model of the at least one three dimensional object and the distance and orientation from the at least one three dimensional object model to the user viewpoint.
- the computer program code configured to with the at least one processor may further cause the apparatus at least to perform: detecting an inter-pupil distance of a user; and controlling a parallax barrier dependent on at least one of: the position of the user viewpoint; the orientation of the user viewpoint; and the inter-pupil distance of the user.
- the computer program code configured to with the at least one processor may further cause the apparatus at least to perform: determining a projection surface viewable from the user viewpoint of at least one three dimensional object on a second display; and generating a projection image for display on the second display dependent on the projection surface viewable from the user viewpoint.
- the computer program code configured to with the at least one processor may further cause the apparatus at least to perform determining the projection surface viewable from the user viewpoint dependent on at least one of: at least one three dimensional object lighting angle and position; a second display surface model; and at least one three dimensional object surface model.
- the projection surface may comprise at least one of: a partial shadow of the at least one three dimensional object; a total shadow of the at least one three dimensional object; and a reflection of the at least one three dimensional object.
- the computer program code configured to with the at least one processor may further cause the apparatus at least to perform: detecting an object position with respect to either the auto-stereoscopic display and/or a second display; and determining an interaction by the detected object.
- Detecting an object position may cause the apparatus at least to perform: detecting a capacitance value in a capacitance sensor of the object; and detecting a visual image of the object.
- Determining an interaction by the detected object may cause the apparatus at least to perform determining an intersection between the detected image and a displayed image.
- the displayed image may comprise at least one of: the virtual image of the at least one three dimensional object; and a two dimensional image displayed on the second display.
- an apparatus comprising: a sensor configured to detect the position and orientation of a user viewpoint with respect to an auto-stereoscopic display; a processor configured to determine a surface viewable from the user viewpoint of at least one three dimensional object; and an image generator configured to generate a left and right eye image for display on the auto-stereoscopic display dependent on the surface viewable from the user viewpoint.
- the sensor may comprise a camera configured to capture at least one image of the user and a face recognizer configured to determine the position and orientation of the user eyes with respect to the auto-stereoscopic display.
- the processor may determine a model of the at least one three dimensional object; determine the distance and orientation from the at least one three dimensional object model to the user viewpoint; and generate a surface of the at least one three dimensional object dependent on the model of the at least one three dimensional object and the distance and orientation from the at least one three dimensional object model to the user viewpoint.
- the processor may further detect an inter-pupil distance of a user; and provide control information to a parallax barrier dependent on at least one of: the position of the user viewpoint; the orientation of the user viewpoint; and the inter-pupil distance of the user.
- the processor may further determine a projection surface viewable from the user viewpoint of at least one three dimensional object on a second display; and the image generator may generate a projection image for display on the second display dependent on the projection surface viewable from the user viewpoint.
- the processor may determine the projection surface viewable from the user viewpoint dependent on at least one of: at least one three dimensional object lighting angle and position; a second display surface model; and at least one three dimensional object surface model.
- the projection surface may comprise at least one of: a partial shadow of the at least one three dimensional object; a total shadow of the at least one three dimensional object; and a reflection of the at least one three dimensional object.
- the sensor may be further configured to detect an object position with respect to either the auto-stereoscopic display and/or a second display; and determine an interaction by the detected object.
- the sensor may detect an object position by detecting at least one of a capacitance value in a capacitance sensor of the object; and a visual image of the object.
- the processor may further determine an intersection between the detected image and a displayed image.
- the displayed image may comprise the virtual image of the at least one three dimensional object.
- the displayed image may comprise a two dimensional image displayed on the second display.
- a computer-readable medium encoded with instructions that, when executed by a computer, perform: detect the position and orientation of a user viewpoint with respect to an auto-stereoscopic display; determine a surface viewable from the user viewpoint of at least one three dimensional object; and generate a left and right eye image for display on the auto-stereoscopic display dependent on the surface viewable from the user viewpoint.
- an apparatus comprising: detecting means for detecting the position and orientation of a user viewpoint with respect to an auto-stereoscopic display; modelling means for determining a surface viewable from the user viewpoint of at least one three dimensional object; and image generating means for generating a left and right eye image for display on the auto-stereoscopic display dependent on the surface viewable from the user viewpoint.
- An electronic device may comprise apparatus as described above.
- a chipset may comprise apparatus as described above.
- FIG. 1 shows a schematic representation of an apparatus suitable for implementing some embodiments of the application
- FIG. 2 shows a physical schematic representation of an apparatus as shown in FIG. 1 suitable for implementing some embodiments in further detail;
- FIG. 3 shows a schematic representation of the processing components in apparatus according to some embodiments of the application
- FIG. 4 a shows a schematic representation of head position tracking in some embodiments of the application
- FIG. 4 b shows a flow diagram of the processes carried out in head position tracking according to some embodiments
- FIG. 5 a shows a schematic representation of reflection/shadow generation for images according to some embodiments of the application
- FIG. 5 b shows a flow diagram of the process carried out in reflection/shadow generation according to some embodiments
- FIG. 6 a shows a physical schematic representation of user interface interaction according to some embodiments of the application
- FIG. 6 b shows a further physical schematic representation of user interface interaction according to some embodiments of the application.
- FIG. 6 c shows a flow diagram of the processes carried out by user interface interaction according to some embodiments of the application.
- FIG. 7 shows a further physical schematic representation of user interface interaction according to some embodiments of the application.
- the application describes apparatus and methods to generate more convincing and interactive 3D image displays and thus create a more immersive and interactive user experience than may be generated with just one stereoscopic display unit.
- the combination of two displays in a folding electronic device or apparatus with suitable sensors monitoring the user enables the apparatus to be configured such that the user experience of the 3D images displayed is greatly enhanced.
- the use of head tracking may be applied with the dual displays to enable further enhancement for the displayed images.
- the configuration of user interface apparatus with the dual display configuration further enhances the perception and the interaction with the displayed images.
- FIG. 1 discloses a schematic block diagram of an exemplary electronic device 10 or apparatus on which embodiments of the application may be implemented.
- the electronic device 10 is configured to provide improved auto-stereoscopic image display and interaction.
- the electronic device 10 is in some embodiments a mobile terminal, mobile phone or user equipment for operation in a wireless communication system.
- the electronic device is any suitable electronic device configured to provide an image display, such as for example a digital camera, a portable audio player (mp3 player), or a portable video player (mp4 player).
- the electronic device 10 comprises an integrated camera module 11 , which is linked to a processor 15 .
- the processor 15 is further linked to a first display (display A) 12 a, and a second display (display B) 12 b.
- the processor 15 is further linked to a transceiver (TX/RX) 13 , to a user interface (UI) 14 and to a memory 16 .
- the camera module 11 and/or the displays 12 a and 12 b are separate or separable from the electronic device and the processor receives signals from the camera module 11 and/or transmits signals to the displays 12 a and 12 b via the transceiver 13 or another suitable interface.
- the processor 15 may be configured to execute various program codes 17 .
- the implemented program codes 17 in some embodiments, comprise image capture digital processing or configuration code, image displaying and image interaction code.
- the implemented program codes 17 in some embodiments further comprise additional code for further processing of images.
- the implemented program codes 17 may in some embodiments be stored for example in the memory 16 for retrieval by the processor 15 whenever needed.
- the memory 16 in some embodiments may further provide a section 18 for storing data, for example data that has been processed in accordance with the application.
- the camera module 11 comprises a camera 19 having a lens for focusing an image on to a digital image capture means such as a charge-coupled device (CCD).
- the digital image capture means may be any suitable image capturing device such as complementary metal oxide semiconductor (CMOS) image sensor.
- the camera module 11 further comprises a flash lamp 20 for illuminating an object before capturing an image of the object.
- the flash lamp 20 is linked to the camera processor 21 .
- the camera 19 is also linked to a camera processor 21 for processing signals received from the camera.
- the camera processor 21 is linked to camera memory 22 which may store program codes for the camera processor 21 to execute when capturing an image.
- the implemented program codes (not shown) may in some embodiments be stored for example in the camera memory 22 for retrieval by the camera processor 21 whenever needed.
- the camera processor 21 and the camera memory 22 are implemented within the apparatus 10 processor 15 and memory 16 respectively.
- a user of the electronic device 10 may in some embodiments use the camera module 11 for capturing images to be used in controlling the displays 12 a and 12 b as is described in later detail with respect to FIGS. 4 a , 4 b , 5 a , and 5 b and described later.
- the camera module may capture images which may be transmitted to some other electronic device to be processed and provide the information required to interact with the display. For example where the processing power of the apparatus is not sufficient some of the processing operations may be implemented in further apparatus. Corresponding applications in some embodiments may be activated to this end by the user via the user interface 14 .
- the camera module comprises at least two cameras, wherein each camera is located on the same side of the device.
- the camera processor 21 or the processor 15 may be configured to receive image data from the camera or multiple cameras and further process the image data to identify an object placed in front of the camera.
- Such objects capable of being identified by the camera processor 21 or processor 15 may be, for example, a face or head of the user, the eyes of the user, the finger or pointing device used by the user.
- the camera processor 21 or processor may determine or estimate the identified object's position in front of the device. The use of this identification process and position estimation process will be described in further detail later.
- the apparatus 10 may in embodiments be capable of implementing the processing techniques at least partially in hardware, in other words the processing carried out by the processor 15 and camera processor 21 may be implemented at least partially in hardware without the need of software or firmware to operate the hardware.
- the user interface 14 in some embodiments enables a user to input commands to the electronic device 10 .
- the user interface 14 may in some embodiments be implemented as, for example, a keypad, a user operated button, switches, or by a ‘touchscreen’ interface implemented on one or both of the displays 12 a and 12 b.
- some of the user interface functionality may be implemented from the camera 19 captured image information whereby the object identification and position information may be used to provide an input to the device.
- the transceiver 13 enables communication with other electronic devices, for example in some embodiments via a wireless communication network.
- FIG. 2 a physical representation as implemented in some embodiments of the apparatus shown in FIG. 1 is shown in further detail.
- the apparatus 10 as shown in FIG. 2 may be implemented in a folding configuration.
- the apparatus comprises a first case part 203 and a second case part 201 , both of which are connected together by a hinge 205 .
- the hinge 205 operates not only as a mechanical connection between the first case part 203 and the second case part 201 but also implements an electrical connection between components within the first case part 203 and the second case part 201 .
- the hinge is only a mechanical connection with the electrical connection being implemented separately.
- the electrical connection may be wired and provided for example by a flexible ribbon cable or in some other embodiments may be provided by a wireless connection between the first and second case parts.
- in some embodiments the first case part 203 slides over the second case part 201, and the first case part 203 is further configured to rotate and be angled with respect to the second case part 201 as the two parts slide, so as to produce a similar display orientation configuration.
- Such a sliding/rotating hinge may be similar to that seen currently on such user equipment as the Nokia N97.
- the first case part 203 and the second case part 201 may be configured to operate with a twist and rotatable hinge similar to that employed by ‘tablet’ portable computers, wherein the hinge connecting the first case part 203 and the second case part 201 can be folded and unfolded to open up the apparatus but may also be twisted to protect or display the inner surfaces when the hinge is refolded.
- the first case part 203 as shown in FIG. 2 is configured to implement on one surface the first display 12 a which may be an auto-stereoscopic display (also known as the 3D display) and a camera 19 .
- the 3D display 12 a may be any suitable auto-stereoscopic display.
- the 3D display 12 a may in some embodiments be implemented as a liquid crystal display with parallax barriers.
- the principle of parallax barriers is known whereby an optical aperture is aligned with columns of the liquid crystal display (LCD) pixels in order that alternate columns of LCD pixels can be seen by the left and right eyes separately.
- the parallax barriers operate so that in some embodiments the even columns of the LCD pixels may be viewed by the left eye and the odd columns of the LCD pixels may be viewed by the right eye.
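- As an illustration only (not taken from the patent text), the sketch below shows how a processor might column-interleave separately rendered left and right eye images for such a parallax-barrier panel, using the even/odd column assignment described above; the function name and image shapes are assumptions.

```python
import numpy as np

def interleave_columns(left_img, right_img):
    """Build a column-interleaved frame for a parallax-barrier display.

    Even pixel columns carry the left-eye image and odd columns the
    right-eye image, matching the arrangement described above.
    Both inputs are HxWx3 arrays of identical shape.
    """
    if left_img.shape != right_img.shape:
        raise ValueError("left and right images must have the same shape")
    frame = np.empty_like(left_img)
    frame[:, 0::2] = left_img[:, 0::2]   # even columns -> left eye
    frame[:, 1::2] = right_img[:, 1::2]  # odd columns  -> right eye
    return frame

# Example with two synthetic 480x640 views
left = np.full((480, 640, 3), 64, dtype=np.uint8)
right = np.full((480, 640, 3), 192, dtype=np.uint8)
interleaved = interleave_columns(left, right)
```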
- the parallax barriers may be, in some embodiments, controllable and capable of controlling the angle of image presentation.
- the auto-stereoscopic (3D) display 12 a may be implemented as a lenticular optically configured liquid crystal display where cylindrical lenses are aligned with columns of LCD pixels to produce a similar effect to that of the parallax barriers. In other words, the alternate columns of LCD pixels, which construct the alternate images for the left and right eyes, are directed to the left and right eyes only.
- the 3D display may also be implemented using micropolarisers.
- the 3D display may comprise a holographic display to create real images using a diffuse light source.
- the first display, the 3D display, 12 a may further comprise a touch input interface.
- the touch input interface may be a capacitive touch interface suitable for detecting either direct touch onto the display surface or detecting the capacitive effect between the display surface and a further object such as a finger in order to determine a position relative to the dimensions of the display surface and in some embodiments the position relative from the display surface.
- although the first display 12 a as described above may be implemented as an LCD display, it would be appreciated that any suitable display technology may be used to implement the display.
- the first display 12 a may be implemented using a light emitting diode (LED) or organic light emitting diode (OLED) configuration featuring apertures or lenses configured to generate two separate images directed at the left and right eye of the user.
- the first display 12 a may be configured to operate in a 2D mode of operation. For example, by switching off the parallax barrier layer in the embodiments employing a parallax barrier to direct the alternate pixel columns to the left and right eyes, all of the pixel columns will be visible to both eyes.
- the second case part 201 as shown in FIG. 2 is configured to implement on one surface the second display 12 b which may be a 2D display 12 b.
- the 2D display may be implemented using any suitable display technology, for example liquid crystal display (LCD), light emitting diodes (LED), or organic LED display technologies.
- the second display 12 b may comprise a second auto-stereoscopic (3D) display capable of being operable in a 2D mode—for example by switching off of the parallax barrier layer in a manner similar to that described above.
- the second or 2D display 12 b comprises a touch interface similar to the touch interface as described above with reference to the first display suitable for determining the position of an object both relative across the display surface and also relative from the display surface.
- the touch interface comprises a capacitive touch interface suitable for determining the position of the object relative across and from the surface of the display by the determination of the capacitance of the display surface.
- the structures shown in FIGS. 4 a, 5 a, 6 a, 6 b, and 7 and the method steps in FIGS. 4 b, 5 b, and 6 c represent only a part of the operation of a complete system comprising some embodiments of the application as implemented in the electronic device shown in FIG. 1.
- the processor 15 is shown to comprise a camera/head location processor 101 which is configured to receive camera image information and identify objects and their position/orientation from the camera 19 image information.
- the camera processor 101 is configured to determine from the camera image information the location and orientation of the head of the user.
- the camera processor 101 may in some embodiments receive the head image taken by the camera 19 and project the 2D image data onto a surface such as a cylinder to provide a stabilised view of the face independent of the current orientation, position and scale of the surface model. From this projection the orientation and position may be estimated.
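- The cylinder-projection technique above is one option; a commonly used alternative, sketched below purely for illustration (it is not the method claimed here), estimates head position and orientation from a few detected 2D facial landmarks matched against a rough 3D face model with OpenCV's solvePnP. The landmark model coordinates and camera intrinsics are assumed placeholder values.

```python
import numpy as np
import cv2

# Rough 3D model points for a few facial landmarks (millimetres, nose tip at origin).
MODEL_POINTS = np.array([
    (0.0,    0.0,    0.0),    # nose tip
    (0.0,  -63.6,  -12.5),    # chin
    (-43.3,  32.7,  -26.0),   # left eye outer corner
    (43.3,   32.7,  -26.0),   # right eye outer corner
    (-28.9, -28.9,  -24.1),   # left mouth corner
    (28.9,  -28.9,  -24.1),   # right mouth corner
], dtype=np.float64)

def estimate_head_pose(landmarks_2d, frame_size):
    """Estimate head rotation and translation relative to the camera.

    landmarks_2d: 6x2 pixel coordinates from any face landmark detector,
    in the same order as MODEL_POINTS.  frame_size: (height, width).
    """
    h, w = frame_size
    focal = w  # crude focal length approximation in pixels
    camera_matrix = np.array([[focal, 0, w / 2],
                              [0, focal, h / 2],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(
        MODEL_POINTS, np.asarray(landmarks_2d, dtype=np.float64),
        camera_matrix, dist_coeffs)
    return ok, rvec, tvec  # rvec: orientation (Rodrigues vector), tvec: position in mm
```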
- any suitable head tracking algorithm may be used, for example the eye tracking and registration algorithms described within “Fast, Reliable Head Tracking under Varying Illumination: An Approach Based on Registration of Texture Mapped 3D Models” by Kaskia et al., Computer Science Technical Report, May 1999.
- in embodiments with at least two cameras, the difference between the images may be used to estimate the position and orientation of the face based on knowledge of the differences between the cameras.
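- A minimal sketch of this two-camera case follows, assuming two rectified, horizontally offset front cameras: the horizontal disparity of the same facial feature between the two images yields its depth by similar triangles. The focal length, baseline and principal point used here are placeholder assumptions.

```python
def face_position_from_stereo(x_left_px, x_right_px, y_px,
                              focal_px=800.0, baseline_m=0.06,
                              cx=320.0, cy=240.0):
    """Triangulate a facial feature seen by two horizontally offset cameras.

    x_left_px / x_right_px: horizontal pixel coordinate of the same feature
    in the (rectified) left and right camera images.
    Returns (X, Y, Z) in metres in the left camera's frame.
    """
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("expected positive disparity for a point in front of the cameras")
    Z = focal_px * baseline_m / disparity   # depth from similar triangles
    X = (x_left_px - cx) * Z / focal_px     # lateral offset
    Y = (y_px - cy) * Z / focal_px          # vertical offset
    return X, Y, Z

# Example: feature at column 400 in the left image and 380 in the right image
print(face_position_from_stereo(400.0, 380.0, 250.0))
```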
- the camera processor 101 may be configured to further determine the distance of separation of the eyes on the head of the user.
- the camera processor 101 may be configured to identify a finger or other pointing object and furthermore the relative position of the finger (or pointing object) to the auto-stereoscopic display.
- the camera/head location processor 101 outputs an indication of the object identified and furthermore the position and/or orientation of the object identified relative to the display to the image processor 105 .
- the processor 15 may further comprise a touch processor/user interface processor 103 configured to receive input from the user interface 14 such as the touch interface implemented within either of the first display 12 a or second display 12 b according to some embodiments.
- the user interface 14 may comprise other input means such as mice, keyboard, keypad or any other suitable user input apparatus.
- the user interface processor 103 processes the input from the user to determine whether a relevant input has been received with respect to the image display. For example, the user interface processor may receive the values of capacitance from a display surface touch interface and from the capacitance values determine a location along the display surface which has been touched. In further embodiments the user interface processor 103 may determine from the capacitance value the distance of the ‘touching’ object from the surface of the display.
- the touch interface may not require direct contact to detect an object in sensing range of the touch interface.
- although a touch is described above and hereafter with respect to a detected physical object, such as a user's finger, it would be understood that in some embodiments the user interface processor detects and processes data about a virtual object, such as an image of a pointer displayed by the display, which may be controlled by use of a mouse, trackball, keys or any other suitable control means.
- the user interface processor 103 may output to the image processor 105 an indication of what the object is and where the object (which as described above may be a physical object or a displayed ‘virtual’ object) used by the user is touching or pointing.
- the processor 15 may further comprise an image model processor 107 .
- the image model processor is configured in some embodiments to store a series of image models which may be used by the image processor 105 .
- the image model processor may store the three dimensional image data models in order to create the three dimensional display image generated by the image processor 105 .
- the image model processor stores a series of models of geometric shapes or meshes describing the elements which make up the environment displayed by the image processor.
- the processor 15 may further comprise an image processor 105 configured to receive the camera, user interface and image model information and generate the image to be displayed for both the two displays. Further examples are described hereafter.
- the image processor 105 outputs the images to be displayed on the displays to the 3D display driver 109 and the 2D display driver 111 .
- the 3D display driver 109 receives the display image data for the first display from the image processor 105 and generates the data to be supplied to the first (3D) display 12 a in the form of the left eye image data and the right eye image data in order to project the three dimensional image. Furthermore, in some embodiments the 3D display driver may, dependent on the separation distance of the eyes and/or the position of the user's head, control the parallax barrier (or similar left-right eye separation display control) in order to produce a more optimal 3D display image.
- the 2D display driver receives the display image data from the image processor 105 for the second display 12 b and generates the data for the second (2D) display 12 b.
- with respect to FIGS. 4 a and 4 b, the interaction in some embodiments between the camera processor 101, image processor 105, image model processor 107 and the 3D display driver 109 is described in further detail.
- the apparatus 10 is shown in FIG. 4 a being operated so as to generate a 3D image using the 3D display 12 a.
- the 3D image generated in this example is a cuboid which appears to the user to be floating in front of the 3D display.
- the camera 19 in some embodiments is configured to capture image frame data and pass the image frame data to the camera processor 101. The operation of the camera capturing image frame data is shown in FIG. 4 b by step 351.
- the camera processor 101 may process the image frame data and from this data determine the head position and orientation and in some embodiments the eye position of the user.
- the determination of the head position/orientation and eye positions is by any suitable head modelling process such as described previously.
- This head and eye information may then be passed to the image processor 105 .
- the determination of the head location and eye position is shown in FIG. 4 b by step 353 .
- the image model processor 107 may on determining the object to be displayed retrieve or generate the object model information and furthermore pass this object model information to the image processor 105 .
- the image to be displayed may be a cuboid with non-textured, flat sides.
- the image model processor 107 may provide the dimensions and the orientation of the cuboid to the image processor 105 .
- the generation of the object model information is shown in FIG. 4 b by step 354 .
- the image processor 105 having received the head/eye information and the object model information (for example the dimensions and orientation of the object or objects to be displayed) may then in some embodiments determine the area or surface of the 3D object which can be viewed by the user.
- the image processor 105 may use the geometric relationship between the position and orientation of the user's head and the position and orientation of the object to be displayed to thus determine the area which could be seen from the detected viewpoint.
- the operation of determination of the image model surface to be displayed is shown in FIG. 4 b in step 355 .
- the image processor 105 may then output to the 3D display driver 109 the data for generating a left and right eye image to be output to the 3D display 12 a and thus to generate the images to the left and right eyes to provide the image which appears to be floating in front of the 3D display.
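- The sketch below, illustrative only and built on an assumed unit-cuboid model and inter-pupil distance, combines the two steps just described: the tracked head position is offset by half the inter-pupil distance to obtain per-eye viewpoints, and simple back-face culling then selects which cuboid faces are viewable from each eye.

```python
import numpy as np

INTER_PUPIL_M = 0.063  # assumed adult inter-pupil distance

# A unit cuboid centred at the origin, described by face centres and outward normals.
CUBE_FACES = {
    "front":  (np.array([0.0, 0.0,  0.5]), np.array([0.0, 0.0,  1.0])),
    "back":   (np.array([0.0, 0.0, -0.5]), np.array([0.0, 0.0, -1.0])),
    "left":   (np.array([-0.5, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])),
    "right":  (np.array([ 0.5, 0.0, 0.0]), np.array([ 1.0, 0.0, 0.0])),
    "top":    (np.array([0.0,  0.5, 0.0]), np.array([0.0,  1.0, 0.0])),
    "bottom": (np.array([0.0, -0.5, 0.0]), np.array([0.0, -1.0, 0.0])),
}

def eye_positions(head_pos, right_axis):
    """Left and right eye positions derived from the tracked head position."""
    offset = 0.5 * INTER_PUPIL_M * right_axis / np.linalg.norm(right_axis)
    return head_pos - offset, head_pos + offset

def visible_faces(eye_pos):
    """Faces whose outward normal points towards the eye (back-face culling)."""
    return [name for name, (centre, normal) in CUBE_FACES.items()
            if np.dot(normal, eye_pos - centre) > 0]

head = np.array([0.1, 0.05, 0.6])  # head 0.6 m in front of the display, slightly up and right
left_eye, right_eye = eye_positions(head, np.array([1.0, 0.0, 0.0]))
print("left eye sees:", visible_faces(left_eye))
print("right eye sees:", visible_faces(right_eye))
```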
- the operation of displaying the image from the 3D display driver 109 is shown in FIG. 4 b by step 357 .
- when the user is detected as looking directly face-on at the cuboid, the image processor 105 provides the user with left and right eye images which present only the front face of the cuboid.
- the image processor 105 outputs to the 3D display driver 109 information allowing the 3D display driver 109 to generate the left and right eye images to provide the 3D display with the data to present only the face of the cuboid 301 a.
- the motion of the head/eyes and thus the position displacement is determined by the camera processor 101 which passes this information to the image processor 105 .
- the image processor may then in some embodiments determine that the user can, from the detected viewpoint, see at least two faces of the cuboid, and thus the image processor sends to the 3D display driver 109 information permitting the generation of left and right eye images to enable the 3D display 12 a to present the cuboid surface 301 b with the two faces which can be seen by the user.
- the same operations and apparatus may display more than a single object and furthermore the detection of the position of the head would permit different objects to be viewed at different angles.
- the image processor may determine that one object completely obscures a second object, in other words that the second object is in the shadow of the first from the detected viewpoint, whereas when the image processor 105 receives information that the head position has moved sufficiently, the image processor 105 may then in some embodiments pass to the 3D display driver 109 information enabling the rendering of both objects.
- there may be a method comprising detecting the position and orientation of a user viewpoint with respect to an auto-stereoscopic display, determining a surface viewable from the user viewpoint of at least one three dimensional object, and generating a left and right eye image for display on the auto-stereoscopic display dependent on the surface viewable from the user viewpoint.
- the method of such embodiments may detect the position and orientation of a user by capturing at least one image of the user and determining the position and orientation of the user eyes with respect to the auto-stereoscopic display.
- capturing at least one image of the user may comprise capturing at least one image of the user from each of at least two cameras, and detecting the position and orientation comprises comparing the difference between the at least one image of the user from each of the at least two cameras.
- determining a surface viewable from the user viewpoint comprises determining a model of the at least one three dimensional object, determining the distance and orientation from the at least one three dimensional object model to the user viewpoint and generating a surface of the at least one three dimensional object dependent on the model of the at least one three dimensional object and the distance and orientation from the at least one three dimensional object model to the user viewpoint.
- the image processor 105 furthermore determines when the location and position of the user has moved so that the auto-stereoscopic image filters (such as the parallax barriers) are not optimal. In such embodiments the image processor 105 may generate a message to be displayed indicating that the user is moving out of the optimal viewing range. In other embodiments the image processor may generate information to be passed to the 3D display driver 109 to change the filters. For example, where the 3D display 12 a has controllable parallax barriers to filter the left and right eye images, the image processor may pass to the 3D display driver information permitting the shifting of the parallax barrier to maintain the three dimensional image.
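- A hedged sketch of one possible barrier-shift calculation follows; it relies on a simple similar-triangles model rather than any control law given in this description, and the distances in the example are assumed values.

```python
def barrier_shift_mm(head_offset_mm, viewing_distance_mm, barrier_gap_mm):
    """Lateral shift of a switchable parallax barrier to follow the viewer.

    Simple similar-triangles model: an eye, a barrier slit and the pixel the
    slit exposes stay collinear.  When the head moves sideways by
    head_offset_mm at viewing_distance_mm from the barrier, each slit should
    move by roughly offset * gap / (distance + gap), where barrier_gap_mm is
    the spacing between the barrier and the pixel plane.
    """
    return head_offset_mm * barrier_gap_mm / (viewing_distance_mm + barrier_gap_mm)

# Example: head moves 40 mm sideways at a 350 mm viewing distance, 1 mm barrier gap
print(round(barrier_shift_mm(40.0, 350.0, 1.0), 4), "mm")
```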
- the image processor 105 may further generate information to enable the display to be steered to maintain the 3D object presentation.
- the hinge may twist to enable the user to move round the apparatus but maintain the display of the 3D image.
- the camera processor 101 may determine the inter-pupil distance and pass this information to the image processor.
- This information may furthermore in some embodiments permit the image processor 105 to send information to the 3D display driver to permit optimisation of the parallax barrier so that the left and right eye images and the image filtering operation are better optimised for the specific user. For example, as children have smaller inter-pupil distances, the image processor may optimise the experience for children or adults, as settings used for adults would produce poor 3D imaging results for children and vice versa.
- with respect to FIGS. 5 a and 5 b, further embodiments of the application are shown whereby further interaction between the camera 19, 3D display 12 a and 2D display 12 b is shown in further detail.
- the operation of the 3D display element is in some embodiments the same as described above, whereby the camera monitors the user's head/eye position and, together with the knowledge of the object to be displayed, generates left and right eye images to generate the 3D image of the object from the viewpoint of the user.
- Such images may be further improved by the addition and implementation of at least one 2D display 12 b as is described hereafter.
- the camera 19 in some embodiments is configured to capture image frame data and pass the image frame data to the camera processor 101 .
- the operation of the camera capturing image frame data is shown in FIG. 5 b by step 351 . It would be understood that the same image data captured and processed in the 3D interaction embodiments as described above may also be used in the 2D interaction embodiments described hereafter. Where similar or the same process is described then the same reference number is reused.
- the camera processor 101 as described previously may also process the image frame data and from this data determine the head position and orientation and in some embodiments the eye position of the user.
- the determination of the head position/orientation and eye positions is by any suitable head modelling process such as described previously.
- this eye information may comprise determining the eye level of the user relative to the display.
- This head and eye information may then be passed to the image processor 105 .
- the determination of the head location and eye position is also shown in FIG. 5 b by step 353 .
- the image model processor 107 may on determining the object to be displayed retrieve or generate the object model information and furthermore pass this object model information to the image processor 105 .
- the image to be displayed may be a cuboid with non-textured, flat sides.
- the object model may also comprise lighting information, surface reflectivity information, object location, and furthermore ground information, such as the texture and reflectivity of the surface above which the object is ‘floating’.
- the image model processor 107 may provide this information to the image processor 105 .
- the generation of the object model information is shown in FIG. 5 b by step 354 .
- the image processor 105 having received the head/eye information and the object model information (for example the dimensions and orientation of the object or objects to be displayed) may then in some embodiments determine the area or surface of the 3D object which would be viewed by the user on the 2D surface beneath the 3D object. In other words the image processor may determine a projection surface of the 3D object onto the 2D display. In some embodiments the surface which may be determined may be a shadow projected by the 3D object onto the ground where the ground is not reflective and the object light source is above the object. In some other embodiments the projected area or surface which may be determined may be a reflection of the 3D object seen in the ground as represented by the 2D display 12 b.
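- As one illustration of how such a projection surface could be computed (a sketch under assumptions, not the specific method of this description), the code below projects the object's vertices along an assumed light direction onto the plane of the 2D display to obtain a shadow footprint, and mirrors them across that plane to obtain a reflection; the y = 0 plane convention and the cuboid vertices are assumptions.

```python
import numpy as np

def project_shadow(vertices, light_dir, plane_y=0.0):
    """Project vertices onto the plane y == plane_y along light_dir (a shadow).

    vertices: Nx3 array of object vertices 'floating' above the 2D display.
    light_dir: direction the light travels; its y component must be negative
    so the rays actually reach the display plane below the object.
    """
    vertices = np.asarray(vertices, dtype=float)
    d = np.asarray(light_dir, dtype=float)
    if d[1] >= 0:
        raise ValueError("light must travel downwards towards the display plane")
    t = (plane_y - vertices[:, 1]) / d[1]   # per-vertex ray parameter
    return vertices + t[:, None] * d        # intersection points on the display plane

def reflect_across_plane(vertices, plane_y=0.0):
    """Mirror vertices across the display plane to build a reflection image."""
    reflected = np.asarray(vertices, dtype=float).copy()
    reflected[:, 1] = 2.0 * plane_y - reflected[:, 1]
    return reflected

# A small cuboid floating 5-25 cm above the display plane
cuboid = np.array([[x, y, z] for x in (-0.1, 0.1) for y in (0.05, 0.25) for z in (-0.1, 0.1)])
shadow_2d = project_shadow(cuboid, light_dir=[0.3, -1.0, 0.1])[:, [0, 2]]  # keep in-plane coords
reflection = reflect_across_plane(cuboid)
```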
- The operation of determining the image model surface to be displayed by the 2D display is shown in FIG. 5 b in step 655.
- the image processor 105 may then output to the 2D display driver 111 the data for generating an image to be output to the 2D display 12 b and thus to generate the image of the surface (such as a shadow or reflection on the ground) from the object appearing to be floating in front of the 3D display.
- the operation of displaying the image from the 2D display driver 111 is shown in FIG. 5 b by step 657 .
- the object to be displayed is a cuboid 608 similar to that shown in the example presented above.
- the camera processor 101 having received the image frame data from the camera 19 may in some embodiments determine a first eye level 601 and pass this information to the image processor.
- the image processor having received the object model information and the ground information may determine the first surface 603 which would be viewed from the first eye level 601 such as a shadow and/or reflection.
- the image processor 105 may determine this surface using a virtual image modelling process. This surface information may then be passed to the 2D display driver 111 to render the image for the 2D display 12 b.
- the camera may take further images to be processed by the camera processor 101 .
- the camera processor may thus supply updated information to the image processor 105 such as a change in the eye level to a second eye level 602 .
- This change in eye level to the second eye level would then, dependent on the object and ground model information, be processed by the image processor to generate an updated surface 604 which would be viewed from the second eye level 602.
- This surface information may thus be passed to the 2D display driver 111 to render the image for the 2D display 12 b.
- the virtual image 603 , 604 may then be output to the display driver 111 to be output upon the 2D display 12 b.
- the implementation of the camera tracking the eye level and the display of the reflection and/or shadow images on the 2D display together with the 3D display images would in embodiments present a more immersive experience as cues such as depth would be more easily presented.
- a method which comprises detecting the position and orientation of a user viewpoint with respect to an auto-stereoscopic display, determining a projection surface viewable from a user viewpoint of at least one three dimensional object on a second display; and generating a projection image for display on the second display dependent on the projection surface viewable from the user viewpoint.
- the method may further comprise determining the projection surface viewable from the user viewpoint dependent on at least one of at least one three dimensional object lighting angle and position, a second display surface model, and at least one three dimensional object surface model.
- the projection surface in such embodiments may comprise at least one of a partial shadow of the at least one three dimensional object, a total shadow of the at least one three dimensional object, and a reflection of the at least one three dimensional object.
- Embodiments of the application may further improve on conventional 3D display technology by further providing image interactivity.
- With respect to FIGS. 6 a, 6 b and 6 c, the operation of user-image interactivity is described in some embodiments where the user interface 14, user interface processor 103, image model processor 107, and image processor 105 may produce an improved 3D object imaging experience.
- The operation of the user interface 14, for example in some embodiments implementing a ‘touch’ interface on the 2D display 12 b, can be shown in FIG. 6 a.
- the apparatus 10 shown is similar to the apparatus shown in previous figures where the apparatus 10 comprises the 3D display 12 a, the 2D display 12 b and camera 19 .
- the 2D display 12 b in these embodiments further comprises a capacitive user interface 14 .
- the capacitive user interface 14 may be configured to detect the presence or ‘touch’ of an object relative to the display.
- the user interface may detect the presence of a finger and furthermore generate information which may indicate that the finger tip is at a position over the 2D display 12 b with a relative X axis displacement 401 and a relative Y axis displacement 403, both of which lie within the plane of the 2D display 12 b. Furthermore in some embodiments the user interface may detect the presence of the fingertip at a distance from the surface of the 2D display 12 b, or in other words determine the fingertip having a relative Z axis displacement 405 above the 2D display 12 b. The user interface 14 may further output the sensed values, such as capacitance array values, to the user interface processor 103.
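- One plausible way, shown only as a hedged sketch rather than the implementation described here, for the user interface processor to turn raw capacitance readings into the X, Y and Z displacements above is a weighted centroid over the sensor grid, with the peak amplitude mapped to an approximate hover height; the calibration constants are placeholders.

```python
import numpy as np

def estimate_hover(cap_grid, pitch_mm=5.0, z_scale=50.0, baseline=2.0):
    """Estimate a fingertip position over a capacitive sensor grid.

    cap_grid: 2D array of capacitance readings (rows = Y electrodes,
    columns = X electrodes).  Returns (x_mm, y_mm, z_mm): the weighted
    centroid gives the in-plane position and the peak amplitude above the
    baseline is mapped to an approximate height with an assumed inverse law.
    """
    cap = np.clip(np.asarray(cap_grid, dtype=float) - baseline, 0.0, None)
    total = cap.sum()
    if total <= 0:
        return None                      # nothing detected over the grid
    ys, xs = np.indices(cap.shape)
    x_mm = (xs * cap).sum() / total * pitch_mm
    y_mm = (ys * cap).sum() / total * pitch_mm
    z_mm = z_scale / cap.max()           # stronger signal => finger closer to the surface
    return x_mm, y_mm, z_mm

grid = np.zeros((10, 16))
grid[4:6, 7:9] = 8.0                     # synthetic finger signature
print(estimate_hover(grid))
```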
- the capture of user interface input data is shown in FIG. 6 c by step 451 .
- the user interface processor 103 may in some embodiments receive the user interface data and determine the position of the detected object or tip of the detected object (for example the user's finger). It is understood that in some embodiments the finger may be replaced by any suitable pointing device such as a stylus.
- although the above and following operations are described with respect to the data or information received from the user interface 14 and the user interface processor 103, it would be appreciated that similar information may be generated from the camera 19 and camera processor 101, where in some embodiments the camera is configured to detect the presence of the user attempting to interface with the apparatus.
- the difference in camera images may be used to determine the x-axis, y-axis, and z-axis displacement for a detected object as measured from the 2D display surface.
- the user interface processor 103 may output the position of the detected object to the image processor 105.
- The determination of the presence of the object is shown in FIG. 6 c by step 453.
- the detection of the presence of the object and the determination of the object in three dimensional space relative to the apparatus may be used in embodiments of the application to interact with the modelled object image.
- the detection of the finger ‘touching’ the displayed object may be used to modify the displayed object.
- the image processor 105 may in some embodiments, as described previously, also receive the image model data from the image model processor 107 .
- the image model processor 107 may contain data such as the location and orientation and shape of the object being displayed. The generation and supply of the model object information is shown in FIG. 6 c by step 454 .
- This information may be passed to the image processor 105 where the image processor 105 in some embodiments may be configured to correlate the detected object position with the modelled object as displayed. For example the image processor 105 may detect that the modelled object location and the detected object are the same, or in other words the detected object ‘touches’ the displayed object. In some embodiments the type of touch may further determine how the displayed object is to be modified. For example the ‘touch’ may indicate a single press similar to the single click of a mouse or touchpad button and thus activate the function associated with the surface touched. In other embodiments the ‘touch’ may indicate a drag or impulse operation and cause the object to rotate or move according to the ‘touch’.
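- A minimal sketch of the correlation step just described follows: the displayed cuboid is treated as an axis-aligned box in the display coordinate frame and the detected fingertip position is tested for intersection with it; the coordinates, tolerance and function name are illustrative assumptions.

```python
import numpy as np

def fingertip_touches_box(fingertip, box_min, box_max, tolerance=0.005):
    """Return True when the detected fingertip intersects the displayed cuboid.

    fingertip, box_min, box_max: 3D points (metres) in the same coordinate
    frame as the rendered object; tolerance expands the box slightly so a
    near miss still registers as a touch.
    """
    p = np.asarray(fingertip, dtype=float)
    lo = np.asarray(box_min, dtype=float) - tolerance
    hi = np.asarray(box_max, dtype=float) + tolerance
    return bool(np.all(p >= lo) and np.all(p <= hi))

# Cuboid 'floating' 5 cm above the 2D display; fingertip reported by the touch/camera pipeline
touched = fingertip_touches_box([0.02, 0.08, 0.01],
                                box_min=[-0.05, 0.05, -0.05],
                                box_max=[0.05, 0.15, 0.05])
print(touched)
```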
- the determination of the image part which is ‘touched’ and the action carried out as a result of the ‘touch’ is shown in FIG. 6 c by step 455.
- the image processor 105 may in some embodiments process the interaction between the detected object and the three dimensional object. For example in some embodiments the image processor 105 may output to the 3D display driver 109 data indicating the location of the touch by passing information to display the object in a different colour at the point of touch. In other embodiments the image processor may output the interaction of the touch as a movement, either a displacement or a rotation, and so give the effect that the user has moved the object. In other embodiments the image processor may process the interaction in such a way as to implement a deformation in the object at the point of contact. In such embodiments the image processor 105 may thus apply a suitable physics model to the interaction of the detected object and the displayed object and output the result of such interaction to the 3D display driver 109, thus enabling the rendering of the modified displayed object.
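- The following sketch illustrates, under deliberately simplified assumptions, how the result of a press, drag or deformation interaction might be applied to the modelled object before re-rendering; it is not a full physics model, and all function names and constants are assumed for illustration.

    # Illustrative sketch of applying a touch interaction to the displayed model
    # before re-rendering (a deliberately simplified stand-in for a physics model).

    def apply_press(vertices, contact, highlight):
        """Mark the touched geometry for re-colouring at the point of touch."""
        return {"vertices": vertices, "highlight": highlight, "contact": contact}

    def apply_drag(vertices, displacement):
        """Translate every vertex by the drag displacement (dx, dy, dz)."""
        dx, dy, dz = displacement
        return [(x + dx, y + dy, z + dz) for x, y, z in vertices]

    def apply_dent(vertices, contact, depth=1.0, radius=5.0):
        """Push vertices near the contact point inwards to imitate a deformation."""
        cx, cy, cz = contact
        dented = []
        for x, y, z in vertices:
            dist = ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2) ** 0.5
            dented.append((x, y, z - depth) if dist < radius else (x, y, z))
        return dented

    if __name__ == "__main__":
        square = [(0, 0, 10), (10, 0, 10), (10, 10, 10), (0, 10, 10)]
        print(apply_drag(square, (5, 0, 0)))          # whole object moved
        print(apply_dent(square, (0, 0, 10)))         # one corner pressed in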
- the interaction between the 3D auto-stereoscopic image, in other words the 3D object, and the detected object (such as a fingertip or virtual fingertip) may be performed independently of the detection of the head or eye and of the manipulation of the 3D object based on the detected head or eye position. Thus such embodiments, for example embodiments implemented on devices without camera modules or without sufficient processing capacity for head detection, may still be implemented, giving the user an improved image interaction experience.
- the display of the touched part or the display of the interaction operation of the touch can be shown in FIG. 6 c by step 457 .
- the implementation of image interaction as sensed by the user interface 14 or camera 19, detected by the user interface processor 103 or camera processor 101, and applied by the image processor 105 may improve the interactivity of the 3D displayed object.
- the user's finger is shown being tracked over the 3D image and its position indicated on the 3D image 400.
- the further implementation of user interface elements on the 2D display in some further embodiments is also shown.
- the 2D display 12 b may display images such as user interface buttons which are capable of being “touched” by the user.
- the user interface elements 501 may in some embodiments also be implemented as additional three dimensional objects and as such displayed using the three dimensional display 12 a.
- Such an embodiment is shown in FIG. 7, where 2D user interface buttons 501 are shown displayed on the two dimensional display 12 b and are capable of being touched by the user, and a further 3D user interface object 503 is shown with which the user interface may detect interaction and perform appropriate actions and display changes.
- there may be a method comprising detecting an object position with respect to either the auto-stereoscopic display and/or a second display and determining an interaction by the detected object.
- detecting an object position may thus comprise at least one of detecting a capacitance value in a capacitance sensor of the object and detecting a visual image of the object.
- determining an interaction by the detected object may in some embodiments comprise determining an intersection between the detected image and a displayed image.
- the displayed image may in such embodiments comprise the virtual image of the at least one three dimensional object or a two dimensional image displayed on the second display.
- embodiments of the application permit the two displays to produce much more realistic or believable three dimensional image projection. Furthermore the interaction between the detection of the eye and/or head position/orientation and the object to be projected permits more efficient interaction with the displayed object or objects. Furthermore in further embodiments as described above, the implementation of user interface elements on the display permits more interactive three dimensional display configurations and experiences.
- user equipment is intended to cover any suitable type of wireless user equipment, such as mobile telephones, portable data processing devices or portable web browsers.
- user equipment, universal serial bus (USB) sticks, and modem data cards may comprise apparatus such as the apparatus described in embodiments above.
- the various embodiments of the invention may be implemented in hardware or special purpose circuits, software, logic or any combination thereof.
- some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto.
- While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
- some embodiments may be implemented as apparatus comprising: a sensor configured to detect the position and orientation of a user viewpoint with respect to an auto-stereoscopic display; a processor configured to determine a surface viewable from the user viewpoint of at least one three dimensional object; and an image generator configured to generate a left and right eye image for display on the auto-stereoscopic display dependent on the surface viewable from the user viewpoint.
- the sensor may in some embodiments comprise a camera configured to capture at least one image of the user and a face recognizer configured to determine the position and orientation of the user eyes with respect to the auto-stereoscopic display.
- the processor in some embodiments may determine a model of the at least one three dimensional object; determine the distance and orientation from the at least one three dimensional object model to the user viewpoint; and generate a surface of the at least one three dimensional object dependent on the model of the at least one three dimensional object and the distance and orientation from the at least one three dimensional object model to the user viewpoint.
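- As an illustrative sketch of one way a surface viewable from the user viewpoint could be determined, the test below marks a face of the modelled object as viewable when its outward normal points back towards the detected viewpoint (a simple back-face test); the data layout and example values are assumptions made for this sketch.

    # Illustrative sketch of determining which faces of a modelled object can be
    # seen from the detected user viewpoint (simple back-face test).

    def visible_faces(faces, viewpoint):
        """faces: list of (centre, normal) tuples; viewpoint: (x, y, z)."""
        viewable = []
        for centre, normal in faces:
            to_viewer = tuple(v - c for v, c in zip(viewpoint, centre))
            facing = sum(n * t for n, t in zip(normal, to_viewer))
            if facing > 0:                 # the face points back towards the viewer
                viewable.append((centre, normal))
        return viewable

    if __name__ == "__main__":
        # A cuboid viewed face-on shows only its front face; a viewpoint moved to
        # the right also reveals the +x side face.
        cuboid = [((0, 0, 1), (0, 0, 1)),    # front
                  ((0, 0, -1), (0, 0, -1)),  # back
                  ((1, 0, 0), (1, 0, 0)),    # right side
                  ((-1, 0, 0), (-1, 0, 0))]  # left side
        print(len(visible_faces(cuboid, (0, 0, 10))))   # -> 1 (front only)
        print(len(visible_faces(cuboid, (8, 0, 10))))   # -> 2 (front and right side)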
- the processor may in some embodiments further detect an inter-pupil distance of a user; and provide control information to a parallax barrier dependent on at least one of: the position of the user viewpoint; the orientation of the user viewpoint; and the inter-pupil distance of the user.
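- A simplified sketch of deriving such parallax barrier control information is shown below, using the textbook barrier geometry in which the slit pitch follows from the pixel pitch and the inter-pupil distance, and the barrier is shifted laterally as the viewpoint moves off-axis; every constant and name here is an assumed example rather than a disclosed value.

    # Illustrative sketch of parallax barrier control derived from the detected
    # viewpoint and inter-pupil distance. The geometry is deliberately simplified.

    def barrier_settings(viewpoint_x_mm, viewing_dist_mm, inter_pupil_mm,
                         pixel_pitch_mm=0.1, barrier_gap_mm=1.0):
        """Return (slit_pitch_mm, lateral_offset_mm) for a switchable barrier."""
        # Slit pitch slightly under twice the pixel pitch keeps the left and right
        # pixel columns separated for the given eye separation.
        slit_pitch = 2 * pixel_pitch_mm * inter_pupil_mm / (inter_pupil_mm + pixel_pitch_mm)
        # Shift the barrier sideways as the head moves off-axis so each eye keeps
        # seeing its own pixel columns.
        lateral_offset = viewpoint_x_mm * barrier_gap_mm / viewing_dist_mm
        return slit_pitch, lateral_offset

    if __name__ == "__main__":
        # Adult viewer 30 mm to the right of centre, 400 mm from the display.
        print(barrier_settings(30.0, 400.0, inter_pupil_mm=63.0))
        # A child (smaller inter-pupil distance) needs a slightly different pitch.
        print(barrier_settings(30.0, 400.0, inter_pupil_mm=55.0))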
- the processor may in some embodiments further determine a projection surface viewable from the user viewpoint of at least one three dimensional object on a second display; and the image generator may generate a projection image for display on the second display dependent on the projection surface viewable from the user viewpoint.
- the processor may in some embodiments determine the projection surface viewable from the user viewpoint dependent on at least one of: at least one three dimensional object lighting angle and position; a second display surface model; and at least one three dimensional object surface model.
- the projection surface may comprise at least one of: a partial shadow of the at least one three dimensional object; a total shadow of the at least one three dimensional object; and a reflection of the at least one three dimensional object.
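- The sketch below illustrates, under simplified assumptions, how such a projection surface might be generated: a shadow point is obtained by projecting a vertex onto the plane of the second display along the light direction, and a reflection point by mirroring the vertex below that plane and intersecting the viewer's line of sight with it; the helper names and values are illustrative only.

    # Illustrative sketch of shadow and reflection points on the second display,
    # taken as the plane z = 0. All values are assumed examples.

    def shadow_point(vertex, light_dir):
        """Project one vertex onto z = 0 along the light direction."""
        x, y, z = vertex
        lx, ly, lz = light_dir
        t = -z / lz                      # parameter where the ray reaches the plane
        return (x + t * lx, y + t * ly, 0.0)

    def reflection_point(vertex, viewpoint):
        """Intersect the viewer's line of sight to the mirrored vertex with z = 0."""
        vx, vy, vz = vertex[0], vertex[1], -vertex[2]   # mirror below the plane
        ex, ey, ez = viewpoint
        t = ez / (ez - vz)               # from the eye towards the mirrored vertex
        return (ex + t * (vx - ex), ey + t * (vy - ey), 0.0)

    if __name__ == "__main__":
        corner = (10.0, 10.0, 20.0)                      # a corner of the floating object
        print(shadow_point(corner, light_dir=(0.0, 0.0, -1.0)))     # directly beneath
        print(reflection_point(corner, viewpoint=(0.0, -200.0, 300.0)))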
- the sensor may be further configured to detect an object position with respect to either the auto-stereoscopic display and/or a second display; and determine an interaction by the detected object.
- the sensor in some embodiments may detect an object position by detecting at least one of a capacitance value in a capacitance sensor of the object; and a visual image of the object.
- the processor may further determine an intersection between the detected image and a displayed image.
- the displayed image may comprise the virtual image of the at least one three dimensional object.
- the displayed image may comprise a two dimensional image displayed on the second display.
- the embodiments of this invention may be implemented by computer software executable by a data processor of the mobile device, such as in the processor entity, or by hardware, or by a combination of software and hardware. Further in this regard it should be noted that any blocks of the logic flow as in the Figures may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions.
- the software may be stored on such physical media as memory chips, or memory blocks implemented within the processor, magnetic media such as hard disk or floppy disks, and optical media such as for example DVD and the data variants thereof, and CD and the data variants thereof.
- some embodiments may be implemented by a computer-readable medium encoded with instructions that, when executed by a computer perform: detect the position and orientation of a user viewpoint with respect to an auto-stereoscopic display; determine a surface viewable from the user viewpoint of at least one three dimensional object; and generate a left and right eye image for display on the auto-stereoscopic display dependent on the surface viewable from the user viewpoint.
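- A minimal end-to-end sketch of the chain such instructions describe is given below: the tracked viewpoint is offset by half the inter-pupil distance to give a left and a right eye position, and one image is generated per eye; the renderer here is an assumed placeholder rather than any particular implementation.

    # Illustrative sketch: from a tracked viewpoint to a left/right eye image pair.
    # The render_view helper is an assumed stand-in for a real renderer.

    def eye_positions(viewpoint, right_axis, inter_pupil_mm=63.0):
        """Offset the tracked viewpoint by half the inter-pupil distance per eye."""
        half = inter_pupil_mm / 2.0
        left = tuple(v - half * a for v, a in zip(viewpoint, right_axis))
        right = tuple(v + half * a for v, a in zip(viewpoint, right_axis))
        return left, right

    def render_view(eye, scene):
        """Placeholder renderer: record which eye position the scene was drawn from."""
        return {"eye": eye, "objects": list(scene)}

    def generate_stereo_pair(viewpoint, scene, right_axis=(1.0, 0.0, 0.0)):
        left_eye, right_eye = eye_positions(viewpoint, right_axis)
        return render_view(left_eye, scene), render_view(right_eye, scene)

    if __name__ == "__main__":
        left_img, right_img = generate_stereo_pair((0.0, 0.0, 400.0), ["cuboid"])
        print(left_img["eye"], right_img["eye"])   # (-31.5, 0.0, 400.0) (31.5, 0.0, 400.0)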
- the memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory.
- the data processors may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASIC), gate level circuits and processors based on multi-core processor architecture, as non-limiting examples.
- Embodiments of the inventions may be practiced in various components such as integrated circuit modules.
- the design of integrated circuits is by and large a highly automated process.
- Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.
- Programs such as those provided by Synopsys, Inc. of Mountain View, Calif. and Cadence Design, of San Jose, Calif. automatically route conductors and locate components on a semiconductor chip using well established rules of design as well as libraries of pre-stored design modules.
- the resultant design in a standardized electronic format (e.g., Opus, GDSII, or the like) may be transmitted to a semiconductor fabrication facility or “fab” for fabrication.
- circuitry may refer to all of the following: (a) hardware-only circuit implementations (such as implementations in only analogue and/or digital circuitry) and (b) to combinations of circuits and software (and/or firmware), such as and where applicable: (i) to a combination of processor(s) or (ii) to portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions) and (c) to circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
- circuitry would also cover an implementation of merely a processor (or multiple processors) or portion of a processor and its (or their) accompanying software and/or firmware.
- circuitry would also cover, for example and if applicable to the particular claim element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, or other network device.
- processor and memory may comprise but are not limited to in this application: (1) one or more microprocessors, (2) one or more processor(s) with accompanying digital signal processor(s), (3) one or more processor(s) without accompanying digital signal processor(s), (4) one or more special-purpose computer chips, (5) one or more field-programmable gate arrays (FPGAs), (6) one or more controllers, (7) one or more application-specific integrated circuits (ASICs), or detector(s), processor(s) (including dual-core and multiple-core processors), digital signal processor(s), controller(s), receiver, transmitter, encoder, decoder, memory (and memories), software, firmware, RAM, ROM, display, user interface, display circuitry, user interface circuitry, user interface software, display software, circuit(s), antenna, antenna circuitry, and circuitry.
Abstract
Description
- The present application relates to a method and apparatus for auto-stereoscopic displays. In some embodiments the method and apparatus relate to auto-stereoscopic image displays and in particular, but not exclusively, some further embodiments relate to auto-stereoscopic displays for mobile apparatus.
- Stereoscopic image displays have the potential to significantly improve the user's experience of operating and interacting with modern electronic devices. So-called 3D display technology or stereoscopic display technology generates images to the left and right eye separately in order to fool the user into believing that they are viewing a three dimensional image. Traditional stereoscopic displays present images for the left and right eye and then use filters placed over each eye so that the left eye only views the image for the left eye and the right eye only views the image for the right eye. An example of such technology is polarization filtering where images for the left and right eye are modulated by a different polarisation. This technology is currently favoured in 3D cinemas. Such technology, although capable of presenting 3D images of objects, requires each user to be equipped with the required filters, typically in the form of a pair of over-spectacles, in order to view the image.
- Auto-stereoscopic displays which do not require the user to wear any device to filter the left and right images but instead filters or directs the images directly to the correct eye, are rapidly becoming commercially realisable. These auto-stereoscopic devices remove a significant barrier to the use of 3D displays for everyday use. Such displays use a range of optical elements in combination with a display to focus or direct the left and right view to the left and right eye respectively.
- However such auto-stereoscopic systems can present images which appear distracting rather than immersive and can often lack the visual cues which the human visual system would expect such as reflections and shadowing.
- Furthermore interaction with the image displayed is typically limited. Auto-stereoscopic displays typically have a limited range of viewing angles before the auto-stereoscopic view becomes poor. For example the location of the user with reference to the display, and the movement of the user with reference to the display, will typically cause the user to experience a non-optimal image once the viewing angle range for displaying a three dimensional image is exceeded.
- Thus interaction with the image displayed on the 3D display and the illusion of 3D image presentation with regards to interaction is typically poorly implemented.
- This application proceeds from the consideration that whilst auto-stereoscopic display panels can assist in display representations it may be advantageous to improve on such displays by use of additional displays and control interfaces.
- Embodiments of the present application aim to address the above problems.
- There is provided according to a first aspect of the invention a method comprising: detecting the position and orientation of a user viewpoint with respect to an auto-stereoscopic display; determining a surface viewable from the user viewpoint of at least one three dimensional object; and generating a left and right eye image for display on the auto-stereoscopic display dependent on the surface viewable from the user viewpoint.
- Detecting the position and orientation of a user may comprise: capturing at least one image of the user; determining the position and orientation of the user eyes with respect to the auto-stereoscopic display.
- Capturing at least one image of the user may comprise capturing at least one image of the user from each of at least two cameras, and detecting the position and orientation may comprise comparing the difference between the at least one image of the user from each of the at least two cameras.
- Determining a surface viewable from the user viewpoint may comprise: determining a model of the at least one three dimensional object; determining the distance and orientation from the at least one three dimensional object model to the user viewpoint; and generating a surface of the at least one three dimensional object dependent on the model of the at least one three dimensional object and the distance and orientation from the at least one three dimensional object model to the user viewpoint.
- The method may further comprise: detecting an inter-pupil distance of a user; and controlling a parallax barrier dependent on at least one of: the position of the user viewpoint; the orientation of the user viewpoint; and the inter-pupil distance of the user.
- The method may further comprise: determining a projection surface viewable from the user viewpoint of at least one three dimensional object on a second display; and generating a projection image for display on the second display dependent on the projection surface viewable from the user viewpoint.
- The method may further comprise determining the projection surface viewable from the user viewpoint dependent on at least one of: at least one three dimensional object lighting angle and position; a second display surface model; and at least one three dimensional object surface model.
- The projection surface may comprise at least one of: a partial shadow of the at least one three dimensional object; a total shadow of the at least one three dimensional object; and a reflection of the at least one three dimensional object.
- The method may further comprise: detecting an object position with respect to either the auto-stereoscopic display and/or a second display; and determining an interaction by the detected object.
- Detecting an object position may comprise at least one of: detecting a capacitance value in a capacitance sensor of the object; and detecting a visual image of the object.
- Determining an interaction by the detected object may comprise determining an intersection between the detected image and a displayed image.
- The displayed image may comprise at least one of: the virtual image of the at least one three dimensional object; and a two dimensional image displayed on the second display.
- According to a second aspect of the invention there is provided an apparatus comprising at least one processor and at least one memory including computer program code the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: detecting the position and orientation of a user viewpoint with respect to an auto-stereoscopic display; determining a surface viewable from the user viewpoint of at least one three dimensional object; and generating a left and right eye image for display on the auto-stereoscopic display dependent on the surface viewable from the user viewpoint.
- Detecting the position and orientation of a user may cause the apparatus at least to perform: capturing at least one image of the user; determining the position and orientation of the user eyes with respect to the auto-stereoscopic display.
- Capturing at least one image of the user may cause the apparatus at least to perform capturing at least one image of the user from each of at least two cameras, and detecting the position and orientation may cause the apparatus at least to perform comparing the difference between the at least one image of the user from each of the at least two cameras.
- Determining a surface viewable from the user viewpoint may cause the apparatus at least to perform: determining a model of the at least one three dimensional object; determining the distance and orientation from the at least one three dimensional object model to the user viewpoint; and generating a surface of the at least one three dimensional object dependent on the model of the at least one three dimensional object and the distance and orientation from the at least one three dimensional object model to the user viewpoint.
- The computer program code configured to with the at least one processor may further cause the apparatus at least to perform: detecting an inter-pupil distance of a user; and controlling a parallax barrier dependent on at least one of: the position of the user viewpoint; the orientation of the user viewpoint; and the inter-pupil distance of the user.
- The computer program code configured to with the at least one processor may further cause the apparatus at least to perform: determining a projection surface viewable from the user viewpoint of at least one three dimensional object on a second display; and generating a projection image for display on the second display dependent on the projection surface viewable from the user viewpoint.
- The computer program code configured to with the at least one processor may further cause the apparatus at least to perform determining the projection surface viewable from the user viewpoint dependent on at least one of: at least one three dimensional object lighting angle and position; a second display surface model; and at least one three dimensional object surface model.
- The projection surface may comprise at least one of: a partial shadow of the at least one three dimensional object; a total shadow of the at least one three dimensional object; and a reflection of the at least one three dimensional object.
- The computer program code configured to with the at least one processor may further cause the apparatus at least to perform: detecting an object position with respect to either the auto-stereoscopic display and/or a second display; and determining an interaction by the detected object.
- Detecting an object position may cause the apparatus at least to perform: detecting a capacitance value in a capacitance sensor of the object; and detecting a visual image of the object.
- Determining an interaction by the detected object may cause the apparatus at least to perform determining an intersection between the detected image and a displayed image.
- The displayed image may comprise at least one of: the virtual image of the at least one three dimensional object; and a two dimensional image displayed on the second display.
- According to a third aspect of the invention there is provided an apparatus comprising: a sensor configured to detect the position and orientation of a user viewpoint with respect to an auto-stereoscopic display; a processor configured to determine a surface viewable from the user viewpoint of at least one three dimensional object; and an image generator configured to generate a left and right eye image for display on the auto-stereoscopic display dependent on the surface viewable from the user viewpoint.
- The sensor may comprise a camera configured to capture at least one image of the user and a face recognizer configured to determine the position and orientation of the user eyes with respect to the auto-stereoscopic display.
- There may be at least two cameras capturing at least one image of the user from each of the at least two cameras, and the face recognizer may compare the difference between the at least one image of the user from each of the at least two cameras.
- The processor may determine a model of the at least one three dimensional object; determine the distance and orientation from the at least one three dimensional object model to the user viewpoint; and generate a surface of the at least one three dimensional object dependent on the model of the at least one three dimensional object and the distance and orientation from the at least one three dimensional object model to the user viewpoint.
- The processor may further detect an inter-pupil distance of a user; and provide control information to a parallax barrier dependent on at least one of: the position of the user viewpoint; the orientation of the user viewpoint; and the inter-pupil distance of the user.
- The processor may further determine a projection surface viewable from the user viewpoint of at least one three dimensional object on a second display; and the image generator may generate a projection image for display on the second display dependent on the projection surface viewable from the user viewpoint.
- The processor may determine the projection surface viewable from the user viewpoint dependent on at least one of: at least one three dimensional object lighting angle and position; a second display surface model; and at least one three dimensional object surface model.
- The projection surface may comprise at least one of: a partial shadow of the at least one three dimensional object; a total shadow of the at least one three dimensional object; and a reflection of the at least one three dimensional object.
- The sensor may be further configured to detect an object position with respect to either the auto-stereoscopic display and/or a second display; and determine an interaction by the detected object.
- The sensor may detect an object position by detecting at least one of a capacitance value in a capacitance sensor of the object; and a visual image of the object.
- The processor may further determine an intersection between the detected image and a displayed image.
- The displayed image may comprise the virtual image of the at least one three dimensional object.
- The displayed image may comprise a two dimensional image displayed on the second display.
- According to a fourth aspect of the invention there is provided a computer-readable medium encoded with instructions that, when executed by a computer perform: detect the position and orientation of a user viewpoint with respect to an auto-stereoscopic display; determine a surface viewable from the user viewpoint of at least one three dimensional object; and generate a left and right eye image for display on the auto-stereoscopic display dependent on the surface viewable from the user viewpoint.
- According to a fifth aspect of the invention there is provided an apparatus comprising: detecting means for detecting the position and orientation of a user viewpoint with respect to an auto-stereoscopic display; modelling means for determining a surface viewable from the user viewpoint of at least one three dimensional object; and image generating means for generating a left and right eye image for display on the auto-stereoscopic display dependent on the surface viewable from the user viewpoint.
- An electronic device may comprise apparatus as described above.
- A chipset may comprise apparatus as described above.
- For a better understanding of the present application and as to how the same may be carried into effect, reference will now be made by way of example to the accompanying drawings in which:
- FIG. 1 shows a schematic representation of an apparatus suitable for implementing some embodiments of the application;
- FIG. 2 shows a physical schematic representation of an apparatus as shown in FIG. 1 suitable for implementing some embodiments in further detail;
- FIG. 3 shows a schematic representation of the processing components in apparatus according to some embodiments of the application;
- FIG. 4 a shows a schematic representation of head position tracking in some embodiments of the application;
- FIG. 4 b shows a flow diagram of the processes carried out in head position tracking according to some embodiments;
- FIG. 5 a shows a schematic representation of reflection/shadow generation for images according to some embodiments of the application;
- FIG. 5 b shows a flow diagram of the process carried out in reflection/shadow generation according to some embodiments;
- FIG. 6 a shows a physical schematic representation of user interface interaction according to some embodiments of the application;
- FIG. 6 b shows a further physical schematic representation of user interface interaction according to some embodiments of the application;
- FIG. 6 c shows a flow diagram of the processes carried out by user interface interaction according to some embodiments of the application; and
- FIG. 7 shows a further physical schematic representation of user interface interaction according to some embodiments of the application.
- The application describes apparatus and methods to generate more convincing and interactive 3D image displays and thus create a more immersive and interactive user experience than may be generated with just one stereoscopic display unit. Thus as described hereafter in embodiments of the application the combination of two displays in a folding electronic device or apparatus with suitable sensors monitoring the user enables the apparatus to be configured such that the user experience of the 3D images displayed is greatly enhanced. Furthermore, in some embodiments the use of head tracking may be applied with the dual displays to enable further enhancement for the displayed images. In some further embodiments the configuration of user interface apparatus with the dual display configuration further enhances the perception and the interaction with the displayed images.
- The following describes apparatus and methods for the provision of improved auto-stereoscopic image display and interaction. In this regard reference is first made to
FIG. 1 which discloses a schematic block diagram of an exemplaryelectronic device 10 or apparatus on which embodiments of the application may be implemented. Theelectronic device 10 is configured to provide improved auto-stereoscopic image display and interaction. - The
electronic device 10 is in some embodiments a mobile terminal, mobile phone or user equipment for operation in a wireless communication system. In other embodiments, the electronic device is any suitable electronic device configured to provide a image display, such as for example a digital camera, a portable audio player (mp3 player), a portable video player (mp4 player). - The
electronic device 10 comprises anintegrated camera module 11, which is linked to aprocessor 15. Theprocessor 15 is further linked to a first display (display A) 12 a, and a second display (display B) 12 b. Theprocessor 15 is further linked to a transceiver (TX/RX) 13, to a user interface (UI) 14 and to amemory 16. In some embodiments, thecamera module 11 and/or the displays 12 a and 12 b are separate or separable from the electronic device and the processor receives signals from thecamera module 11 and/or transmits and signals to the displays 12 a and 12 b via thetransceiver 13 or another suitable interface. - The
processor 15 may be configured to executevarious program codes 17. The implementedprogram codes 17, in some embodiments, comprise image capture digital processing or configuration code, image displaying and image interaction code. The implementedprogram codes 17 in some embodiments further comprise additional code for further processing of images. The implementedprogram codes 17 may in some embodiments be stored for example in thememory 16 for retrieval by theprocessor 15 whenever needed. Thememory 15 in some embodiments may further provide asection 18 for storing data, for example data that has been processed in accordance with the application. - The
camera module 11 comprises acamera 19 having a lens for focusing an image on to a digital image capture means such as a charged coupled device (CCD). In other embodiments the digital image capture means may be any suitable image capturing device such as complementary metal oxide semiconductor (CMOS) image sensor. Thecamera module 11 further comprises aflash lamp 20 for illuminating an object before capturing an image of the object. Theflash lamp 20 is linked to thecamera processor 21. Thecamera 19 is also linked to acamera processor 21 for processing signals received from the camera. Thecamera processor 21 is linked tocamera memory 22 which may store program codes for thecamera processor 21 to execute when capturing an image. The implemented program codes (not shown) may in some embodiments be stored for example in thecamera memory 22 for retrieval by thecamera processor 21 whenever needed. In some embodiments thecamera processor 21 and thecamera memory 22 are implemented within theapparatus 10processor 15 andmemory 16 respectively. - A user of the
electronic device 10 may in some embodiments use thecamera module 11 for capturing images to be used in controlling the displays 12 a and 12 b as is described in later detail with respect toFIGS. 4 a, 4 b, 5 a, and 5 b and described later. In some other embodiments the camera module may capture images which may be transmitted to some other electronic device to be processed and provide the information required to interact with the display. For example where the processing power of the apparatus is not sufficient some of the processing operations may be implemented in further apparatus. Corresponding applications in some embodiments may be activated to this end by the user via theuser interface 14. - In some embodiments the camera module comprises at least two cameras, wherein each camera is located on the same side of device.
- In some embodiments of the application the
camera processor 21 or theprocessor 15 may be configured to receive image data from the camera or multiple cameras and further process the image data to identify an object placed in front of the camera. Such objects capable of being identified by thecamera processor 21 orprocessor 15 may be, for example, a face or head of the user, the eyes of the user, the finger or pointing device used by the user. Furthermore in some embodiments thecamera processor 21 or processor may determine or estimate the identified object's position in front of the device. The use of this identification process and position estimation process will be described in further detail later. - The
apparatus 10 may in embodiments be capable of implementing the processing techniques at least partially in hardware, in other words the processing carried out by theprocessor 15 andcamera processor 21 may be implemented at least partially in hardware without the need of software or firmware to operate the hardware. - The
user interface 14 in some embodiments enables a user to input commands to theelectronic device 10. Theuser interface 14 may in some embodiments be implemented as, for example, a keypad, a user operated button, switches, or by a ‘touchscreen’ interface implemented on one or both of the displays 12 a and 12 b. Furthermore in some embodiments of the application some of the user interface functionality may be implemented from thecamera 19 captured image information whereby the object identification and position information may be used to provide an input to the device. - The
transceiver 13 enables communication with other electronic devices, for example in some embodiments via a wireless communication network. - With respect to
FIG. 2 a physical representation as implemented in some embodiments of the apparatus shown inFIG. 1 is shown in further detail. Theapparatus 10 as shown inFIG. 2 may be implemented in a folding configuration. In such folding configuration embodiments as shown inFIG. 2 , the apparatus comprises afirst case part 203 and asecond case part 201, both of which are connected together by ahinge 205. In some embodiments thehinge 205 operates not only as a mechanical connection between thefirst case part 203 and thesecond case part 201 but also implements an electrical connection between components within thefirst case part 203 and thesecond case part 201. In other embodiments the hinge is only a mechanical connection with the electrical connection being implemented separately. In some embodiments the electrical connection may be wired and provided for example by a flexible ribbon cable or in some other embodiments may be provided by a wireless connection between the first and second case parts. - Although the following embodiments are described with respect to a folding hinge configuration joining the
first case part 203 and thesecond case part 201 some other embodiments of the application may implement a sliding connection whereby thefirst case part 203 slides over thesecond case part 201, in some such embodiments thefirst case part 203 is further configured to rotate and be angled with respect to thesecond case part 201 as the two parts slide so to produce a similar display orientation configuration. Such a sliding/rotating hinge may be similar to that seen currently on such user equipment as the Nokia N97. - Furthermore in some embodiments the
first case part 203 and thesecond case part 201 may be configured to operation with a twist and rotatable hinge similar to that employed by ‘tablet’ portable computers wherein the hinge connecting thefirst case part 203 and thesecond case part 201 can be folded and unfolded to open up the apparatus but may also be twisted to protect or display the inner surfaces when the hinge is refolded. - The
first case part 203 as shown inFIG. 2 is configured to implement on one surface the first display 12 a which may be an auto-stereoscopic display (also known as the 3D display) and acamera 19. The 3D display 12 a may be any suitable auto-stereoscopic display. For example the 3D display 12 a may in some embodiments be implemented as a liquid crystal display with parallax barriers. The principle of parallax barriers is known whereby an optical aperture is aligned with columns of the liquid crystal display (LCD) pixels in order that alternate columns of LCD pixels can be seen by the left and right eyes separately. In other words the parallax barriers operate so that in some embodiments the even columns of the LCD pixels may be viewed by the left eye and the odd columns of the LCD pixels may be viewed by the right eye. In some embodiments the parallax barriers may be, in some embodiments, controllable and capable of controlling the angle of image presentation. - In some other embodiments of the application the auto-stereoscopic (3D) display 12 a may be implemented as a lenticular optical configured liquid crystal display where cylindrical lenses are aligned with columns of LCD pixels to produce a similar effect to that of the parallax barriers. In other words, alternate columns of LCD pixels, which go to construct the alternate images for the left and right eyes are directed to the left and right eyes only. In further embodiments of the
application 3D display may be implemented using micropolarisers. In further other embodiments, the 3D display may comprise a holographic display to create real images using a diffuse light source. - In some embodiments the first display, the 3D display, 12 a may further comprise a touch input interface. In some embodiments the touch input interface may be a capacitive touch interface suitable for detecting either direct touch onto the display surface or detecting the capacitive effect between the display surface and a further object such as a finger in order to determine a position relative to the dimensions of the display surface and in some embodiments the position relative from the display surface.
- Although the first display 12 a as described above may be implemented as a LCD display it would be appreciated that any suitable display technology may be used to implement the display. For example in some embodiments the first display 12 a may be implemented using light emitting diodes (LED) or organic light emitting diode (OLED) configuration featuring apertures or lenses configured to generate two separate images directed at the left and right eye of the user.
- In some embodiments the first display 12 a may be configured to operate in a 2D mode of operation. For example by switching off the parallax barrier layer in the embodiments employing a parallax barrier to direct the alternate pixel layers to the left and right eyes all of the rows of the pixels will be visible to both eyes.
- The
second case part 201 as shown inFIG. 2 is configured to implement on one surface the second display 12 b which may be a 2D display 12 b. The 2D display may be implemented using any suitable display technology, for example liquid crystal display (LCD), light emitting diodes (LED), or organic LED display technologies. - In further embodiments the second display 12 b may comprise a second auto-stereoscopic (3D) display capable of being operable in a 2D mode—for example by switching off of the parallax barrier layer in a manner similar to that described above.
- In some embodiments the second or 2D display 12 b comprises a touch interface similar to the touch interface as described above with reference to the first display suitable for determining the position of an object both relative across the display surface and also relative from the display surface. Thus in some embodiments the touch interface comprises a capacitive touch interface suitable of determining the position of the object relative across and from the surface of the display by the determination of the capacitance of the display surface.
- The operation of the touch interface is described further with reference to
FIGS. 6 a, 6 b, and 6 c. - It is to be understood again that the structure of the
electronic device 10 could be supplemented and varied in many ways. - It would be appreciated that the schematic structures described in
FIGS. 4 a, 5 a, 6 a, 6 b, and 7 and the method steps inFIGS. 4 b, 5 b, and 6 c represent only a part of the operation of a complete system comprising some embodiments of the application as shown implemented in the electronic device shown inFIG. 1 . - A schematic representation of the
processor 15 configuration for some embodiments is shown in further detail with respect toFIG. 3 . Theprocessor 15 is shown to comprise a camera/head location processor 101 which is configured to receive camera image information and identify objects and their position/orientation from thecamera 19 image information. In some embodiments thecamera processor 101 is configured to determine from the camera image information the location and orientation of the head of the user. Thecamera processor 101 may in some embodiments receive the head image taken by thecamera 19 and project the 2D image data onto a surface such as a cylinder to provide a stabilised view of the face independent of the current orientation, position and scale of the surface model. From this projection the orientation and position may be estimated. However it would be understood that any suitable head tracking algorithm may be used, for example eye tracking and registration algorithms described within “fast, reliable head tracking under varying illumination: an approach based on registration of texture mapped 3D models” by Kaskia et at as described within the Computer Science Technical Report, May 1999. In some further embodiments where two images are captured by two cameras located or orientated differently the difference in between the images may be used to estimate the position and orientation of the face based on the knowledge of the differences between the cameras. - In some further embodiments of the application the
camera processor 101 may be configured to further determine the distance of separation of the eyes on the head of the user. - In some further embodiments the
camera processor 101 may be configured to identify a finger or other pointing object and furthermore the relative position of the finger (or pointing object) to the auto-stereoscopic display. - The camera/
head location processor 101 outputs an indication of the object identified and furthermore the position and/or orientation of the object identified relative to the display to theimage processor 105. - The
processor 15 may further comprise a touch processor/user interface processor 103 configured to receive input from theuser interface 14 such as the touch interface implemented within either of the first display 12 a or second display 12 b according to some embodiments. In further embodiments theuser interface 14 may comprise other input means such as mice, keyboard, keypad or any other suitable user input apparatus. Theuser interface processor 103 processes the input from the user to determine whether a relevant input has been received with respect to the image display. For example the user interface processor may receive the values of capacitance from a display surface touch interface and from the capacitance value determiner a location along a display surface which has been touched. In further embodiments theuser interface processor 103 may determine from the capacitance value the distance of the ‘touching’ object from the surface of the display. Thus the touch interface may not require direct contact to detect an object in sensing range of the touch interface. Furthermore although the touch is described above and hereafter with respect to a physical object detected, such as a user finger, it would be understood that in some embodiments a the user interface processor detects and processes data about a virtual object, such as an image of a pointer displayed by the display and which may be controlled by use of the mouse, trackball, keys or any suitable control means. - The
user interface processor 103 may output to theimage processor 105 an indication of what the object is and where the object (which as described above may be a physical object or a displayed ‘virtual’ object) used by the user is touching or pointing. - The
processor 15 may further comprise animage model processor 107. The image model processor is configured in some embodiments to store a series of image models which may be used by theimage processor 105. For example the image model processor may store the three dimensional image data models in order to create the three dimensional display image generated by theimage processor 105. Thus in some embodiments the image model processor stores a series of models of geometric shapes or meshes describing the elements which make up the environment displayed by the image processor. - The
processor 15 may further comprise animage processor 105 configured to receive the camera, user interface and image model information and generate the image to be displayed for both the two displays. Further examples are described hereafter. Theimage processor 105 outputs the images to be displayed on the displays to the3D display driver 109 and the2D display driver 111. - The 3D display driver 108 receives the display image data for the first display from the
image processor 105 and generates the data to be supplied to the first (3D) display 12 a in the form of the left eye image data and the right eye image data in order to project the three dimensional image. Furthermore in some embodiments the 3D display driver may dependent on the distance of the separation of the eyes and/or position of the users head control the parallax barrier (or similar left-right eye separation display control) in order to produce a more optimal 3D display image. - Similarly the 2D display driver receives the display image data from the
image processor 105 for the second display 12 b and generates the data for the second (2D) display 12 b. - With respect to
FIGS. 4 a and 4 b the interaction in some embodiments between thecamera processor 101,image processor 105,image model processor 107 and the3D display driver 109 is described in further detail. Theapparatus 10 is shown inFIG. 4 a being operated so to generate a 3D image using the 3D display 12 b. The 3D image generated in this example is a cuboid which appears to the user to be floating in front of the 3D display. - The
camera 19 in some embodiments is configured to capture image frame data and pass the image frame data to thecamera processor 101, The operation of the camera capturing image frame data is shown inFIG. 4 b bystep 351. - The
camera processor 101 as described previously may process the image frame data and from this data determine the head position and orientation and in some embodiments the eye position of the user. The determination of the head position/orientation and eye positions is by any suitable head modelling process such as described previously. This head and eye information may then be passed to theimage processor 105. The determination of the head location and eye position is shown inFIG. 4 b bystep 353. - The
image model processor 107 may on determining the object to be displayed retrieve or generate the object model information and furthermore pass this object model information to theimage processor 105. For example as shown inFIG. 4 a, the image to be displayed may be a cuboid with non-textured, flat sides. Thus theimage model processor 107 may provide the dimensions and the orientation of the cuboid to theimage processor 105. The generation of the object model information is shown inFIG. 4 b bystep 354. - The
image processor 105 having received the head/eye information and the object model information (for example the dimensions and orientation of the object or objects to be displayed) may then in some embodiments determine the area or surface of the 3D object which is possible to be viewed by the user. Theimage processor 105 may use the geometric relationship between the position and orientation of the users head and the position and orientation of the object to be displayed to thus determine the area which could be seen from the detected viewpoint. The operation of determination of the image model surface to be displayed is shown inFIG. 4 b instep 355. - The
image processor 105 may then output to the3D display driver 109 the data for generating a left and right eye image to be output to the 3D display 12 a and thus to generate the images to the left and right eyes to provide the image which appears to be floating in front of the 3D display. The operation of displaying the image from the3D display driver 109 is shown inFIG. 4 b bystep 357. - Thus in some embodiments and as shown in
FIG. 4 a, when the user is detected as looking directly face-on to the cuboid, the user is provided with left and right eye images which present only the images of the face of the cuboid. In other words theimage processor 105 outputs to the3D display driver 109 information allowing the3D display driver 109 to generate the left and right eye images to provide the 3D display the data to present only the face of the cuboid 301 a. However when the user moves to one side of the cuboid as can be seen by the right hand part ofFIG. 4 a, the motion of the head/eyes and thus the position displacement is determined by thecamera processor 101 which passes this information to theimage processor 105. The image processor may then in some embodiments determine that the user can from the detected viewpoint see at least two faces of the cuboid and thus the image processor sends information to the3D display driver 109 the information permitting the generation of left and right eye images to enable the 3D display 12 a to present the cuboid surface with two faces which can be seen by the user 301 b. - Although the above description only details the interaction of the camera and a single 3D object image, it would be understood that the same operations and apparatus may display more than a single object and furthermore the detection of the position of the head would permit different objects to be viewed at different angles. Thus for example where two objects are modelled from a first point of view the image processor may determine that one object completely obscures a second object, in other words that the second object is in the shadow of the first from the detected viewpoint, whereas when the
image processor 105 receives information that the head position has moved sufficiently theimage processor 105 may then in some embodiments pass the information to the3D display driver 109 information enabling the rendering of both objects. - Thus in these embodiments there is a greater level of interactivity with the display than can be found in conventional auto-stereoscopic displays which do not react to the motion of the user.
- Thus in some embodiments there may be a method comprising detecting the position and orientation of a user viewpoint with respect to an auto-stereoscopic display, determining a surface viewable from the user viewpoint of at least one three dimensional object, and generating a left and right eye image for display on the auto-stereoscopic display dependent on the surface viewable from the user viewpoint.
- Furthermore the method of such embodiments may detect the position and orientation of a user by capturing at least one image of the user and determining the position and orientation of the user eyes with respect to the auto-stereoscopic display.
- Also in some embodiments capturing at least one image of the user may comprise capturing at least one image of the user from each of at least two cameras, and detecting the position and orientation comprises comparing the difference between the at least one image of the user from each of the at least two cameras.
- In some embodiments determining a surface viewable from the user viewpoint comprises determining a model of the at least one three dimensional object, determining the distance and orientation from the at least one three dimensional object model to the user viewpoint and generating a surface of the at least one three dimensional object dependent on the model of the at least one three dimensional object and the distance and orientation from the at least one three dimensional object model to the user viewpoint.
- In some embodiments of the invention, the
image processor 105 furthermore determines when the location and position of the user has moved so that the auto-stereoscopic image filters (such as the parallax barriers) are not optimal. In such embodiments theimage processor 105 may generate a message to be displayed indicating that the user is moving out of the optimal viewing range. In other embodiments the image processor may generate information to be passed to the3D display driver 109 to change the filters. For example where the 3D display 12 b has controllable parallax barriers to filter the left and right eye images, the image processor may pass to the 3D display driver information permitting the shifting of the parallax barrier to maintain the three dimensional image. In some other embodiments, for example where the display itself is steerable theimage processor 105 may further generate information to enable the display to be steered to maintain the 3D object presentation. For example where the display is implemented on a twisting hinge the hinge may twist to enable the user to move round the apparatus but maintain the display of the 3D image. - Furthermore in some embodiments the
camera processor 101 may determine the intra-pupil distance and pass this information to the image processor. This information may furthermore in some embodiments permit theimage processor 105 to send information to the 3D display driver to permit optimisation of the parallax barrier so that the left and right eye images and the image filtering operation is more optimised for the specific user. For example as children have smaller inter-pupil distances the image processor may optimize the experience for children or adults as settings used for adults would produce poor 3D imaging results for children and vice versa. - With respect to
- With respect to FIGS. 5 a and 5 b, further embodiments of the application are shown, whereby the further interaction between the camera 19, the 3D display 12 a and the 2D display 12 b is shown in more detail. The operation of the 3D display element is in some embodiments the same as described above, whereby the camera monitors the user's head/eye position and, together with the knowledge of the object to be displayed, left and right eye images are generated to produce the 3D image of the object from the viewpoint of the user. Such images may be further improved by the addition and implementation of at least one 2D display 12 b, as described hereafter.
- As described in the embodiments describing the interaction of the camera and the 3D display 12 a, the camera 19 in some embodiments is configured to capture image frame data and pass the image frame data to the camera processor 101. The operation of the camera capturing image frame data is shown in FIG. 5 b by step 351. It would be understood that the same image data captured and processed in the 3D interaction embodiments described above may also be used in the 2D interaction embodiments described hereafter. Where a similar or the same process is described, the same reference number is reused.
- The camera processor 101, as described previously, may also process the image frame data and from this data determine the head position and orientation and, in some embodiments, the eye position of the user. The determination of the head position/orientation and eye positions is by any suitable head modelling process such as described previously. In some embodiments this eye information may comprise determining the eye level of the user relative to the display. This head and eye information may then be passed to the image processor 105. The determination of the head location and eye position is also shown in FIG. 5 b by step 353.
- The image model processor 107 may, on determining the object to be displayed, retrieve or generate the object model information and furthermore pass this object model information to the image processor 105. For example, as shown in FIG. 5 a, the image to be displayed may be a cuboid with non-textured, flat sides. Furthermore, in some embodiments the object model may also comprise lighting information, surface reflectivity information, object location, and furthermore ground information, such as the texture and reflectivity of the surface above which the object is 'floating'. Thus the image model processor 107 may provide this information to the image processor 105. The generation of the object model information is shown in FIG. 5 b by step 354.
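- The object model information passed from the image model processor 107 to the image processor 105 could be represented by a small data structure like the one below; the field names are purely illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class GroundModel:
    """Properties of the surface the object appears to float above (rendered on the 2D display)."""
    reflectivity: float = 0.0        # 0 = matte ground (shadows only), 1 = mirror-like ground
    texture: str = "flat_grey"

@dataclass
class ObjectModel:
    """Model information describing the three dimensional object to be displayed."""
    vertices: List[Vec3]             # object geometry relative to the object origin (metres)
    position: Vec3                   # object location relative to the displays
    light_position: Vec3             # light source used when casting the shadow
    surface_reflectivity: float
    ground: GroundModel = field(default_factory=GroundModel)

# Example: a 10 cm cuboid with flat, non-textured sides floating above the 2D display.
cuboid = ObjectModel(
    vertices=[(x, y, z) for x in (0.0, 0.1) for y in (0.0, 0.1) for z in (0.0, 0.1)],
    position=(0.0, 0.05, 0.0),
    light_position=(0.0, 0.5, 0.0),
    surface_reflectivity=0.2,
)
```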
- The image processor 105, having received the head/eye information and the object model information (for example the dimensions and orientation of the object or objects to be displayed), may then in some embodiments determine the area or surface of the 3D object which it is possible for the user to view within the 2D surface beneath the 3D object. In other words, the image processor may determine a projection surface of the 3D object onto the 2D display. In some embodiments the surface determined may be a shadow projected by the 3D object onto the ground, where the ground is not reflective and the object light source is above the object. In some other embodiments the projected area or surface may be a reflection of the 3D object seen in the ground as represented by the 2D display 12 b.
- The operation of determination of the image model surface to be displayed by the 2D display is shown in FIG. 5 b in step 655.
- The image processor 105 may then output to the 2D display driver 111 the data for generating an image to be output to the 2D display 12 b, and thus to generate the image of the surface (such as a shadow or reflection on the ground from the object appearing to be floating in front of the 3D display). The operation of displaying the image from the 2D display driver 111 is shown in FIG. 5 b by step 657.
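- A minimal way to obtain such a projection surface, assuming the ground coincides with the plane of the 2D display (taken here as y = 0) and a point light above the object, is to project every object vertex along the light ray onto that plane; everything below is an illustrative sketch rather than the application's actual projection step.

```python
import numpy as np

def project_shadow(vertices, light_pos, ground_y=0.0):
    """Project object vertices from a point light onto the plane y = ground_y.

    Returns the (x, z) footprint of the shadow to be drawn on the 2D display.
    """
    light = np.asarray(light_pos, dtype=float)
    footprint = []
    for vertex in np.asarray(vertices, dtype=float):
        direction = vertex - light
        if abs(direction[1]) < 1e-9:
            continue  # ray parallel to the ground plane, no intersection
        t = (ground_y - light[1]) / direction[1]
        hit = light + t * direction
        footprint.append((hit[0], hit[2]))
    return footprint

# A small cube floating 5 cm above the 2D display, lit from a point 0.5 m above it.
cube = [(x, 0.05 + y, z) for x in (-0.05, 0.05) for y in (0.0, 0.1) for z in (-0.05, 0.05)]
print(project_shadow(cube, light_pos=(0.0, 0.5, 0.0)))
```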
- For example, as shown on the left hand side of FIG. 5 a, the object to be displayed is a cuboid 608 similar to that shown in the example presented above. The camera processor 101, having received the image frame data from the camera 19, may in some embodiments determine a first eye level 601 and pass this information to the image processor. The image processor, having received the object model information and the ground information, may determine the first surface 603, such as a shadow and/or reflection, which would be viewed from the first eye level 601. The image processor 105 may determine this surface using a virtual image modelling process. This surface information may then be passed to the 2D display driver 111 to render the image for the 2D display 12 b.
- Furthermore, as shown by the right hand side of FIG. 5 a, the camera may take further images to be processed by the camera processor 101. The camera processor may thus supply updated information to the image processor 105, such as a change in the eye level to a second eye level 602. This change in eye level to the second eye level 602 would then, dependent on the object and ground model information, be processed by the image processor to generate an updated surface 604 which would be viewed from the second eye level 602. This surface information may likewise be passed to the 2D display driver 111 to render the image for the 2D display 12 b. The virtual image surfaces 603, 604 may thus be output via the 2D display driver 111 to be displayed on the 2D display 12 b.
- The implementation of the camera tracking the eye level, and the display of the reflection and/or shadow images on the 2D display together with the 3D display images, would in embodiments present a more immersive experience, as cues such as depth are more easily presented.
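- The reflection case, and its dependence on the detected eye level, can be sketched in the same spirit: each vertex is mirrored about the ground plane and the sight line from the eye to the mirrored vertex is intersected with that plane, so the footprint drawn on the 2D display shifts when the eye level changes from 601 to 602. The geometry below is an assumption-laden illustration, not the application's virtual image modelling process.

```python
import numpy as np

def reflection_footprint(vertices, eye_pos, ground_y=0.0):
    """Points on the ground plane where a viewer at eye_pos sees each vertex's mirror image."""
    eye = np.asarray(eye_pos, dtype=float)
    footprint = []
    for vertex in np.asarray(vertices, dtype=float):
        mirrored = np.array([vertex[0], 2.0 * ground_y - vertex[1], vertex[2]])
        direction = mirrored - eye
        if abs(direction[1]) < 1e-9:
            continue  # sight line never reaches the ground plane
        t = (ground_y - eye[1]) / direction[1]
        hit = eye + t * direction
        footprint.append((hit[0], hit[2]))
    return footprint

# A lower eye level and a higher eye level give different reflection surfaces (cf. 603 and 604).
cube = [(x, 0.05 + y, z) for x in (-0.05, 0.05) for y in (0.0, 0.1) for z in (-0.05, 0.05)]
surface_low = reflection_footprint(cube, eye_pos=(0.0, 0.30, 0.6))
surface_high = reflection_footprint(cube, eye_pos=(0.0, 0.45, 0.6))
```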
- Thus in embodiments of the application there may be a method which comprises detecting the position and orientation of a user viewpoint with respect to an auto-stereoscopic display, determining a projection surface viewable from a user viewpoint of at least one three dimensional object on a second display; and generating a projection image for display on the second display dependent on the projection surface viewable from the user viewpoint.
- In such embodiments the method may further comprise determining the projection surface viewable from the user viewpoint dependent on at least one of at least one three dimensional object lighting angle and position, a second display surface model, and at least one three dimensional object surface model.
- The projection surface in such embodiments may comprise at least one of a partial shadow of the at least one three dimensional object, a total shadow of the at least one three dimensional object, and a reflection of the at least one three dimensional object.
- Embodiments of the application may further improve on conventional 3D display technology by further providing image interactivity.
- With respect to FIGS. 6 a, 6 b and 6 c, the operation of user-image interactivity is described for some embodiments in which the user interface 14, user interface processor 103, image model processor 107 and image processor 105 may produce an improved 3D object imaging experience.
- The operation of the user interface 14, for example in some embodiments implementing a 'touch' interface on the 2D display 12 b, can be shown in FIG. 6 a. In FIG. 6 a the apparatus 10 shown is similar to the apparatus shown in previous figures, where the apparatus 10 comprises the 3D display 12 a, the 2D display 12 b and the camera 19. The 2D display 12 b in these embodiments further comprises a capacitive user interface 14. The capacitive user interface 14 may be configured to detect the presence or 'touch' of an object relative to the display. Thus, as shown in FIG. 6 a, the user interface may detect the presence of a finger and furthermore generate information indicating that the finger tip is at a position over the 2D display 12 b with a relative X axis displacement 401 and a relative Y axis displacement 403, both within the plane of the 2D display 12 b. Furthermore, in some embodiments the user interface may detect the presence of the fingertip at a distance from the surface of the 2D display 12 b, in other words determine the fingertip as having a relative Z axis displacement 405 above the 2D display 12 b. The user interface 14 may further output the sensed values, such as capacitance array values, to the user interface processor 103.
- The capture of user interface input data is shown in FIG. 6 c by step 451.
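- As an illustration of turning the capacitance array values into the X, Y and Z displacements 401, 403 and 405, the sketch below takes the centroid of the sensor response for the in-plane position and a crude calibrated mapping from peak capacitance to hover height; the calibration and names are assumptions.

```python
import numpy as np

def estimate_fingertip(capacitance, sensor_pitch_mm=5.0, hover_calibration_mm=40.0):
    """Estimate a fingertip (x, y, z) above a capacitive panel from normalised sensor readings.

    capacitance: 2D array of readings in [0, 1], where 1.0 means a touching finger.
    x, y are the response centroid in the panel plane; z is an assumed hover height
    that shrinks to zero as the peak reading approaches 1.0.
    """
    grid = np.asarray(capacitance, dtype=float)
    if grid.max() <= 0.0:
        return None  # nothing detected near the panel
    rows, cols = np.indices(grid.shape)
    total = grid.sum()
    x_mm = (cols * grid).sum() / total * sensor_pitch_mm
    y_mm = (rows * grid).sum() / total * sensor_pitch_mm
    z_mm = (1.0 - grid.max()) * hover_calibration_mm
    return x_mm, y_mm, z_mm

frame = np.zeros((8, 12))
frame[3, 7] = 0.6            # a finger hovering above sensor cell (row 3, column 7)
print(estimate_fingertip(frame))
```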
- The user interface processor 103 may in some embodiments receive the user interface data and determine the position of the detected object or the tip of the detected object (for example the user's finger). It is understood that in some embodiments the finger may be replaced by any suitable pointing device such as a stylus.
- Although the above and following operations are described with respect to the data or information received from the user interface 14 and the user interface processor 103, it would be appreciated that similar information may be generated from the camera 19 and camera processor 101 where, in some embodiments, the camera is configured to detect the presence of the user attempting to interface with the apparatus. Thus, for example, in some embodiments using multiple cameras the difference between the camera images may be used to determine the x-axis, y-axis, and z-axis displacement of a detected object as measured from the 2D display surface.
- Thus the user interface processor 103 may output the position of the detected object to the image processor 105.
- The determination of the presence of the object is shown in FIG. 6 c by step 453.
- The
image processor 105 may in some embodiments, as described previously, also receive the image model data from theimage model processor 107. As has been described previously, theimage model processor 107 may contain data such as the location and orientation and shape of the object being displayed. The generation and supply of the model object information is shown inFIG. 6 c bystep 454. - This information may be passed to the
image processor 105 where theimage processor 105 in some embodiments may be configured to correlate the detected object position with the modelled object as displayed. For example theimage processor 105 may detect that the modelled object location and the detected object are the same, or in other words the detected object ‘touches’ the displayed object. In some embodiments the type of touch may further determine how the displayed object is to be modified. For example the ‘touch’ may indicate a single press similar to the single click of a mouse or touchpad button and thus activate the function associated with the surface touched. In other embodiments the ‘touch’ may indicate a drag or impulse operation and cause the object to rotate or move according to the ‘touch’. - The determination of the image part which is ‘touched’ and the action carried out by the determination of the ‘touch’ can be shown in
FIG. 6 c bystep 455. - The
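- The correlation of the detected object position with the displayed object, and the distinction between a press and a drag, might be sketched as below; the bounding-box test, thresholds and names are illustrative assumptions rather than the application's method.

```python
import numpy as np

def touches_object(finger_pos, obj_min, obj_max, tolerance=0.005):
    """True when the detected fingertip lies on or just inside the displayed object's bounding box."""
    p = np.asarray(finger_pos, dtype=float)
    lo = np.asarray(obj_min, dtype=float) - tolerance
    hi = np.asarray(obj_max, dtype=float) + tolerance
    return bool(np.all(p >= lo) and np.all(p <= hi))

def classify_touch(positions, drag_threshold=0.01):
    """Classify a short touch trace as a 'press' (like a single click) or a 'drag'."""
    path = np.asarray(positions, dtype=float)
    travelled = np.linalg.norm(path[-1] - path[0])
    return "drag" if travelled > drag_threshold else "press"

# A fingertip resting on the cuboid and barely moving registers as a press.
print(touches_object((0.02, 0.05, 0.05), obj_min=(0.0, 0.0, 0.0), obj_max=(0.1, 0.1, 0.1)))
print(classify_touch([(0.02, 0.05, 0.05), (0.021, 0.05, 0.05)]))
```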
- The image processor 105 may in some embodiments process the interaction between the detected object and the three dimensional object. For example, in some embodiments the image processor 105 may output to the 3D display driver 109 data indicating the location of the touch, by passing information causing the object to be displayed in a different colour at the point of touch. In other embodiments the image processor may output the interaction of the touch as a displacement or rotation of the object, and so give the effect that the user has moved the object. In other embodiments the image processor may process the interaction in such a way as to implement a deformation of the object at the point of contact. In such embodiments the image processor 105 may thus apply a suitable physics model to the interaction of the detected object and the displayed object and output the result of such interaction to the 3D display driver 109, thus enabling the rendering of the modified displayed object.
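- One very simple 'suitable physics model' of the kind mentioned is a rigid-body impulse applied at the contact point, giving the displayed object a velocity and spin for the 3D display driver to render; the masses and inertia below are illustrative assumptions.

```python
import numpy as np

def apply_touch_impulse(velocity, angular_velocity, contact_point, centre_of_mass,
                        impulse, mass_kg=0.2, inertia_kg_m2=0.001):
    """Update the displayed object's linear and angular velocity after an impulse at a contact point."""
    velocity = np.asarray(velocity, dtype=float) + np.asarray(impulse, dtype=float) / mass_kg
    lever = np.asarray(contact_point, dtype=float) - np.asarray(centre_of_mass, dtype=float)
    angular_velocity = (np.asarray(angular_velocity, dtype=float)
                        + np.cross(lever, np.asarray(impulse, dtype=float)) / inertia_kg_m2)
    return velocity, angular_velocity

# A gentle forward poke at a top corner both translates the cuboid and makes it tumble.
v, w = apply_touch_impulse(velocity=(0.0, 0.0, 0.0), angular_velocity=(0.0, 0.0, 0.0),
                           contact_point=(0.05, 0.10, 0.00), centre_of_mass=(0.05, 0.05, 0.05),
                           impulse=(0.0, 0.0, -0.002))
print(v, w)
```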
- The display of the touched part or the display of the interaction operation of the touch can be shown in
FIG. 6 c bystep 457. - Thus in some embodiments of the application the implementation of image interaction as sensed by the
user interface 14 orcamera 19, detected by theuser interface processor 103 orcamera interface 101, and applied by theimage processor 105 may improve the interactivity of the 3D displayed object. - For example with regards to
FIG. 6 b the user's finger is shown being tracked on a 3D image and shown on the3D image 400. - With respect to
FIG. 7 the further implementation of user interface elements on the 2D display in some further embodiments is also shown. In these embodiments of the application the 2D display 12 b may display images such as user interface buttons which are capable of being “touched” by the user. Theuser interface elements 501 may in some embodiments also be implemented as additional three dimensional objects and as such displayed using the three dimensional display 12 a. Such an embodiment is shown inFIG. 7 where 2Duser interface buttons 501 are shown displayed on the two dimensional display 12 b and are capable of being touched by the user and a further 3D user interface object is shown 503 on which the user interface may detect interaction with and perform appropriate actions and display changes. - Therefore in some embodiments there may be a method comprising detecting an object position with respect to either the auto-stereoscopic display and/or a second display and determining an interaction by the detected object.
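- Hit-testing the 2D user interface buttons 501 against a touch on the 2D display could look like the following sketch; the button geometry, labels and callbacks are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Sequence

@dataclass
class Button2D:
    """A user interface button drawn on the 2D display, hit-testable by a touch at (x, y)."""
    label: str
    x: float
    y: float
    width: float
    height: float
    on_press: Callable[[], None]

    def contains(self, touch_x: float, touch_y: float) -> bool:
        return (self.x <= touch_x <= self.x + self.width
                and self.y <= touch_y <= self.y + self.height)

def dispatch_touch(buttons: Sequence[Button2D], touch_x: float, touch_y: float) -> Optional[str]:
    """Invoke and report the first button under the touch point, if any."""
    for button in buttons:
        if button.contains(touch_x, touch_y):
            button.on_press()
            return button.label
    return None

buttons = [Button2D("rotate", 10, 10, 40, 20, lambda: print("rotate displayed object")),
           Button2D("reset", 60, 10, 40, 20, lambda: print("reset view"))]
print(dispatch_touch(buttons, 25, 18))   # presses the 'rotate' button
```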
- In such embodiments detecting an object position may thus comprise at least one of detecting an capacitance value in a capacitance sensor of the object and detecting a visual image of the object.
- Furthermore determining an interaction by the detected object may in some embodiments comprise determining an intersection between the detected image and a displayed image.
- The displayed image may in such embodiments comprise the virtual image of the at least one three dimensional object or a two dimensional image displayed on the second display.
- Thus embodiments of the application permit the two displays to produce much more realistic or believable three dimensional image projection. Furthermore the interaction between the detection of the eye and/or head position/orientation and the object to be projected permits more efficient interaction with the displayed object or objects. Furthermore in further embodiments as described above, the implementation of user interface elements on the display permits more interactive three dimensional display configurations and experiences.
- It shall be appreciated that the term user equipment is intended to cover any suitable type of wireless user equipment, such as mobile telephones, portable data processing devices or portable web browsers. Furthermore user equipment, universal serial bus (USB) sticks, and modem data cards may comprise apparatus such as the apparatus described in embodiments above.
- In general, the various embodiments of the invention may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
- Thus some embodiments may be implemented as apparatus comprising: a sensor configured to detect the position and orientation of a user viewpoint with respect to an auto-stereoscopic display; a processor configured to determine a surface viewable from the user viewpoint of at least one three dimensional object; and an image generator configured to generate a left and right eye image for display on the auto-stereoscopic display dependent on the surface viewable from the user viewpoint.
- The sensor may in some embodiments comprise a camera configured to capture at least one image of the user and a face recognizer configured to determine the position and orientation of the user's eyes with respect to the auto-stereoscopic display.
- In some embodiments there may be at least two cameras capturing at least one image of the user from each of the at least two cameras, and the face recognizer may compare the difference between the at least one image of the user from each of the at least two cameras.
- The processor in some embodiments may determine a model of the at least one three dimensional object; determine the distance and orientation from the at least one three dimensional object model to the user viewpoint; and generate a surface of the at least one three dimensional object dependent on the model of the at least one three dimensional object and the distance and orientation from the at least one three dimensional object model to the user viewpoint.
- The processor may in some embodiments further detect an inter-pupil distance of a user; and provide control information to a parallax barrier dependent on at least one of: the position of the user viewpoint; the orientation of the user viewpoint; and the inter-pupil distance of the user.
- The processor may in some embodiments further determine a projection surface viewable from the user viewpoint of at least one three dimensional object on a second display; and the image generator may generate a projection image for display on the second display dependent on the projection surface viewable from the user viewpoint.
- The processor may in some embodiments determine the projection surface viewable from the user viewpoint dependent on at least one of: at least one three dimensional object lighting angle and position; a second display surface model; and at least one three dimensional object surface model.
- The projection surface may comprise at least one of: a partial shadow of the at least one three dimensional object; a total shadow of the at least one three dimensional object; and a reflection of the at least one three dimensional object.
- The sensor may be further configured to detect an object position with respect to either the auto-stereoscopic display and/or a second display; and determine an interaction by the detected object.
- The sensor in some embodiments may detect an object position by detecting at least one of a capacitance value in a capacitance sensor of the object; and a visual image of the object.
- The processor may further determine an intersection between the detected image and a displayed image.
- The displayed image may comprise the virtual image of the at least one three dimensional object.
- The displayed image may comprise a two dimensional image displayed on the second display.
- The embodiments of this invention may be implemented by computer software executable by a data processor of the mobile device, such as in the processor entity, or by hardware, or by a combination of software and hardware. Further in this regard it should be noted that any blocks of the logic flow as in the Figures may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions. The software may be stored on such physical media as memory chips, or memory blocks implemented within the processor, magnetic media such as hard disks or floppy disks, and optical media such as, for example, DVD and the data variants thereof, and CD and the data variants thereof.
- Thus some embodiments may be implemented by a computer-readable medium encoded with instructions that, when executed by a computer perform: detect the position and orientation of a user viewpoint with respect to an auto-stereoscopic display; determine a surface viewable from the user viewpoint of at least one three dimensional object; and generate a left and right eye image for display on the auto-stereoscopic display dependent on the surface viewable from the user viewpoint.
- The memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. The data processors may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASIC), gate level circuits and processors based on multi-core processor architecture, as non-limiting examples.
- Embodiments of the inventions may be practiced in various components such as integrated circuit modules. The design of integrated circuits is by and large a highly automated process. Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.
- Programs, such as those provided by Synopsys, Inc. of Mountain View, Calif. and Cadence Design, of San Jose, Calif. automatically route conductors and locate components on a semiconductor chip using well established rules of design as well as libraries of pre-stored design modules. Once the design for a semiconductor circuit has been completed, the resultant design, in a standardized electronic format (e.g., Opus, GDSII, or the like) may be transmitted to a semiconductor fabrication facility or “fab” for fabrication.
- The foregoing description has provided by way of exemplary and non-limiting examples a full and informative description of the exemplary embodiment of this invention. However, various modifications and adaptations may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings and the appended claims. However, all such and similar modifications of the teachings of this invention will still fall within the scope of this invention as defined in the appended claims.
- As used in this application, the term circuitry may refer to all of the following: (a) hardware-only circuit implementations (such as implementations in only analogue and/or digital circuitry); (b) combinations of circuits and software (and/or firmware), such as and where applicable: (i) a combination of processor(s) or (ii) portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions; and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
- This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry would also cover an implementation of merely a processor (or multiple processors) or portion of a processor and its (or their) accompanying software and/or firmware. The term circuitry would also cover, for example and if applicable to the particular claim element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in server, a cellular network device, or other network device.
- The terms processor and memory may comprise, but are not limited to, in this application: (1) one or more microprocessors, (2) one or more processor(s) with accompanying digital signal processor(s), (3) one or more processor(s) without accompanying digital signal processor(s), (4) one or more special-purpose computer chips, (5) one or more field-programmable gate arrays (FPGAs), (6) one or more controllers, (7) one or more application-specific integrated circuits (ASICs), or detector(s), processor(s) (including dual-core and multiple-core processors), digital signal processor(s), controller(s), receiver, transmitter, encoder, decoder, memory (and memories), software, firmware, RAM, ROM, display, user interface, display circuitry, user interface circuitry, user interface software, display software, circuit(s), antenna, antenna circuitry, and circuitry.
Claims (21)
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/EP2009/063420 WO2011044936A1 (en) | 2009-10-14 | 2009-10-14 | Autostereoscopic rendering and display apparatus |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20120200495A1 true US20120200495A1 (en) | 2012-08-09 |
| US8970478B2 US8970478B2 (en) | 2015-03-03 |
Family
ID=42173831
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US13/501,732 Expired - Fee Related US8970478B2 (en) | 2009-10-14 | 2009-10-14 | Autostereoscopic rendering and display apparatus |
Country Status (5)
| Country | Link |
|---|---|
| US (1) | US8970478B2 (en) |
| EP (1) | EP2489195A1 (en) |
| CN (1) | CN102640502B (en) |
| RU (1) | RU2524834C2 (en) |
| WO (1) | WO2011044936A1 (en) |
Cited By (71)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20120019516A1 (en) * | 2010-07-26 | 2012-01-26 | Samsung Electronics Co., Ltd. | Multi-view display system and method using color consistent selective sub-pixel rendering |
| US20120050154A1 (en) * | 2010-08-31 | 2012-03-01 | Adil Jagmag | Method and system for providing 3d user interface in 3d televisions |
| US20120066726A1 (en) * | 2010-09-10 | 2012-03-15 | Mondragon Christopher K | Video Display Units for Aircraft In-Flight Entertainment Systems and Methods of Adapting the Same |
| US20120090005A1 (en) * | 2010-10-11 | 2012-04-12 | Eldon Technology Limited | Holographic 3D Display |
| US20120314934A1 (en) * | 2011-06-09 | 2012-12-13 | Yusuke Kudo | Information processing device, information processing method and program |
| US20120317510A1 (en) * | 2011-06-07 | 2012-12-13 | Takuro Noda | Information processing apparatus, information processing method, and program |
| US20130033459A1 (en) * | 2010-04-13 | 2013-02-07 | Nokia Corporation | Apparatus, method, computer program and user interface |
| US20130047112A1 (en) * | 2010-03-11 | 2013-02-21 | X | Method and device for operating a user interface |
| US20130050202A1 (en) * | 2011-08-23 | 2013-02-28 | Kyocera Corporation | Display device |
| US20130286152A1 (en) * | 2012-04-26 | 2013-10-31 | Sony Mobile Communications Ab | Screen camera |
| US20130307948A1 (en) * | 2012-05-16 | 2013-11-21 | Samsung Display Co., Ltd. | 3-dimensional image display device and display method thereof |
| CN103440036A (en) * | 2013-08-23 | 2013-12-11 | Tcl集团股份有限公司 | Three-dimensional image display and interactive operation method and device |
| US20140015794A1 (en) * | 2011-03-25 | 2014-01-16 | Kyocera Corporation | Electronic device, control method, and control program |
| US20140078045A1 (en) * | 2012-03-14 | 2014-03-20 | Chengdu Boe Optoelectronics Technology Co., Ltd. | Display Apparatus And Terminal |
| US20140098197A1 (en) * | 2012-10-05 | 2014-04-10 | Research In Motion Limited | Methods and devices for generating a stereoscopic image |
| US20140176528A1 (en) * | 2012-12-20 | 2014-06-26 | Microsoft Corporation | Auto-stereoscopic augmented reality display |
| US20140204079A1 (en) * | 2011-06-17 | 2014-07-24 | Immersion | System for colocating a touch screen and a virtual object, and device for manipulating virtual objects implementing such a system |
| US20150181197A1 (en) * | 2011-10-05 | 2015-06-25 | Amazon Technologies, Inc. | Stereo imaging using disparate imaging devices |
| US20150223769A1 (en) * | 2012-12-12 | 2015-08-13 | Kabushiki Kaisha Toshiba | Medical image display apparatus and x-ray diagnosis apparatus |
| US20150227112A1 (en) * | 2013-03-22 | 2015-08-13 | Shenzhen Cloud Cube Information Tech Co., Ltd. | Display apparatus and visual displaying method for simulating a holographic 3d scene |
| US20150237334A1 (en) * | 2012-09-27 | 2015-08-20 | Sharp Kabushiki Kaisha | Stereoscopic display device |
| WO2015151799A1 (en) * | 2014-03-31 | 2015-10-08 | ソニー株式会社 | Electronic device |
| US9223138B2 (en) | 2011-12-23 | 2015-12-29 | Microsoft Technology Licensing, Llc | Pixel opacity for augmented reality |
| US9297996B2 (en) | 2012-02-15 | 2016-03-29 | Microsoft Technology Licensing, Llc | Laser illumination scanning |
| US9298012B2 (en) | 2012-01-04 | 2016-03-29 | Microsoft Technology Licensing, Llc | Eyebox adjustment for interpupillary distance |
| US9304235B2 (en) | 2014-07-30 | 2016-04-05 | Microsoft Technology Licensing, Llc | Microfabrication |
| US20160156896A1 (en) * | 2014-12-01 | 2016-06-02 | Samsung Electronics Co., Ltd. | Apparatus for recognizing pupillary distance for 3d display |
| US9368546B2 (en) | 2012-02-15 | 2016-06-14 | Microsoft Technology Licensing, Llc | Imaging structure with embedded light sources |
| DE102015103276A1 (en) * | 2014-12-15 | 2016-06-16 | Lenovo (Beijing) Co., Ltd. | Electronic device, display device and display control method |
| US9372347B1 (en) | 2015-02-09 | 2016-06-21 | Microsoft Technology Licensing, Llc | Display system |
| US9423360B1 (en) | 2015-02-09 | 2016-08-23 | Microsoft Technology Licensing, Llc | Optical components |
| US9429692B1 (en) | 2015-02-09 | 2016-08-30 | Microsoft Technology Licensing, Llc | Optical components |
| US9513480B2 (en) | 2015-02-09 | 2016-12-06 | Microsoft Technology Licensing, Llc | Waveguide |
| US9535253B2 (en) | 2015-02-09 | 2017-01-03 | Microsoft Technology Licensing, Llc | Display system |
| US9578318B2 (en) | 2012-03-14 | 2017-02-21 | Microsoft Technology Licensing, Llc | Imaging structure emitter calibration |
| US9581820B2 (en) | 2012-06-04 | 2017-02-28 | Microsoft Technology Licensing, Llc | Multiple waveguide imaging structure |
| US9717981B2 (en) | 2012-04-05 | 2017-08-01 | Microsoft Technology Licensing, Llc | Augmented reality and physical games |
| US9726887B2 (en) | 2012-02-15 | 2017-08-08 | Microsoft Technology Licensing, Llc | Imaging structure color conversion |
| US9779643B2 (en) | 2012-02-15 | 2017-10-03 | Microsoft Technology Licensing, Llc | Imaging structure emitter configurations |
| US9827209B2 (en) | 2015-02-09 | 2017-11-28 | Microsoft Technology Licensing, Llc | Display system |
| US9930313B2 (en) * | 2013-04-01 | 2018-03-27 | Lg Electronics Inc. | Image display device for providing function of changing screen display direction and method thereof |
| US20180152698A1 (en) * | 2016-11-29 | 2018-05-31 | Samsung Electronics Co., Ltd. | Method and apparatus for determining interpupillary distance (ipd) |
| US10018844B2 (en) | 2015-02-09 | 2018-07-10 | Microsoft Technology Licensing, Llc | Wearable image display system |
| US20180225860A1 (en) * | 2015-02-26 | 2018-08-09 | Rovi Guides, Inc. | Methods and systems for generating holographic animations |
| US10191515B2 (en) | 2012-03-28 | 2019-01-29 | Microsoft Technology Licensing, Llc | Mobile device light guide display |
| US20190035364A1 (en) * | 2016-02-26 | 2019-01-31 | Sony Corporation | Display apparatus, method of driving display apparatus, and electronic apparatus |
| US10216357B2 (en) | 2014-07-16 | 2019-02-26 | Sony Corporation | Apparatus and method for controlling the apparatus |
| US10254942B2 (en) | 2014-07-31 | 2019-04-09 | Microsoft Technology Licensing, Llc | Adaptive sizing and positioning of application windows |
| US10317677B2 (en) | 2015-02-09 | 2019-06-11 | Microsoft Technology Licensing, Llc | Display system |
| US20190187875A1 (en) * | 2017-12-15 | 2019-06-20 | International Business Machines Corporation | Remote control incorporating holographic displays |
| US20190235737A1 (en) * | 2016-06-28 | 2019-08-01 | Nikon Corporation | Display device, program, display method and control device |
| US10388073B2 (en) | 2012-03-28 | 2019-08-20 | Microsoft Technology Licensing, Llc | Augmented reality light guide display |
| US10502876B2 (en) | 2012-05-22 | 2019-12-10 | Microsoft Technology Licensing, Llc | Waveguide optics focus elements |
| US10592080B2 (en) | 2014-07-31 | 2020-03-17 | Microsoft Technology Licensing, Llc | Assisted presentation of application windows |
| US10606350B2 (en) | 2015-12-24 | 2020-03-31 | Samsung Electronics Co., Ltd. | Deformable display device and image display method using same |
| US10678412B2 (en) | 2014-07-31 | 2020-06-09 | Microsoft Technology Licensing, Llc | Dynamic joint dividers for application windows |
| US20200241721A1 (en) * | 2019-01-16 | 2020-07-30 | Shenzhen Vistandard Digital Technology Co., Ltd. | Interactive display apparatus and method |
| WO2021025241A1 (en) * | 2019-08-07 | 2021-02-11 | Samsung Electronics Co., Ltd. | Method and bendable device for constructing 3d data item |
| US11068049B2 (en) | 2012-03-23 | 2021-07-20 | Microsoft Technology Licensing, Llc | Light guide display and field of view |
| US11086216B2 (en) | 2015-02-09 | 2021-08-10 | Microsoft Technology Licensing, Llc | Generating electronic components |
| CN113273184A (en) * | 2018-11-30 | 2021-08-17 | Pcms控股公司 | Method of mirroring 3D objects to a light field display |
| US11099709B1 (en) | 2021-04-13 | 2021-08-24 | Dapper Labs Inc. | System and method for creating, managing, and displaying an interactive display for 3D digital collectibles |
| US11170582B1 (en) | 2021-05-04 | 2021-11-09 | Dapper Labs Inc. | System and method for creating, managing, and displaying limited edition, serialized 3D digital collectibles with visual indicators of rarity classifications |
| US11210844B1 (en) | 2021-04-13 | 2021-12-28 | Dapper Labs Inc. | System and method for creating, managing, and displaying 3D digital collectibles |
| US11227010B1 (en) | 2021-05-03 | 2022-01-18 | Dapper Labs Inc. | System and method for creating, managing, and displaying user owned collections of 3D digital collectibles |
| US11343634B2 (en) | 2018-04-24 | 2022-05-24 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus and method for rendering an audio signal for a playback to a user |
| US20220360761A1 (en) * | 2021-05-04 | 2022-11-10 | Dapper Labs Inc. | System and method for creating, managing, and displaying 3d digital collectibles with overlay display elements and surrounding structure display elements |
| USD991271S1 (en) | 2021-04-30 | 2023-07-04 | Dapper Labs, Inc. | Display screen with an animated graphical user interface |
| US11707671B2 (en) * | 2017-03-07 | 2023-07-25 | Ahmed ABDELKARIM | Anamorphic display device |
| US11870928B2 (en) * | 2020-03-19 | 2024-01-09 | Samsung Electronics Co., Ltd. | Mounting apparatus for displaying screen of electronic apparatus through hologram |
| JP2024137998A (en) * | 2021-06-25 | 2024-10-07 | 京セラ株式会社 | Wearable terminal device, program, and display method |
Families Citing this family (47)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR101732131B1 (en) | 2010-11-12 | 2017-05-04 | 삼성전자주식회사 | Image providing apparatus and image providng method based on user's location |
| US9285586B2 (en) | 2011-05-13 | 2016-03-15 | Sony Corporation | Adjusting parallax barriers |
| JP5926500B2 (en) * | 2011-06-07 | 2016-05-25 | ソニー株式会社 | Information processing apparatus, information processing method, and program |
| DE102011112620B3 (en) * | 2011-09-08 | 2013-02-21 | Eads Deutschland Gmbh | Angled display for the three-dimensional representation of a scenario |
| DE102011112618A1 (en) * | 2011-09-08 | 2013-03-14 | Eads Deutschland Gmbh | Interaction with a three-dimensional virtual scenario |
| CN104054044A (en) * | 2011-11-21 | 2014-09-17 | 株式会社尼康 | Display device and display control program |
| CN102595172A (en) * | 2011-12-06 | 2012-07-18 | 四川长虹电器股份有限公司 | Displaying method of 3D (three-dimensional) image |
| JP2013121031A (en) * | 2011-12-07 | 2013-06-17 | Sony Corp | Display device, method and program |
| JP6200328B2 (en) * | 2011-12-21 | 2017-09-20 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America | Display device |
| EP2805517A1 (en) * | 2012-01-17 | 2014-11-26 | Sony Ericsson Mobile Communications AB | Portable electronic equipment and method of controlling an autostereoscopic display |
| CN102915115A (en) * | 2012-09-25 | 2013-02-06 | 上海华勤通讯技术有限公司 | Method for adjusting display frame by eye |
| CN104871236B (en) * | 2012-12-21 | 2018-02-02 | 索尼公司 | Display control apparatus and method |
| US8988343B2 (en) | 2013-03-29 | 2015-03-24 | Panasonic Intellectual Property Management Co., Ltd. | Method of automatically forming one three-dimensional space with multiple screens |
| US9348411B2 (en) * | 2013-05-24 | 2016-05-24 | Microsoft Technology Licensing, Llc | Object display with visual verisimilitude |
| US10025378B2 (en) | 2013-06-25 | 2018-07-17 | Microsoft Technology Licensing, Llc | Selecting user interface elements via position signal |
| RU2582852C1 (en) * | 2015-01-21 | 2016-04-27 | Общество с ограниченной ответственностью "Вокорд СофтЛаб" (ООО "Вокорд СофтЛаб") | Automatic construction of 3d model of face based on series of 2d images or movie |
| US9773022B2 (en) | 2015-10-07 | 2017-09-26 | Google Inc. | Displaying objects based on a plurality of models |
| US20170171535A1 (en) * | 2015-12-09 | 2017-06-15 | Hyundai Motor Company | Three-dimensional display apparatus and method for controlling the same |
| JP6126195B2 (en) * | 2015-12-24 | 2017-05-10 | 京セラ株式会社 | Display device |
| US10021373B2 (en) | 2016-01-11 | 2018-07-10 | Microsoft Technology Licensing, Llc | Distributing video among multiple display zones |
| CN107193372B (en) * | 2017-05-15 | 2020-06-19 | 杭州一隅千象科技有限公司 | Projection method from multiple rectangular planes at arbitrary positions to variable projection center |
| CN109144393B (en) * | 2018-08-28 | 2020-08-07 | 维沃移动通信有限公司 | Image display method and mobile terminal |
| DE102019202462A1 (en) * | 2019-02-22 | 2020-08-27 | Volkswagen Aktiengesellschaft | Portable terminal |
| CN110035270A (en) * | 2019-02-28 | 2019-07-19 | 努比亚技术有限公司 | A kind of 3D rendering display methods, terminal and computer readable storage medium |
| CN110049308A (en) * | 2019-02-28 | 2019-07-23 | 努比亚技术有限公司 | A kind of method of shuangping san, terminal and computer readable storage medium |
| CN110488970A (en) * | 2019-07-03 | 2019-11-22 | 努比亚技术有限公司 | Graphic display method, terminal and the computer readable storage medium of arc-shaped display screen |
| JP2023543799A (en) | 2020-09-25 | 2023-10-18 | アップル インコーポレイテッド | How to navigate the user interface |
| KR20230054733A (en) | 2020-09-25 | 2023-04-25 | 애플 인크. | Methods for interacting with virtual controls and/or affordance for moving virtual objects in virtual environments |
| CN117555417B (en) | 2020-09-25 | 2024-07-19 | 苹果公司 | Method for adjusting and/or controlling immersion associated with a user interface |
| AU2021347112B2 (en) | 2020-09-25 | 2023-11-23 | Apple Inc. | Methods for manipulating objects in an environment |
| CN116670627A (en) | 2020-12-31 | 2023-08-29 | 苹果公司 | Methods for Grouping User Interfaces in Environments |
| US11995230B2 (en) | 2021-02-11 | 2024-05-28 | Apple Inc. | Methods for presenting and sharing content in an environment |
| EP4323852A1 (en) | 2021-04-13 | 2024-02-21 | Apple Inc. | Methods for providing an immersive experience in an environment |
| CN115840546A (en) * | 2021-09-18 | 2023-03-24 | 华为技术有限公司 | Method, electronic equipment and device for displaying image on display screen |
| CN117980962A (en) | 2021-09-23 | 2024-05-03 | 苹果公司 | Apparatus, method and graphical user interface for content application |
| EP4388397A1 (en) | 2021-09-25 | 2024-06-26 | Apple Inc. | Devices, methods, and graphical user interfaces for presenting virtual objects in virtual environments |
| US12272005B2 (en) | 2022-02-28 | 2025-04-08 | Apple Inc. | System and method of three-dimensional immersive applications in multi-user communication sessions |
| WO2023196258A1 (en) | 2022-04-04 | 2023-10-12 | Apple Inc. | Methods for quick message response and dictation in a three-dimensional environment |
| US12394167B1 (en) | 2022-06-30 | 2025-08-19 | Apple Inc. | Window resizing and virtual object rearrangement in 3D environments |
| US12112011B2 (en) | 2022-09-16 | 2024-10-08 | Apple Inc. | System and method of application-based three-dimensional refinement in multi-user communication sessions |
| US12148078B2 (en) | 2022-09-16 | 2024-11-19 | Apple Inc. | System and method of spatial groups in multi-user communication sessions |
| US12099653B2 (en) | 2022-09-22 | 2024-09-24 | Apple Inc. | User interface response based on gaze-holding event assessment |
| US12405704B1 (en) | 2022-09-23 | 2025-09-02 | Apple Inc. | Interpreting user movement as direct touch user interface interactions |
| KR20240085348A (en) * | 2022-12-07 | 2024-06-17 | 삼성디스플레이 주식회사 | Display device |
| US12108012B2 (en) | 2023-02-27 | 2024-10-01 | Apple Inc. | System and method of managing spatial states and display modes in multi-user communication sessions |
| US12118200B1 (en) | 2023-06-02 | 2024-10-15 | Apple Inc. | Fuzzy hit testing |
| US12113948B1 (en) | 2023-06-04 | 2024-10-08 | Apple Inc. | Systems and methods of managing spatial groups in multi-user communication sessions |
Family Cites Families (15)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| DE69432283T2 (en) | 1993-12-01 | 2004-01-22 | Sharp K.K. | Display for three-dimensional images |
| JP3397602B2 (en) * | 1996-11-11 | 2003-04-21 | 富士通株式会社 | Image display apparatus and method |
| US6157382A (en) | 1996-11-29 | 2000-12-05 | Canon Kabushiki Kaisha | Image display method and apparatus therefor |
| JP3619063B2 (en) * | 1999-07-08 | 2005-02-09 | キヤノン株式会社 | Stereoscopic image processing apparatus, method thereof, stereoscopic parameter setting apparatus, method thereof and computer program storage medium |
| US6618054B2 (en) | 2000-05-16 | 2003-09-09 | Sun Microsystems, Inc. | Dynamic depth-of-field emulation based on eye-tracking |
| US20050207486A1 (en) | 2004-03-18 | 2005-09-22 | Sony Corporation | Three dimensional acquisition and visualization system for personal electronic devices |
| US7787009B2 (en) | 2004-05-10 | 2010-08-31 | University Of Southern California | Three dimensional interaction with autostereoscopic displays |
| CN101002253A (en) * | 2004-06-01 | 2007-07-18 | 迈克尔·A.·韦塞利 | Horizontal perspective simulator |
| US20060001596A1 (en) | 2004-06-30 | 2006-01-05 | Interdigital Technology Corporation | Method and system for displaying holographic images in mobile devices |
| US20060152580A1 (en) | 2005-01-07 | 2006-07-13 | Synthosys, Llc | Auto-stereoscopic volumetric imaging system and method |
| US20070064199A1 (en) | 2005-09-19 | 2007-03-22 | Schindler Jon L | Projection display device |
| RU2306678C1 (en) * | 2006-02-07 | 2007-09-20 | Василий Александрович ЕЖОВ | Auto-stereoscopic display with quasi-uninterruptible angle spectrum |
| JP4880693B2 (en) * | 2006-10-02 | 2012-02-22 | パイオニア株式会社 | Image display device |
| WO2008132724A1 (en) | 2007-04-26 | 2008-11-06 | Mantisvision Ltd. | A method and apparatus for three dimensional interaction with autosteroscopic displays |
| CN201060370Y (en) | 2007-05-29 | 2008-05-14 | 南昌大学 | A naked-eye stereoscopic display with surround-view function |
- 2009
- 2009-10-14 WO PCT/EP2009/063420 patent/WO2011044936A1/en active Application Filing
- 2009-10-14 US US13/501,732 patent/US8970478B2/en not_active Expired - Fee Related
- 2009-10-14 RU RU2012118591/08A patent/RU2524834C2/en not_active IP Right Cessation
- 2009-10-14 EP EP09752313A patent/EP2489195A1/en not_active Ceased
- 2009-10-14 CN CN200980162526.7A patent/CN102640502B/en not_active Expired - Fee Related
Patent Citations (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5959664A (en) * | 1994-12-29 | 1999-09-28 | Sharp Kabushiki Kaisha | Observer tracking autostereoscopic display and method of tracking an observer |
| US6876362B1 (en) * | 2002-07-10 | 2005-04-05 | Nvidia Corporation | Omnidirectional shadow texture mapping |
| US20050185276A1 (en) * | 2004-02-19 | 2005-08-25 | Pioneer Corporation | Stereoscopic two-dimensional image display apparatus and stereoscopic two-dimensional image display method |
| US20050264559A1 (en) * | 2004-06-01 | 2005-12-01 | Vesely Michael A | Multi-plane horizontal perspective hands-on simulator |
| US20060139447A1 (en) * | 2004-12-23 | 2006-06-29 | Unkrich Mark A | Eye detection system and method for control of a three-dimensional display |
| US8416268B2 (en) * | 2007-10-01 | 2013-04-09 | Pioneer Corporation | Image display device |
| US20090128900A1 (en) * | 2007-11-15 | 2009-05-21 | Idyllic Spectrum Sdn Bhd | Autostereoscopic display |
| US20100328306A1 (en) * | 2008-02-19 | 2010-12-30 | The Board Of Trustees Of The Univer Of Illinois | Large format high resolution interactive display |
| US8334867B1 (en) * | 2008-11-25 | 2012-12-18 | Perceptive Pixel Inc. | Volumetric data exploration using multi-point input controls |
| US20100266176A1 (en) * | 2009-04-16 | 2010-10-21 | Fujifilm Corporation | Diagnosis assisting apparatus, diagnosis assisting method, and storage medium having a diagnosis assisting program recorded therein |
| US20100309296A1 (en) * | 2009-06-03 | 2010-12-09 | Au Optronics Corporation | Autostereoscopic Display Apparatus |
Cited By (112)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20130047112A1 (en) * | 2010-03-11 | 2013-02-21 | X | Method and device for operating a user interface |
| US9283829B2 (en) * | 2010-03-11 | 2016-03-15 | Volkswagen Ag | Process and device for displaying different information for driver and passenger of a vehicle |
| US9535493B2 (en) * | 2010-04-13 | 2017-01-03 | Nokia Technologies Oy | Apparatus, method, computer program and user interface |
| US20130033459A1 (en) * | 2010-04-13 | 2013-02-07 | Nokia Corporation | Apparatus, method, computer program and user interface |
| US20120019516A1 (en) * | 2010-07-26 | 2012-01-26 | Samsung Electronics Co., Ltd. | Multi-view display system and method using color consistent selective sub-pixel rendering |
| US20120050154A1 (en) * | 2010-08-31 | 2012-03-01 | Adil Jagmag | Method and system for providing 3d user interface in 3d televisions |
| US20120066726A1 (en) * | 2010-09-10 | 2012-03-15 | Mondragon Christopher K | Video Display Units for Aircraft In-Flight Entertainment Systems and Methods of Adapting the Same |
| US20120090005A1 (en) * | 2010-10-11 | 2012-04-12 | Eldon Technology Limited | Holographic 3D Display |
| US8943541B2 (en) * | 2010-10-11 | 2015-01-27 | Eldon Technology Limited | Holographic 3D display |
| US20140015794A1 (en) * | 2011-03-25 | 2014-01-16 | Kyocera Corporation | Electronic device, control method, and control program |
| US9507428B2 (en) * | 2011-03-25 | 2016-11-29 | Kyocera Corporation | Electronic device, control method, and control program |
| US20120317510A1 (en) * | 2011-06-07 | 2012-12-13 | Takuro Noda | Information processing apparatus, information processing method, and program |
| US9766793B2 (en) * | 2011-06-09 | 2017-09-19 | Sony Corporation | Information processing device, information processing method and program |
| US20160110078A1 (en) * | 2011-06-09 | 2016-04-21 | C/O Sony Corporation | Information processing device, information processing method and program |
| US9218113B2 (en) * | 2011-06-09 | 2015-12-22 | Sony Corporation | Information processing device, information processing method and program |
| US20120314934A1 (en) * | 2011-06-09 | 2012-12-13 | Yusuke Kudo | Information processing device, information processing method and program |
| US9786090B2 (en) * | 2011-06-17 | 2017-10-10 | INRIA—Institut National de Recherche en Informatique et en Automatique | System for colocating a touch screen and a virtual object, and device for manipulating virtual objects implementing such a system |
| US20140204079A1 (en) * | 2011-06-17 | 2014-07-24 | Immersion | System for colocating a touch screen and a virtual object, and device for manipulating virtual objects implementing such a system |
| US9467683B2 (en) * | 2011-08-23 | 2016-10-11 | Kyocera Corporation | Display device having three-dimensional display function |
| US20130050202A1 (en) * | 2011-08-23 | 2013-02-28 | Kyocera Corporation | Display device |
| US20150181197A1 (en) * | 2011-10-05 | 2015-06-25 | Amazon Technologies, Inc. | Stereo imaging using disparate imaging devices |
| US9325968B2 (en) * | 2011-10-05 | 2016-04-26 | Amazon Technologies, Inc. | Stereo imaging using disparate imaging devices |
| US9223138B2 (en) | 2011-12-23 | 2015-12-29 | Microsoft Technology Licensing, Llc | Pixel opacity for augmented reality |
| US9298012B2 (en) | 2012-01-04 | 2016-03-29 | Microsoft Technology Licensing, Llc | Eyebox adjustment for interpupillary distance |
| US9684174B2 (en) | 2012-02-15 | 2017-06-20 | Microsoft Technology Licensing, Llc | Imaging structure with embedded light sources |
| US9297996B2 (en) | 2012-02-15 | 2016-03-29 | Microsoft Technology Licensing, Llc | Laser illumination scanning |
| US9368546B2 (en) | 2012-02-15 | 2016-06-14 | Microsoft Technology Licensing, Llc | Imaging structure with embedded light sources |
| US9726887B2 (en) | 2012-02-15 | 2017-08-08 | Microsoft Technology Licensing, Llc | Imaging structure color conversion |
| US9779643B2 (en) | 2012-02-15 | 2017-10-03 | Microsoft Technology Licensing, Llc | Imaging structure emitter configurations |
| US20140078045A1 (en) * | 2012-03-14 | 2014-03-20 | Chengdu Boe Optoelectronics Technology Co., Ltd. | Display Apparatus And Terminal |
| US9807381B2 (en) | 2012-03-14 | 2017-10-31 | Microsoft Technology Licensing, Llc | Imaging structure emitter calibration |
| US9578318B2 (en) | 2012-03-14 | 2017-02-21 | Microsoft Technology Licensing, Llc | Imaging structure emitter calibration |
| US11068049B2 (en) | 2012-03-23 | 2021-07-20 | Microsoft Technology Licensing, Llc | Light guide display and field of view |
| US10388073B2 (en) | 2012-03-28 | 2019-08-20 | Microsoft Technology Licensing, Llc | Augmented reality light guide display |
| US10191515B2 (en) | 2012-03-28 | 2019-01-29 | Microsoft Technology Licensing, Llc | Mobile device light guide display |
| US9717981B2 (en) | 2012-04-05 | 2017-08-01 | Microsoft Technology Licensing, Llc | Augmented reality and physical games |
| US10478717B2 (en) | 2012-04-05 | 2019-11-19 | Microsoft Technology Licensing, Llc | Augmented reality and physical games |
| US20130286152A1 (en) * | 2012-04-26 | 2013-10-31 | Sony Mobile Communications Ab | Screen camera |
| US9055188B2 (en) * | 2012-04-26 | 2015-06-09 | Sony Corporation | Screen camera |
| US9113160B2 (en) * | 2012-05-16 | 2015-08-18 | Samsung Display Co., Ltd. | 3-dimensional image display device and display method thereof |
| US20130307948A1 (en) * | 2012-05-16 | 2013-11-21 | Samsung Display Co., Ltd. | 3-dimensional image display device and display method thereof |
| US10502876B2 (en) | 2012-05-22 | 2019-12-10 | Microsoft Technology Licensing, Llc | Waveguide optics focus elements |
| US9581820B2 (en) | 2012-06-04 | 2017-02-28 | Microsoft Technology Licensing, Llc | Multiple waveguide imaging structure |
| US20150237334A1 (en) * | 2012-09-27 | 2015-08-20 | Sharp Kabushiki Kaisha | Stereoscopic display device |
| US20140098197A1 (en) * | 2012-10-05 | 2014-04-10 | Research In Motion Limited | Methods and devices for generating a stereoscopic image |
| US9148651B2 (en) * | 2012-10-05 | 2015-09-29 | Blackberry Limited | Methods and devices for generating a stereoscopic image |
| US20150223769A1 (en) * | 2012-12-12 | 2015-08-13 | Kabushiki Kaisha Toshiba | Medical image display apparatus and x-ray diagnosis apparatus |
| US9986962B2 (en) * | 2012-12-12 | 2018-06-05 | Toshiba Medical Systems Corporation | Medical image display apparatus and X-ray diagnosis apparatus |
| US20140176528A1 (en) * | 2012-12-20 | 2014-06-26 | Microsoft Corporation | Auto-stereoscopic augmented reality display |
| RU2651611C2 (en) * | 2012-12-20 | 2018-04-23 | МАЙКРОСОФТ ТЕКНОЛОДЖИ ЛАЙСЕНСИНГ, ЭлЭлСи | Auto-stereoscopic augmented reality display |
| AU2013361148B2 (en) * | 2012-12-20 | 2017-07-27 | Microsoft Technology Licensing, Llc | Auto-stereoscopic augmented reality display |
| US10192358B2 (en) * | 2012-12-20 | 2019-01-29 | Microsoft Technology Licensing, Llc | Auto-stereoscopic augmented reality display |
| US20150227112A1 (en) * | 2013-03-22 | 2015-08-13 | Shenzhen Cloud Cube Information Tech Co., Ltd. | Display apparatus and visual displaying method for simulating a holographic 3d scene |
| US9983546B2 (en) * | 2013-03-22 | 2018-05-29 | Shenzhen Magic Eye Technology Co., Ltd. | Display apparatus and visual displaying method for simulating a holographic 3D scene |
| US9930313B2 (en) * | 2013-04-01 | 2018-03-27 | Lg Electronics Inc. | Image display device for providing function of changing screen display direction and method thereof |
| CN103440036A (en) * | 2013-08-23 | 2013-12-11 | Tcl集团股份有限公司 | Three-dimensional image display and interactive operation method and device |
| WO2015151799A1 (en) * | 2014-03-31 | 2015-10-08 | ソニー株式会社 | Electronic device |
| US10216357B2 (en) | 2014-07-16 | 2019-02-26 | Sony Corporation | Apparatus and method for controlling the apparatus |
| US9304235B2 (en) | 2014-07-30 | 2016-04-05 | Microsoft Technology Licensing, Llc | Microfabrication |
| US10678412B2 (en) | 2014-07-31 | 2020-06-09 | Microsoft Technology Licensing, Llc | Dynamic joint dividers for application windows |
| US10592080B2 (en) | 2014-07-31 | 2020-03-17 | Microsoft Technology Licensing, Llc | Assisted presentation of application windows |
| US10254942B2 (en) | 2014-07-31 | 2019-04-09 | Microsoft Technology Licensing, Llc | Adaptive sizing and positioning of application windows |
| KR20160065686A (en) * | 2014-12-01 | 2016-06-09 | Samsung Electronics Co., Ltd. | Pupilometer for 3D display |
| US10742968B2 (en) * | 2014-12-01 | 2020-08-11 | Samsung Electronics Co., Ltd. | Apparatus for recognizing pupillary distance for 3D display |
| KR102329814B1 (en) * | 2014-12-01 | 2021-11-22 | Samsung Electronics Co., Ltd. | Pupilometer for 3D display |
| US20160156896A1 (en) * | 2014-12-01 | 2016-06-02 | Samsung Electronics Co., Ltd. | Apparatus for recognizing pupillary distance for 3D display |
| US10007365B2 (en) | 2014-12-15 | 2018-06-26 | Lenovo (Beijing) Co., Ltd. | Electronic apparatus, display device and display control method for rotating display content when sharing |
| DE102015103276A1 (en) * | 2014-12-15 | 2016-06-16 | Lenovo (Beijing) Co., Ltd. | Electronic device, display device and display control method |
| DE102015103276B4 (en) | 2014-12-15 | 2021-12-02 | Lenovo (Beijing) Co., Ltd. | Electronic device, display device and display control method |
| US9429692B1 (en) | 2015-02-09 | 2016-08-30 | Microsoft Technology Licensing, Llc | Optical components |
| US9535253B2 (en) | 2015-02-09 | 2017-01-03 | Microsoft Technology Licensing, Llc | Display system |
| US10317677B2 (en) | 2015-02-09 | 2019-06-11 | Microsoft Technology Licensing, Llc | Display system |
| US11086216B2 (en) | 2015-02-09 | 2021-08-10 | Microsoft Technology Licensing, Llc | Generating electronic components |
| US9827209B2 (en) | 2015-02-09 | 2017-11-28 | Microsoft Technology Licensing, Llc | Display system |
| US10018844B2 (en) | 2015-02-09 | 2018-07-10 | Microsoft Technology Licensing, Llc | Wearable image display system |
| US9372347B1 (en) | 2015-02-09 | 2016-06-21 | Microsoft Technology Licensing, Llc | Display system |
| US9513480B2 (en) | 2015-02-09 | 2016-12-06 | Microsoft Technology Licensing, Llc | Waveguide |
| US9423360B1 (en) | 2015-02-09 | 2016-08-23 | Microsoft Technology Licensing, Llc | Optical components |
| US12217348B2 (en) | 2015-02-26 | 2025-02-04 | Adeia Guides Inc. | Methods and systems for generating holographic animations |
| US10600227B2 (en) * | 2015-02-26 | 2020-03-24 | Rovi Guides, Inc. | Methods and systems for generating holographic animations |
| US11663766B2 (en) | 2015-02-26 | 2023-05-30 | Rovi Guides, Inc. | Methods and systems for generating holographic animations |
| US20180225860A1 (en) * | 2015-02-26 | 2018-08-09 | Rovi Guides, Inc. | Methods and systems for generating holographic animations |
| US10606350B2 (en) | 2015-12-24 | 2020-03-31 | Samsung Electronics Co., Ltd. | Deformable display device and image display method using same |
| US20190035364A1 (en) * | 2016-02-26 | 2019-01-31 | Sony Corporation | Display apparatus, method of driving display apparatus, and electronic apparatus |
| US20190235737A1 (en) * | 2016-06-28 | 2019-08-01 | Nikon Corporation | Display device, program, display method and control device |
| US10983680B2 (en) * | 2016-06-28 | 2021-04-20 | Nikon Corporation | Display device, program, display method and control device |
| US10979696B2 (en) * | 2016-11-29 | 2021-04-13 | Samsung Electronics Co., Ltd. | Method and apparatus for determining interpupillary distance (IPD) |
| US20180152698A1 (en) * | 2016-11-29 | 2018-05-31 | Samsung Electronics Co., Ltd. | Method and apparatus for determining interpupillary distance (ipd) |
| US10506219B2 (en) * | 2016-11-29 | 2019-12-10 | Samsung Electronics Co., Ltd. | Method and apparatus for determining interpupillary distance (IPD) |
| US11707671B2 (en) * | 2017-03-07 | 2023-07-25 | Ahmed ABDELKARIM | Anamorphic display device |
| US20190187875A1 (en) * | 2017-12-15 | 2019-06-20 | International Business Machines Corporation | Remote control incorporating holographic displays |
| US11343634B2 (en) | 2018-04-24 | 2022-05-24 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus and method for rendering an audio signal for a playback to a user |
| CN113273184A (en) * | 2018-11-30 | 2021-08-17 | PCMS Holdings, Inc. | Method of mirroring 3D objects to a light field display |
| US20200241721A1 (en) * | 2019-01-16 | 2020-07-30 | Shenzhen Vistandard Digital Technology Co., Ltd. | Interactive display apparatus and method |
| US11029522B2 (en) | 2019-08-07 | 2021-06-08 | Samsung Electronics Co., Ltd. | Method and bendable device for constructing 3D data item |
| WO2021025241A1 (en) * | 2019-08-07 | 2021-02-11 | Samsung Electronics Co., Ltd. | Method and bendable device for constructing 3d data item |
| US11870928B2 (en) * | 2020-03-19 | 2024-01-09 | Samsung Electronics Co., Ltd. | Mounting apparatus for displaying screen of electronic apparatus through hologram |
| US11526251B2 (en) | 2021-04-13 | 2022-12-13 | Dapper Labs, Inc. | System and method for creating, managing, and displaying an interactive display for 3D digital collectibles |
| US11899902B2 (en) | 2021-04-13 | 2024-02-13 | Dapper Labs, Inc. | System and method for creating, managing, and displaying an interactive display for 3D digital collectibles |
| US11393162B1 (en) | 2021-04-13 | 2022-07-19 | Dapper Labs, Inc. | System and method for creating, managing, and displaying 3D digital collectibles |
| US11099709B1 (en) | 2021-04-13 | 2021-08-24 | Dapper Labs Inc. | System and method for creating, managing, and displaying an interactive display for 3D digital collectibles |
| US11922563B2 (en) | 2021-04-13 | 2024-03-05 | Dapper Labs, Inc. | System and method for creating, managing, and displaying 3D digital collectibles |
| US11210844B1 (en) | 2021-04-13 | 2021-12-28 | Dapper Labs Inc. | System and method for creating, managing, and displaying 3D digital collectibles |
| USD991271S1 (en) | 2021-04-30 | 2023-07-04 | Dapper Labs, Inc. | Display screen with an animated graphical user interface |
| US11227010B1 (en) | 2021-05-03 | 2022-01-18 | Dapper Labs Inc. | System and method for creating, managing, and displaying user owned collections of 3D digital collectibles |
| US11734346B2 (en) | 2021-05-03 | 2023-08-22 | Dapper Labs, Inc. | System and method for creating, managing, and displaying user owned collections of 3D digital collectibles |
| US11605208B2 (en) | 2021-05-04 | 2023-03-14 | Dapper Labs, Inc. | System and method for creating, managing, and displaying limited edition, serialized 3D digital collectibles with visual indicators of rarity classifications |
| US11170582B1 (en) | 2021-05-04 | 2021-11-09 | Dapper Labs Inc. | System and method for creating, managing, and displaying limited edition, serialized 3D digital collectibles with visual indicators of rarity classifications |
| US11792385B2 (en) | 2021-05-04 | 2023-10-17 | Dapper Labs, Inc. | System and method for creating, managing, and displaying 3D digital collectibles with overlay display elements and surrounding structure display elements |
| US20220360761A1 (en) * | 2021-05-04 | 2022-11-10 | Dapper Labs Inc. | System and method for creating, managing, and displaying 3d digital collectibles with overlay display elements and surrounding structure display elements |
| US11533467B2 (en) * | 2021-05-04 | 2022-12-20 | Dapper Labs, Inc. | System and method for creating, managing, and displaying 3D digital collectibles with overlay display elements and surrounding structure display elements |
| JP2024137998A (en) * | 2021-06-25 | 2024-10-07 | Kyocera Corporation | Wearable terminal device, program, and display method |
Also Published As
| Publication number | Publication date |
|---|---|
| CN102640502A (en) | 2012-08-15 |
| RU2012118591A (en) | 2013-11-20 |
| EP2489195A1 (en) | 2012-08-22 |
| RU2524834C2 (en) | 2014-08-10 |
| US8970478B2 (en) | 2015-03-03 |
| WO2011044936A1 (en) | 2011-04-21 |
| CN102640502B (en) | 2015-09-23 |
Similar Documents
| Publication | Title |
|---|---|
| US8970478B2 (en) | Autostereoscopic rendering and display apparatus |
| US9046962B2 (en) | Methods, systems, apparatuses, circuits and associated computer executable code for detecting motion, position and/or orientation of objects within a defined spatial region |
| EP2638461B1 (en) | Apparatus and method for user input for controlling displayed information |
| US11714540B2 (en) | Remote touch detection enabled by peripheral device |
| US9176628B2 (en) | Display with an optical sensor |
| US9423876B2 (en) | Omni-spatial gesture input |
| US9454260B2 (en) | System and method for enabling multi-display input |
| TWI559174B (en) | Gesture based manipulation of three-dimensional images |
| JP6404120B2 (en) | Full 3D interaction on mobile devices |
| US20130257736A1 (en) | Gesture sensing apparatus, electronic system having gesture input function, and gesture determining method |
| CN101995943B (en) | Stereo image interactive system |
| KR20100027976A (en) | Gesture and motion-based navigation and interaction with three-dimensional virtual content on a mobile device |
| WO2013082760A1 (en) | Method and system for responding to user's selection gesture of object displayed in three dimensions |
| US20120120029A1 (en) | Display to determine gestures |
| US20150033157A1 (en) | 3D displaying apparatus and the method thereof |
| KR20190027079A (en) | Electronic apparatus, method for controlling thereof and the computer readable recording medium |
| US9122346B2 (en) | Methods for input-output calibration and image rendering |
| JP6686319B2 (en) | Image projection device and image display system |
| EP3088991B1 (en) | Wearable device and method for enabling user interaction |
| WO2013105041A1 (en) | Methods, systems, apparatuses, circuits and associated computer executable code for detecting motion, position and/or orientation of objects within a defined spatial region |
| WO2011011024A1 (en) | Display with an optical sensor |
| US9274547B2 (en) | Display with an optical sensor |
| KR20160002620U (en) | Holography touch method and Projector touch method |
Legal Events
| Code | Title | Description |
|---|---|---|
| AS | Assignment | Owner name: NOKIA CORPORATION, FINLAND; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:JOHANSSON, PANU MARTEN JESPER;REEL/FRAME:028236/0741; Effective date: 20120411 |
| FEPP | Fee payment procedure | Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| AS | Assignment | Owner name: NOKIA TECHNOLOGIES OY, FINLAND; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NOKIA CORPORATION;REEL/FRAME:035512/0432; Effective date: 20150116 |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY; Year of fee payment: 4 |
| FEPP | Fee payment procedure | Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| LAPS | Lapse for failure to pay maintenance fees | Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| STCH | Information on status: patent discontinuation | Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
| FP | Lapsed due to failure to pay maintenance fee | Effective date: 20230303 |