GB2609473A - Three-dimensional display apparatus - Google Patents

Three-dimensional display apparatus

Info

Publication number
GB2609473A
GB2609473A GB2111235.4A GB202111235A
Authority
GB
United Kingdom
Prior art keywords
image
user
screen
input
display apparatus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB2111235.4A
Other versions
GB202111235D0 (en)
Inventor
Hamilton Keith
Green Jeremy
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
PUFFERFISH Ltd
Original Assignee
PUFFERFISH Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by PUFFERFISH Ltd filed Critical PUFFERFISH Ltd
Priority to GB2111235.4A priority Critical patent/GB2609473A/en
Publication of GB202111235D0 publication Critical patent/GB202111235D0/en
Priority to PCT/GB2022/052002 priority patent/WO2023012463A1/en
Publication of GB2609473A publication Critical patent/GB2609473A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0485 Scrolling or panning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048 Indexing scheme relating to G06F3/048
    • G06F2203/04806 Zoom, i.e. interaction techniques or interactors for controlling the zooming operation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048 Indexing scheme relating to G06F3/048
    • G06F2203/04808 Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen

Abstract

A three-dimensional display apparatus 1 includes a three-dimensional screen 2 for displaying at least one image thereon, a display driver device 4 operable to display at least one first image 6 on the screen, and a user-input sensor 10 configured to detect one or more user-input commands 12. When the user-input sensor detects one or more user-input commands 12a, the display driver device displays at least one second image 8 on the screen. The three-dimensional screen may be configured as a touch or multi-touch screen, and the user-input sensor may be a touch-screen sensor. The second image may be a portion of the first image, which may be modified in some way consistent with the detected user-input commands. The second image may be a combination of a modified portion of the first image and an unmodified portion of the first image.

Description

Three-Dimensional Display Apparatus
Field of the invention
The present invention relates to three-dimensional display apparatus and methods of use thereof.
Background to the invention
Three-dimensional display apparatus are useful for displaying three-dimensional images in order to portray three-dimensional objects. Three-dimensional display apparatus typically include a projector operable to project images onto a three-dimensional screen.
Known three-dimensional display apparatus are often used to display information at conferences or other gatherings. However, known three-dimensional display apparatus are not thought to be easy and intuitive for a user to interact with.
The inventors have appreciated the shortcomings in known three-dimensional display apparatus.
Statements of Invention
According to a first aspect of the present invention there is provided a three-dimensional display apparatus comprising: a three-dimensional screen for displaying at least one image thereon; a display driver device operable to display at least one first image on the screen; and a user-input sensor configured to detect one or more user-input commands; wherein the display driver device is configured to display at least one second image on the screen in response to one or more user-input commands being detected by the user-input sensor.
The at least one first image may comprise an image, a series of images, a video, a plurality of videos, a series of videos, an animation, a plurality of animations, and/or a series of animations, or the like. The at least one second image may comprise an image, a series of images, a video, a plurality of videos, a series of videos, an animation, a plurality of animations, and/or a series of animations, or the like.
The display driver device may be operable to display one or more first images, a plurality of first images, two or more first images, or any suitable number of first images. The display driver device may be operable to display one or more second images, a plurality of second images, two or more second images, or any suitable number of second images.
The three-dimensional screen may be configured as a touch screen, or multi-touch screen. The three-dimensional screen may be a touch screen or multi-touch screen. The user-input sensor may configure the three-dimensional screen as a touch screen or multi-touch screen.
The display driver device may be electrically coupled with the user input sensor, either directly or via any suitable electronic circuitry.
The user-input sensor may be a touch-screen sensor. The user-input sensor may be configured to detect one or more user-input commands provided to the screen, or provided adjacent to the screen, or provided in proximity to the screen. The user-input sensor may be configured to detect one or more touch events from the user's fingers provided to the screen, or towards the screen. The user-input sensor may be configured to detect one or more user-input commands provided in front of the screen. The user-input sensor may be configured to detect one or more user-input commands provided at a distance from the screen. The user-input sensor may be configured to detect any suitable user-input command.
The user-input sensor may be configured to detect a plurality of different types of user-input commands. The user-input sensor may be configured to detect a plurality of different types of user-input commands and to provide a signal to the display driver device indicative of the type of detected user-input command. In this arrangement, the user-input sensor can detect at least two different user-input commands. The user-input sensor may be configured to detect a plurality of user-input touch events.
The user-input sensor may be configured to detect a plurality of user-input gestures.
The one or more user-input commands may include at least one of the following commands: drag, translate, move, magnify, zoom in, zoom out, pinch, pinch to zoom, pinch to zoom in, pinch to zoom out, panning, tap, double tap, press and tap, a one-finger gesture, a two-finger-gesture, navigate, browse, open, close, and any suitable touch-screen command.
The display driver device may be configured to display the at least one second image in response to a first user-input command detected by the user-input sensor. The first user-input command may include at least one of the following commands: drag, translate, move, magnify, zoom in, zoom out, pinch, pinch to zoom, pinch to zoom in, pinch to zoom out, panning, tap, double tap, press and tap, a one-finger gesture, a two-finger-gesture, navigate, browse, open, close, and any suitable touch-screen command.
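As an editorial illustration only (no code appears in the patent itself), the mapping from detected touch-screen commands to display actions described above could be sketched as a simple dispatcher. The command strings mirror the list above, but the state keys, magnification steps and the assumed cap of 8x are all illustrative assumptions:

```python
def handle_command(command: str, state: dict) -> dict:
    """Update a display state for one detected user-input command.

    Hypothetical sketch: the state keys and magnification limits
    are assumptions, not taken from the patent.
    """
    if command in ("zoom in", "pinch to zoom in", "double tap"):
        # step the magnification up, capped at an assumed maximum of 8x
        state["magnification"] = min(state["magnification"] * 2, 8)
    elif command in ("zoom out", "pinch to zoom out"):
        # step the magnification down, floored at the unmodified view
        state["magnification"] = max(state["magnification"] // 2, 1)
    elif command in ("drag", "translate", "move", "panning"):
        # a movement gesture triggers display of the second image
        state["second_image_shown"] = True
    return state

state = {"magnification": 1, "second_image_shown": False}
state = handle_command("pinch to zoom in", state)
state = handle_command("drag", state)
```

In practice the display driver device would react to whatever signal the user-input sensor emits; the string-keyed dispatch here merely makes the first/second-image transition concrete.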
The three-dimensional display apparatus may comprise a plurality of user-input sensors, each user-input sensor being configured to detect one or more user-input commands.
The three-dimensional screen may comprise one or more display surfaces. The one or more display surfaces may be substantially three-dimensional surfaces. The one or more display surfaces may include one or more curved display surfaces, one or more arcuate display surfaces, one or more spherical display surfaces, one or more ellipsoidal display surfaces, one or more hemi-spherical display surfaces and/or one or more lenticular display surfaces.
The three-dimensional display apparatus may be a spherical, ellipsoidal, hemi-spherical or lenticular display apparatus.
The three-dimensional screen may be a spherical screen. The three-dimensional screen may be at least partially spherical. The three-dimensional screen may be ellipsoidal. The three-dimensional screen may be at least partially ellipsoidal. The three-dimensional screen may be hemi-spherical. The three-dimensional screen may be at least partially hemi-spherical. The three-dimensional screen may be lenticular. The three-dimensional screen may be at least partially lenticular. The three-dimensional screen may be at least partially ellipsoidal, at least partially spherical, at least partially hemi-spherical and/or at least partially lenticular.
The display driver device may be operable to project one or more images to the screen. The display driver device may be operable to project the at least one first image and/or the at least one second image to the screen. The display driver device may comprise a projector device operable to project one or more images to the screen. The display driver device may be operable to transmit light to the screen. The display driver device may be a spherical, ellipsoidal, hemi-spherical or lenticular projector device.
The display driver device may be an at least partially spherical, at least partially ellipsoidal, at least partially hemi-spherical or at least partially lenticular projector device. The display driver device may be a three-dimensional display driver device. The display driver device may be a three-dimensional projector device.
The display driver device may be spaced apart from the screen.
The display driver device may be operable to display visible light images on the screen.
The display driver device may be operable to project an image to substantially all of the screen. The display driver device may be operable to project an image to the majority of the screen.
The at least one second image may be identical in size to the at least one first image.
The at least one second image may include at least a portion of the at least one first image. The at least one second image may include at least 50% of the at least one first image, optionally at least 60% of the at least one first image, optionally at least 70% of the at least one first image, optionally at least 80% of the at least one first image, optionally at least 90% of the at least one first image.
The at least one second image may comprise a first part and a second part.
The first part of the second image may be an unmagnified view of a portion of the at least one first image. The first part of the second image may be an unmagnified view of a portion of the at least one second image.
The at least one second image may be a modified version of at least a portion of the at least one first image. The at least one second image may be a modified version of the at least one first image. The at least one second image may include a modified portion of the at least one first image and an unmodified portion of the at least one first image. The second part of the second image may be a modified portion of the at least one first image. The first part of the second image may be an unmodified portion of the at least one first image.
The at least one second image may include a portion having an increased or decreased magnification level relative to the at least one first image. The second part of the at least one second image may include a portion having an increased or decreased magnification level relative to the at least one first image.
The at least one second image may include one or more magnified views of at least a portion of the at least one first image. The second part of the at least one second image may include one or more magnified views of at least a portion of the at least one first image.
The at least one second image may include one or more reduced, or demagnified, views of at least a portion of the at least one first image. The second part of the at least one second image may include one or more reduced, or demagnified, views of at least a portion of the at least one first image.
The at least one second image may include one or more zoomed views, or zoomed-in views, or zoomed-out views of at least a portion of the at least one image. The second part of the at least one second image may include one or more zoomed views, or zoomed-in views, or zoomed-out views of at least a portion of the at least one image.
The at least one second image may include a zoomed-in view of at least a portion of the at least one first image and/or a zoomed-out view of at least a portion of the at least one first image.
The at least one second image may include an enlarged view of at least a portion of the at least one first image. The at least one second image may include an enlarged view of at least a portion of the at least one first image and/or a reduced view of at least a portion of the at least one first image. The second part of the at least one second image may include an enlarged view of at least a portion of the at least one first image and/or a reduced view of at least a portion of the at least one first image.
The at least one second image may include one or more unmodified portions of the at least one first image. The first part of the at least one second image may include one or more unmodified portions of the at least one first image. At least a portion of the at least one second image may be identical to at least a portion of the at least one first image. At least a portion of the first part of the at least one second image may be identical to at least a portion of the at least one first image. A portion of the at least one second image may be identical to a portion of the at least one first image.
The at least one second image may include an unmodified portion of the at least one first image and a modified portion of the at least one first image. The at least one second image may include an unmodified substantial portion of the at least one first image. The at least one second image may include an unmodified portion of at least 50% of the at least one first image, optionally at least 60% of the at least one first image, optionally at least 70% of the at least one first image, optionally at least 80% of the at least one first image, optionally at least 90% of the at least one first image.
The at least one first image may be a single image. The at least one first image may be a single continuous image. The at least one second image may be a single image. The at least one second image may be a single continuous image.
The display driver device may be configured to maintain the display of the at least one first image on the screen. The display driver device may be configured to maintain the display of the at least one first image on the screen in the absence of the detection of any user-input commands.
The display driver device may be configured to maintain the display of the at least one second image on the screen. The display driver device may be configured to maintain the display of the at least one second image on the screen in the absence of the detection of any user-input commands.
The display driver device may be configured to display one or more intermediate images between displaying the at least one first image and the at least one second image on the screen.
The, or each, intermediate image may comprise a modified portion of the preceding image. The, or each, intermediate image may comprise a magnified version of at least a portion of the preceding image.
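The intermediate images described above amount to stepping the degree of modification (here, magnification) between the first image and the second image. A minimal sketch, in which the linear interpolation and the step count are illustrative assumptions rather than anything specified by the patent:

```python
def intermediate_magnifications(start: float, end: float, steps: int) -> list:
    """Magnification levels for the intermediate images displayed
    between the first image (at `start`) and the second image (at `end`).

    Hypothetical sketch: linear spacing is an assumption; each value
    would drive one intermediate image, a magnified version of a
    portion of the preceding image.
    """
    return [start + (end - start) * i / (steps + 1) for i in range(1, steps + 1)]

# e.g. three intermediate frames between an unmodified 1x view and a 3x zoom
levels = intermediate_magnifications(1.0, 3.0, 3)
```

Displaying these frames in sequence gives a smooth visual transition rather than an abrupt jump from the first image to the second image.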
The display driver device may be configured to continuously display the at least one first image or the first part of the at least one second image. In this arrangement, either the first image or the first part of the at least one second image can be continuously displayed on the screen, while the second part of the at least one second image is selectively displayed.
The display driver device may be configured to continuously display the first image or the second image on the screen.
The display driver device may be configured to continuously display the at least one first image; or the first part and the second part of the at least one second image.
The display driver device may be operable to increase the size of the second part of the at least one second image relative to the first part of the at least one second image and/or to decrease the size of the second part of the at least one second image relative to the first part of the at least one second image.
The at least one second image may comprise one or more borders. The, or each, border may divide the first part and the second part of the at least one second image. The border may be a continuous border. The border may be a circular, or elliptical border. The border may surround the second part of the at least one second image.
The display driver device may be configured to modify the at least one second image displayed on the screen in response to one or more user-input commands being detected by the user-input sensor. The modified at least one second image may include modified first and/or second parts of the second image. The modified at least one second image may result in a different portion of the at least one first image being modified for display.
The display driver device may be operable between at least two pan positions. In the first pan position, the second part of the at least one second image may be a modified version of a first portion of the at least one first image. In the second pan position, the second part of the at least one second image may be a modified version of a second portion of the at least one first image.
In the first pan position, the second part of the at least one second image may be a magnified or demagnified version of the first portion of the at least one first image. In the second pan position, the second part of the at least one second image may be a magnified or demagnified version of the second portion of the at least one first image.
In the first pan position, the first part of the at least one second image may be identical to a portion of the at least one first image. In the second pan position, the first part of the at least one second image may be identical to a portion of the at least one first image.
The display driver device may be operable to permit continuous panning of the second part of the at least one second image relative to the first part of the at least one second image.
The display driver device may be configured to modify the at least one second image displayed on the screen in response to one or more second user-input commands being detected by the user-input sensor. The second user-input command may be different to the first user-input command. The second user-input command may be a pan command. The second user-input command may include at least one of the following commands: drag, translate, move, magnify, zoom in, zoom out, pinch, pinch to zoom, pinch to zoom in, pinch to zoom out, panning, tap, double tap, press and tap, a one-finger gesture, a two-finger-gesture, navigate, browse, open, close, and any suitable touch-screen command.
The display driver device may be configured to change the location of the second part of the at least one second image relative to the first part of the at least one second image in response to one or more user-input commands being detected by the user-input sensor.
The display driver device may be operable to move the second part of the at least one second image between at least two positions on the screen in response to one or more user-input commands being detected by the user-input sensor.
The display driver device may be operable to move the second part of the at least one second image between at least two positions on the screen at one or more speed settings.
The speed setting may be determined, at least in part, by the location of a touch event relative to a region or point of the second part of the at least one second image. The speed setting may be determined relative to a central region or central point of the second part of the at least one second image.
The speed setting may be determined, at least in part, by the degree of modification applied to the at least one first image. The speed setting may be determined, at least in part, by the degree of zoom or magnification used in the second part of the at least one second image.
The speed setting may be determined, at least in part, by the degree of zoom or magnification used in the second part of the at least one second image and by the location of a touch event relative to a region or point of the second part of the at least one second image.
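A speed setting that depends both on the magnification used in the second part and on the touch event's distance from its centre could be sketched as below. The normalised offset, the base speed and the inverse-magnification scaling (so that highly magnified content does not appear to race past) are all illustrative assumptions, not taken from the patent:

```python
def pan_speed(touch_offset: float, magnification: float,
              base_speed: float = 100.0) -> float:
    """Panning speed for the second part of the second image.

    Hypothetical sketch: `touch_offset` is the touch event's distance
    from the centre of the second part, normalised to [0, 1];
    dividing by magnification keeps the apparent motion rate of the
    magnified content comfortable at higher zoom levels.
    """
    return base_speed * touch_offset / magnification
```

A touch near the centre of the second part thus pans slowly, while a touch near its edge pans quickly, and both slow down as the zoom level rises.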
The display driver device may be operable to display the second part of the at least one second image on substantially any portion of the screen.
The display driver device may be operable to display the second part of the at least one second image at any location within the majority of the screen.
The user-input sensor may be configured to associate a user-input region of the screen with a user-input command. The user-input sensor may be configured to associate a user-input region of the screen with a position of the user-input command relative to the screen. The user-input sensor may be configured to associate a user-input region of the screen with a position of the user-input gesture relative to the screen. The user-input sensor may be configured to associate a user-input region of the screen with a location of the user-input command on the screen. The user-input sensor may be configured to associate a user-input region of the screen with a position of one or more touch events on the screen.
The location of the second part of the at least one second image on the screen may be determined based on the user-input region. The location of the second part of the at least one second image on the screen may be centred on the user-input region. The second part of the at least one second image may be larger than the user-input region. The location of a central portion of the second part of the at least one second image on the screen may be centred on the user-input region or may be offset from the user-input region.
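Centring the second part (the magnified "lens") on the user-input region, optionally offset from it, and making it larger than the region itself might look like the following sketch; the function name, the default scale factor and the coordinate convention are all hypothetical:

```python
def lens_geometry(region_centre: tuple, region_size: float,
                  scale: float = 3.0, offset: tuple = (0.0, 0.0)) -> tuple:
    """Centre and size of the second part of the second image.

    Hypothetical sketch: the lens is centred on the user-input region
    (optionally offset from it) and is `scale` times larger than the
    region, consistent with the description above.
    """
    cx, cy = region_centre
    centre = (cx + offset[0], cy + offset[1])
    return centre, region_size * scale

centre, size = lens_geometry((100.0, 200.0), 40.0)
```

On a spherical or ellipsoidal screen the coordinates would more naturally be angular (e.g. latitude/longitude on the display surface), but the centring logic is the same.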
The user-input sensor may be configured to select at least one user-input region from a plurality of possible user-input regions or user-input positions.
The three-dimensional display apparatus may be configured to allocate one user-input region as the selected region in the event of two or more user-input regions being selected at substantially the same time.
The display apparatus may be operable to modify the at least one second image in response to one or more user-input commands being detected by the user-input sensor when the at least one second image is displayed on the screen. The display driver device may be configured to modify the at least one second image on the screen in response to one or more third user-input commands being detected by the user-input sensor when the at least one second image is displayed on the screen. The third user-input command may be the same as the first user-input command or may be different to the first user-input command. The third user-input command may be different to the second user-input command or may be the same as the second user-input command. The third user-input command may be a zoom command. The third user-input command may include at least one of the following commands: drag, translate, move, magnify, zoom in, zoom out, pinch, pinch to zoom, pinch to zoom in, pinch to zoom out, panning, tap, double tap, press and tap, a one-finger gesture, a two-finger gesture, navigate, browse, open, close, and any suitable touch-screen command.
The display apparatus may be operable to continuously or continually modify the at least one second image in response to continuous or continual detection of user-input commands by the user-input sensor when the at least one second image is displayed on the screen. The user-input commands may be first user-input commands, second user-input commands and/or third user-input commands.
The display apparatus may be operable to modify the at least one second image between two or more magnification levels. The display apparatus may be operable to modify the second part of the at least one second image between two or more magnification levels. The display apparatus may be operable to modify the at least one second image between two or more magnification levels in response to the detection of one or more user-input commands. The user-input commands may be third user-input commands. The display apparatus may be operable to modify the second part of the at least one second image between two or more magnification levels in response to the detection of one or more user-input commands. The user-input commands may be third user-input commands.
The display apparatus may be operable to modify the at least one second image between three or more magnification levels, or four or more magnification levels. The display apparatus may be operable to modify the second part of the at least one second image between three or more magnification levels, or four or more magnification levels.
The display driver device may be operable to select at least one second image to display from a plurality of possible second images to display based on the location of the user-input region or user input position.
The modification of the at least one second image may be carried out by selecting from stored images for display.
The display driver device may be operable to centre the second part of the at least one second image at any one or more of a plurality of possible user-input regions. The plurality of user-input regions may be defined at adjacent regions of the screen.
The user-input regions may be defined as an array on the screen. The user-input regions may be defined as a grid on the screen.
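Defining the user-input regions as an adjacent grid covering the whole screen can be sketched as follows; the screen resolution and grid dimensions used in the example are illustrative assumptions:

```python
def make_grid_regions(width: float, height: float,
                      cols: int, rows: int) -> dict:
    """Divide the screen into a grid of adjacent user-input regions.

    Hypothetical sketch: each region is keyed by its (column, row)
    index and stored as (x, y, width, height); together the regions
    cover substantially all of the screen, and every region is
    adjacent to its neighbours.
    """
    cell_w, cell_h = width / cols, height / rows
    return {(c, r): (c * cell_w, r * cell_h, cell_w, cell_h)
            for c in range(cols) for r in range(rows)}

# e.g. an 8 x 4 grid over an assumed 1920 x 1080 mapping of the screen
regions = make_grid_regions(1920, 1080, 8, 4)
```

The user-input sensor would then associate a touch event with the region containing it, and the display driver device could centre the second part of the second image on that region.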
The user-input sensor may be configured to select one or more user-input regions from a plurality of user-input regions. The user-input sensor may be configured to select one or more active user-input regions from a plurality of user-input regions. The possible user-input regions may be defined to cover substantially all of the screen. The possible user-input regions may be defined to cover substantially all of the screen and each of the possible user-input regions may be adjacent to at least one other possible user-input region, or adjacent to at least two other possible user-input regions, or adjacent to at least three other user-input regions, or adjacent to at least four other user-input regions.
The display driver device may be operable to centre the second part of the at least one second image at substantially any position of the screen. The display driver device may be operable to centre the second part of the at least one second image at substantially any region of the screen.
The display driver device may be operable to centre the second part of the at least one second image at substantially any user-input region of the screen.
The, or each, user-input region may be associated with a region of the at least one first image. At least one portion of the first image may be associated with a user-input region, which may be adjacent to at least one further portion of the first image, the further portion being associated with a user-input region.
The possible user-input regions may be defined to cover substantially all of the at least one first image when displayed on the screen. The possible user-input regions may be defined to cover substantially all of the at least one first image when displayed on the screen and each user-input region may be adjacent to at least one other user-input region, or adjacent to at least two other user-input regions, or adjacent to at least three other user-input regions, or adjacent to at least four other user-input regions.
The possible user-input regions may be defined to cover substantially all of the at least one second image when displayed on the screen. The possible user-input regions may be defined to cover substantially all of the at least one second image when displayed on the screen and each user-input region may be adjacent to at least one other user-input region, or adjacent to at least two other user-input regions, or adjacent to at least three other user-input regions, or adjacent to at least four other user-input regions.
The at least one first image and/or the at least one second image may be derived from bitmap images. The three-dimensional display apparatus may be configured to store one or more stored images. The display driver device may be operable to display images on the screen based, at least in part, on the one or more stored images.
The at least one first image may be based on, or derived from, the one or more stored images. The at least one second image may be based on, or derived from, the one or more stored images.
The display driver device may be configured to refresh, update, or modify the at least one first image and/or the at least one second image. The display driver device may be operable to display at least one video formed from one or more first images and/or one or more second images, or modified versions thereof. The three-dimensional display apparatus may be configurable to display the at least one first image as a home image, or default image.
The at least one first image may be an image of the Earth, or part of an image of the Earth, any other planet, any spherical or ellipsoidal object, or partially spherical or partially ellipsoidal object. The at least one second image may include a magnified view of a part of the Earth.
The at least one first image and/or the at least one second image may be created from an initial image using a mesh. The mesh may be a circular or elliptical mesh. This may be used to convert a square or rectangular image to a circular or elliptical image for display on the three-dimensional screen.
The display driver device may be configured to switch from the display of the at least one second image to display the at least one first image on the screen. The switch from the at least one second image to the at least one first image may be carried out in response to one or more reset user-input commands being detected by the user-input sensor. The switch from the at least one second image to the at least one first image may be carried out in response to a time-out event in which no user-input is detected. The switch from the at least one second image to the at least one first image may be carried out in response to one or more reset user-input commands being detected by the user-input sensor and/or in response to a time-out event in which no user-input is detected. The reset user-input command may be the same, or different to the first user-input command. The reset user-input command may be the same, or different to the second-user input command. The reset user-input command may be the same, or different to the third user-input command. The reset user-input command may include at least one of the following commands: drag, translate, move, magnify, zoom in, zoom out, pinch, pinch to zoom, pinch to zoom in, pinch to zoom out, panning, tap, double tap, press and tap, a one-finger gesture, a two-finger-gesture, navigate, browse, open, close, and any suitable touch-screen command.
The display driver device may be configured to stop the display of the at least one second image in response to the one or more reset user-input commands being detected and to display a new at least one second image on the screen. In this arrangement, the reset user-input command may be a command that causes an at least one second image to be displayed on the screen.
The display driver device may be configured to move the at least one first image relative to the screen. The display driver device may be configured to at least partially rotate the at least one first image around the screen.
The display driver device may be configured to move or rotate the at least one first image relative to the screen in response to a fourth user-input command detected by the user-input sensor. The fourth user-input command may include at least one of the following commands: drag, translate, move, magnify, zoom in, zoom out, pinch, pinch to zoom, pinch to zoom in, pinch to zoom out, panning, tap, double tap, press and tap, a one-finger gesture, a two-finger-gesture, navigate, browse, open, close, and any suitable touch-screen command.
The display driver device may be configured to move the at least one second image relative to the screen. The display driver device may be configured to at least partially rotate the at least one second image around the screen. The display driver device may be configured to move or rotate the at least one second image relative to the screen in response to a fourth user-input command detected by the user-input sensor. The fourth user-input command may include at least one of the following commands: drag, translate, move, magnify, zoom in, zoom out, pinch, pinch to zoom, pinch to zoom in, pinch to zoom out, panning, tap, double tap, press and tap, a one-finger gesture, a two-finger-gesture, navigate, browse, open, close, and any suitable touch-screen command.
According to a second aspect of the invention, there is provided a display driver device operable to display at least one first image on a three-dimensional screen; and wherein the display driver device is operable to display at least one second image on the screen in response to one or more user-input commands being detected by a user-input sensor.
Embodiments of the second aspect of the present invention may include one or more features of the first aspect of the present invention or its embodiments. Similarly, embodiments of the first aspect of the present invention may include one or more features of the second aspect of the present invention or its embodiments.
According to a third aspect of the present invention there is provided a method of using a three-dimensional display apparatus, the method comprising the steps of: providing a three-dimensional display apparatus comprising: a three-dimensional screen for displaying at least one image thereon; a display driver device operable to display at least one first image on the screen; and a user-input sensor configured to detect one or more user-input commands; wherein the display driver device is configured to display at least one second image on the screen in response to one or more user-input commands being detected by the user-input sensor; using the display driver to display at least one first image on the screen; and using the display driver to display at least one second image on the screen in response to one or more user-input commands being detected by the user-input sensor.
Embodiments of the third aspect of the present invention may include one or more features of the first and/or second aspects of the present invention or their embodiments. Similarly, embodiments of the first and/or second aspects of the present invention may include one or more features of the third aspect of the present invention or its embodiments.
According to a fourth aspect of the present invention there is provided a kit of parts for assembling a three-dimensional display apparatus, the kit of parts comprising: a three-dimensional screen for displaying at least one image thereon; a display driver device operable to display at least one first image on the screen; and a user-input sensor configured to detect one or more user-input commands; wherein the display driver device is configured to display at least one second image on the screen in response to one or more user-input commands being detected by the user-input sensor.
Embodiments of the fourth aspect of the present invention may include one or more features of the first, second and/or third aspects of the present invention or their embodiments. Similarly, embodiments of the first, second, and/or third aspects of the present invention may include one or more features of the fourth aspect of the present invention or its embodiments.
Brief description of the drawings
Embodiments of the invention will now be described, by way of example, with reference to the drawings, in which: Fig. 1 shows a three-dimensional display apparatus in accordance with an embodiment of the invention; Fig. 2 shows the three-dimensional display apparatus of Fig. 1, in which a user-input command is applied while the display shows a first image; Fig. 3 shows the three-dimensional display apparatus of Fig. 2 after the user input-command has been applied, with a second image now shown on the screen; Fig. 4 shows the display apparatus of Fig. 1, showing a second image displayed on the screen; Fig. 5 shows the display apparatus of Fig. 1, in which a third user-input command is applied to the screen; Fig. 6 shows the display apparatus of Fig. 1, with a second user-input command being used to pan the second image; and Fig. 7 shows the display apparatus of Fig. 1 after a pan command has been executed.
Description of preferred embodiments
With reference to Figs. 1 to 7 a three-dimensional display apparatus 1 comprising a three-dimensional screen 2 for displaying at least one image thereon is shown. In the embodiments illustrated and described here, the three-dimensional display apparatus 1 is a spherical display apparatus, which is advantageous for depicting the Earth, as shown in the accompanying figures.
The apparatus 1 includes a display driver device 4 operable to display at least one first image 6 (shown in Fig. 1 and Fig. 2) on the screen 2, and a touch-screen sensor 10 (an example of a user-input sensor) configured to detect one or more user-input commands 12. The display driver device 4 is configured to display at least one second image 8 (Figs. 3 to 7) on the screen 2 in response to one or more first user-input commands 12a (Fig. 3) being detected by the user-input sensor 10.
In this embodiment, the three-dimensional screen 2 is configured as a multi-touch screen, with the touch-screen sensor 10 being able to detect multiple types of input-command.
The display driver device 4 is electrically coupled with the touch-screen sensor 10, either directly or via any suitable electronic circuitry.
In the embodiments illustrated and described here, the touch-screen sensor 10 is configured to detect one or more touch events from the user's fingers provided to the screen 2, or towards the screen 2, and the sensor 10 can detect a plurality of different types of user-input commands 12.
The touch-screen sensor 10 provides a signal to the display driver device 4 indicative of the type of detected user-input command 12, which is used by the apparatus 1 to determine which image to display on the screen 2. In this embodiment, the touch-screen sensor can detect at least two different user-input commands 12, which are user-input touch events or gestures.
The one or more user-input commands 12 can include at least one of the following commands: drag, translate, move, magnify, zoom in, zoom out, pinch, pinch to zoom, pinch to zoom in, pinch to zoom out, panning, tap, double tap, press and tap, a one-finger gesture, a two-finger-gesture, navigate, browse, open, close, and any suitable touch-screen command.
As shown in Fig. 3, the display driver device 4 is configured to display the at least one second image 8 in response to a first user-input command 12a detected by the touch-screen sensor 10. In this embodiment, the first user-input command 12a is a pinch to zoom command. However, the first user-input command 12a could include at least one of the following commands: drag, translate, move, magnify, zoom in, zoom out, pinch, pinch to zoom, pinch to zoom in, pinch to zoom out, panning, tap, double tap, press and tap, a one-finger gesture, a two-finger-gesture, navigate, browse, open, close, and any suitable touch-screen command.
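The relationship between the first, second, third and reset user-input commands described above can be summarised as a simple state transition. The following is a minimal illustrative sketch only, not part of the claimed apparatus; the state and command names are assumptions introduced here for clarity.

```python
def next_display_state(state, command):
    """Sketch of the display driver's response to detected gestures.

    States: "FIRST_IMAGE" (home image shown), "SECOND_IMAGE" (modified
    image with magnified second part shown). Command names are
    illustrative labels for the gestures described in the text.
    """
    if state == "FIRST_IMAGE" and command == "pinch_to_zoom":
        return "SECOND_IMAGE"   # first user-input command 12a
    if state == "SECOND_IMAGE" and command == "pan":
        return "SECOND_IMAGE"   # second command 12b modifies the second image
    if state == "SECOND_IMAGE" and command == "pinch_to_zoom":
        return "SECOND_IMAGE"   # third command 12c changes magnification
    if command in ("reset", "timeout"):
        return "FIRST_IMAGE"    # reset command or time-out returns home
    return state
```

In use, the driver would apply this transition each time the touch-screen sensor 10 reports a gesture, then redraw the screen accordingly.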
As will be described in more detail below, the touch-screen sensor 10 detects user-input commands 12 at substantially any point on the screen 2. In other embodiments, the three-dimensional display apparatus 1 could comprise a plurality of user-input sensors 10, each user input sensor 10 being configured to detect one or more user-input commands 12. For example, it might be desirable to include different types of sensor 10, such as for detecting touch-events and for detecting gestures in front of the screen 2.
The three-dimensional screen 2 comprises a single display surface 2a, and the display surface 2a is a substantially three-dimensional surface.
The display surface 2a is a curved display surface. In this embodiment, the display surface 2a is a spherical display surface. However, it should be understood that the display surface 2a could be ellipsoidal, hemispherical, or lenticular.
The display driver device 4 is a spherical projector device operable to project one or more images to the screen 2. The first image 6 and the second image 8 are projected onto the screen 2 from the display driver device 4. In other embodiments, the display driver device 4 may electronically drive the screen as is known in the field of electronic displays, as an alternative to a projector implementation of the invention. When a projector is used, as is the case here, the display driver device 4 is spaced apart from the screen 2.
The screen 2 is not a complete sphere, as there is an opening in the base of the screen 2 for the display driver device to project images to the screen 2, although the screen 2 represents a sufficient portion of a sphere to represent a large area of interest of the Earth, when used for that purpose.
The display driver device 4 displays visible light images on the screen 2. It will be understood that in other embodiments, other wavelengths of light may be used.
The display driver device 4 is operable to project an image to substantially all of the screen 2, although in some embodiments projecting images to the majority of the screen 2 may be sufficient.
Turning now to the images displayed on the screen 2, the at least one second image 8 is substantially identical in size to the at least one first image 6. As will be described in more detail below, the invention is particularly well suited to displaying magnified portions of the first image 6. When a spherical screen 2 is used, it can be disorientating for the user to magnify the entirety of the first image 6, or to modify the entirety of the first image 6 (e.g. if the first image 6 were to be panned in a particular direction). It is advantageous to maintain a portion of the first image 6 as essentially unmodified, and to focus any modification on a portion of the first image 6. The second image 8 is therefore typically made up of a small, magnified portion of the first image 6, and a large portion that is identical to the original first image 6. This is particularly advantageous when zooming in on or magnifying a part of the Earth, as the user can focus on any area of interest without magnifying or modifying the entire surface of the Earth (as viewed by the user). Likewise, as will be described in more detail below, the user can navigate and browse by panning the small area of detail in the second image in a much more user-friendly manner than if the entire first image 6 were to be modified each time. Some of the aspects of the image manipulation and display will be described in more detail below.
The second image 8 includes a portion of the first image 6. Depending on the input commands 12 used by the user, the second image 8 includes at least 50% of the first image 6, optionally at least 60% of the first image 6, optionally at least 70% of the first image 6, optionally at least 80% of the first image 6, optionally at least 90% of the first image 6.
The second image 8 comprises a first part 8a and a second part 8b.
The second image 8 is a modified version of at least a portion of the at least one first image 6. The second part 8b of the second image 8 is a modified portion of the at least one first image 6. The first part 8a of the second image 8 may be an unmodified portion of the first image 6.
The second part 8b of the second image 8 includes a portion having an increased or decreased magnification level relative to the first image 6. In this embodiment, increased magnification is shown, but in other embodiments demagnification, or reduction, could occur.
The first part 8a of the second image 8 is an unmagnified view of the second part 8b of the at least one second image 8.
The second image 8 includes an enlarged view of a portion of the first image 6. In other embodiments, the second image 8 could include a reduced view of at least a portion of the first image 6.
In the embodiments illustrated and described here, at least a portion of the first part 8a of the second image 8 is identical to a portion of the first image 6.
The second image 8 includes an unmodified substantial portion of the first image 6.
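One way the second image 8 could be composed from the first image 6 — an unmodified base plus a circular magnified region — is sketched below. This is an illustrative sketch only, not the patented implementation; the inverse-mapping approach, the integer truncation, and the array representation are all assumptions.

```python
import numpy as np

def compose_second_image(first_image, centre, radius, zoom):
    """Compose a second image: the base image (first part) with a
    circular, magnified region (second part) centred at `centre`.

    first_image: H x W (or H x W x 3) array; centre: (row, col);
    zoom > 1 magnifies the region inside `radius`.
    """
    h, w = first_image.shape[:2]
    second = first_image.copy()  # first part: unmodified base image
    rows, cols = np.indices((h, w))
    mask = (rows - centre[0]) ** 2 + (cols - centre[1]) ** 2 <= radius ** 2
    # Inverse mapping: each pixel inside the circle samples the base
    # image at a point closer to the centre, producing magnification.
    src_r = np.clip((centre[0] + (rows - centre[0]) / zoom).astype(int), 0, h - 1)
    src_c = np.clip((centre[1] + (cols - centre[1]) / zoom).astype(int), 0, w - 1)
    second[mask] = first_image[src_r[mask], src_c[mask]]
    return second
```

Pixels outside the circular border are left untouched, so the first part of the second image remains identical to the first image, as described above.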
The at least one first image 6 is a single continuous image and the at least one second image 8 is a single continuous image. In other embodiments, the first image 6 and/or the second image 8 could comprise a plurality of images.
The display driver device 4 is configured to maintain the display of the first image 6 on the screen 2 (when displayed) in the absence of the detection of any user-input commands 12. The display driver device 4 is configured to maintain the display of the second image 8 on the screen 2 in the absence of the detection of any user-input commands 12. The display driver device 4 is also configured to reset to the display of the at least one first image 6 on the screen 2 if no user-input is detected after a period of time. The period of time can be set according to design preferences.
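The time-out reset behaviour described above could be tracked with a simple inactivity timer, sketched below. This is an assumption for illustration only; the timeout value and the class interface are not taken from the patent.

```python
import time

class ResetTimer:
    """Reverts the display to the first (home) image after `timeout`
    seconds with no user input. The default value is illustrative;
    the text states the period can be set to design preferences."""

    def __init__(self, timeout=60.0):
        self.timeout = timeout
        self.last_input = time.monotonic()

    def on_user_input(self):
        # Called whenever the touch-screen sensor detects a command.
        self.last_input = time.monotonic()

    def should_reset(self):
        # Polled by the display driver; True triggers the home image.
        return time.monotonic() - self.last_input >= self.timeout
```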
The display driver device 4 is configured to continuously display the first image 6 or the first part 8a of the at least one second image 8. In this arrangement, either the first image 6 or the first part 8a of the second image 8 is continuously displayed on the screen 2, while the second part 8b of the second image 8 is selectively displayed.
The display driver device 4 is configured to continuously display the first image 6 or the second image 8 on the screen 2.
The display driver device 4 is operable to increase the size of the second part 8b of the second image 8 relative to the first part 8a of the second image 8 or to decrease the size of the second part 8b of the second image 8 relative to the first part 8a of the second image 8.

The second image comprises a continuous, circular border 14, which divides the first part 8a and the second part 8b of the second image 8. The border 14 surrounds the second part 8b of the second image 8. In this embodiment, the border 14 improves the clarity of the second image 8, as the user can easily see which area has been magnified.
The display driver device 4 is configured to modify the second image 8 displayed on the screen 2 in response to one or more second user-input commands 12 being detected by the touch-screen sensor 10. Panning the second part 8b of the second image 8 around the screen 2 to magnify different portions of the first image 6 is an example of such modification of the second image 8. The modified second image 8 includes modified first 8a and second parts 8b of the second image 8. The modified second image 8 results in a different portion of the first image 6 being modified for display.
The display driver device 4 is operable between at least two pan positions. In the first pan position (e.g. Fig. 6), the second part 8b of the second image 8 is a modified version of a first portion of the first image 6. In the second pan position (e.g. Fig. 7), the second part 8b of the second image 8 is a modified version of a second portion of the first image 6.
In the first pan position, the second part 8b of the second image 8 is a magnified version of the first portion of the first image 6. In the second pan position, the second part 8b of the second image 8 is a magnified version of the second portion of the first image 6.
In the first pan position, the first part 8a of the second image 8 is identical to a portion of the first image 6. In the second pan position, the first part 8a of the second image 8 is identical to a portion of the first image 6.
The display driver device 4 is operable to permit continuous panning of the second part 8b of the second image 8 relative to the first part 8a of the second image 8.
The display driver device 4 is configured to modify the second image 8 displayed on the screen 2 in response to one or more second user-input commands 12b being detected by the touch-screen sensor 10. The second user-input command 12b is a pan command, such as pressing a single finger onto the screen 2 and dragging the finger along the screen 2. The second user-input command 12b could include at least one of the following commands: drag, translate, move, magnify, zoom in, zoom out, pinch, pinch to zoom, pinch to zoom in, pinch to zoom out, panning, tap, double tap, press and tap, a one-finger gesture, a two-finger-gesture, navigate, browse, open, close, and any suitable touch-screen command.
The second user-input command 12b is different to the first user-input command 12a.
The display driver device 4 is configured to change the location of the second part 8b of the second image 8 relative to the first part 8a of the second image 8 in response to one or more second user-input commands 12b being detected by the touch-screen sensor 10.
The display driver device 4 is operable to move the second part 8b of the second image 8 between at least two positions on the screen 2 in response to one or more second user-input commands 12b being detected by the touch-screen sensor 10.
The display driver device 4 is operable to move the second part 8b of the second image 8 between at least two positions on the screen 2 at one or more speed settings. The speed setting is determined, at least in part, by the location of a touch event 16 relative to a central region 8c of the second part 8b of the second image 8. In this embodiment, the pan speed increases as the user presses the screen 2 further from the central region 8c. Furthermore, the speed setting is also determined, at least in part, by the degree of modification applied to the at least one first image 6, and in particular, the degree of magnification applied. For higher magnification, a slower speed setting is typically employed, and vice versa. It will be appreciated that the speed setting can be preconfigured depending on design requirements.

The display driver device 4 is operable to display the second part 8b of the second image 8 on substantially any portion of the screen 2. The user is therefore free to magnify any portion of the first image 6, and can pan to substantially any area on the screen 2. The user is not limited to predefined zones of the screen 2 (e.g. if only certain cities on a map were able to be the focus of magnification). This is advantageous in some applications, such as in viewing a live flight tracker of active aircraft, as the user can magnify any point of the Earth to inspect the flights occurring at that region. Likewise, if a user wishes to inspect a particular geographic feature, such as a river or mountain range, the user can magnify any point of that feature to create a second image 8, and can pan the second part 8b thereof to trace the feature, or can pan to another feature in close proximity. Further discussion of this functionality is provided below.
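The two factors governing pan speed described above — distance of the touch event from the central region, and the current magnification — could be combined as sketched below. The formula and constants are assumptions for illustration, not the claimed method.

```python
import math

def pan_speed(touch, centre, base_speed=1.0, zoom=1.0):
    """Illustrative pan-speed rule: speed grows with the distance of
    the touch event from the centre of the magnified region, and is
    scaled down at higher magnification (slower speed when zoomed in).

    touch, centre: (x, y) positions on the screen surface.
    """
    distance = math.hypot(touch[0] - centre[0], touch[1] - centre[1])
    return base_speed * distance / zoom
```

A touch far from the central region at low magnification pans quickly; the same touch at high magnification pans proportionally more slowly, matching the behaviour described in the text.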
The touch-screen sensor 10 is configured to associate a user-input region 18 of the screen 2 with a user-input command 12 provided to the screen 2. The location of the second part 8b of the second image 8 on the screen 2 is determined based on the user-input region 18. The location of the central region 8c of the second part 8b of the second image 8 on the screen 2 is centred on the user-input region 18, although it will be understood that it is not necessary for the centring to be exact. The second part 8b of the second image 8 is larger than the user-input region 18.

The touch-screen sensor 10 is configured to associate at least one user-input region 18 from a plurality of possible user-input regions with the positioning of the second part 8b of the second image 8. The three-dimensional display apparatus 1 is configured to allocate a user-input region 18 as a selected region in the event of two or more user-input regions being selected at substantially the same time.
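Mapping a touch position to one region of a grid covering the screen, and allocating a single region when two are selected at once, could be done as below. This is a hypothetical sketch: the grid dimensions and the first-detected tie-break rule are assumptions introduced here.

```python
def touch_to_region(x, y, screen_w, screen_h, n_cols, n_rows):
    """Map a touch position to a user-input region index in a grid
    of n_cols x n_rows regions covering substantially all of the
    screen surface (grid dimensions are illustrative)."""
    col = min(int(x * n_cols / screen_w), n_cols - 1)
    row = min(int(y * n_rows / screen_h), n_rows - 1)
    return row * n_cols + col

def select_region(simultaneous_regions):
    """If two or more regions are selected at substantially the same
    time, allocate one of them (here, simply the first detected)."""
    return simultaneous_regions[0]
```

The second part of the second image would then be centred on the selected region, per the text above.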
The three-dimensional display apparatus 1 is operable to modify the second image 8 in response to one or more third user-input commands 12c being detected by the touch-screen sensor 10 when the second image 8 is displayed on the screen 2. In this embodiment, the third user-input command 12c is the same as the first user-input command 12a, which is a pinch to zoom command. However, the third user-input command 12c could include at least one of the following commands: drag, translate, move, magnify, zoom in, zoom out, pinch, pinch to zoom, pinch to zoom in, pinch to zoom out, panning, tap, double tap, press and tap, a one-finger gesture, a two-finger-gesture, navigate, browse, open, close, and any suitable touch-screen command.
The display apparatus 1 is operable to continuously modify the second image 8 in response to continuous detection of user-input commands 12 by the touch-screen sensor 10 when the second image 8 is displayed on the screen 2. The user-input commands 12 are any of the first, second or third user-input commands 12a, 12b, 12c.
In the embodiments illustrated and described here, the display apparatus 1 is operable to modify the second image 8 between two or more magnification levels. The display apparatus 1 is operable to modify the second part 8b of the second image 8 between two or more magnification levels. The display apparatus 1 is operable to modify the second image 8 between two or more magnification levels in response to the detection of one or more third user-input commands 12c.
In the embodiments illustrated and described here, the display apparatus 1 is operable to modify the second image 8 between three or more magnification levels, or four or more magnification levels, or any suitable number of magnification levels.
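Stepping between a fixed set of magnification levels in response to third user-input commands could be tracked as sketched below. The particular level values are assumptions for illustration; the text only requires two or more levels.

```python
class MagnificationControl:
    """Steps the second part of the second image between a fixed set
    of magnification levels (the specific values are illustrative)."""
    LEVELS = [1.0, 2.0, 4.0, 8.0]

    def __init__(self):
        self.index = 0  # start at the unmagnified level

    def zoom_in(self):
        # Third user-input command (e.g. pinch to zoom in).
        self.index = min(self.index + 1, len(self.LEVELS) - 1)
        return self.LEVELS[self.index]

    def zoom_out(self):
        # Clamped at the lowest level rather than wrapping.
        self.index = max(self.index - 1, 0)
        return self.LEVELS[self.index]
```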
The display driver device 4 is operable to select a second image 8 to display from a plurality of possible second images 8 to display based on the location of the user-input region 18.
The modification of the second image 8 is carried out by selecting from stored images for display. In other embodiments, the images could be derived or created as required.
The display driver device 4 is operable to centre the second part 8b of the second image 8 at any one or more of a plurality of possible user-input regions 18. The plurality of user-input regions 18 are defined at adjacent regions of the screen 2, such that any region of the screen 2 can be selected by the user.
The user-input regions 18 are defined as an array on the screen 2. The user-input regions 18 are defined as a grid on the screen 2.
The possible user-input regions 18 are defined to cover substantially all of the screen 2 and each user-input region 18 may be adjacent to at least one other user-input region 18, or adjacent to at least two other user-input regions 18, or adjacent to at least three other user-input regions 18, or adjacent to at least four other user-input regions 18.
The display driver device 4 is operable to centre the second part 8b of the second image 8 at substantially any position of the screen 2. The display driver device 4 is operable to centre the second part 8b of the second image 8 at substantially any region of the screen 2.
The display driver device 4 is operable to centre the second part 8b of the second image 8 at substantially any user-input region 18 of the screen 2.
The, or each, user-input region 18 is associated with a region of the first image 6. At least one portion of the first image 6 may be associated with a user-input region 18, which is adjacent to at least one further portion of the first image 6, the further portion being associated with another user-input region 18.
The possible user-input regions 18 are defined to cover substantially all of the first image 6 when the first image 6 is displayed on the screen 2. The possible user-input regions 18 are defined to cover substantially all of the first image 6 when displayed on the screen 2 and each user-input region 18 is adjacent to at least one other user-input region 18, or adjacent to at least two other user-input regions 18, or adjacent to at least three other user-input regions 18, or adjacent to at least four other user-input regions 18.
The possible user-input regions 18 are defined to cover substantially all of the second image 8 when displayed on the screen 2. The possible user-input regions 18 are defined to cover substantially all of the second image 8 when displayed on the screen 2 and each user-input region 18 is adjacent to at least one other user-input region 18, or adjacent to at least two other user-input regions 18, or adjacent to at least three other user-input regions 18, or adjacent to at least four other user-input regions 18.
The first image 6 and the second image 8 (including any modifications made thereto) are derived from bitmap images. The three-dimensional display apparatus 1 is configured to store one or more stored images and the display driver device 4 is operable to display images on the screen 2 based, at least in part, on the one or more stored images.
The display driver device 4 is configured to refresh, update, or modify the first image 6 and the at least one second image 8. The display driver device 4 is operable to display at least one video formed from one or more first images 6 and/or one or more second images 8, or modified versions thereof.
The three-dimensional display apparatus 1 is configured to display the first image 6 as a home image, or default image.
The first image 6 and the second image 8 are created from an initial image using a circular mesh technique, used to convert a square or rectangular image to a circular image for display on the three-dimensional screen 2.
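One established way to remap a square source image onto a circular domain, as the circular mesh technique above requires, is an elliptical grid mapping. The choice of this particular mapping is an assumption; the patent does not specify which circular mesh is used.

```python
import math

def square_to_disc(u, v):
    """Map normalised square coordinates (u, v each in [-1, 1]) to a
    point on the unit disc, one candidate remapping for converting a
    rectangular source image for display on a spherical screen.

    Uses the elliptical grid mapping; corners of the square land
    exactly on the disc boundary.
    """
    x = u * math.sqrt(1.0 - 0.5 * v * v)
    y = v * math.sqrt(1.0 - 0.5 * u * u)
    return x, y
```

A mesh built by sampling this mapping over a regular grid of (u, v) values would give the circular mesh described above; an elliptical screen would scale the two axes independently.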
The display driver device 4 is configured to switch from the display of the second image 8 to display a first image 6 on the screen 2. The switch from the second image 8 back to the first image 6 is carried out in response to one or more reset user-input commands being detected by the touch-screen sensor 10. The reset user-input command could be any one or all of the following: a pinch to zoom out command; a close command, such as a close button on the edge of the second part 8b of the second image 8; a time-out event in which no user-input is detected; and the user entering a first user-input command 12a on the screen 2, which results in a new second image 8 being displayed, and the previous second image 8 being closed. The reset user-input command may include at least one of the following commands: drag, translate, move, magnify, zoom in, zoom out, pinch, pinch to zoom, pinch to zoom in, pinch to zoom out, panning, tap, double tap, press and tap, a one-finger gesture, a two-finger-gesture, navigate, browse, open, close, and any suitable touch-screen command.
The display driver device 4 is configured to move the first image 6 relative to the screen 2. The display driver device 4 is configured to at least partially rotate the first image 6 around the screen 2, which allows the user to "spin" the earth around the screen to locate a point of interest. The display driver device 4 is configured to move or rotate the first image 6 relative to the screen 2 in response to a fourth user-input command detected by the touch-screen sensor 10. The fourth user-input command is a pan command. The fourth user-input command may include at least one of the following commands: drag, translate, move, magnify, zoom in, zoom out, pinch, pinch to zoom, pinch to zoom in, pinch to zoom out, panning, tap, double tap, press and tap, a one-finger gesture, a two-finger gesture, navigate, browse, open, close, and any suitable touch-screen command.
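One plausible way to let the user "spin" the earth with a pan gesture is to map the horizontal drag distance onto a rotation angle. The scaling convention below (a drag across the full screen circumference spins the globe once) is an assumption for illustration, not stated in the source:

```python
def drag_to_rotation(dx_pixels, screen_circumference_px, current_angle_deg):
    """Convert a horizontal drag distance into a new rotation angle.

    Assumed convention: dragging a distance equal to the screen's
    circumference rotates the displayed image through a full 360 degrees.
    """
    delta_deg = 360.0 * dx_pixels / screen_circumference_px
    return (current_angle_deg + delta_deg) % 360.0  # wrap into [0, 360)
```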
The display driver device 4 is configured to display one or more intermediate images between displaying the first image 6 and the second image 8 on the screen 2. The, or each, intermediate image comprises a modified portion of the preceding image.
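The intermediate images between the first image 6 and the second image 8 could, for example, be produced by blending the two endpoint images so the transition appears smooth rather than abrupt. A minimal sketch, assuming equal-sized bitmap frames and linear blending (both assumptions, not stated in the source):

```python
import numpy as np

def intermediate_frames(first, second, steps):
    """Yield linearly blended frames between two equal-shaped images.

    Each frame is a weighted blend of the endpoints, with the blend
    weight strictly between 0 and 1, so the sequence animates the
    switch from the first image to the second image.
    """
    first = first.astype(float)
    second = second.astype(float)
    for i in range(1, steps + 1):
        t = i / (steps + 1)  # blend weight, strictly between 0 and 1
        yield ((1 - t) * first + t * second).astype(np.uint8)
```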
An example of how the invention is used will now be provided.
Beginning with Fig. 1, the first image 6 is displayed on the screen 2.
Next, as shown in Fig. 2, the user applies a first user-input command 12a, which results in the second image 8 being displayed on the screen 2, as shown in Fig. 3.
The user can further magnify the second image 8 by executing a third user-input command 12c, and the resulting second image 8 is shown in Fig. 4.
With reference to Fig. 5, the user can magnify an area of the second image 8, and as shown in Figs. 6 and 7, panning this area can be carried out using a second user-input command 12b to navigate around the area of interest.
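The Fig. 1 to Fig. 7 walkthrough can be modelled as a sequence of gesture handlers mutating a simple display state. All names below are illustrative assumptions; the source only defines the numbered user-input commands 12a to 12c:

```python
class DisplayState:
    """Illustrative model of the Fig. 1-7 walkthrough (names assumed)."""

    def __init__(self):
        self.image = "first"     # Fig. 1: home image displayed
        self.zoom = 1.0
        self.pan = (0.0, 0.0)

    def first_command(self):
        """Figs. 2-3: open the second image."""
        self.image = "second"

    def third_command(self, factor):
        """Fig. 4: magnify the second image further."""
        self.zoom *= factor

    def second_command(self, dx, dy):
        """Figs. 6-7: pan around the area of interest."""
        px, py = self.pan
        self.pan = (px + dx, py + dy)
```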
Modifications and improvements may be made to the foregoing embodiments without departing from the scope of the present invention.

Claims (25)

  1. A three-dimensional display apparatus comprising: a three-dimensional screen for displaying at least one image thereon; a display driver device operable to display at least one first image on the screen; and a user-input sensor configured to detect one or more user-input commands; wherein the display driver device is configured to display at least one second image on the screen in response to one or more user-input commands being detected by the user-input sensor.
  2. The three-dimensional display apparatus of claim 1, wherein the three-dimensional screen is configured as a touch screen, or multi-touch screen.
  3. The three-dimensional display apparatus of claim 1 or claim 2, wherein the user-input sensor is a touch-screen sensor.
  4. The three-dimensional display apparatus of any preceding claim, wherein the one or more user-input commands include at least one of the following commands: drag, translate, move, magnify, zoom in, zoom out, pinch, pinch to zoom, pinch to zoom in, pinch to zoom out, panning, tap, double tap, press and tap, a one-finger gesture, a two-finger gesture, navigate, browse, open, and close.
  5. The three-dimensional display apparatus of any preceding claim, wherein the three-dimensional screen is at least partially ellipsoidal, at least partially spherical, at least partially hemi-spherical and/or at least partially lenticular.
  6. The three-dimensional display apparatus of any preceding claim, wherein the display driver device comprises a projector device operable to project one or more images onto the screen.
  7. The three-dimensional display apparatus of any preceding claim, wherein the at least one second image includes at least a portion of the at least one first image.
  8. The three-dimensional display apparatus of any preceding claim, wherein the at least one second image is a modified version of at least a portion of the at least one first image.
  9. The three-dimensional display apparatus of claim 8, wherein the at least one second image includes a modified portion of the at least one first image and an unmodified portion of the at least one first image.
  10. The three-dimensional display apparatus of any of claims 7 to 9, wherein the at least one second image comprises a first part and a second part.
  11. The three-dimensional display apparatus of claim 10, wherein the first part of the second image is an unmagnified view of a portion of the at least one first image.
  12. The three-dimensional display apparatus of claim 10 or claim 11, wherein the second part of the at least one second image includes one or more magnified views of at least a portion of the at least one first image.
  13. The three-dimensional display apparatus of any preceding claim, wherein the display driver device is configured to modify the at least one second image displayed on the screen in response to one or more user-input commands being detected by the user-input sensor.
  14. The three-dimensional display apparatus of claim 13, when dependent on claims 10 to 12, wherein the modified at least one second image includes a modified first part and/or a modified second part of the second image.
  15. The three-dimensional display apparatus of any of claims 10 to 14, wherein the display driver device is operable between at least two pan positions, wherein in the first pan position, the second part of the at least one second image is a modified version of a first portion of the at least one first image, and wherein in the second pan position, the second part of the at least one second image is a modified version of a second portion of the at least one first image.
  16. The three-dimensional display apparatus of claim 15, wherein in the first pan position, the second part of the at least one second image is a magnified or demagnified version of the first portion of the at least one first image, and wherein in the second pan position, the second part of the at least one second image is a magnified or demagnified version of the second portion of the at least one first image.
  17. The three-dimensional display apparatus of claim 15 or claim 16, wherein the display driver device is operable to permit continuous panning of the second part of the at least one second image relative to the first part of the at least one second image.
  18. The three-dimensional display apparatus of any of claims 10 to 17, wherein the display driver device is operable to display the second part of the at least one second image on substantially any portion of the screen.
  19. The three-dimensional display apparatus of any of claims 10 to 18, wherein the user-input sensor is configured to associate a user-input region of the screen with a position of the user-input command relative to the screen, and wherein the location of the second part of the at least one second image on the screen is determined based on the user-input region.
  20. The three-dimensional display apparatus of claim 19, wherein the location of the second part of the at least one second image on the screen is centred on the user-input region.
  21. The three-dimensional display apparatus of claim 19 or claim 20, wherein the user-input sensor is configured to select one or more user-input regions from a plurality of possible user-input regions, and wherein the possible user-input regions are defined to cover substantially all of the screen.
  22. The three-dimensional display apparatus of claim 21, wherein each possible user-input region is adjacent to at least one other possible user-input region, or adjacent to at least two other possible user-input regions, or adjacent to at least three other possible user-input regions, or adjacent to at least four other possible user-input regions.
  23. The three-dimensional display apparatus of any preceding claim, wherein the display apparatus is operable to modify the at least one second image between two or more magnification levels.
  24. A kit of parts for assembling a three-dimensional display apparatus, the kit of parts comprising: a three-dimensional screen for displaying at least one image thereon; a display driver device operable to display at least one first image on the screen; and a user-input sensor configured to detect one or more user-input commands; wherein the display driver device is configured to display at least one second image on the screen in response to one or more user-input commands being detected by the user-input sensor.
  25. A method of using a three-dimensional display apparatus, the method comprising the steps of: providing a three-dimensional display apparatus comprising: a three-dimensional screen for displaying at least one image thereon; a display driver device operable to display at least one first image on the screen; and a user-input sensor configured to detect one or more user-input commands; wherein the display driver device is configured to display at least one second image on the screen in response to one or more user-input commands being detected by the user-input sensor; using the display driver to display at least one first image on the screen; and using the display driver to display at least one second image on the screen in response to one or more user-input commands being detected by the user-input sensor.
GB2111235.4A 2021-08-04 2021-08-04 Three-dimensional display apparatus Withdrawn GB2609473A (en)


Publications (2)

Publication Number Publication Date
GB202111235D0 GB202111235D0 (en) 2021-09-15
GB2609473A true GB2609473A (en) 2023-02-08

Family

ID=77651407



Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100045705A1 (en) * 2006-03-30 2010-02-25 Roel Vertegaal Interaction techniques for flexible displays
KR20170076471A (en) * 2015-12-24 2017-07-04 삼성전자주식회사 Deforming display apparatus and method for displaying image by using the same

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PufferfishDisplays, 19/03/2020, "Pufferfish: Space & Earth Observation Solutions", YouTube, [online], Available from: https://www.youtube.com/watch?v=csoaozdqzt8 [Accessed date: 14/01/2022] *
PufferfishDisplays, 28/08/2020, "Pufferfish Solutions for Unparalleled Learning & Discovery", YouTube, [online], Available from: https://www.youtube.com/watch?v=M2z4hPemV_Y [Accessed date 14/01/2022] *



Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)