GB2487039A - Visualizing Illustrated Books And Comics On Digital Devices - Google Patents

Visualizing Illustrated Books And Comics On Digital Devices

Info

Publication number
GB2487039A
Authority
GB
United Kingdom
Prior art keywords
illustration
user
motion
virtual camera
digital
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1017028.0A
Other versions
GB201017028D0 (en)
Inventor
Michele Sciolette
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to GB1017028.0A priority Critical patent/GB2487039A/en
Publication of GB201017028D0 publication Critical patent/GB201017028D0/en
Publication of GB2487039A publication Critical patent/GB2487039A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 Head tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/128 Adjusting depth or disparity
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/275 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N13/279 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals, the virtual viewpoint locations being selected by the viewers or determined by tracking

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A system, method and software to visualize illustrated books and comics on digital devices, where the input of specific sensors captures the user's motion and drives a real-time change in perspective and parallax in the comic's content. In a preferred embodiment on a mobile device, such as a mobile phone or tablet with an accelerometer or gyroscope, as the user rotates the device in his hands the sensor data drives the motion of a virtual camera inside a 3d representation of the comic, creating the illusion that the screen is a window onto the world of the comic. Another embodiment may include a digital device with a front facing camera (such as a computer equipped with a webcam or a mobile device with a front facing camera), where the motion of the user's head is tracked in real-time and used to drive the movements of the virtual camera inside a 3d representation of the comic (Fig. 2 not shown).

Description

VISUALIZATION OF ILLUSTRATED BOOKS AND COMICS ON DIGITAL DEVICES
FIELD OF THE INVENTION
The present invention relates to the field of presenting illustrated works such as comics or illustrated books (works that originated as two dimensional works of art) on digital devices, more particularly on digital devices equipped with appropriate sensors such as gyroscopes, accelerometers or digital cameras.
BACKGROUND OF THE INVENTION
Illustrated works such as comics have been published in digital format for many years. Recently, with the increased popularity of portable digital devices such as phones, laptops and tablet computers, digital publishing of comics has become a widely accepted way to deliver this content to end users.
Though many different software applications are available to deliver this content, in all current implementations the work of art is presented to the user as a sequence of two dimensional images.
The user experience is very similar to that of traditional printed comics, with user interaction limited to selecting which image to see (with controls allowing the user to move to the next image, the previous image or an image of choice) and which part of each image to see in detail (with controls allowing the user to zoom into the image by scaling up the content, and to translate the image in order to examine particular areas of an illustration more closely).
The introduction of touchscreen devices brought gestures that simplify navigation, but user interaction is still limited to the functions described above and the illustrations are still presented as 2d images, offering an overall user experience similar to a slideshow of the original artwork.
Gyroscopes and accelerometers on digital devices are used in video games to drive a virtual camera in the 3d environment of the game, but this approach has never been extended to works of art that were conceived as two dimensional artworks.
The subject of this invention is a system, method and software that allows illustrated content to be visualized on digital devices, taking advantage of appropriate sensors on the device to provide a novel immersive, engaging and interactive experience to readers.
RELATED APPLICATIONS
In other fields, various techniques have been used to convert 2d material to 3d information. For instance, US Patent 6208348 relates to the process of converting motion picture footage from 2d images to 3d stereoscopic images.
BRIEF DESCRIPTION OF DRAWINGS
Fig. 1. Preferred embodiment. Digital tablet device with an accelerometer and/or gyroscope. On the left side the device is in the default position and the comic presents the image as it was created by the artist. On the right, as the user rotates the device, the perspective in the image is updated in realtime.
Fig. 2. Additional embodiment. Laptop computer with front facing camera. The user is facing a laptop monitor with a webcam positioned at the top of the screen. As the user moves his head relative to the screen, the elements on the screen move to compensate for the change in point of view.
Fig. 3. Schematic view of the process.
Fig. 4. Detailed view of the Offline Process for traditional illustrations.
Fig. 5. Detailed view of the Offline Process for digital illustrations.
Fig. 6. Detailed view of the Realtime Rendering process.
DETAILED DESCRIPTION
The subject of this invention is a technology that brings the user an immersive, engaging and interactive visualization on digital devices of an illustrated work of art such as a comic (for the remainder of this description the word comic is also used to represent other illustrated works of art, such as illustrated books or any other form of illustrated content that was conceived as a two dimensional work of art).
The effect is achieved by taking advantage of appropriate sensors on the device so that perspective and parallax interactively respond to the motion of the user as captured by the sensors.
In a preferred embodiment, Fig. 1, the digital device is a mobile device, such as a phone or tablet computer, equipped with an accelerometer or gyroscope. As the user rotates the device in his hands, the perspective within the illustration presented on the screen is updated in realtime, providing the illusion that the digital device is a "window" on the world of the illustration.
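The patent does not prescribe a particular mapping from sensor readings to camera motion. The following minimal Python sketch shows one plausible mapping, in which device tilt slides the virtual camera a bounded distance around its authored default position (the function name, the `max_offset` parameter and the sine mapping are illustrative assumptions):

```python
import math

def camera_from_tilt(tilt_x, tilt_y, default_cam=(0.0, 0.0, 0.0), max_offset=0.1):
    """Map device tilt (radians, from an accelerometer or gyroscope)
    to a virtual-camera position near the authored default.

    Sliding the camera laterally makes layers at different depths
    shift by different amounts (parallax), so the screen reads as a
    window onto the illustration."""
    cx = default_cam[0] + max_offset * math.sin(tilt_y)  # left/right tilt
    cy = default_cam[1] + max_offset * math.sin(tilt_x)  # up/down tilt
    return (cx, cy, default_cam[2])

# A 10-degree tilt to the right nudges the camera slightly to the right.
print(camera_from_tilt(0.0, math.radians(10)))
```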
In an alternative embodiment, Fig. 2, the digital device (a desktop or laptop computer, a phone or tablet computer) is equipped with a digital camera pointed towards the face of a user looking at the screen. As the user moves his head in front of the camera, a tracking system based on facial recognition techniques detects the position of the user's head and updates the perspective within the illustration presented on the screen, again creating the illusion of looking into the world of the illustration.
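The patent does not name a tracking method; as one illustration, a head tracker of this kind could be built on OpenCV's stock Haar-cascade face detector, reducing each webcam frame to a normalized head offset that then drives the virtual camera:

```python
import cv2

# Stock frontal-face Haar cascade shipped with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def head_offset(frame):
    """Return the head position as (x, y) offsets in [-1, 1] relative
    to the frame centre, or None if no face is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # track the largest face
    fh, fw = gray.shape
    return ((x + w / 2) / fw * 2 - 1, (y + h / 2) / fh * 2 - 1)

cap = cv2.VideoCapture(0)      # front facing camera
ok, frame = cap.read()
if ok:
    print(head_offset(frame))  # e.g. (0.12, -0.05)
cap.release()
```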
Fig. 3 presents an overview of the entire process.
The method requires two main components: an initial Editing Process (30) and a Viewing Application (31) that runs on the digital device. The initial Editing Process is run once, before the work of art is distributed to the end users (36); in this phase the original illustration (32) is converted to a format that can be distributed to the end users (34). The second component of the method is a software Viewing Application (31), built around a realtime rendering engine, that runs on the digital device to display the illustrated work to the end user.
During the initial editing process of the work of art, an initial illustration (32), represented as an image data file, is run through offline processing (33) where the illustration is separated into different elements (or layers) corresponding to separate objects represented in the illustration.
Each element is assigned a specific shape (or geometry), position and orientation in a virtual 3d environment. A virtual camera setup is also defined for the illustration, such that each separate element is aligned with the corresponding geometry when seen from the point of view of the virtual camera. This ensures that, during subsequent stages, the 3d environment will appear as an exact copy of the initial artwork when rendered from the position of the associated virtual camera. Such offline processing is only performed once as part of the editing of the final product. The output of the offline processing is a Processed Illustration Data package (34) that is distributed (36) to the end user for the fruition of the work.
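The alignment constraint can be read as a back-projection: each pixel of a layer is placed on the ray from the authored virtual camera through that pixel, at the depth chosen during layout. A minimal sketch under a simple pinhole model (the patent leaves the camera model unspecified):

```python
def back_project(px, py, depth, cam, focal):
    """Place an image pixel in the 3d environment at the given depth.

    The point lies on the ray from the virtual camera through the
    pixel, so rendering it from that same camera reproduces the
    original artwork exactly; parallax appears only once the camera
    moves. Pixel coordinates are taken relative to the image centre
    and the camera looks down -z from `cam` (assumptions of this
    sketch, not requirements of the patent)."""
    x = cam[0] + px * depth / focal
    y = cam[1] + py * depth / focal
    z = cam[2] - depth
    return (x, y, z)
```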
Processed Illustration Data is a collection of multiple image data (corresponding to the different layers in the illustration) with associated geometry and virtual camera information. Geometry information can be stored as explicit geometry data, such as polygon meshes, or as depth maps, where geometry information is provided as a set of distance values between corresponding image pixels and the virtual camera position.
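The patent does not fix a file format; one way to picture the contents of a Processed Illustration Data package is the following illustrative schema (all field names are assumptions):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VirtualCamera:
    position: tuple       # (x, y, z) assigned during the offline edit
    orientation: tuple    # e.g. Euler angles, authored with the layout
    focal_length: float

@dataclass
class Layer:
    image_file: str                  # image data for one element
    depth_map: Optional[str] = None  # per-pixel camera distances, or...
    mesh_file: Optional[str] = None  # ...explicit polygon geometry
    position: tuple = (0.0, 0.0, 0.0)
    orientation: tuple = (0.0, 0.0, 0.0)

@dataclass
class ProcessedIllustration:
    camera: VirtualCamera            # reproduces the artwork when used
    layers: List[Layer] = field(default_factory=list)
```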
The user loads the Processed Illustration Data (34) into a Viewing Application (31) that runs on the digital device.
In the Viewing Application (31), all the elements that are part of the Processed Illustration Data (34) are loaded in memory together with their shape, position and orientation, and rendered in realtime via a virtual camera. In a default position (for instance when the user is fully frontal to the device screen), the virtual camera coincides with the virtual camera position assigned to the illustration during the Offline Processing (33), so that the output of the realtime rendering sent to the device screen appears to the user just like the original illustration (32) as initially conceived by the author. As the user activates the sensors (35), for instance by rotating the digital device in his hands or moving his head in front of the device camera, the position of the virtual camera is updated to reflect the new relative position between the user and the device. This causes the elements in the illustration to move and distort according to the perspective defined by the new virtual camera position and their positions in the virtual 3d environment. This provides the user with a completely novel experience, as if the screen of the digital device were a window on the 3d environment of the illustration that the user is free to navigate and explore.
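The perspective update described here is ordinary perspective projection from the moved camera. The small numeric sketch below (same assumed pinhole model as in the earlier sketch) shows that at the default position every point projects where the artist placed it, and that nearer layers shift across the screen more than farther ones as the camera moves, which is exactly the parallax effect described above:

```python
def project(point, cam, focal):
    """Perspective-project a 3d point for a camera at `cam` looking
    down -z (the simple pinhole model assumed throughout)."""
    x, y, z = point
    depth = cam[2] - z  # distance in front of the camera
    return (focal * (x - cam[0]) / depth,
            focal * (y - cam[1]) / depth)

# Two points at different depths: as the camera slides right, the
# nearer point shifts across the screen more than the farther one.
near, far = (0.0, 0.0, -1.0), (0.0, 0.0, -4.0)
for cam_x in (0.0, 0.1, 0.2):
    cam = (cam_x, 0.0, 0.0)
    print(project(near, cam, 1.0), project(far, cam, 1.0))
```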
The user can also interact directly with the Viewing Application (31) through more traditional input methods, such as a mouse or touchscreen display, for additional novel interactions with the illustration.
For instance, in the case of a digital device with a touchscreen interface, the user can use touchscreen gestures to perform additional navigation inside the virtual 3d illustration environment.
In an example embodiment, the user may use a "pinch" gesture (touching the screen with two fingers and moving them further apart from one another) to move the virtual camera inside the illustration. This produces a much more compelling and immersive result than the standard scaling effect used in traditional digital illustration viewers.
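One way to realize this, sketched below under the same assumed camera model, is to convert the pinch ratio into a dolly along the viewing axis rather than a 2d image scale (the `speed` parameter and the log mapping are illustrative choices, not taken from the patent):

```python
import math

def dolly_from_pinch(cam, pinch_scale, speed=1.0):
    """Move the virtual camera along its viewing axis (-z) in response
    to a pinch gesture, instead of scaling the image.

    pinch_scale > 1 (fingers spreading) dollies the camera into the
    scene; pinch_scale < 1 pulls it back. The log makes equal pinch
    ratios give equal camera moves in either direction."""
    return (cam[0], cam[1], cam[2] - speed * math.log(pinch_scale))

print(dolly_from_pinch((0.0, 0.0, 0.0), 2.0))  # into the scene: z ~ -0.69
print(dolly_from_pinch((0.0, 0.0, 0.0), 0.5))  # back out: z ~ +0.69
```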
Similarly, transitions between different illustrations can also be driven by the user, using a paradigm familiar from many viewing applications but with an innovative result and user experience. For instance, on a touchscreen device, swiping a finger on the screen could trigger the transition between two illustrations, performed as a virtual camera animation between the positions of the virtual cameras associated with each individual illustration. As the camera position is interpolated between the two positions, the elements corresponding to the different illustrations can be cross faded to provide a new transition effect between illustrations.
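A minimal sketch of one step of such a transition: the camera is interpolated between the two authored cameras while the two illustrations' layer opacities cross-fade (linear interpolation and linear fade weights are assumptions; the patent only requires a camera animation combined with a cross fade):

```python
def lerp(a, b, t):
    """Linearly interpolate between two tuples for t in [0, 1]."""
    return tuple(av + (bv - av) * t for av, bv in zip(a, b))

def transition(cam_a, cam_b, t):
    """One frame of a swipe-driven transition: the virtual camera moves
    between the two authored cameras while layer opacities cross-fade."""
    cam = lerp(cam_a, cam_b, t)
    opacity_a, opacity_b = 1.0 - t, t
    return cam, opacity_a, opacity_b

print(transition((0, 0, 0), (5, 0, 0), 0.25))  # ((1.25, 0.0, 0.0), 0.75, 0.25)
```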
The technology applies to content created directly in digital form, but also to more traditional illustrated content that was originally created on paper.
Fig. 4 presents an overview of the Offline Process described above for scenarios where the initial illustration or comic was created in a traditional form, for instance on paper.
The process begins with a Data Acquisition stage (41), where the original work of art is scanned and converted to a digital image file. The image is then separated into multiple elements, or layers, corresponding to separate objects positioned at different depths, as part of a Layers Separation (42) process. Any background element that is occluded, or partially occluded, by a foreground element must be treated to restore the occluded part as part of a Background Replacement process (43). Due to the extreme variety of graphic styles used in the illustrated comics art form, these processes are supervised by a user, who also makes creative decisions in the following Layout stage (44), where each element is assigned a shape, position and orientation within a virtual 3d environment. During the Layout stage the virtual camera corresponding to the illustration is also defined. Once the geometry of each element is defined, together with the virtual camera associated with the illustration, an optional Image Projection (45) stage can be applied. During this process the image data associated with each element is projected onto the corresponding geometry and transformed to a traditional UV texturing space as supported by most graphics hardware.
Alternatively, the Image Projection step can be left out of the Offline Process and integrated into the Viewing Application.
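Wherever it runs, the Image Projection stage can be pictured as assigning each mesh vertex a UV coordinate by projecting it through the illustration's virtual camera back onto the image plane (again under the simple pinhole model assumed in the earlier sketches; the normalization convention is also an assumption):

```python
def project_to_uv(vertices, cam, focal, img_w, img_h):
    """Assign each mesh vertex a UV coordinate by projecting it through
    the illustration's virtual camera onto the image plane, so the
    artwork can be stored as an ordinary texture for graphics hardware."""
    uvs = []
    for x, y, z in vertices:
        depth = cam[2] - z                 # distance in front of camera
        px = focal * (x - cam[0]) / depth  # image-plane coordinates
        py = focal * (y - cam[1]) / depth
        # Normalise to [0, 1] with the principal point at the centre.
        uvs.append((px / img_w + 0.5, py / img_h + 0.5))
    return uvs
```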
Fig. 5 presents an overview of the Offline Process for scenarios where the initial illustration was generated in digital form. In this case, a key question is whether each illustration is already available as a set of separate elements. If so, the multiple digital layers can be used directly in the Layout (44) process. Any illustration that is not already split into multiple layers needs to go through the same additional processes as traditional illustrations: Layers Separation (42) and Background Replacement (43). All separated and background-replaced elements are then run through the same Layout stage (44), where each element is assigned a geometric shape and a virtual camera position is established for the illustration. As in the traditional illustration case, an additional Image Projection stage can be added as part of the Offline Processing or left to the Viewing Application.
The output of the Offline Processing is a set of Processed Illustration Data (34), stored as one or more digital files, containing information regarding the virtual camera settings associated with each illustration and the geometry and image data for each element of the original illustration.
Fig. 6 shows a more detailed view of the realtime rendering system, a software application running on the digital device.
The Processed Illustration Data (34), comprising geometry and image data for each element of the original illustration, is loaded into memory and the image data is mapped onto the geometry via a projection from the virtual camera position associated with the illustration (61). If image projection has already been performed as part of the Offline Processing step, the corresponding element can be mapped directly onto the geometry. Once Image Mapping is complete, all elements of the original illustration are mapped onto corresponding geometry and are ready to be rendered. The realtime engine then enters its rendering loop. For each frame, the sensors capture the relative position between the user and the screen. This data is sent to a function of the Realtime Engine that updates the current position (63) of the virtual camera used to render the 3d environment (64), based on input from the device sensors and any other direct user input. The resulting rendered image is sent directly to the screen and the loop continues.
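Stripped to its essentials, the loop reduces to a few lines. In this skeleton (reusing the illustrative schema above) the three callables are hypothetical hooks onto the platform's sensor and rendering APIs, which the patent does not specify:

```python
def run_viewer(illustration, read_offset, draw_and_present, is_running):
    """Realtime loop: read the sensors, move the virtual camera away
    from its authored default, render all layers from the new
    viewpoint and present the frame."""
    default = illustration.camera.position
    while is_running():
        ox, oy = read_offset()  # device tilt or head offset, normalised
        camera = (default[0] + ox, default[1] + oy, default[2])
        draw_and_present(illustration.layers, camera)
```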
The visualization can optionally be combined with a stereoscopic rendering of the illustrations, also computed in realtime, to be used in conjunction with any stereoscopic display technology such as anaglyph, auto-stereoscopic displays, polarized displays, or shutter glasses.
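For the anaglyph case, for example, the scene can be rendered twice per frame from two virtual cameras separated by a small horizontal interocular offset, and the two renders merged per channel. A sketch of the merge step with NumPy (the array shapes and the red/cyan convention are the only assumptions):

```python
import numpy as np

def anaglyph(left_rgb, right_rgb):
    """Combine per-eye renders into a red/cyan anaglyph frame:
    red channel from the left eye, green and blue from the right."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]
    return out

# Toy frames: a pure-red left eye and a pure-blue right eye.
left = np.zeros((4, 4, 3), dtype=np.uint8);  left[..., 0] = 255
right = np.zeros((4, 4, 3), dtype=np.uint8); right[..., 2] = 255
print(anaglyph(left, right)[0, 0])  # -> [255   0 255]
```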

Claims (10)

  What is claimed is:

  1. A visualization technique for illustrated content (that is, content that originated as a two dimensional work of art) on digital devices, where data from appropriate sensors drives a realtime change in perspective of the elements in the illustration.
  2. The technique of claim 1, where the digital device may be a mobile device, such as a phone or a tablet, with appropriate sensors such as an accelerometer, a gyroscope, a combination of the two, or other motion sensors.
  3. The technique of claim 1 on a device as described in claim 2, where the motion of the device may be used to directly compute a standard change in perspective, under the assumption that the motion of the device approximates the motion between the user and the screen of the device.
  4. The technique of claim 1, where the digital device, mobile or not, is equipped with a front facing camera that may be used to track the change in relative position between the user and the device.
  5. A method as described in claim 1, where the user may activate a stereoscopic visualization of the comic or illustration.
  6. A method to process and distribute comics or illustrated books that have been conceived as two dimensional works of art, such that the end user interacts with the content by moving a virtual camera inside the virtual 3d representation of the illustration.
  7. A method as described in claim 6, where the motion of the virtual camera inside the environment depicted in the illustration includes camera translation and rotation about all axes, as well as camera zooming.
  8. A method as described in claim 6, where multiple layers of an original illustration, together with their geometry and a matching virtual camera position, are distributed to the end users to allow interaction as described in claim 6.
  9. A visualization technique for content comprising a sequence of illustrations, such as a comic, where a transition effect between different illustrations is implemented as a virtual camera move through the environments depicted by the illustrations.
  10. The technique of claims 1 and 6, where every time a transition to an illustration is complete, the corresponding image is presented as drawn by the original author and only updated to accommodate subsequent changes in the user/display relation (for instance, in the preferred embodiment, regardless of the device orientation, and hence regardless of the data provided by the accelerometer or gyroscope, every time a transition to a new illustration is complete the corresponding illustration will appear as intended by the author).
GB1017028.0A 2010-10-11 2010-10-11 Visualizing Illustrated Books And Comics On Digital Devices Withdrawn GB2487039A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1017028.0A GB2487039A (en) 2010-10-11 2010-10-11 Visualizing Illustrated Books And Comics On Digital Devices

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1017028.0A GB2487039A (en) 2010-10-11 2010-10-11 Visualizing Illustrated Books And Comics On Digital Devices

Publications (2)

Publication Number Publication Date
GB201017028D0 (en) 2010-11-24
GB2487039A true GB2487039A (en) 2012-07-11

Family

ID=43304303

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1017028.0A Withdrawn GB2487039A (en) 2010-10-11 2010-10-11 Visualizing Illustrated Books And Comics On Digital Devices

Country Status (1)

Country Link
GB (1) GB2487039A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9007404B2 (en) 2013-03-15 2015-04-14 Legend3D, Inc. Tilt-based look around effect image enhancement method
US9241147B2 (en) 2013-05-01 2016-01-19 Legend3D, Inc. External depth map transformation method for conversion of two-dimensional images to stereoscopic images
US9282321B2 (en) 2011-02-17 2016-03-08 Legend3D, Inc. 3D model multi-reviewer system
US9286941B2 (en) 2001-05-04 2016-03-15 Legend3D, Inc. Image sequence enhancement and motion picture project management system
US9288476B2 (en) 2011-02-17 2016-03-15 Legend3D, Inc. System and method for real-time depth modification of stereo images of a virtual reality environment
US9407904B2 (en) 2013-05-01 2016-08-02 Legend3D, Inc. Method for creating 3D virtual reality from 2D images
US9438878B2 (en) 2013-05-01 2016-09-06 Legend3D, Inc. Method of converting 2D video to 3D video using 3D object models
US9609307B1 (en) 2015-09-17 2017-03-28 Legend3D, Inc. Method of converting 2D video to 3D video using machine learning
EP3130994A4 (en) * 2014-04-07 2018-01-03 Sony Corporation Display control device, display control method, and program

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030216176A1 (en) * 2002-05-20 2003-11-20 Takao Shimizu Game system and game program
EP1369822A2 (en) * 2002-05-31 2003-12-10 eIT Co., Ltd Apparatus and method for controlling the shift of the viewpoint in a virtual space
US20090303204A1 (en) * 2007-01-05 2009-12-10 Invensense Inc. Controlling and accessing content using motion processing on mobile devices
EP2218485A1 (en) * 2007-11-30 2010-08-18 Kabushiki Kaisha Square Enix (also Trading As Square Enix Co. Ltd.) Image generation device, image generation program, image generation program recording medium, and image generation method
US20100045666A1 (en) * 2008-08-22 2010-02-25 Google Inc. Anchored Navigation In A Three Dimensional Environment On A Mobile Device
US20100128007A1 (en) * 2008-11-21 2010-05-27 Terry Lynn Cole Accelerometer Guided Processing Unit

Also Published As

Publication number Publication date
GB201017028D0 (en) 2010-11-24

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)