US10748331B2 - 3D lighting - Google Patents

Info

Publication number
US10748331B2
Authority
US
United States
Prior art keywords
image
electronic device
images
orientation information
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US16/291,590
Other versions
US20190206120A1 (en)
Inventor
Ricardo Motta
Lynn R. Youngs
Minwoong Kim
Current Assignee
Apple Inc
Original Assignee
Apple Inc
Priority date
Filing date
Publication date
Application filed by Apple Inc
Priority to US16/291,590
Publication of US20190206120A1
Application granted
Publication of US10748331B2
Legal status: Active
Anticipated expiration

Classifications

    • All classifications fall under CPC Section G (Physics), Class G06 (Computing; Calculating or Counting):
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G06T 15/205 Image-based rendering
    • G06T 15/50 Lighting effects
    • G06T 15/506 Illumination models
    • G06T 15/60 Shadow generation
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/0346 Pointing devices with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G06K 9/00228
    • G06V 40/161 Human faces: Detection; Localisation; Normalisation
    • G06T 2215/16 Using real world measurements to influence rendering

Definitions

  • phase-1 200 may include 3D target object 205 that is illuminated from light source 210 .
  • a relatively large number of images may be obtained to produce phase-1 image corpus 230 .
  • if camera 215 is moved through a 180° arc around target object 205, a total of 180 images may be captured (e.g., one image for every 1° of motion).
  • if camera 215 is moved completely around target object 205, a total of 360 images may be captured (again, one image for every 1° of motion).
  • phase-2 235 may include 3D target object 205 that is illuminated from light source 210 that moves from position 240 A to position 240 B to position 240 C along path 245 while camera 215 remains in a single position taking a relatively large number of images (e.g., 150) to produce phase-2 image corpus 250 .
  • the precise number of images needed to generate image corpus 230 and image corpus 250 may be dependent on the desired fidelity of the resulting model—the more precise the model, generally the more images will be needed.
  • image corpuses 230 and 250 capture shadowing, highlight, and parallax information.
  • image corpus 230 may be organized so that each (or at least some) of the images are associated with their corresponding viewing angle as illustrated in table 300 .
  • the mapping between viewing (capture) angle and target object 205 may be thought of as a model (e.g., model 120 ).
  • sensor devices 150 may be used to determine a viewing angle. Once determined, the corresponding image may be retrieved from memory 155 and displayed using display element 145. In one embodiment, if the viewing angle determined from sensor output falls between two captured viewing angles, the two images to either "side" of the sensor-provided viewing angle may be combined in, for example, a weighted sum.
  • here, "either side" means the captured image whose viewing angle is closest below the sensor-indicated viewing angle and the captured image whose viewing angle is closest above it.
  • image corpus 230 may be retained in structures different from the single table shown in FIG. 3. For example, it may be stored in a plurality of tables (as in a relational database), in a B-tree, or in another data structure useful for data comparison and retrieval operations.
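The table-300 lookup with weighted-sum blending described above can be sketched as follows. This is a minimal illustration, assuming a Python dictionary keyed by capture angle stands in for the patent's table; the function name and the linear blending scheme are illustrative, not from the patent.

```python
import bisect

import numpy as np

def blend_nearest(corpus: dict, angle: float) -> np.ndarray:
    """Return the image for `angle`; if it falls between two captured
    angles, blend the images on either side with a weighted sum."""
    angles = sorted(corpus)
    # Clamp requests outside the captured range.
    if angle <= angles[0]:
        return corpus[angles[0]]
    if angle >= angles[-1]:
        return corpus[angles[-1]]
    hi = bisect.bisect_left(angles, angle)   # first captured angle >= request
    a_lo, a_hi = angles[hi - 1], angles[hi]
    if a_hi == angle:                        # exact match, no blend needed
        return corpus[a_hi]
    w = (angle - a_lo) / (a_hi - a_lo)       # weight toward the higher angle
    return (1.0 - w) * corpus[a_lo] + w * corpus[a_hi]
```

With a densely sampled corpus (e.g., 1° spacing) the blend weights matter little; a coarser corpus leans on them more heavily.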
  • model generation in accordance with block 115 may apply PTM operation 400 to each image in image corpus 250 independently to produce PTM model 405 .
  • sensor devices 150 may be used to determine a lighting angle with respect to target object 205 as presented on electronic device 125.
  • the corresponding position may be input to PTM model 405 (e.g., in terms of x-position 415 and y-position 420 and optionally a z-position, not shown) and used to generate output image 410 .
  • light angle may be represented in other coordinate systems (such as, for example, yaw-pitch-roll).
  • PTM operation 400 may employ spherical harmonics (SH). In another embodiment, PTM operation 400 may employ hemispherical harmonics (HSH). In still other embodiments, different basis functions may be used such as, for example, Zernike polynomials, spherical wavelets, and Makhotkin hemispherical harmonics. The precise functional relationship or polynomial chosen may be a function of the implementation's operational environment, the desired fidelity of the resulting light model, and the amount of memory needed by the model.
  • PTM operation 400 produces model 405 that may use significantly fewer images than are in image corpus 250 .
  • Image corpus 250 may include a relatively large number of high-resolution color images (e.g., 50 to 400).
  • PTM model 405 may need only a few “images” from which all images within that model's range may be generated.
  • model coefficients a0 through a5 may be different for each pixel of image 410 represented by x input 415 and y input 420. In the standard PTM formulation, each pixel's luminance is a biquadratic polynomial in the projected light direction (lu, lv): p_i = a0·lu² + a1·lv² + a2·lu·lv + a3·lu + a4·lv + a5 (EQ. 1).
  • p_i as defined by EQ. 1 represents only the intensity or luminance of the ith pixel in output image 410; color may be supplied by a separate chroma image [C].
  • each pixel value in [C] may be the average color value of all corresponding pixels in image corpus 250 .
  • each pixel value in [C] may be the median value of all the corresponding pixels in image corpus 250 .
  • the value of each pixel in chroma image [C] may be a weighted average of all corresponding color values in image corpus 250 .
  • chroma values from image corpus 250 may be combined in any manner deemed useful for a particular embodiment (e.g., non-linearly).
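As a concrete illustration of evaluating such a model, the sketch below assumes the standard PTM formulation (a per-pixel biquadratic in the projected light direction, scaled by a chroma image). The array shapes and function name are assumptions for illustration, not the patent's API.

```python
import numpy as np

def evaluate_ptm(coeffs: np.ndarray, chroma: np.ndarray,
                 lu: float, lv: float) -> np.ndarray:
    """coeffs: (H, W, 6) per-pixel polynomial coefficients a0..a5.
    chroma: (H, W, 3) per-pixel color (e.g., averaged over the corpus).
    (lu, lv): projected light-direction inputs (cf. inputs 415 and 420).
    Returns an (H, W, 3) output image."""
    a0, a1, a2, a3, a4, a5 = np.moveaxis(coeffs, -1, 0)
    # Standard PTM biquadratic: per-pixel luminance as a function of light.
    lum = a0 * lu * lu + a1 * lv * lv + a2 * lu * lv + a3 * lu + a4 * lv + a5
    lum = np.clip(lum, 0.0, 1.0)
    # Restore color by scaling the chroma image by the predicted luminance.
    return chroma * lum[..., None]
```

Only six coefficient planes plus one chroma image are stored per model, which is the memory saving the text attributes to PTM relative to a full image corpus.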
  • Model deployment phase 105 in accordance with FIG. 1 may be invoked once at least one generated model (e.g., model 300 and/or model 405) is transferred to memory 155 of electronic device 125.
  • target object 205 may be displayed on display unit 145 .
  • system 500 in accordance with another embodiment may employ device sensors 150 to supply models 300 and 405 with input (e.g., 415 and 420 ).
  • device sensors 150 may include ambient and/or color sensors to identify both the location and temperature of a light source.
  • device sensors 150 include a gyroscope and/or an accelerometer so that the orientation of device 125 may be determined.
  • combine operation 505 may be a simple merge operation.
  • combine operation 505 may represent a weighted combination of each model's output.
  • combine operation 505 may in fact select one model output based on sensor input and/or user input.
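A minimal sketch of combine operation 505, assuming the two model outputs are same-shaped arrays; the weighting and selection logic are illustrative placeholders, not the patent's specification.

```python
import numpy as np

def combine_outputs(out_a: np.ndarray, out_b: np.ndarray,
                    weight_a: float = 0.5, select=None) -> np.ndarray:
    """Merge the outputs of two light models (e.g., models 300 and 405).
    `select` may force a single model's output (driven by sensor or user
    input); otherwise the outputs are blended with a weighted combination."""
    if select == "a":
        return out_a
    if select == "b":
        return out_b
    return weight_a * out_a + (1.0 - weight_a) * out_b
```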
  • consider, by way of example, a case in which model 405 is operative and device sensors 150 indicate device 125 is tilted at an orientation representative of a viewer looking down on target object 205 at an approximately 45° angle. If a person were holding an object in their hand looking straight down onto its top, they would expect to see the object's top surface. As they moved their head to a 45° angle, they would expect to see less of the object's top surface and more of one or more side surfaces.
  • sensor input indicative of a 45° angle could be input to PTM model 405 and output image 510 would be a combination of the PTM coefficient images modified to provide color.
  • Images output in accordance with this disclosure may include shadows, highlights and parallax to the extent this information is captured in the generated image corpuses.
  • if shadow information is not included in the image data used to generate the model, tilt and/or the identified direction of a light source (relative to device 125) may be used to generate synthetic shadows (e.g., based on image processing).
  • Embodiments employing this technique may use sensor input to generate a first output image from the relevant light model (e.g., output image 410 or 510). This image may then be used to generate synthetic shadows. The synthetic shadows may then be applied to the first output image to generate a final output image which may be displayed, for example, on display unit 145.
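One hedged way to sketch synthetic-shadow generation: offset the object's alpha mask in the direction implied by the device tilt and darken the output image where the shifted mask falls outside the object. The patent does not specify an algorithm; the names, the tilt-to-offset mapping, and the roll-based shift (which wraps at image edges) are all simplifications for illustration.

```python
import numpy as np

def add_synthetic_shadow(image: np.ndarray, alpha: np.ndarray,
                         tilt_x: float, tilt_y: float,
                         strength: float = 0.5) -> np.ndarray:
    """image: (H, W, 3) model output; alpha: (H, W) object mask in [0, 1].
    tilt_x, tilt_y: device tilt in radians. Returns the shadowed image."""
    h, w = alpha.shape
    # Offset the shadow proportionally to the tilt (up to 10% of the frame).
    dx = int(round(np.sin(tilt_x) * 0.1 * w))
    dy = int(round(np.sin(tilt_y) * 0.1 * h))
    shifted = np.roll(alpha, shift=(dy, dx), axis=(0, 1))
    # Darken only where the shadow falls outside the object itself.
    shade = np.clip(shifted - alpha, 0.0, 1.0) * strength
    return image * (1.0 - shade[..., None])
```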
  • electronic device 125 may include a camera unit outwardly facing from display 145 .
  • the camera could then capture and analyze an image (separately from or in combination with device sensors 150) to determine the device's orientation and/or input to models 300 and 405.
  • the resulting output image (e.g., image 510) may include shadows as captured during model generation or added synthetically via image analysis.
  • the image captured may include a face such that aspects of the detected face (e.g., location of the eyes and/or mouth and/or nose) may be used to determine input to model 300 and/or 405 .
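If a detected face is used as the orientation cue, one simple mapping converts the face's offset from the frame center into approximate viewing angles under a pinhole-camera model. This mapping, its parameters, and the function name are assumptions for illustration; the patent does not specify how face position is converted to model input.

```python
import math

def face_to_orientation(face_center, frame_size, fov_deg=60.0):
    """Map a detected face center (pixels) to approximate horizontal and
    vertical viewing angles (degrees), assuming a pinhole camera whose
    horizontal and vertical fields of view are both `fov_deg`."""
    cx, cy = face_center
    w, h = frame_size
    # Normalized offsets from the frame center in [-1, 1].
    nx = (cx - w / 2.0) / (w / 2.0)
    ny = (cy - h / 2.0) / (h / 2.0)
    half = math.radians(fov_deg / 2.0)
    ax = math.degrees(math.atan(nx * math.tan(half)))
    ay = math.degrees(math.atan(ny * math.tan(half)))
    return ax, ay
```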
  • Computer system 600 (e.g., a general purpose computer system such as a desktop, laptop, notebook or tablet computer system) may include one or more processors 605, memory 610 (610A and 610B), one or more storage devices 615, graphics hardware 620, device sensors 625 (e.g., 3D depth sensor, proximity sensor, ambient light sensor, color light sensor, accelerometer and/or gyroscope), communication interface 630, user interface adapter 635 and display adapter 640—all of which may be coupled via system bus or backplane 645.
  • Processors 605, memory 610 (including storage devices 615), graphics hardware 620, device sensors 625, communication interface 630, and system bus or backplane 645 provide the same or similar function as the similarly identified elements in FIG. 1 and will not, therefore, be described further.
  • User interface adapter 635 may be used to connect keyboard 650 , microphone 655 , pointer device 660 , speaker 665 and other user interface devices such as a touch-pad and/or a touch screen (not shown).
  • Display adapter 640 may be used to connect one or more display units 670 (similar in function to display unit 145 ) which may provide touch input capability.
  • System 600 may be used to develop models in accordance with this disclosure (e.g., models 120, 300 and 405). The developed models may thereafter be deployed to computer system 600 or electronic device 125. (In another embodiment, electronic device 125 may provide sufficient computational power as to enable model development so that general computer system 600 need not be used.)


Abstract

Techniques are disclosed for displaying a graphical element in a manner that simulates three-dimensional (3D) visibility (including parallax and shadowing). More particularly, a number of images, each captured with a known spatial relationship to a target 3D object, may be used to construct a lighting model of the target object. In one embodiment, for example, polynomial texture maps (PTM) using spherical or hemispherical harmonics may be used to do this. Using PTM techniques, a relatively small number of basis images may be identified. When the target object is to be displayed, orientation information may be used to generate a combination of the basis images so as to simulate the 3D presentation of the target object.

Description

BACKGROUND
The realistic display of three-dimensional (3D) objects on a two-dimensional (2D) surface has been a long-time goal in the image processing field. One approach to simulating a 3D object is to take a large number of images, each illuminated from a different position. A specific image may then be selected and displayed based on a detected location of a light source (e.g., through an ambient or color light sensor). Another approach is to take a large number of images, each with the 3D object in a different location relative to a fixed light source. Again, a specific image may be selected and displayed based on a determined orientation of the 3D object (e.g., through use of an accelerometer). Another method would be to combine these two prior approaches so that both lighting location and object orientation may be accounted for. It should be relatively easy to grasp that the number of images needed for either of the first two approaches can become very large, making it difficult to implement in low-memory devices.
SUMMARY
In one embodiment the disclosed concepts provide a method to display three dimensional (3D) representations of an object based on orientation information. The method includes displaying a first image of the object on a display unit of an electronic device, wherein the first image is indicative of a first 3D presentation of the object; determining, based on output from one or more sensors integral to the electronic device, orientation information of the electronic device; determining a second image to display based on a light model of the object and the orientation information; adding synthetic shadows, based on the orientation information, to the second image to generate a third image; and displaying the third image of the object on the display unit, wherein the third image is indicative of a second 3D presentation of the object—the second 3D presentation being different from the first 3D presentation.
In one embodiment, orientation information may be determined relative to a gravity field using, for example, an accelerometer or a gyroscope. In another embodiment, orientation information may be based on a direction of light. In still another embodiment, an image may be captured coincident in time with display of the first image (and in a direction of light emitted from the display unit). The image may then be analyzed to identify certain types of objects and, from there, an orientation of the electronic device may be determined. By way of example, if the captured image includes a face, then the angle of the face within the captured frame may provide some orientation information. Various types of light models may be used. In one embodiment, the light model may be a polynomial texture map (PTM) model. In general, the model may encode or predict the angle of light and therefore the presentation of the object based on the orientation information. In addition to synthetic shadows, parallax information may be incorporated into the model or added like the synthetic shadows. A computer executable program to implement the disclosed methods may be stored in any media that is readable and executable by a computer system.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates a two-phase operation in accordance with one embodiment.
FIGS. 2A and 2B illustrate two baseline image capture operations in accordance with one embodiment.
FIG. 3 shows a light model system in accordance with one embodiment.
FIG. 4 shows a light model system in accordance with another embodiment.
FIG. 5 shows a system in accordance with yet another embodiment.
FIG. 6 shows a computer system in accordance with one embodiment.
DETAILED DESCRIPTION
This disclosure pertains to systems, methods, and computer readable media to display a graphical element exhibiting three-dimensional (3D) behavior. In general, techniques are disclosed for displaying a graphical element in a manner that simulates full 3D visibility (including parallax and shadowing). More particularly, a number of images, each captured with a known spatial relationship to a target object, may be used to construct a lighting model of the target object. In one embodiment, for example, polynomial texture maps (PTM) using spherical or hemispherical harmonics may be used to do this. Using PTM techniques, a relatively small number of basis images may be identified. When the target object is to be displayed, orientation information may be used to generate a combination of the basis images so as to simulate the 3D presentation of the target object, including, in some embodiments, the use of shadows and parallax artifacts. Orientation information may be obtained, for example, from an accelerometer or a light sensor.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the disclosed concepts. As part of this description, some of this disclosure's drawings represent structures and devices in block diagram form in order to avoid obscuring the novel aspects of the disclosed concepts. In the interest of clarity, not all features of an actual implementation are described. Moreover, the language used in this disclosure has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter. Reference in this disclosure to “one embodiment” or to “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosed subject matter, and multiple references to “one embodiment” or “an embodiment” should not be understood as necessarily all referring to the same embodiment.
It will be appreciated that in the development of any actual implementation (as in any software and/or hardware development project), numerous decisions must be made to achieve the developers' specific goals (e.g., compliance with system- and business-related constraints), and that these goals may vary from one implementation to another. It will also be appreciated that such development efforts might be complex and time-consuming, but would nonetheless be a routine undertaking for those of ordinary skill in the design and implementation of graphics processing systems having the benefit of this disclosure.
Referring to FIG. 1, techniques in accordance with this disclosure may be thought of as being made up of model development phase 100 and model deployment phase 105. Model development phase 100 may include the capture of baseline images (block 110) and the development of a model from those images (block 115). In one embodiment, model 120 may include multiple images of the target object captured at different viewing positions and/or lighting angles. In another embodiment, model 120 may include the development of a PTM model based on the captured baseline images. In still another embodiment, model 120 may include a combination of captured images and one or more PTM models. Once generated, model 120 may be deployed to electronic device 125. As shown, electronic device 125 in accordance with one embodiment may include communication interface 130, one or more processors 135, graphics hardware 140, display element or unit 145, device sensors 150, memory 155, image capture system 160, and audio system 165 all of which may be coupled via system bus or backplane 170 which may be comprised of one or more continuous (as shown) or discontinuous communication links.
Communication interface 130 may be used to connect electronic device 125 to one or more networks. Illustrative networks include, but are not limited to, a local network such as a USB or Bluetooth network, a cellular network, an organization's local area network, and a wide area network such as the Internet. Communication interface 130 may use any suitable technology (e.g., wired or wireless) and protocol (e.g., Transmission Control Protocol (TCP), Internet Protocol (IP), User Datagram Protocol (UDP), Internet Control Message Protocol (ICMP), Hypertext Transfer Protocol (HTTP), Post Office Protocol (POP), File Transfer Protocol (FTP), and Internet Message Access Protocol (IMAP)). Processor(s) 135 may be a system-on-chip such as those found in mobile devices and include one or more dedicated graphics processing units (GPUs). Processor 135 may be based on reduced instruction-set computer (RISC) or complex instruction-set computer (CISC) architectures or any other suitable architecture, and each processor may include one or more processing cores. Graphics hardware 140 may be special-purpose computational hardware for processing graphics and/or assisting processor(s) 135 in performing computational tasks. In one embodiment, graphics hardware 140 may include one or more programmable GPUs and each such unit may include one or more processing cores. Display 145 may use any type of display technology such as, for example, light emitting diode (LED) technology. Display 145 may provide a means of both input and output for device 125. Device sensors 150 may include, by way of example, 3D depth sensors, proximity sensors, ambient light sensors, accelerometers and/or gyroscopes. Memory 155 represents both volatile and non-volatile memory. Volatile memory may include one or more different types of media (typically solid-state) used by processor(s) 135 and graphics hardware 140. For example, memory 155 may include memory cache, read-only memory (ROM), and/or random access memory (RAM).
Memory 155 may also include one or more non-transitory storage media including, for example, magnetic disks (fixed, floppy, and removable) and tape, optical media such as CD-ROMs and digital video disks (DVDs), and semiconductor memory devices such as Electrically Programmable Read-Only Memory (EPROM) and Electrically Erasable Programmable Read-Only Memory (EEPROM). Memory 155 may be used to retain media (e.g., audio, image and video files), preference information, device profile information, computer program instructions or code organized into one or more modules and written in any desired computer programming language, and any other suitable data. When executed by processor(s) 135 and/or graphics hardware 140, such computer program code may implement one or more of the techniques or features described herein. Image capture system 160 may capture still and video images and include one or more image sensors and one or more lens assemblies. Output from image capture system 160 may be processed, at least in part, by video codec(s) and/or processor(s) 135 and/or graphics hardware 140, and/or a dedicated image processing unit incorporated within image capture system 160. Images so captured may be stored in memory 155. By way of example, electronic device 125 may have two major surfaces. A first or front surface may be coincident with display unit 145. A second or back surface may be an opposing surface. In some embodiments, image capture system 160 may include one or more cameras directed outward from the first surface and one or more cameras directed outward from the second surface. Electronic device 125 could be, for example, a mobile telephone, personal media device, portable camera, or a tablet, notebook or desktop computer system.
Baseline image capture in accordance with block 110 may include one or two phases. Referring to FIG. 2A, phase-1 200 may include 3D target object 205 illuminated by light source 210. As camera 215 moves from position 220A to position 220B to position 220C along path 225, a relatively large number of images may be obtained to produce phase-1 image corpus 230. By way of example, in moving from position 220A to position 220C, a total of 180 images may be captured (e.g., one image for every 1° of motion). In another embodiment, camera 215 may be moved completely around target object 205. In this embodiment a total of 360 images may be captured (e.g., one image for every 1° of motion). Referring to FIG. 2B, optional phase-2 235 may include 3D target object 205 illuminated by light source 210, which moves from position 240A to position 240B to position 240C along path 245 while camera 215 remains in a single position taking a relatively large number of images (e.g., 150) to produce phase-2 image corpus 250. The precise number of images needed to generate image corpus 230 and image corpus 250 may depend on the desired fidelity of the resulting model: the more precise the model, the more images will generally be needed. In some embodiments, captured images 230 and 250 capture shadowing, highlights and parallax information.
Referring to FIG. 3, in one embodiment image corpus 230 may be organized so that each (or at least some) of the images are associated with their corresponding viewing angle, as illustrated in table 300. In accordance with embodiments of this type, the mapping between viewing (capture) angle and target object 205 may be thought of as a model (e.g., model 120). During run-time (e.g., use of electronic device 125), device sensors 150 may be used to determine a viewing angle. Once determined, the corresponding image may be retrieved from memory 155 and displayed using display element 145. In one embodiment, if the viewing angle determined from sensor output falls between two viewing angles captured in accordance with FIG. 2A, the two images to either “side” of the sensor-provided viewing angle may be combined in, for example, a weighted sum. (As used here, the phrase “either side” means the captured image whose viewing angle is closest below the sensor-indicated viewing angle and the captured image whose viewing angle is closest above it.) One of ordinary skill in the art will recognize that image corpus 230 may be retained in structures different from the single table shown in FIG. 3; for example, in a plurality of tables such as in a relational database, or in a B-tree or other data structure useful for data comparison and retrieval operations.
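The lookup-and-blend behavior described above might be sketched as follows; the function name, the use of a Python dictionary for table 300, and the NumPy image representation are illustrative assumptions, not part of the disclosure:

```python
import bisect

import numpy as np

def image_for_angle(angle, table):
    """Return an image for a sensed viewing angle from a model like table 300.

    `table` maps capture angles (degrees) to images (NumPy arrays).  If the
    sensed angle falls between two captured angles, the images on either
    "side" are combined in a weighted sum, as described above.
    """
    angles = sorted(table)
    if angle <= angles[0]:
        return table[angles[0]]
    if angle >= angles[-1]:
        return table[angles[-1]]
    hi = bisect.bisect_left(angles, angle)
    if angles[hi] == angle:          # exact match: no blending needed
        return table[angle]
    lo = hi - 1
    # Weight by proximity to the two bracketing capture angles.
    w = (angle - angles[lo]) / (angles[hi] - angles[lo])
    return (1.0 - w) * table[angles[lo]] + w * table[angles[hi]]
```

With captures every 1° as in FIG. 2A, a sensed angle of 45.5° would blend the 45° and 46° images equally.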
Referring to FIG. 4, model generation in accordance with block 115 may apply PTM operation 400 to each image in image corpus 250 independently to produce PTM model 405. During run-time (e.g., use of electronic device 125), device sensors 150 may be used to determine a lighting angle with respect to target object 205 (e.g., electronic device 125). Once determined, the corresponding position may be input to PTM model 405 (e.g., in terms of x-position 415 and y-position 420, and optionally a z-position, not shown) and used to generate output image 410. In another embodiment, the light angle may be represented in other coordinate systems (e.g., yaw-pitch-roll). In one embodiment, PTM operation 400 may employ spherical harmonics (SH). In another embodiment, PTM operation 400 may employ hemispherical harmonics (HSH). In still other embodiments, different basis functions may be used such as, for example, Zernike polynomials, spherical wavelets, and Makhotkin hemispherical harmonics. The precise functional relationship or polynomial chosen may be a function of the implementation's operational environment, the desired fidelity of the resulting light model, and the amount of memory needed by the model.
One feature of PTM operation 400 is that it produces a model 405 that may use significantly fewer images than are in image corpus 250. Image corpus 250 may include a relatively large number of high-resolution color images (e.g., 50 to 400). In contrast, PTM model 405 may need only a few coefficient “images” from which all images within that model's range may be generated. By way of example, in one embodiment PTM model 405 may employ spherical harmonics and result in a polynomial of the following form:
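A minimal sketch of the kind of per-pixel least-squares fit an operation such as PTM operation 400 could perform is shown below. It assumes the corpus has been reduced to grayscale luminance images with known illuminant positions; the function name and array layout are hypothetical:

```python
import numpy as np

def fit_ptm(light_positions, images):
    """Fit per-pixel coefficients a0..a5 of a biquadratic (EQ. 1 form).

    light_positions: (N, 2) array of (x, y) illuminant positions.
    images: (N, H, W) array of luminance images from the corpus.
    Returns an (H, W, 6) array of coefficient "images".
    """
    x, y = light_positions[:, 0], light_positions[:, 1]
    # Design matrix: one row [x^2, y^2, xy, x, y, 1] per corpus image.
    A = np.stack([x * x, y * y, x * y, x, y, np.ones_like(x)], axis=1)
    N, H, W = images.shape
    b = images.reshape(N, H * W)     # each column: one pixel across N lights
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs.T.reshape(H, W, 6)
```

The six coefficient arrays returned here are the few "images" the model retains in place of the full corpus.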
pᵢ = a₀x² + a₁y² + a₂xy + a₃x + a₄y + a₅,  EQ. 1
where ‘pᵢ’ represents the model's output for pixel ‘i’ given an illuminant location (x, y), and a₀ through a₅ are model coefficients, the values of which are found by PTM operation 400. In general, model coefficients a₀ through a₅ may be different for each pixel of image 410 represented by x input 415 and y input 420.
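Evaluating EQ. 1 over a full set of per-pixel coefficient "images" might look like the following sketch (the (H, W, 6) coefficient layout and function name are illustrative):

```python
import numpy as np

def eval_ptm(coeffs, x, y):
    """Evaluate EQ. 1 for every pixel given illuminant position (x, y).

    coeffs: (H, W, 6) array of per-pixel coefficients a0..a5.
    Returns an (H, W) luminance image (e.g., toward output image 410).
    """
    a0, a1, a2, a3, a4, a5 = np.moveaxis(coeffs, -1, 0)
    # Same terms, same order, as EQ. 1.
    return a0 * x * x + a1 * y * y + a2 * x * y + a3 * x + a4 * y + a5
```

Because the polynomial is evaluated array-wide, a new illuminant position yields a full output image in one vectorized pass.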
In practice, pᵢ as defined by EQ. 1 represents only the intensity or luminance of the ith pixel in output image 410. To introduce color, a color matrix [C] may be introduced such that:

[P] = [C][P_Y],  EQ. 2

where [P_Y] represents the luminance image generated in accordance with EQ. 1, and [C] represents the color value associated with each pixel in output image [P] (e.g., output image 410). In one embodiment, each pixel value in [C] may be the average color value of all corresponding pixels in image corpus 250. In another embodiment, each pixel value in [C] may be the median value of all the corresponding pixels in image corpus 250. In yet another embodiment, the value of each pixel in chroma image [C] may be a weighted average of all corresponding color values in image corpus 250. In still another embodiment, chroma values from image corpus 250 may be combined in any manner deemed useful for a particular embodiment (e.g., non-linearly).
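One way to realize the colorization step of EQ. 2, taking the per-pixel chroma as the corpus average (one of the embodiments above), could be sketched as follows; function names and the RGB array layout are assumptions:

```python
import numpy as np

def colorize(luminance, chroma):
    """Apply per-pixel chroma [C] to a PTM luminance output (EQ. 2 form).

    luminance: (H, W) image produced by the PTM model.
    chroma: (H, W, 3) per-pixel color values.
    Returns an (H, W, 3) colored output image.
    """
    # Scale each pixel's color by that pixel's modeled luminance.
    return chroma * luminance[..., np.newaxis]

def average_chroma(corpus):
    """One choice for [C]: the per-pixel mean color over an (N, H, W, 3) corpus."""
    return corpus.mean(axis=0)
```

Swapping `average_chroma` for a median or weighted average changes only how [C] is built, not how it is applied.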
Model deployment phase 105 in accordance with FIG. 1 may be invoked once at least one generated model (e.g., model 300 and/or model 405) is transferred to memory 155 of electronic device 125. Once installed onto device 125, target object 205 may be displayed on display unit 145. Referring to FIG. 5, system 500 in accordance with another embodiment may employ device sensors 150 to supply models 300 and 405 with input (e.g., 415 and 420). In one embodiment, device sensors 150 may include ambient and/or color sensors to identify both the location and color temperature of a light source. In another embodiment, device sensors 150 include a gyroscope and/or an accelerometer so that the orientation of device 125 may be determined. If both models 300 and 405 are used, their respective output images may be combined 505 to generate output image 510. In one embodiment, combine operation 505 may be a simple merge operation. In another embodiment, combine operation 505 may represent a weighted combination of each model's output. In yet another embodiment, combine operation 505 may select one model's output based on sensor input and/or user input.
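A combine operation such as 505, expressed as a weighted combination with outright selection as the degenerate case, might be sketched as (the default weight is an illustrative choice):

```python
def combine_outputs(img_a, img_b, weight=0.5):
    """Combine two models' output images (in the manner of combine 505).

    weight=0.5 gives a simple 50/50 merge; weight=1.0 or 0.0 degenerates
    to selecting one model's output, as when sensor or user input picks
    a single model.  Works on scalars or NumPy arrays alike.
    """
    return weight * img_a + (1.0 - weight) * img_b
```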
By way of another example, consider first the situation wherein model 405 is operative and device sensors 150 indicate device 125 is tilted at an orientation representative of a viewer looking down on target object 205 at an approximately 45° angle. If a person were holding an object in their hand looking straight down onto its top, they would expect to see the object's top surface. As they moved their head to a 45° angle, they would expect to see less of the object's top surface and more of one or more side surfaces. In practice, sensor input indicative of a 45° angle (represented as x and y coordinates, see FIG. 5) could be input to PTM model 405, and output image 510 would be a combination of the PTM coefficient images modified to provide color.
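One plausible mapping from sensed tilt to the model's (x, y) inputs 415 and 420 is sketched below; the disclosure does not mandate any particular mapping, so the sine-based projection here is purely an assumption:

```python
import math

def tilt_to_xy(pitch_deg, roll_deg):
    """Map device tilt (e.g., from a gyroscope/accelerometer) to (x, y).

    Looking straight down (0° tilt) maps to (0, 0), the model's center;
    a 45° pitch maps to x ~ 0.707, partway toward the model's edge.
    """
    return math.sin(math.radians(pitch_deg)), math.sin(math.radians(roll_deg))
```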
Images output in accordance with this disclosure may include shadows, highlights and parallax to the extent this information is captured in the generated image corpuses. In another embodiment, if shadow information is not included in the image data used to generate the model, the device's tilt and/or the identified direction of a light source (relative to device 125) may be used to generate synthetic shadows (e.g., based on image processing). Embodiments employing this technique may use sensor input to generate a first output image from the relevant light model (e.g., output image 410 or 510). This image may then be used to generate synthetic shadows. The synthetic shadows may then be applied to the first output image to generate a final output image, which may be displayed, for example, on display unit 145. In still another embodiment, electronic device 125 may include a camera unit facing outward from display 145. The camera could then capture and analyze an image (separate from or in combination with device sensors 150) to determine the device's orientation and/or input to models 300 and 405. The resulting output image (e.g., image 510) may include shadows as captured during model generation or generated synthetically via image analysis. In one embodiment, the captured image may include a face such that aspects of the detected face (e.g., location of the eyes and/or mouth and/or nose) may be used to determine input to model 300 and/or 405.
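A simple image-processing stand-in for the synthetic shadows described above, shifting a darkened copy of the object silhouette away from the identified light direction, might look like this sketch; the mask, offset, and strength parameters are illustrative assumptions:

```python
import numpy as np

def add_synthetic_shadow(image, mask, light_xy, offset=8, strength=0.5):
    """Darken `image` where a shifted copy of the object `mask` lands.

    image: (H, W) grayscale output image from a light model.
    mask: (H, W) boolean object silhouette.  The shadow is the silhouette
    shifted opposite the light direction `light_xy`, a basic approximation
    when no real shadow data was captured in the corpus.
    """
    dx = -int(round(offset * light_xy[0]))
    dy = -int(round(offset * light_xy[1]))
    # Shadow region: shifted silhouette, excluding the object itself.
    shadow = np.roll(mask, shift=(dy, dx), axis=(0, 1)) & ~mask
    out = image.astype(float).copy()
    out[shadow] *= (1.0 - strength)
    return out
```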
Referring to FIG. 6, in addition to being deployed on electronic device 125, the disclosed techniques may be developed and deployed on representative computer system 600 (e.g., a general purpose computer system such as a desktop, laptop, notebook or tablet computer system). Computer system 600 may include one or more processors 605, memory 610 (610A and 610B), one or more storage devices 615, graphics hardware 620, device sensors 625 (e.g., 3D depth sensor, proximity sensor, ambient light sensor, color light sensor, accelerometer and/or gyroscope), communication interface 630, user interface adapter 635 and display adapter 640, all of which may be coupled via system bus or backplane 645. Processors 605, memory 610 (including storage devices 615), graphics hardware 620, device sensors 625, communication interface 630, and system bus or backplane 645 provide the same or similar function as the similarly identified elements in FIG. 1 and will not, therefore, be described further. User interface adapter 635 may be used to connect keyboard 650, microphone 655, pointer device 660, speaker 665 and other user interface devices such as a touch-pad and/or a touch screen (not shown). Display adapter 640 may be used to connect one or more display units 670 (similar in function to display unit 145), which may provide touch input capability. System 600 may be used to develop models in accordance with this disclosure (e.g., models 120, 300 and 405). The developed models may thereafter be deployed to computer system 600 or electronic device 125. (In another embodiment, electronic device 125 may provide sufficient computational power to enable model development, so that general computer system 600 need not be used.)
It is to be understood that the above description is intended to be illustrative, and not restrictive. The material has been presented to enable any person skilled in the art to make and use the disclosed subject matter as claimed and is provided in the context of particular embodiments, variations of which will be readily apparent to those skilled in the art (e.g., some of the disclosed embodiments may be used in combination with each other). For example, models 120, 300 and 405 may be developed and deployed separately or together. In yet another embodiment, image corpuses 230 and 250 may be combined and used to generate a single light model. In one or more embodiments, one or more of the disclosed steps may be omitted, repeated, and/or performed in a different order than that described herein. The scope of the invention therefore should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.”

Claims (23)

The invention claimed is:
1. An electronic device, comprising:
one or more memory devices;
a display unit coupled to the one or more memory devices;
an orientation sensor element; and
one or more processors coupled to the one or more memory devices, the display unit, and the orientation sensor element, the one or more processors configured to execute program instructions stored in the one or more memory devices to cause the electronic device to—
obtain, from the orientation sensor element, orientation information of the electronic device,
obtain an image of an object based on a light model of the object and the orientation information, wherein the light model of the object comprises a plurality of images of the object at different viewing angles, and wherein the obtained image is indicative of a three-dimensional presentation of the object at a viewing angle corresponding to the orientation information of the electronic device, and
display the obtained image of the object by the display unit.
2. The electronic device of claim 1, wherein the orientation information comprises an orientation of the electronic device relative to a gravity field.
3. The electronic device of claim 1, wherein the light model of the object comprises a polynomial texture map (PTM) model.
4. The electronic device of claim 1, wherein the light model of the object includes parallax information.
5. The electronic device of claim 1, wherein the program instructions to obtain the image of the object further comprise program instructions to select an image from the plurality of images of the object at different viewing angles.
6. The electronic device of claim 1, wherein the program instructions to obtain the image of the object further comprise program instructions to generate the image based on two or more of the plurality of images of the object at different viewing angles.
7. The electronic device of claim 6, wherein the two or more of the plurality of images comprise a first image and a second image, wherein the first image and the second image comprise images from the light model of the object at viewing angles most closely corresponding to the orientation information of the electronic device.
8. The electronic device of claim 1, further comprising program instructions stored in the one or more memory devices to cause the electronic device to:
add synthetic shadows, based on the orientation information, to the obtained image to generate a modified image of the object; and
display the modified image of the object by the display unit.
9. A non-transitory program storage device comprising instructions stored thereon to cause one or more processors to:
obtain, from an orientation sensor element, orientation information of an electronic device, wherein the electronic device includes a display unit;
obtain an image of an object based on a light model of the object and the orientation information, wherein the light model of the object comprises a plurality of images of the object at different viewing angles, and wherein the obtained image is indicative of a three-dimensional presentation of the object at a viewing angle corresponding to the orientation information of the electronic device; and
display the obtained image of the object by the display unit.
10. The non-transitory program storage device of claim 9, wherein the instructions to cause the one or more processors to obtain orientation information comprise instructions to cause the one or more processors to determine the orientation information based on a gravity field.
11. The non-transitory program storage device of claim 9, wherein the light model of the object comprises a polynomial texture map (PTM) model.
12. The non-transitory program storage device of claim 9, wherein the light model of the object includes parallax information.
13. The non-transitory program storage device of claim 9, wherein the instructions to cause the one or more processors to obtain the image of the object further comprise instructions to select an image from the plurality of images of the object at different viewing angles.
14. The non-transitory program storage device of claim 9, wherein the instructions to cause the one or more processors to obtain the image of the object further comprise instructions to generate the image based on two or more of the plurality of images of the object at different viewing angles.
15. The non-transitory program storage device of claim 14, wherein the two or more of the plurality of images comprise a first image and a second image, wherein the first image and the second image comprise images from the light model of the object at viewing angles most closely corresponding to the orientation information of the electronic device.
16. A method to display a three-dimensional representation of an object, comprising:
obtaining, from an orientation sensor element, orientation information of an electronic device;
obtaining an image of an object based on a light model of the object and the orientation information, wherein the light model of the object comprises a plurality of images of the object at different viewing angles, and wherein the obtained image is indicative of a three-dimensional presentation of the object at a viewing angle corresponding to the orientation information of the electronic device; and
displaying the obtained image of the object on a display unit associated with the electronic device.
17. The method of claim 16, wherein the orientation information comprises an orientation of the electronic device relative to a gravity field.
18. The method of claim 16, wherein the light model of the object comprises a polynomial texture map (PTM) model.
19. The method of claim 16, wherein the light model of the object includes parallax information.
20. The method of claim 16, wherein obtaining the image of the object further comprises selecting an image from the plurality of images of the object at different viewing angles.
21. The method of claim 16, wherein obtaining the image of the object further comprises generating the image based on two or more of the plurality of images of the object at different viewing angles.
22. The method of claim 21, wherein the two or more of the plurality of images comprise a first image and a second image, wherein the first image and the second image comprise images from the light model of the object at viewing angles most closely corresponding to the orientation information of the electronic device.
23. The method of claim 16, further comprising:
adding synthetic shadows, based on the orientation information, to the obtained image to generate a modified image of the object; and
displaying the modified image of the object by the display unit.
US16/291,590 2015-09-30 2019-03-04 3D lighting Active US10748331B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/291,590 US10748331B2 (en) 2015-09-30 2019-03-04 3D lighting

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201562235570P 2015-09-30 2015-09-30
US15/274,284 US10262452B2 (en) 2015-09-30 2016-09-23 3D lighting
US16/291,590 US10748331B2 (en) 2015-09-30 2019-03-04 3D lighting

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/274,284 Continuation US10262452B2 (en) 2015-09-30 2016-09-23 3D lighting

Publications (2)

Publication Number Publication Date
US20190206120A1 US20190206120A1 (en) 2019-07-04
US10748331B2 true US10748331B2 (en) 2020-08-18

Family

ID=57218978

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/274,284 Active 2036-10-19 US10262452B2 (en) 2015-09-30 2016-09-23 3D lighting
US16/291,590 Active US10748331B2 (en) 2015-09-30 2019-03-04 3D lighting

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/274,284 Active 2036-10-19 US10262452B2 (en) 2015-09-30 2016-09-23 3D lighting

Country Status (3)

Country Link
US (2) US10262452B2 (en)
CN (2) CN113516752A (en)
WO (1) WO2017058662A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021021585A1 (en) * 2019-07-29 2021-02-04 Ocelot Laboratories Llc Object scanning for subsequent object detection
CN111696189A (en) * 2020-06-17 2020-09-22 北京中科深智科技有限公司 Real-time hair self-shadow drawing method and system

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6891527B1 (en) * 1999-12-06 2005-05-10 Soundtouch Limited Processing signals to determine spatial positions
US20100079449A1 (en) 2008-09-30 2010-04-01 Apple Inc. System and method for rendering dynamic three-dimensional appearing imagery on a two-dimensional user interface
US20100103172A1 (en) 2008-10-28 2010-04-29 Apple Inc. System and method for rendering ambient light affected appearing imagery based on sensed ambient lighting
US20100156907A1 (en) 2008-12-23 2010-06-24 Microsoft Corporation Display surface tracking
US20100189342A1 (en) 2000-03-08 2010-07-29 Cyberextruder.Com, Inc. System, method, and apparatus for generating a three-dimensional representation from one or more two-dimensional images
US20110285822A1 (en) * 2010-05-21 2011-11-24 Tsinghau University Method and system for free-view relighting of dynamic scene based on photometric stereo
US20130016102A1 (en) * 2011-07-12 2013-01-17 Amazon Technologies, Inc. Simulating three-dimensional features
US20130015946A1 (en) 2011-07-12 2013-01-17 Microsoft Corporation Using facial data for device authentication or subject identification
US8416236B1 (en) 2012-07-19 2013-04-09 Google Inc. Calibration of devices used to generate images of a three-dimensional object data model
US20130147798A1 (en) * 2011-12-08 2013-06-13 The Board Of Trustees Of The University Of Illinois Inserting objects into content
US8547457B2 (en) 2009-06-22 2013-10-01 Empire Technology Development Llc Camera flash mitigation
WO2014006514A2 (en) 2012-07-04 2014-01-09 Opera Imaging B.V. Image processing in a multi-channel camera
US20140300773A1 (en) 2013-04-08 2014-10-09 Samsung Electronics Co., Ltd. Image capture devices and electronic apparatus having the same
WO2014190221A1 (en) 2013-05-24 2014-11-27 Microsoft Corporation Object display with visual verisimilitude

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103239255B (en) * 2013-05-20 2015-01-28 西安电子科技大学 Cone-beam X-ray luminescence computed tomography method
CN104167011B (en) * 2014-07-30 2017-02-08 北京航空航天大学 Micro-structure surface global lighting drawing method based on direction light radiation intensity


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
International Search Report and Written Opinion received in PCT Patent Application No. PCT/US2016/053461, dated Jan. 12, 2017.

Also Published As

Publication number Publication date
US20170091988A1 (en) 2017-03-30
CN108140256B (en) 2021-08-10
WO2017058662A1 (en) 2017-04-06
CN108140256A (en) 2018-06-08
US20190206120A1 (en) 2019-07-04
CN113516752A (en) 2021-10-19
US10262452B2 (en) 2019-04-16


Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCV Information on status: appeal procedure

Free format text: NOTICE OF APPEAL FILED

STCV Information on status: appeal procedure

Free format text: APPEAL BRIEF (OR SUPPLEMENTAL BRIEF) ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4