WO2019035600A1 - System and method for displaying real or virtual scene - Google Patents

System and method for displaying real or virtual scene

Info

Publication number
WO2019035600A1
Authority
WO
WIPO (PCT)
Prior art keywords
display
view
processor
pieces
matrix
Prior art date
Application number
PCT/KR2018/009072
Other languages
French (fr)
Inventor
Konstantin Viktorovich KOLCHIN
Gleb Sergeevich MILYUKOV
Sergey Alexandrovich TURKO
Jae-yeol RYU
Mikhail Vyacheslavovich POPOV
Stanislav Aleksandrovich Shtykov
Andrey Yurievich Shcherbinin
Chan-Yul Kim
Myung-Ho Kim
Original Assignee
Samsung Electronics Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from RU2017129073A external-priority patent/RU2665289C1/en
Application filed by Samsung Electronics Co., Ltd. filed Critical Samsung Electronics Co., Ltd.
Priority to EP18845492.0A priority Critical patent/EP3615988B1/en
Publication of WO2019035600A1 publication Critical patent/WO2019035600A1/en

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0172Head mounted characterised by optical features
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/001Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/001Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
    • G09G3/003Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background to produce spatial visual effects
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G3/34Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters by control of light from an independent source
    • G09G3/3406Control of illumination source
    • G09G3/342Control of illumination source using several illumination sources separately controlled corresponding to different display panel areas, e.g. along one dimension such as lines
    • G09G3/3426Control of illumination source using several illumination sources separately controlled corresponding to different display panel areas, e.g. along one dimension such as lines the different display panel areas being distributed in two dimensions, e.g. matrix
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G3/34Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters by control of light from an independent source
    • G09G3/36Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters by control of light from an independent source using liquid crystals
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0176Head mounted characterised by mechanical features
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2300/00Aspects of the constitution of display devices
    • G09G2300/02Composition of display devices
    • G09G2300/023Display panel composed of stacked panels
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00Control of display operating conditions
    • G09G2320/06Adjustment of display parameters
    • G09G2320/0626Adjustment of display parameters for control of overall brightness

Definitions

  • the disclosure relates to an imaging technology. More particularly, the disclosure relates to a system and a method for displaying a real or virtual scene capable of generating high image quality three-dimensional (3D) images while addressing a vergence-accommodation conflict.
  • VR technology has been increasingly used in various fields of life within human society (traditional and well-known applications in game and education industries). To popularize the VR technology and provide for its long-term application, it is necessary to provide a visually comfortable interaction between users and reality.
  • Modern VR displays support various cues of human vision, for example, motion parallax, binocular disparity, binocular occlusion, and vergence.
  • However, an accommodation cue of the human eye for virtual objects is not supported by these displays. This causes a phenomenon called the vergence-accommodation conflict to occur.
  • the vergence-accommodation conflict occurs because, to focus on an image formed and viewed on a display or through a lens, the human visual system needs to maintain a fixed focal distance of the eyeball lenses when viewing a 3D image, while at the same time the user has to change the focal distance of the eyeball lenses according to the distance to whichever virtual object his or her eyes are currently directed at.
  • the vergence-accommodation conflict occurs since virtual objects are viewed as if they were located at different "distances," whereas the virtual objects actually all lie next to each other on the flat surface of the display screen.
  • This conflict between the virtual scene and reality causes visual discomfort, eye fatigue, eye strain, and headaches.
  • one related-art approach provides a display apparatus including one or more spatially addressable light-attenuation layers, and a controller configured to perform the computations needed to control the display apparatus and to address an optimization issue by using weighted nonnegative tensor factorization (NTF) for memory-efficient representation of a light field at a low density.
  • Therefore, a need exists for a display system, e.g., a head-mountable display suitable for a VR application, capable of addressing the vergence-accommodation conflict while generating a high-quality image.
  • the present invention provides a system and a method for displaying a scene.
  • the system includes a display configured to emit light, a spatial light modulator configured to modulate input light based on a transparency value, and at least one processor configured to acquire adjustment information including transparency of the spatial light modulator and light intensity information of the display from a plurality of pieces of view information corresponding to the scene and adjust an intensity value of the light emitted from the display and the transparency value of the spatial light modulator based on the adjustment information, wherein the plurality of pieces of view information are optical information of the scene, the optical information having been acquired at a plurality of viewpoints.
  • the disclosure enables a user to be immersed in a virtual reality (VR) of various tasks, such as 3D modeling, navigation, design, and entertainment.
  • the disclosure may be employed in various head-mounted devices (HMDs), such as VR glasses or helmets, which are being increasingly used in game and education industries at the moment.
  • FIG. 1 illustrates a light field diagram according to a view array of a specific scene captured at different viewpoints by using a camera array according to an embodiment of the disclosure.
  • FIG. 2 illustrates an extended view of a display system for displaying a real or virtual scene according to an embodiment of the disclosure.
  • FIGS. 3a and 3b illustrate spatial light modulators according to display types of a mobile electronic device according to various embodiments of the disclosure.
  • FIG. 4 illustrates a display system including a belt for mounting to a head according to an embodiment of the disclosure.
  • FIG. 5 is a flowchart of a method of operating a display system according to an embodiment of the disclosure.
  • FIG. 6 illustrates a two-parameter light field expression by Levoy and Hanrahan according to an embodiment of the disclosure.
  • FIG. 7 illustrates a weighted-matrix calculation method performed based on geometric parameters of a system according to an embodiment of the disclosure.
  • FIG. 8 illustrates a matrix consisting of views using a barycentric coordinate system according to an embodiment of the disclosure.
  • FIG. 9 is a block diagram of a display system according to an embodiment of the disclosure.
  • FIG. 10 is a flowchart of a method of displaying a scene according to an embodiment of the disclosure.
  • FIG. 11 is a flowchart of a method of displaying a scene according to an embodiment of the disclosure.
  • FIG. 12 is a flowchart of a method of displaying a scene according to an embodiment of the disclosure.
  • FIG. 13 is a flowchart of an enhancing processing operation according to an embodiment of the disclosure.
  • FIG. 14 is a flowchart of a method of displaying a scene according to an embodiment of the disclosure.
  • FIG. 15 is a flowchart of a method of displaying a scene according to an embodiment of the disclosure.
  • an aspect of the disclosure is to provide an apparatus and a method for displaying a real or virtual scene without requiring complex computation while addressing a vergence-accommodation conflict.
  • a system for displaying an image in a unit of a scene includes a display configured to emit light, a spatial light modulator configured to modulate input light based on a transparency value, and at least one processor configured to acquire adjustment information including transparency of the spatial light modulator and light intensity of the display from a plurality of pieces of view information corresponding to the scene and adjust an intensity value of the light emitted from the display and the transparency value of the spatial light modulator based on the adjustment information, wherein the plurality of pieces of view information are optical information of the scene, which has been acquired at a plurality of viewpoints.
  • a scene display method of displaying an image in a unit of a scene includes receiving a plurality of pieces of view information corresponding to the scene, acquiring, from the plurality of pieces of view information, adjustment information including light intensity of light emitted from a display and transparency of a spatial light modulator configured to modulate the light, and adjusting an intensity value of the light emitted from the display and a transparency value of the spatial light modulator, based on the adjustment information, wherein the plurality of pieces of view information are optical information of the scene, which has been acquired at a plurality of viewpoints.
  • At least one non-transitory computer-readable recording medium has recorded thereon a computer-readable program for performing the method described above.
  • FIG. 1 illustrates a light field diagram according to a view array of a specific scene captured at different viewpoints by using a camera array according to an embodiment of the disclosure.
  • a light field is a vector function indicating an amount of light moving in an arbitrary direction through an arbitrary point in a space.
  • light field indicates a spatial distribution of light fluxes coming out from a visualized image or scene.
  • A light field is specified by a direction of propagation and a specific value of radiant energy at each point.
  • a light field of a specific (real or virtual) scene may be approximated by an array of a plurality of different views for a corresponding scene. The views may be respectively obtained from different viewpoints by using, for example, an array of cameras or micro lenses of a plenoptic camera. Therefore, as shown in FIG. 1, views may be slightly shifted with respect to each other.
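The view-array approximation described above can be illustrated numerically: the light field of a scene is stored as a stack of views that are slightly shifted with respect to each other, as in FIG. 1. A minimal numpy sketch, where the toy scene values and the use of a horizontal roll to mimic per-viewpoint parallax are illustrative assumptions, not the patent's capture pipeline:

```python
import numpy as np

def make_view_array(scene, shifts):
    """Approximate a light field by an array of views of one scene,
    each shifted horizontally according to its (virtual) camera position."""
    views = []
    for dx in shifts:
        # np.roll stands in for the small parallax shift between viewpoints
        views.append(np.roll(scene, dx, axis=1))
    return np.stack(views)  # shape: (num_views, H, W)

scene = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 "scene"
views = make_view_array(scene, shifts=[-1, 0, 1])  # three slightly shifted views
```

A plenoptic camera or camera array would supply real views; the structure (one image per viewpoint) is the same.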
  • FIG. 2 illustrates an extended view of a display system for displaying a real or virtual scene according to an embodiment of the disclosure.
  • a display system 1 may include a mobile electronic device 2, a spatial light modulator 3, and an optical lens 4.
  • the embodiment shows a case where the mobile electronic device 2 is a mobile or cellular phone, but those of ordinary skill in the art may replace the mobile or cellular phone with other devices capable of implementing the same functions, such as a laptop computer, a tablet computer, and a portable digital player.
  • a dice image shown as an initial scene in FIG. 2 is not intended to limit the embodiment of the disclosure, and the technical idea of the embodiment may be applied in the same way to more complex images including objects and subjects of various types and forms.
  • a display of the mobile electronic device 2 may be an organic light emitting diode (OLED) display or a display having a different pixel structure.
  • the spatial light modulator 3 is disposed at the front of the display of the mobile electronic device 2 and may have a pixel structure having a controllable color slide.
  • the spatial light modulator 3 will be described below.
  • FIGS. 3a and 3b illustrate spatial light modulators according to display types of a mobile electronic device according to various embodiments of the disclosure.
  • a liquid crystal display 7 is used as the display of the mobile electronic device 2, and as described in the document (Mukhin, I. A., Development of liquid-crystal monitors, BROADCASTING Television and radiobroadcasting: 1 part - No. 2(46), March 2005, pp. 55-56; 2 part - No. 4(48), June-July 2005, pp. 71-73), the liquid crystal display 7 may include a backlighting unit, one pair of a first polarizing plate P1 and a second polarizing plate P2, and a first liquid crystal layer LC1 located between the first polarizing plate P1 and the second polarizing plate P2.
  • a second liquid crystal layer LC2 and a third polarizing plate P3 located in the proximity of the user are used as the spatial light modulator 3. Compared with a method of using, as a spatial light modulator, the first polarizing plate P1 located between the display and the first liquid crystal layer LC1, the first liquid crystal layer LC1, and the second polarizing plate P2 next to the first liquid crystal layer LC1, the method of using the second liquid crystal layer LC2 and the third polarizing plate P3 as the spatial light modulator 3 reduces the number of polarizing plates used for a spatial light modulator, and thus a size of the display system 1 (not shown in FIG. 3A) may be reduced.
  • an OLED display 8 may be used as the display of the mobile electronic device 2.
  • a fourth polarizing plate P4, a liquid crystal layer LC, and a fifth polarizing plate P5 may be used as the spatial light modulator 3.
  • the optical lens 4 is located between the spatial light modulator 3 and one eye of the user of the display system 1.
  • Optical lenses having the same form as the optical lens 4 may be arranged at the front of the other eye of the user. A set of these lenses constitutes an optical lens device.
  • a transparency value of pixels of the spatial light modulator 3 and an intensity value of pixels of the display of the mobile electronic device 2 may be variably changed by control signals provided from at least one processor or controller (not shown) included in the display system 1.
  • An adjustment operation for the transparency and intensity will be described when a method of operating the display system 1 is described.
  • FIG. 4 illustrates a display system including a belt for mounting to a head according to an embodiment of the disclosure.
  • the above-described components of the display system 1, particularly, the mobile electronic device 2 and the spatial light modulator 3 shown together in FIG. 2, may be accommodated in a case or enclosure 5 (see FIG. 4) made of a proper material, such as plastic or a synthetic material.
  • a specific mounting unit disposed on a leather belt 6 (see FIG. 4) connected to the case or enclosure 5 may be used.
  • the case or enclosure 5 may be virtual reality (VR) glasses or a VR helmet.
  • FIG. 5 is a flowchart of a method of operating a display system according to an embodiment of the disclosure.
  • Referring to FIG. 5, an operation of the display system 1 will be described below. Particularly, operations performed by the processor or controller described above will be described with reference to FIG. 5.
  • the processor or controller receives a set of views of a real or virtual scene, for example, dice shown in FIG. 2.
  • Each view of the real or virtual scene is specified by a field of view defined for a scene, as described with reference to FIG. 1.
  • a set of views of a scene may be acquired by using a plenoptic camera, for example, Lytro Illum.
  • the set of the acquired views may be stored in a memory of the mobile electronic device 2.
  • the processor or controller may access the memory of the mobile electronic device 2 to extract a set of views of a scene for subsequent processing.
  • the processor or controller may form a set of views of a scene by itself by using a rendering program.
  • the processor or controller may generate a matrix of the views by using geometric parameters of a system (for example, a distance between a display of a mobile electronic device and a spatial light modulator, a focal distance of a lens, and distances from the lens to the display and the modulator in each view).
  • FIG. 6 illustrates a two-parameter light field expression by Levoy and Hanrahan according to an embodiment of the disclosure.
  • the processor or controller may rely on the two-parameter (two-plane) light field expression by Levoy and Hanrahan.
  • FIG. 6 shows an xy plane and a uv plane of a light field.
  • the light field may be represented by a four-dimensional (4D) function L(x, y, u, v) indicating the intensity of light in an optical space, which is incident to one arbitrary dot on the xy plane after passing through one arbitrary dot on the uv plane under the expression described above.
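Under this two-plane parameterization, a discretized light field can be stored as a 4D array in which each fixed (u, v) slice is one view of the scene. A small illustrative sketch; the array sizes and toy radiance values are assumptions made only for the example:

```python
import numpy as np

# Discretized two-plane light field: L4[u, v, x, y] is the radiance of the
# ray passing through point (u, v) on one plane and (x, y) on the other.
U, V, X, Y = 2, 2, 4, 4
L4 = np.zeros((U, V, X, Y))
base = np.add.outer(np.arange(X), np.arange(Y)).astype(float)  # toy image
for u in range(U):
    for v in range(V):
        # Each fixed (u, v) is one view; shift it slightly to mimic parallax.
        L4[u, v] = np.roll(base, u - v, axis=1)

def radiance(field, u, v, x, y):
    """Sample the discretized light field L(x, y, u, v) at integer coordinates."""
    return field[u, v, x, y]
```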
  • FIG. 7 illustrates a weighted-matrix calculation method based on geometric parameters of a system according to an embodiment of the disclosure.
  • FIG. 7 shows integer coordinates of points at which light crosses images on a display and a modulator, and virtual ghosts of the display and the modulator are calculated as in Equations 1 and 2 below.
  • in Equations 1 and 2, k denotes 1 or 2, wherein 1 and 2 correspond to the modulator and the display, respectively.
  • the signs + and - of the ± in Equations 1 and 2 correspond to the modulator and the display, respectively, and M1 and M2 denote magnification constants of the virtual ghosts of the modulator and the display, respectively.
  • p1 and p2 denote pixel sizes of the modulator and the display, respectively.
  • W and H denote a width and a height of a physical view image on the xy plane of the light field (dk, denoting a relative location or distance between the xy plane of the light field and a virtual ghost, is selected to acquire the best image quality).
  • dcn denotes a distance from an eye-lens plane to a light field plane.
  • the light field L(x, y, u, v) is factorized into a product of transparency t(x1, y1) of the spatial light modulator and light intensity l(x2, y2) of the display.
  • x1, x2, y1, and y2 may be expressed in terms of x, y, u, and v through Equation 1 and Equation 2.
  • t and l, denoting transparency and intensity, may be factorized into vectors a and b as follows.
  • wk denotes a width of the image of the modulator or the display corresponding to a value of k and is measured as the number of pixels along an x axis.
  • a value of the light field L(x, y, u, v) is "encapsulated" into an element Tij of a matrix of views, and thus the equation may be replaced by Equation 8.
  • In operation S3, the processor or controller generates an adjustment matrix indicating a product of a column vector indicating a transparency value of pixels of the spatial light modulator and a row vector indicating a brightness value of pixels of the display of the mobile electronic device.
  • elements of the column vector and the row vector are selected such that the adjustment matrix is approximately the same as the matrix of the views.
  • an element (i, j) of the adjustment matrix is obtained when light passes through a jth pixel of the display and an ith pixel of the spatial light modulator.
  • when the matrix of the views is T and the transparency and intensity vectors described above are a and b, the fact that the matrix of the views is "approximately the same" as the adjustment matrix indicates that T ≈ ab^T.
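In other words, the condition is a rank-1 approximation of the view matrix by an outer product of a transparency vector and an intensity vector. A hedged numpy illustration: the vectors below are made-up values forming an exactly rank-1 toy matrix, and SVD is used here only to recover the best rank-1 factors (the patent's own optimization procedure is different and is described in the following operations):

```python
import numpy as np

# Toy example: construct an exactly rank-1 "view matrix" T so the
# rank-1 factors can be recovered exactly (real view matrices are
# only approximately rank-1).
a_true = np.array([0.2, 0.5, 1.0])        # modulator transparency per pixel
b_true = np.array([1.0, 0.8, 0.3, 0.6])   # display intensity per pixel
T = np.outer(a_true, b_true)

# Best rank-1 approximation T ≈ a b^T via the leading singular triplet.
U, s, Vt = np.linalg.svd(T)
a = U[:, 0] * np.sqrt(s[0])
b = Vt[0] * np.sqrt(s[0])
if a.sum() < 0:        # SVD sign is arbitrary; flip jointly to keep factors nonnegative
    a, b = -a, -b
err = np.linalg.norm(T - np.outer(a, b))  # Frobenius-norm residual
```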
  • the optimization operation may be performed by using weighted rank-1 residue iteration (WRRI).
  • a detailed operation of the WRRI is described in the related art (for example, HO, N.-D., Nonnegative Matrix Factorization Algorithms and Applications, PhD thesis, Université catholique de Louvain, 2008; and HEIDE et al., Cascaded displays: spatiotemporal superresolution using offset pixel layers, ACM Transactions on Graphics (TOG) - Proceedings of ACM SIGGRAPH 2014, Volume 33, Issue 4, July 2014).
  • for this purpose, a weighted matrix W that weights the residue T - ab^T is provided.
  • the weighted matrix W includes only a weighted constant for a part where views of a scene are "encapsulated" and has a value of zero for the remaining parts.
  • the optimization operation continues until elements of the vectors a and b that make the adjustment matrix most approximate to the matrix of the views are found. Equation 9 is an embodiment of the optimization operation: (a, b) = argmin ||W ∘ (T - ab^T)||^2.
  • In Equation 9, the symbol ||·|| denotes the L2-norm satisfying ||X||^2 = Σij Xij^2, and the operation symbol ∘ denotes an element-wise product, for example, a Hadamard product, applied to the elements until the adjustment matrix is approximately the same as the matrix of the views.
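The patent defers the WRRI iteration itself to the cited related art; as one illustration, the weighted rank-1 objective can be minimized with standard multiplicative updates, a rank-1 special case of weighted NMF in the spirit of WRRI (the exact iteration in the Ho thesis may differ). The matrix sizes, random toy data, unit weights, and iteration count below are assumptions for the sketch:

```python
import numpy as np

def weighted_rank1(T, W, iters=200, eps=1e-12):
    """Minimize ||W o (T - a b^T)||^2 over nonnegative a, b using
    multiplicative updates (illustrative; not the patent's exact WRRI)."""
    m, n = T.shape
    a = np.ones(m)
    b = np.ones(n)
    WT = W * T
    for _ in range(iters):
        # Update a holding b fixed, then b holding a fixed.
        a *= (WT @ b) / ((W * np.outer(a, b)) @ b + eps)
        b *= (WT.T @ a) / ((W * np.outer(a, b)).T @ a + eps)
    return a, b

rng = np.random.default_rng(0)
T = np.outer(rng.random(4) + 0.1, rng.random(5) + 0.1)  # toy rank-1 view matrix
W = np.ones_like(T)   # unit weights where views are "encapsulated"
a, b = weighted_rank1(T, W)
residual = np.linalg.norm(W * (T - np.outer(a, b)))
```

The nonnegativity of a and b is preserved automatically by the multiplicative form, which matches their physical meaning as transparencies and intensities.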
  • when the centers of the pixels of the display included in the mobile electronic device 2 and of the spatial light modulator 3 are matched with each other, the number "1" is assigned to elements of the matrix W corresponding to the views (for example, the indices i and j where Tij is encapsulated from the views), and the remaining elements of the matrix are filled with zeros. If this matching does not occur, the matrices T and W are constructed using barycentric coordinates, which prevents distortion of the views of the scene in a subsequent processing operation.
  • FIG. 8 illustrates a matrix consisting of views using a barycentric coordinate system according to an embodiment of the disclosure.
  • α and β denote coordinates (pixel centers) of a point marked in an X shape on the plane of the spatial light modulator 3, and w00, w01, w10, and w11 are values allocated to the four elements neighboring that point.
  • a sum of w00, w01, w10, and w11 is 1, and thus a unit weight is allocated to the four neighboring elements.
  • each pixel value in each of four elements may be iterated four times.
  • another approach is also possible.
  • values of the light field are allocated, with the respective weights, to the four neighboring elements according to the barycentric coordinates.
  • elements corresponding to non-zero elements of the matrix T have a value of 1
  • the remaining elements have a value of 0.
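The four weights summing to 1 are the standard bilinear (barycentric) weights of a non-integer point relative to its four surrounding pixel centers. A small sketch; the sample point is an arbitrary illustrative value:

```python
import math

def bilinear_weights(alpha, beta):
    """Split a unit weight among the four pixel centers surrounding the
    non-integer point (alpha, beta); the four weights always sum to 1."""
    fx = alpha - math.floor(alpha)   # fractional offset along x
    fy = beta - math.floor(beta)     # fractional offset along y
    w00 = (1 - fx) * (1 - fy)
    w01 = (1 - fx) * fy
    w10 = fx * (1 - fy)
    w11 = fx * fy
    return w00, w01, w10, w11

w = bilinear_weights(2.25, 3.75)   # point between pixel centers
```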
  • the processor or controller adjusts the intensity value l of the pixels of the display of the mobile electronic device 2 according to the components of the vector b and adjusts the transparency value t of the pixels of the spatial light modulator 3 according to the components of the vector a.
  • Equations 4 and 5 described above mathematically represent a relationship among a, b, t, and l.
  • the processor or controller may perform a pre-processing operation for each view of a previous scene before proceeding to operations S2 to S3.
  • the pre-processing operation is an operation of enhancing details of views of a scene.
  • a defined view (whose details are to be enhanced) of a scene is segmented into overlapping units including groups of pixels of the display of the mobile electronic device 2. The following operations are performed for each unit.
  • a color of each pixel is converted into a YUV color model, wherein Y denotes a brightness component, and U and V denote color-difference components.
  • a separation operation for the brightness component Y is performed for each pixel.
  • the brightness components Y of all pixels are collected into a brightness channel.
  • the bright channel is processed using Fourier transform.
  • a Gaussian window is used.
  • the details are searched and enhanced using phase congruency analysis in the Fourier spectrum.
  • a Fourier inverse transform operation is performed.
  • values of a Fourier spectrum are complex numbers.
  • the complex numbers are specified by an absolute value and an angle of deviation (that is, phase).
  • the complex numbers may be expressed in the form of a 2D vector whose length equals the absolute value and whose direction corresponds to the phase.
  • a search operation on a detail indicates an operation of separating vectors oriented in one direction (within a specific divergence), and an enhancing operation on the detail indicates increasing the length of the retrieved vectors, that is, an operation of increasing a magnitude of the absolute value.
  • the new brightness component Y' and the initial components U and V are combined into a color model Y'UV.
  • the color model Y'UV is converted into an RGB color model, and accordingly, the enhanced view of the scene may be acquired in the RGB color model.
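The pipeline above (brightness-channel extraction, Fourier transform, detail amplification, inverse transform) can be sketched in simplified form. Full phase-congruency analysis is more involved than this; the stand-in below merely boosts high-frequency magnitudes while preserving phase, and the frequency threshold, gain, and toy input are assumptions for the example:

```python
import numpy as np

def enhance_luma(Y, gain=1.5, cutoff=0.1):
    """Simplified Fourier-domain detail boost on a brightness channel:
    amplify high-frequency magnitudes while preserving phase.
    (The patent uses phase-congruency analysis; this is only a sketch.)"""
    F = np.fft.fft2(Y)
    mag, phase = np.abs(F), np.angle(F)
    h, w = Y.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    highpass = np.sqrt(fy**2 + fx**2) > cutoff   # threshold is an assumption
    mag = mag * np.where(highpass, gain, 1.0)    # boost details, keep DC intact
    F2 = mag * np.exp(1j * phase)                # recombine magnitude and phase
    return np.real(np.fft.ifft2(F2))

Y = np.add.outer(np.arange(8.0), np.arange(8.0))  # toy brightness channel
Y2 = enhance_luma(Y)
```

Because the symmetric high-pass mask preserves the Hermitian symmetry of the spectrum, the inverse transform remains real, and the untouched DC term keeps the mean brightness unchanged.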
  • FIG. 9 is a block diagram of a display system according to an embodiment of the disclosure.
  • when a real or virtual scene is displayed, a display system 900 may display the scene such that a light field providing the user with an experience approximating a real 3D effect is provided.
  • the display system 900 may include a mobile electronic device 910 and a spatial light modulator 920. According to an embodiment of the disclosure, the display system 900 may further include an optical lens (not shown). However, according to an embodiment of the disclosure, the optical lens is not necessarily required as a separate component. The optical lens may be replaced by a medium having the same optical characteristics as the optical lens or may be included in the spatial light modulator 920.
  • the mobile electronic device 910 is a portable electronic device and may be implemented in various forms, such as a smartphone, a tablet computer, a personal digital assistant (PDA), and a portable multimedia player (PMP).
  • the mobile electronic device 910 may include a processor 911 and a display 912. Although FIG. 9 shows that the processor 911 is included in the mobile electronic device 910, this is not mandatory. According to an embodiment of the disclosure, the processor 911 may be located outside the mobile electronic device 910 and may control the mobile electronic device 910 and the spatial light modulator 920. For example, the processor 911 may be included in VR glasses or a VR helmet, which is a case in which the mobile electronic device 910 and the spatial light modulator 920 are accommodated.
  • the display 912 provides light to display a scene.
  • the display 912 may include a liquid crystal display mounted in the mobile electronic device 910.
  • the display 912 may include a backlight of the mobile electronic device 910.
  • the display 912 may include a liquid crystal of the mobile electronic device 910.
  • the processor 911 may control the mobile electronic device 910 and the spatial light modulator 920 to perform a display operation of the display system 900.
  • Although FIG. 9 shows that the spatial light modulator 920 is located outside the mobile electronic device 910, this is not mandatory. According to an embodiment of the disclosure, the spatial light modulator 920 may be included in the mobile electronic device 910 and may modulate light provided from the display 912.
  • the processor 911 may receive a plurality of pieces of view information with respect to a scene to be displayed.
  • the scene may be a virtual scene or a real scene.
  • the plurality of pieces of view information are optical information of a scene, which has been acquired at a plurality of viewpoints.
  • the plurality of pieces of view information may be a set of a plurality of pieces of view information acquired by photographing a real scene at a plurality of viewpoints.
  • the plurality of pieces of view information may be intrinsic images acquired from a plurality of matched cameras having different viewpoints.
  • the plurality of pieces of view information may be a set of a plurality of pieces of view information corresponding to a virtual scene formed using a rendering program.
  • the processor 911 may form a plurality of pieces of view information corresponding to a virtual scene by itself.
  • the processor 911 may acquire adjustment information from the plurality of pieces of view information.
  • the adjustment information may include information regarding transparency of the spatial light modulator 920 and light intensity of the display 912.
  • the light intensity of the display 912 indicates intensity of light emitted by each pixel of the display 912.
  • the light intensity of the display 912 may be variably changed under control of the processor 911.
  • the transparency of the spatial light modulator 920 indicates an optical influence of each pixel of the spatial light modulator 920 on light transmitted through the spatial light modulator 920 and may include color transparency.
  • the adjustment information may include a view matrix, that is, a matrix which includes each piece of view information in the plurality of pieces of view information and is generated based on a geometric parameter.
  • the processor 911 may generate the view matrix from the plurality of pieces of view information.
  • the view matrix is a matrix representing a light field of a corresponding scene to be displayed.
  • the processor 911 may perform light field factorization on the generated view matrix. As described with reference to FIG. 7, a light field indicating light passing through a certain pixel of the display 912 and a certain pixel of the spatial light modulator 920 may be represented by a function of intensity of the display 912 and transparency of the spatial light modulator 920. According to an embodiment of the disclosure, the processor 911 may factorize a given view matrix to a product of a matrix indicating intensity of the display 912 and a matrix indicating transparency of the spatial light modulator 920, and this is called light field factorization.
  • the light field factorization may be approximately achieved.
  • the light field factorization will be described below.
  • the matrix indicating the intensity of the display 912 is a row vector and the matrix indicating the transparency of the spatial light modulator 920 is a column vector, but this is only illustrative, and the technical features of the disclosure are not limited thereto.
  • the processor 911 may factorize a view matrix to a product of various types of matrices.
  • the processor 911 may perform the light field factorization by using a WRRI algorithm.
  • the WRRI algorithm has a higher processing speed and a lower computation volume than a non-negative matrix factorization (NMF) algorithm, and thus the processor 911 may perform real-time processing at a higher speed by using the WRRI algorithm than by using the NMF algorithm.
  • NMF non-negative matrix factorization
  • the processor 911 may calculate optimized intensity of the display 912 and optimized transparency of the spatial light modulator 920 through an Hadamard product with respect to a given light field by using the WRRI algorithm.
  • the processor 911 may form a row vector indicating intensity of the display 912, a column vector indicating transparency of the spatial light modulator 920, and an adjustment matrix indicating a product of the row vector and the column vector.
  • the processor 911 may select a row vector and a column vector by using the WRRI algorithm such that an adjustment matrix is approximately the same as a view matrix.
  • the processor 911 may adjust an intensity value of light emitted from the display 912 and a transparency value of the spatial light modulator 920, based on the adjustment information.
  • the processor 911 may adjust intensity of the display 912 and transparency of the spatial light modulator 920 based on a light field factorization result of a view matrix.
  • the processor 911 may form a row vector and a column vector forming an adjustment matrix which is approximately the same as the view matrix and adjust the intensity of the display 912 and the transparency of the spatial light modulator 920 based on the row vector and the column vector.
  • intensity of each pixel of the display 912 may be adjusted in response to an intensity control signal provided from the processor 911.
  • transparency of each pixel of the spatial light modulator 920 may be adjusted in response to a transparency control signal provided from the processor 911.
  • the optical lens delivers, to the user, light which has passed through the display 912 and the spatial light modulator 920.
  • the display system 900 provides a light field that gives the user an experience approximating a real-world 3D effect by delivering light concentrated through the optical lens to the user.
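The rank-1 factorization described above can be sketched in code. The following is a minimal illustrative implementation, assuming WRRI denotes weighted rank-1 residue iterations with alternating closed-form updates (as in the related near-eye light field display literature); the function name `wrri_rank1`, the weight handling, and the iteration count are assumptions, not the patent's exact algorithm.

```python
import numpy as np

def wrri_rank1(view_matrix, weights=None, iters=20, seed=0):
    """Rank-1 weighted light field factorization (illustrative sketch).

    Approximates the view matrix L by an outer product a * b^T, where the
    row vector a models per-pixel display intensities and the column vector
    b models per-pixel spatial light modulator transparencies. `weights` is
    the optional weight matrix W of the weighted objective
    ||W o (L - a b^T)||_F^2, where o denotes the Hadamard product.
    """
    L = np.asarray(view_matrix, dtype=float)
    W = np.ones_like(L) if weights is None else np.asarray(weights, float)
    rng = np.random.default_rng(seed)
    a = rng.random(L.shape[0]) + 0.1   # display intensities (row vector)
    b = rng.random(L.shape[1]) + 0.1   # SLM transparencies (column vector)
    eps = 1e-12                        # guards against division by zero
    for _ in range(iters):
        # closed-form weighted least-squares update of a with b fixed
        a = ((W * L) @ b) / ((W @ (b ** 2)) + eps)
        a = np.maximum(a, 0.0)         # intensities are non-negative
        # then update b with a fixed
        b = ((W * L).T @ a) / ((W.T @ (a ** 2)) + eps)
        b = np.maximum(b, 0.0)         # transparencies are non-negative
    return a, b
```

For a view matrix that is exactly rank-1 and non-negative, the alternating updates recover the outer product up to a scale ambiguity between a and b, which is why the adjustment matrix a b^T (rather than a and b individually) is compared against the view matrix.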
  • FIG. 10 is a flowchart of a method of displaying a scene according to an embodiment of the disclosure.
  • a processor receives a plurality of pieces of view information corresponding to a scene.
  • the scene may be a real scene or a virtual scene.
  • the plurality of pieces of view information may be acquired by photographing the scene at different viewpoints.
  • the plurality of pieces of view information may be captured at viewpoints sequentially shifted by a predefined angle.
  • the plurality of pieces of view information may be a set of a plurality of pieces of view information acquired by photographing a real scene at a plurality of viewpoints.
  • the plurality of pieces of view information may be intrinsic images acquired from a plurality of matched cameras having different viewpoints.
  • the plurality of pieces of view information may be a set of a plurality of pieces of view information corresponding to a virtual scene formed using a rendering program.
  • the processor may form a plurality of pieces of view information corresponding to a virtual scene by itself.
  • the processor acquires adjustment information from the plurality of pieces of view information.
  • the adjustment information may include information regarding transparency of a spatial light modulator and light intensity of a display.
  • the light intensity of the display indicates intensity of light emitted by each pixel of the display.
  • the transparency of the spatial light modulator indicates an optical influence of each pixel of the spatial light modulator on light transmitted through the spatial light modulator and may include color transparency.
  • the adjustment information may include a view matrix.
  • the processor may control the transparency of the spatial light modulator and a light intensity value of the display based on the adjustment information.
  • intensity of each pixel of the display may be adjusted in response to an intensity control signal provided from the processor.
  • transparency of each pixel of the spatial light modulator may be adjusted in response to a transparency control signal provided from the processor.
  • Light which has passed through the display and the spatial light modulator may be delivered to a user through an optical lens.
  • the display method according to the disclosure may provide a light field that gives the user an experience approximating a real-world 3D effect by delivering light concentrated through the optical lens to the user.
  • FIG. 11 is a flowchart of a method of displaying a scene according to an embodiment of the disclosure.
  • a processor receives a plurality of pieces of view information corresponding to a scene.
  • the scene may be a real scene or a virtual scene.
  • the plurality of pieces of view information may be acquired by photographing the scene at different viewpoints.
  • the processor may acquire a view matrix included in adjustment information from the plurality of pieces of view information. For example, the processor may generate a view matrix, that is, a matrix including each piece of view information in the plurality of pieces of view information, based on a geometric parameter.
  • the view matrix is a matrix representing a light field of a corresponding scene to be displayed.
  • the processor may factorize the given view matrix to a product of a matrix indicating light intensity of a display included in a mobile electronic device and a matrix indicating transparency of a spatial light modulator.
  • the processor may perform the factorization by using a WRRI algorithm.
  • the processor may calculate optimized light intensity of the display and optimized transparency of the spatial light modulator through an Hadamard product with respect to a given light field by using the WRRI algorithm.
  • the processor may control the transparency of the spatial light modulator and the light intensity of the display based on a result of the factorization. For example, the processor may factorize the view matrix to a row vector and a column vector, control the transparency of the spatial light modulator based on the column vector, and control the light intensity of the display based on the row vector. According to an embodiment of the disclosure, transparency of each pixel of the spatial light modulator may be adjusted in response to a transparency control signal provided from the processor, and intensity of each pixel of the display may be adjusted in response to an intensity control signal provided from the processor.
  • a display system may display a scene to a user based on the operations described above.
  • light emitted from the display based on the adjusted light intensity is delivered to the user through an optical lens by passing through the spatial light modulator having the adjusted transparency.
  • the display system may provide a light field that gives the user an experience approximating a real-world 3D effect by delivering light concentrated through the optical lens to the user.
  • FIG. 12 is a flowchart of a method of displaying a scene according to an embodiment of the disclosure.
  • a display system may perform an enhancing processing operation prior to acquiring adjustment information.
  • the display system may provide a relatively clear and realistic experience to a user by using the enhancing processing operation to enhance a detail of a view matrix.
  • a processor receives a plurality of pieces of view information corresponding to a scene.
  • the scene may be a real scene or a virtual scene.
  • the plurality of pieces of view information may be intrinsic images acquired from a plurality of matched cameras having different viewpoints.
  • the processor may form a plurality of pieces of view information corresponding to a virtual scene by itself by using a rendering program.
  • the processor performs the enhancing processing operation on the pieces of view information.
  • the enhancing processing operation is an operation of enhancing only details while keeping the color information of the pieces of view information unchanged.
  • the processor may separate only a brightness channel from each view information and perform a processing operation on the brightness channel.
  • the processor may use Fourier transform and phase congruency analysis for the enhancing processing operation.
  • the processor may generate adjustment information.
  • the processor may generate a view matrix, that is, a matrix including each piece of view information in the plurality of pieces of view information, based on a geometric parameter.
  • the view matrix is a matrix representing a light field of a corresponding scene to be displayed.
  • the processor may factorize the enhancing-processed view matrix to a product of vectors.
  • the processor may factorize the enhancing-processed view matrix to a product of a matrix indicating light intensity of a display included in a mobile electronic device and a matrix indicating transparency of a spatial light modulator.
  • the processor may perform the factorization by using a WRRI algorithm.
  • the processor may calculate optimized intensity of the display and optimized transparency of the spatial light modulator through an Hadamard product with respect to a given light field by using the WRRI algorithm.
  • the processor may control the transparency of the spatial light modulator and a light intensity value of the display based on the adjustment information. For example, the processor may factorize the view matrix to a column vector and a row vector, control the transparency of the spatial light modulator based on the column vector, and control the light intensity of the display based on the row vector. According to an embodiment of the disclosure, transparency of each pixel of the spatial light modulator may be adjusted in response to a transparency control signal provided from the processor, and intensity of each pixel of the display may be adjusted in response to an intensity control signal provided from the processor.
  • the display system may display a scene to the user based on the operations described above.
  • light emitted from the display based on the adjusted light intensity is delivered to the user through an optical lens by passing through the spatial light modulator having the adjusted transparency.
  • the display system may provide a light field that gives the user an experience approximating a real-world 3D effect by delivering light concentrated through the optical lens to the user.
  • FIG. 13 is a flowchart for describing the enhancing processing operation in more detail according to an embodiment of the disclosure.
  • a processor extracts a brightness channel of view information, on which the enhancing processing operation is to be performed.
  • the processor may segment the view information into a plurality of units which overlap each other, to extract the brightness channel.
  • Each unit may include a pre-defined plurality of pixels.
  • the processor may convert a color space model of the view information into a color space model having a brightness channel to extract the brightness channel.
  • the processor may convert the color space model of the view information into a YUV color space model or a YIQ color space model.
  • the embodiment illustrates that the color space model of the view information is converted into the YUV color space model.
  • the technical features of the disclosure are not limited to YUV color space information and may also be applied to other color spaces having a brightness channel.
  • a Y channel indicates information regarding brightness
  • U and V channels indicate information regarding colors.
  • the U channel is a value obtained by subtracting a brightness component from a blue (B) channel of an RGB color space
  • the V channel is a value obtained by subtracting the brightness component from a red (R) channel.
  • the processor may extract a Y component that is a brightness component of each unit of the view information. According to an embodiment of the disclosure, the processor may multiplex Y components of respective units to a Y channel that is a brightness channel of a view.
  • the processor may perform Fourier transform on the brightness component or the brightness channel.
  • the processor acquires a Fourier spectrum of the brightness component or the brightness channel through the Fourier transform.
  • the processor may use a Gaussian window to smooth a boundary part of the spectrum.
  • the processor performs a phase congruency analysis on the acquired Fourier spectrum.
  • the processor searches the Fourier spectrum for information regarding a detail through the phase congruency analysis.
  • the processor may search for the information regarding a detail through an operation of separating complex vectors oriented in a specific direction in the Fourier spectrum.
  • the processor performs a rebalance spectrum operation based on the retrieved information regarding a detail.
  • the rebalance spectrum operation is an operation of enhancing a retrieved detail.
  • the processor may enhance the detail through an operation of increasing the length of the retrieved complex vectors, that is, the magnitude of their absolute values.
  • the processor performs an inverse Fourier transform on the brightness component or the brightness channel on which the rebalance spectrum operation has been completed.
  • the processor acquires an enhanced new brightness component or brightness channel Y' through the inverse Fourier transform.
  • enhanced information is output.
  • the processor may combine information regarding all units on which the processing has been performed, by using a Gaussian window such that overlapping is smoothly processed.
  • the processor combines the new brightness channel Y' and the initial color channels U and V to a color space model Y'UV.
  • the color space model Y'UV is converted into an RGB color space model, and accordingly, enhanced view information of the scene may be acquired in the RGB color space model.
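The FIG. 13 pipeline above can be sketched as follows: extract the brightness channel, amplify the non-DC part of its Fourier spectrum while preserving phase, and recombine Y'UV into RGB. The frequency-radius weighting is a deliberately simplified stand-in for the phase congruency analysis and rebalance spectrum operations; the `gain` parameter, the unscaled U = B − Y and V = R − Y relations, and the omission of the per-unit segmentation with Gaussian windowing are all simplifying assumptions.

```python
import numpy as np

def enhance_view(rgb, gain=1.5):
    """Detail enhancement in the brightness channel only (simplified sketch).

    Converts RGB to YUV, boosts high-frequency magnitudes of the Y channel's
    Fourier spectrum while leaving phases intact, then converts Y'UV back
    to RGB. The color channels U and V are left untouched, so only details
    are enhanced while color information is preserved.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # brightness channel (BT.601 weights)
    u = b - y                               # U: blue minus brightness
    v = r - y                               # V: red minus brightness
    spec = np.fft.fft2(y)                   # Fourier spectrum of brightness
    fy = np.fft.fftfreq(y.shape[0])[:, None]
    fx = np.fft.fftfreq(y.shape[1])[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)     # distance from the DC component
    # "rebalance": grow magnitudes away from DC; DC (radius 0) is unchanged
    spec = spec * (1.0 + (gain - 1.0) * radius / radius.max())
    y2 = np.real(np.fft.ifft2(spec))        # enhanced brightness channel Y'
    r2 = v + y2                             # invert V = R - Y
    b2 = u + y2                             # invert U = B - Y
    g2 = (y2 - 0.299 * r2 - 0.114 * b2) / 0.587
    return np.clip(np.stack([r2, g2, b2], axis=-1), 0.0, 1.0)
```

Because the DC component is left untouched, a featureless (constant) view passes through unchanged, while edges and fine details gain contrast.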
  • FIG. 14 is a flowchart of a method of displaying a scene according to another embodiment of the disclosure.
  • a display system may perform anti-aliasing processing based on adjustment information.
  • the display system may provide a relatively clear and realistic experience to a user by using an anti-aliasing processing operation to prevent a pixel staircase phenomenon and edge distortion.
  • a processor receives a plurality of pieces of view information corresponding to a scene.
  • the scene may be a real scene or a virtual scene.
  • the plurality of pieces of view information may be intrinsic images acquired from a plurality of matched cameras having different viewpoints.
  • the processor may form a plurality of pieces of view information corresponding to a virtual scene by itself by using a rendering program.
  • the processor may acquire adjustment information from the plurality of pieces of view information.
  • the processor may generate a view matrix, that is, a matrix including each piece of view information in the plurality of pieces of view information, based on a geometric parameter.
  • the view matrix is a matrix representing a light field of a corresponding scene to be displayed.
  • the processor may perform the anti-aliasing processing operation based on the adjustment information.
  • the processor may factorize the view matrix included in the adjustment information to a product of a matrix indicating light intensity of a display included in a mobile electronic device and a matrix indicating transparency of a spatial light modulator.
  • the processor may perform the factorization by using a WRRI algorithm.
  • the processor may calculate optimized intensity of the display and optimized transparency of the spatial light modulator through an Hadamard product with respect to a given light field by using the WRRI algorithm.
  • the processor may perform the anti-aliasing processing operation on the view matrix. According to an embodiment of the disclosure, the processor may perform the anti-aliasing processing operation by using pixel barycentric coordinates. According to an embodiment of the disclosure, the processor may perform the anti-aliasing processing operation in an operation of performing light field factorization on the view matrix. For example, the processor may perform the anti-aliasing processing operation by calculating and using a weighted matrix in the operation of performing light field factorization.
  • the processor may control transparency of a spatial light modulator and light intensity of a display based on the adjustment information.
  • the processor may factorize the view matrix to a column vector and a row vector, control the transparency of the spatial light modulator based on the column vector, and control the light intensity of the display based on the row vector.
  • transparency of each pixel of the spatial light modulator may be adjusted in response to a transparency control signal provided from the processor, and light intensity of each pixel of the display may be adjusted in response to an intensity control signal provided from the processor.
  • the display system may display a scene to the user based on the operations described above.
  • light emitted from the display based on the adjusted light intensity is delivered to the user through an optical lens by passing through the spatial light modulator having the adjusted transparency.
  • the display system may provide a light field that gives the user an experience approximating a real-world 3D effect by delivering light concentrated through the optical lens to the user.
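One concrete way to obtain a weighted matrix for the anti-aliasing step above is sketched below: a binary coverage mask rendered at higher resolution is averaged down so that edge pixels receive fractional weights in [0, 1], and the result can serve as the weight matrix W in the weighted light field factorization, de-emphasizing partially covered edge pixels and softening the pixel staircase. This supersampling construction and the function name are illustrative assumptions; the patent refers to pixel barycentric coordinates and a weighted matrix without spelling out this exact scheme.

```python
import numpy as np

def coverage_weights(mask_hi, factor=4):
    """Fractional pixel coverage as an anti-aliasing weight matrix (sketch).

    mask_hi: binary coverage mask rendered at `factor`-times the display
    resolution. Returns the per-pixel mean coverage at display resolution;
    fully covered pixels get weight 1.0, uncovered pixels 0.0, and edge
    pixels a fractional value proportional to their covered area.
    """
    h, w = mask_hi.shape
    assert h % factor == 0 and w % factor == 0, "resolution must divide evenly"
    # group each display pixel's factor x factor block of subsamples
    blocks = mask_hi.reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))   # average coverage per display pixel
```

The returned matrix plugs directly into a weighted factorization objective of the form ||W o (L − a b^T)||, so aliased edge rays influence the factorization in proportion to their true coverage.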
  • FIG. 15 is a flowchart of a method of displaying a scene according to an embodiment of the disclosure.
  • a display system may perform an enhancing processing operation and an anti-aliasing processing operation on pieces of view information.
  • the display system may provide a relatively clear and realistic experience to a user by using the enhancing processing operation and the anti-aliasing processing operation to enhance a detail of views and prevent a pixel staircase phenomenon and edge distortion.
  • a processor receives a plurality of pieces of view information corresponding to a scene.
  • the scene may be a real scene or a virtual scene.
  • the processor performs the enhancing processing operation on the pieces of view information.
  • the enhancing processing operation is an operation of enhancing only details while keeping the color information of the pieces of view information unchanged.
  • the processor may separate only a brightness channel from each view information and perform a processing operation on the brightness channel.
  • the processor may use Fourier transform and phase congruency analysis for the enhancing processing operation.
  • the processor may acquire adjustment information from the plurality of pieces of view information.
  • the processor may generate a view matrix, that is, a matrix including each piece of view information in the plurality of pieces of view information, based on a geometric parameter.
  • the view matrix is a matrix representing a light field of a corresponding scene to be displayed.
  • the processor may perform the anti-aliasing processing operation based on the adjustment information.
  • the processor may factorize the view matrix included in the adjustment information to a product of a matrix indicating light intensity of a display included in a mobile electronic device and a matrix indicating transparency of a spatial light modulator.
  • the processor may perform the factorization by using a WRRI algorithm.
  • the processor may calculate optimized intensity of the display and optimized transparency of the spatial light modulator through an Hadamard product with respect to a given light field by using the WRRI algorithm.
  • the processor may perform the anti-aliasing processing operation on the view matrix. According to an embodiment of the disclosure, the processor may perform the anti-aliasing processing operation by using pixel barycentric coordinates. According to an embodiment of the disclosure, the processor may perform the anti-aliasing processing operation in an operation of performing light field factorization on the view matrix. For example, the processor may perform the anti-aliasing processing operation by calculating and using a weighted matrix in the operation of performing light field factorization.
  • the processor may control transparency of a spatial light modulator and light intensity of a display based on the adjustment information.
  • the processor may factorize the view matrix to a column vector and a row vector, control the transparency of the spatial light modulator based on the column vector, and control the light intensity of the display based on the row vector.
  • transparency of each pixel of the spatial light modulator may be adjusted in response to a transparency control signal provided from the processor, and light intensity of each pixel of the display may be adjusted in response to an intensity control signal provided from the processor.
  • the display system may display a scene to the user based on the operations described above.
  • light emitted from the display based on the adjusted light intensity is delivered to the user through an optical lens by passing through the spatial light modulator having the adjusted transparency.
  • the display system may provide a light field that gives the user an experience approximating a real-world 3D effect by delivering light concentrated through the optical lens to the user.
  • the disclosed embodiments may be implemented in a form of a non-transitory computer-readable recording medium configured to store computer-executable instructions and data.
  • the instructions may be stored in a form of program codes and may perform, when executed by a processor, a certain operation by generating a certain program module.
  • the instructions may perform certain operations of the disclosed embodiments when executed by the processor.
  • A non-transitory computer-readable recording medium is any data storage device that can store data which can thereafter be read by a computer system.
  • Examples of the non-transitory computer readable recording medium include a Read-Only Memory (ROM), a Random-Access Memory (RAM), Compact Disc-ROMs (CD-ROMs), magnetic tapes, floppy disks, and optical data storage devices.
  • the non-transitory computer readable recording medium can also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
  • functional programs, code, and code segments for accomplishing the disclosure can be easily construed by programmers skilled in the art to which the disclosure pertains.
  • the various embodiments of the disclosure as described above typically involve the processing of input data and the generation of output data to some extent.
  • This input data processing and output data generation may be implemented in hardware or software in combination with hardware.
  • specific electronic components may be employed in a mobile device or similar or related circuitry for implementing the functions associated with the various embodiments of the disclosure as described above.
  • one or more processors operating in accordance with stored instructions may implement the functions associated with the various embodiments of the disclosure as described above. If such is the case, it is within the scope of the disclosure that such instructions may be stored on one or more non-transitory processor readable mediums.
  • Examples of processor-readable mediums include a ROM, a RAM, CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices.
  • the processor readable mediums can also be distributed over network coupled computer systems so that the instructions are stored and executed in a distributed fashion.
  • functional computer programs, instructions, and instruction segments for accomplishing the disclosure can be easily construed by programmers skilled in the art to which the disclosure pertains.


Abstract

A system and a method for displaying a scene are provided. The system includes a display configured to emit light, a spatial light modulator configured to modulate input light based on a transparency value, and at least one processor configured to acquire adjustment information including transparency of the spatial light modulator and light intensity information of the display from a plurality of pieces of view information corresponding to the scene and adjust an intensity value of the light emitted from the display and the transparency value of the spatial light modulator based on the adjustment information, wherein the plurality of pieces of view information are optical information of the scene, the optical information having been acquired at a plurality of viewpoints.

Description

SYSTEM AND METHOD FOR DISPLAYING REAL OR VIRTUAL SCENE
The disclosure relates to an imaging technology. More particularly, the disclosure relates to a system and a method for displaying a real or virtual scene capable of generating high image quality three-dimensional (3D) images while addressing a vergence-accommodation conflict.
This section is not provided to describe the technical features of the disclosure, and thus the technical features of the disclosure are not limited by this section. This section is to provide the outline of the related art, which belongs to the same technical field as the disclosure, to those of ordinary skill in the art and to thereby make clear the technical importance due to differences between the related art and the disclosure.
Recently, VR technology has been increasingly used in various fields of human society, with traditional and well-known applications in the game and education industries. To popularize VR technology and provide for its long-term application, it is necessary to provide a visually comfortable interaction between users and reality.
Modern VR displays support various cues of human vision, for example, motion parallax, binocular disparity, binocular occlusion, and vergence. However, the accommodation cue of the human eye for virtual objects is not supported by these displays. This causes a phenomenon called the vergence-accommodation conflict. The vergence-accommodation conflict occurs because the human vision system needs to maintain a certain focal distance of the eyeball lenses when viewing a 3D image, in order to focus on an image formed and viewed by a display or a lens, while the user simultaneously has to change the focal distances of the eyeball lenses based on the distances to a virtual object according to the current movement of his or her eyes. In other words, the vergence-accommodation conflict occurs because virtual objects are viewed as if they were located at different "distances," whereas the virtual objects actually exist side by side on the flat surface of a display screen. This conflict between the virtual scene and reality causes visual discomfort, eye fatigue, eye strain, and headache.
At the moment, light field display technology is being developed to address these negative effects by delivering to the eyes the same light they would normally receive under real-life conditions.
An embodiment of such a display is disclosed in US 2014/0063077. In more detail, this document discloses a display apparatus including one or more spatially addressable light attenuation layers, and a controller configured to perform the computations needed to control the display apparatus and to address an optimization issue by using weighted nonnegative tensor factorization (NTF) for memory-efficient representation of a light field at a low density. This NTF is computationally expensive. Furthermore, the known apparatus is not mobile and cannot be head-mounted.
Another approach is disclosed in the paper "The Light-Field Stereoscope: Immersive Computer Graphics via Factored Near-Eye Light Field Displays with Focus Cues" by F. Huang, K. Chen, and G. Wetzstein (ACM Transactions on Graphics 34, 4, 2015). This paper discloses a portable VR display supporting a high-resolution initial image and the possibility of focusing a user's eyes on a virtual object, that is, the possibility of addressing the vergence-accommodation conflict. A light field is presented to each eye, providing a more natural visual experience than that of existing near-eye displays. The proposed display uses rank-1 light field factorization. To implement the display described above, an expensive time-division multi-image display or eye tracking unit is not required. However, the authors of the paper used computationally complicated non-negative matrix factorization (NMF) for the solution.
Therefore, a need exists for a display system, e.g., a head-mountable display suitable for a VR application, capable of addressing the vergence-accommodation conflict while generating a high-quality image.
The above information is presented as background information only to assist with an understanding of the disclosure. No determination has been made, and no assertion is made, as to whether any of the above might be applicable as prior art with regard to the disclosure.
A display supporting a high-resolution initial image and the possibility of focusing a user's eyes on a virtual object can be implemented without an expensive time-division multi-image display or eye tracking unit. Therefore, a need exists for a display system, e.g., a head-mountable display suitable for a VR application, capable of addressing the vergence-accommodation conflict while generating a high-quality image.
The present invention provides a system and a method for displaying a scene. The system includes a display configured to emit light, a spatial light modulator configured to modulate input light based on a transparency value, and at least one processor configured to acquire adjustment information including transparency of the spatial light modulator and light intensity information of the display from a plurality of pieces of view information corresponding to the scene and adjust an intensity value of the light emitted from the display and the transparency value of the spatial light modulator based on the adjustment information, wherein the plurality of pieces of view information are optical information of the scene, the optical information having been acquired at a plurality of viewpoints.
The disclosure enables a user to be immersed in a virtual reality (VR) of various tasks, such as 3D modeling, navigation, design, and entertainment. The disclosure may be employed in various head-mounted devices (HMDs), such as VR glasses or helmets, which are being increasingly used in game and education industries at the moment.
The above and other aspects, features, and advantages of certain embodiments of the disclosure will be more apparent from the following description taken in conjunction with the accompanying drawings, in which:
FIG. 1 illustrates a light field diagram according to a view array of a specific scene captured at different viewpoints by using a camera array according to an embodiment of the disclosure;
FIG. 2 illustrates an extended view of a display system for displaying a real or virtual scene according to an embodiment of the disclosure;
FIGS. 3a and 3b illustrate spatial light modulators according to display types of a mobile electronic device according to various embodiments of the disclosure;
FIG. 4 illustrates a display system including a belt for mounting to a head according to an embodiment of the disclosure;
FIG. 5 is a flowchart of a method of operating a display system according to an embodiment of the disclosure;
FIG. 6 illustrates a two-parameter light field expression by Levoy and Hanrahan according to an embodiment of the disclosure;
FIG. 7 illustrates a weighted-matrix calculation method performed based on geometric parameters of a system according to an embodiment of the disclosure;
FIG. 8 illustrates a matrix consisting of views using a barycentric coordinate system according to an embodiment of the disclosure;
FIG. 9 is a block diagram of a display system according to an embodiment of the disclosure;
FIG. 10 is a flowchart of a method of displaying a scene according to an embodiment of the disclosure;
FIG. 11 is a flowchart of a method of displaying a scene according to an embodiment of the disclosure;
FIG. 12 is a flowchart of a method of displaying a scene according to an embodiment of the disclosure;
FIG. 13 is a flowchart of an enhancing processing operation according to an embodiment of the disclosure;
FIG. 14 is a flowchart of a method of displaying a scene according to an embodiment of the disclosure; and
FIG. 15 is a flowchart of a method of displaying a scene according to an embodiment of the disclosure.
Throughout the drawings, like reference numerals will be understood to refer to like parts, components, and structures.
Aspects of the disclosure are to address at least the above-mentioned problems and/or disadvantages and to provide at least the advantages described below. Accordingly, an aspect of the disclosure is to provide an apparatus and a method for displaying a real or virtual scene without requiring complex computation while addressing a vergence-accommodation conflict.
Additional aspects will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of the presented embodiments.
In accordance with an aspect of the disclosure, a system for displaying an image in a unit of a scene is provided. The system includes a display configured to emit light, a spatial light modulator configured to modulate input light based on a transparency value, and at least one processor configured to acquire adjustment information including transparency of the spatial light modulator and light intensity of the display from a plurality of pieces of view information corresponding to the scene and adjust an intensity value of the light emitted from the display and the transparency value of the spatial light modulator based on the adjustment information, wherein the plurality of pieces of view information are optical information of the scene, which has been acquired at a plurality of viewpoints.
In accordance with another aspect of the disclosure, a scene display method of displaying an image in a unit of a scene is provided. The method includes receiving a plurality of pieces of view information corresponding to the scene, acquiring, from the plurality of pieces of view information, adjustment information including light intensity of light emitted from a display and transparency of a spatial light modulator configured to modulate the light, and adjusting an intensity value of the light emitted from the display and a transparency value of the spatial light modulator, based on the adjustment information, wherein the plurality of pieces of view information are optical information of the scene, which has been acquired at a plurality of viewpoints.
In accordance with another aspect of the disclosure, at least one non-transitory computer-readable recording medium has recorded thereon a computer-readable program for performing the method described above.
Other aspects, advantages, and salient features of the disclosure will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, discloses various embodiments of the disclosure.
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of various embodiments of the disclosure as defined by the claims and their equivalents. It includes various specific details to assist in that understanding but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill will recognize that various changes and modifications of the various embodiments described herein can be made without departing from the scope and spirit of the disclosure. In addition, descriptions of well-known functions and constructions may be omitted for clarity and conciseness.
The terms and words used in the following description and claims are not limited to the bibliographical meanings, but, are merely used by the inventor to enable a clear and consistent understanding of the disclosure. Accordingly, it should be apparent to those skilled in the art that the following description of various embodiments of the disclosure is provided for illustration purpose only and not for the purpose of limiting the disclosure as defined by the appended claims and their equivalents.
It is to be understood that the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to "a component surface" includes reference to one or more of such surfaces.
By the term "substantially" it is meant that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including for example, tolerances, measurement error, measurement accuracy limitations and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic was intended to provide.
The term "various embodiments" used in the specification is used illustratively or for description. An embodiment disclosed as one of "various embodiments" in the specification is not necessarily preferred over the other embodiments.
Reference will now be made to embodiments, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. In this regard, the embodiments may have different forms and should not be construed as being limited to the descriptions set forth herein. Accordingly, the embodiments are merely described below, by referring to the figures, to explain aspects. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
FIG. 1 illustrates a light field diagram according to a view array of a specific scene captured at different viewpoints by using a camera array according to an embodiment of the disclosure.
Referring to FIG. 1, in the specification, "light field" indicates a vector function indicating an amount of light moving in an arbitrary direction through an arbitrary point in a space. For example, "light field" indicates a spatial distribution of the light fluxes coming out from a visualized image or scene. A "light field" is specified by a propagation direction and a specific value of radiant energy at each point. A light field of a specific (real or virtual) scene may be approximated by an array of a plurality of different views of the corresponding scene. The views may be obtained from different viewpoints by using, for example, an array of cameras or the micro lenses of a plenoptic camera. Therefore, as shown in FIG. 1, the views may be slightly shifted with respect to each other.
FIG. 2 illustrates an extended view of a display system for displaying a real or virtual scene according to an embodiment of the disclosure.
Referring to FIG. 2, a display system 1 may include a mobile electronic device 2, a spatial light modulator 3, and an optical lens 4.
The embodiment shows a case where the mobile electronic device 2 is a mobile or cellular phone, but those of ordinary skill in the art may replace the mobile or cellular phone with other devices capable of implementing the same functions, such as a laptop computer, a tablet computer, or a portable digital player. In addition, the dice image shown as an initial scene in FIG. 2 does not limit the embodiment of the disclosure, and the technical idea of the embodiment may be applied in the same way to more complex images including objects and subjects of various types and forms. According to an embodiment of the disclosure, the display of the mobile electronic device 2 may be an organic light emitting diode (OLED) display or a display having a different pixel structure.
The spatial light modulator 3 is disposed at the front of the display of the mobile electronic device 2 and may have a pixel structure with controllable color transparency. The spatial light modulator 3 will be described below.
FIGS. 3a and 3b illustrate spatial light modulators according to display types of a mobile electronic device according to various embodiments of the disclosure.
Referring to FIG. 3a, a liquid crystal display 7 is used as the display of the mobile electronic device 2. As described in the document (Mukhin, I. A., Development of liquid-crystal monitors, BROADCASTING Television and radiobroadcasting: 1 part - No. 2(46), March 2005, pp. 55-56; 2 part - No. 4(48), June-July 2005, pp. 71-73), the liquid crystal display 7 may include a backlighting unit, a pair of a first polarizing plate P1 and a second polarizing plate P2, and a first liquid crystal layer LC1 located between the first polarizing plate P1 and the second polarizing plate P2. Herein, a second liquid crystal layer LC2 and a third polarizing plate P3 located in the proximity of the user are used as the spatial light modulator 3. Compared with a method of using, as a spatial light modulator, the first polarizing plate P1 located between the display and the first liquid crystal layer LC1, the first liquid crystal layer LC1, and the second polarizing plate P2 next to the first liquid crystal layer LC1, the method of using the second liquid crystal layer LC2 and the third polarizing plate P3 as the spatial light modulator 3 reduces the number of polarizing plates used for the spatial light modulator, and thus the size of the display system 1 (not shown in FIG. 3a) may be reduced.
Referring to FIG. 3b, an OLED display 8 may be used as the display of the mobile electronic device 2. As described with reference to FIG. 3A, a fourth polarizing plate P4, a liquid crystal layer LC, and a fifth polarizing plate P5 may be used as the spatial light modulator 3.
As described with reference to FIG. 2, the optical lens 4 is located at the rear of the spatial light modulator 3 from the viewpoint of the user of the display system 1 and is located in front of one eye of the user. An optical lens having the same form as the optical lens 4 may be arranged in front of the other eye of the user. The set of these lenses constitutes an optical lens device.
A transparency value of pixels of the spatial light modulator 3 and an intensity value of pixels of the display of the mobile electronic device 2 may be variably changed by control signals provided from at least one processor or controller (not shown) included in the display system 1. An adjustment operation for the transparency and intensity will be described when a method of operating the display system 1 is described.
FIG. 4 illustrates a display system including a belt for mounting to a head according to an embodiment of the disclosure.
Referring to FIG. 4, the above-described components of the display system 1, particularly, the mobile electronic device 2 and the spatial light modulator 3 shown together in FIG. 2, may be accommodated in a case or enclosure 5 (see FIG. 4) made of a proper material, such as plastic or a synthetic material. In addition, to provide the possibility of mounting the display system 1 to the head of the user, for example, a specific mounting unit disposed on a leather belt 6 (see FIG. 4) connected to the case or enclosure 5 may be used. According to an embodiment of the disclosure, the case or enclosure 5 may be virtual reality (VR) glasses or a VR helmet.
FIG. 5 is a flowchart of a method of operating a display system according to an embodiment of the disclosure.
Referring to FIG. 5, an operation of the display system 1 will be described below. Particularly, operations performed by the processor or controller described above will be described with reference to FIG. 5.
In operation S1, the processor or controller receives a set of views of a real or virtual scene, for example, dice shown in FIG. 2. Each view of the real or virtual scene is specified by a field of view defined for a scene, as described with reference to FIG. 1. A set of views of a scene may be acquired by using a plenoptic camera, for example, Lytro Illum. The set of the acquired views may be stored in a memory of the mobile electronic device 2. In this case, the processor or controller may access the memory of the mobile electronic device 2 to extract a set of views of a scene for subsequent processing.
According to an embodiment of the disclosure, the processor or controller may form a set of views of a scene by itself by using a rendering program.
In operation S1, the processor or controller may generate a matrix of the views by using geometric parameters of a system (for example, a distance between a display of a mobile electronic device and a spatial light modulator, a focal distance of a lens, and distances from the lens to the display and the modulator in each view).
FIG. 6 illustrates a two-parameter light field expression by Levoy and Hanrahan according to an embodiment of the disclosure.
Referring to FIG. 6, when the matrix of the views is generated, the processor or controller may be based on a two-parameter light field expression by Levoy and Hanrahan. FIG. 6 shows an xy plane and a uv plane of a light field. Referring to FIG. 6, the light field may be represented by a four-dimensional (4D) function L(x, y, u, v) indicating the intensity of light in an optical space, which is incident to one arbitrary dot on the xy plane after passing through one arbitrary dot on the uv plane under the expression described above.
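As an illustration only (not part of the disclosed system), the 4D function L(x, y, u, v) can be tabulated as a nested array; all resolutions and the toy radiance function below are hypothetical:

```python
# Minimal sketch of a discrete two-plane light field L(x, y, u, v):
# for each point (u, v) on one plane and (x, y) on the other, one
# radiance value is stored. All sizes and values are illustrative.

XY_RES = 4   # resolution of the xy plane
UV_RES = 2   # resolution of the uv plane (one (u, v) pair per view)

def make_light_field(radiance):
    """Tabulate radiance(x, y, u, v) -> float into a nested list."""
    return [[[[radiance(x, y, u, v)
               for v in range(UV_RES)]
              for u in range(UV_RES)]
             for y in range(XY_RES)]
            for x in range(XY_RES)]

# Toy scene: intensity falls off with the offset between x and u,
# mimicking the small per-view shifts visible in FIG. 1.
L = make_light_field(lambda x, y, u, v: max(0.0, 1.0 - 0.2 * abs(x - u)))

# Fixing (u, v) selects one view of the scene, i.e., one 2D image:
view_00 = [[L[x][y][0][0] for y in range(XY_RES)] for x in range(XY_RES)]
```

Fixing the uv coordinates recovers a single view, which is how the view array of FIG. 1 relates to the 4D function.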
FIG. 7 illustrates a weighted-matrix calculation method based on geometric parameters of a system according to an embodiment of the disclosure.
Referring to FIG. 7, a simple geometric consideration yields the integer coordinates of the points at which a light ray crosses the images on the display and the modulator, the integer coordinates being derived from the angle of the display or the modulator, and the virtual ghosts of the display and the modulator are calculated as below.
[Equation 1]
Figure PCTKR2018009072-appb-I000001
[Equation 2]
Figure PCTKR2018009072-appb-I000002
According to an embodiment of the disclosure, in Equations 1 and 2, k denotes 1 or 2, wherein 1 and 2 correspond to the modulator and the display, respectively. The signs + and - included in ± of Equations 1 and 2 correspond to the modulator and the display, respectively, and M1 and M2 denote the magnification constants of the virtual ghosts of the modulator and the display, respectively. In addition, p1 and p2 denote the pixel sizes of the modulator and the display, respectively, W and H denote the width and the height of a physical view image on the xy plane of the light field, dk denotes a relative location or distance between the xy plane of the light field and a virtual ghost and is selected to acquire the best image quality, and dcn denotes a distance from the eye-lens plane to the light field plane.
In a light field factorization operation, the light field L(x, y, u, v) is factorized to a multiplication of transparency t(x1, y1) of the spatial light modulator and light intensity l(x2, y2) of the display.
[Equation 3]
L(x, y, u, v) = t(x1, y1) · l(x2, y2)
As described above, x1, x2, y1, and y2 may be expressed in terms of x, y, u, and v through Equations 1 and 2.
However, this kind of tensor factorization is complex and thus imposes a high calculation burden. Therefore, the disclosure illustrates an embodiment of reducing this high calculation burden by using a simpler matrix factorization method instead of a tensor factorization method. For matrix factorization, t and l, denoting transparency and intensity, may be factorized into vectors a and b as follows.
[Equation 4]
ai = t(x1, y1)
[Equation 5]
bj = l(x2, y2)
[Equation 6]
i = x1 + w1 · y1
[Equation 7]
j = x2 + w2 · y2
Herein, wk denotes the width of the image of the modulator (k = 1) or the display (k = 2) and is measured in the number of pixels along the x axis.
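A flattening of this kind maps 2D panel pixel coordinates to the 1D indices of the vectors a and b; the row-major convention below is an assumption for illustration, not a convention stated in the text:

```python
# Sketch of mapping 2D panel pixel coordinates to the 1D vector indices
# used for a and b. w is the image width in pixels along the x axis, as
# described for wk above; row-major ordering is assumed for illustration.

def flat_index(x, y, w):
    """Map pixel (x, y) of an image of width w to a single vector index."""
    return y * w + x

def unflat_index(i, w):
    """Inverse mapping: recover (x, y) from the vector index i."""
    return i % w, i // w

# Round trip for a hypothetical 5-pixel-wide modulator image:
w1 = 5
i = flat_index(3, 2, w1)
assert i == 13
assert unflat_index(i, w1) == (3, 2)
```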
A value of the light field L(x, y, u, v) is "encapsulated" into an element Tij of a matrix of views, and thus Equation 3 may be replaced by Equation 8.
[Equation 8]
Tij = ai · bj
In operation S3, the processor or controller generates an adjustment matrix that is the product of a column vector representing the transparency values of the pixels of the spatial light modulator and a row vector representing the intensity values of the pixels of the display of the mobile electronic device. Herein, the elements of the column vector and the row vector are selected such that the adjustment matrix is approximately the same as the matrix of the views.
In more detail, an element (i, j) of the adjustment matrix corresponds to light that passes through the jth pixel of the display and the ith pixel of the spatial light modulator. When the matrix of the views is denoted by T and the transparency and intensity vectors described above by a and b, the fact that the matrix of the views is "approximately the same" as the adjustment matrix indicates that ||T - abᵀ|| is minimized.
This optimization operation may be addressed by various methods. According to an embodiment of the disclosure, the optimization operation may be performed by using weighted rank-1 residue iteration (WRRI). A detailed operation of the WRRI is described in the related art (for example, HO, N.-D., Nonnegative Matrix Factorization Algorithms and Applications, PhD thesis, Universit´e catholique de Louvain, 2008; and HEIDE et al., Cascaded displays: spatiotemporal superresolution using offset pixel layers, ACM Transactions on Graphics (TOG) - Proceedings of ACM SIGGRAPH 2014, Volume 33, Issue 4, July 2014).
The number of views is limited, and thus the minimization must be restricted to the elements actually used in the computation. Accordingly, a weighted matrix W is provided, determined such that ||W ∘ (T - abᵀ)|| is minimized. Herein, the weighted matrix W contains a weighting constant only for the parts where views of the scene are "encapsulated" and has a value of zero for the remaining parts. The optimization operation continues until the elements of the vectors a and b that make the adjustment matrix best approximate the matrix of the views are found. Equation 9 is an embodiment of the optimization operation.
[Equation 9]
(a, b) = argmin over a and b of ||W ∘ (T - abᵀ)||²
In Equation 9, the symbol ||·|| denotes the L2-norm, for which ||X||² equals the sum of the squares of all elements of X, and the operation symbol ∘ denotes the element-wise product, that is, the Hadamard product. The iteration over the elements of the vectors a and b is performed until the adjustment matrix is approximately the same as the matrix of the views.
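As a rough illustration of this weighted rank-1 optimization (minimizing the weighted difference between the matrix of views T and the outer product of the vectors a and b), the alternating non-negative least-squares sketch below can be used; it is a simplified stand-in, not the exact WRRI procedure of the cited references:

```python
# Illustrative weighted rank-1 factorization: find non-negative vectors
# a, b that minimize the weighted squared error over W o (T - a b^T).
# The update rules below are a standard alternating least-squares sketch.

def weighted_rank1(T, W, iters=50):
    m, n = len(T), len(T[0])
    a = [1.0] * m
    b = [1.0] * n
    for _ in range(iters):
        # Update each a_i by weighted least squares with b fixed.
        for i in range(m):
            num = sum(W[i][j] * T[i][j] * b[j] for j in range(n))
            den = sum(W[i][j] * b[j] * b[j] for j in range(n))
            a[i] = max(num / den, 0.0) if den else 0.0
        # Update each b_j by weighted least squares with a fixed.
        for j in range(n):
            num = sum(W[i][j] * T[i][j] * a[i] for i in range(m))
            den = sum(W[i][j] * a[i] * a[i] for i in range(m))
            b[j] = max(num / den, 0.0) if den else 0.0
    return a, b

# A matrix that is exactly rank-1 is recovered (up to scaling):
T = [[1.0, 2.0], [2.0, 4.0]]    # outer product of (1, 2) and (1, 2)
W = [[1.0, 1.0], [1.0, 1.0]]    # all entries observed with unit weight
a, b = weighted_rank1(T, W)
err = sum((T[i][j] - a[i] * b[j]) ** 2 for i in range(2) for j in range(2))
```

The alternating structure mirrors the idea of the optimization: with one vector fixed, the best choice of the other is a simple weighted least-squares solution.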
When the centers of the pixels of the display included in the mobile electronic device 2 and of the spatial light modulator 3 are matched with each other, the number "1" is assigned to the elements of the matrix W corresponding to the views (for example, to the indices i and j at which Tij is encapsulated from the views), and the remaining elements of the matrix are filled with zeros. If this matching does not occur, the matrices T and W are constructed using barycentric coordinates, and this construction prevents distortion of the views of the scene in the subsequent processing operation.
FIG. 8 illustrates a matrix consisting of views using a barycentric coordinate system according to an embodiment of the disclosure.
Referring to FIG. 8, the construction operation using barycentric coordinates is described. λ and μ denote the coordinates (relative to pixel centers) of the point marked with an X shape on the plane of the spatial light modulator 3, and w00, w01, w10, and w11 are the values allocated to the four elements specified by the coordinates (⌊λ⌋, ⌊μ⌋), (⌊λ⌋+1, ⌊μ⌋), (⌊λ⌋, ⌊μ⌋+1), and (⌊λ⌋+1, ⌊μ⌋+1).
The sum of w00, w01, w10, and w11 is 1, and thus a unit weight is allocated to the four neighboring elements. To construct the matrix of the views, each pixel value may be repeated four times, once in each of the four elements.
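The unit-weight split over the four neighboring elements described here is the standard bilinear weighting; a small sketch (the function name and sample coordinates are hypothetical):

```python
# Sketch of the unit-weight split over four neighboring pixel centers:
# a point (lam, mu) on the modulator plane distributes a total weight
# of 1 over the four surrounding integer pixel coordinates, as in FIG. 8.
import math

def corner_weights(lam, mu):
    fx, fy = lam - math.floor(lam), mu - math.floor(mu)
    w00 = (1 - fx) * (1 - fy)   # weight for (floor(lam),     floor(mu))
    w10 = fx * (1 - fy)         # weight for (floor(lam) + 1, floor(mu))
    w01 = (1 - fx) * fy         # weight for (floor(lam),     floor(mu) + 1)
    w11 = fx * fy               # weight for (floor(lam) + 1, floor(mu) + 1)
    return w00, w01, w10, w11

w = corner_weights(2.25, 3.5)
assert abs(sum(w) - 1.0) < 1e-12   # the weights always sum to a unit weight
```

When the point falls exactly on a pixel center, the whole unit weight goes to that one element, which matches the aligned case described above (a single "1" in W).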
According to an embodiment of the disclosure, another approach is also possible. For example, when the matrix of the views is constructed, values of the light field are allocated, with the respective weights, to the four elements specified by the coordinates (⌊λ⌋, ⌊μ⌋), (⌊λ⌋+1, ⌊μ⌋), (⌊λ⌋, ⌊μ⌋+1), and (⌊λ⌋+1, ⌊μ⌋+1) according to barycentric coordinates. In this case, among the elements of the weighted matrix W, the elements corresponding to non-zero elements of the matrix T have a value of 1, and the remaining elements have a value of 0.
In operation S4, when the components of the vectors a and b are identified, the processor or controller adjusts the intensity value l of the pixels of the display of the mobile electronic device 2 according to the components of the vector b and adjusts the transparency value t of the pixels of the spatial light modulator 3 according to the components of the vector a. Equations 4 and 5 described above mathematically represent the relationship among a, b, t, and l. Through these operations, a light field of a scene that is approximately the same as what the user would observe in reality, for example, providing a 3D effect when the user views the scene, may be obtained.
According to an embodiment of the disclosure, the processor or controller may perform a pre-processing operation for each view of a given scene before proceeding to operations S2 to S3. The pre-processing operation is an operation of enhancing the details of the views of a scene. In the pre-processing operation, a defined view of a scene (whose details are to be enhanced) is segmented into overlapping units including groups of pixels of the display of the mobile electronic device 2. The following operations are performed for each unit.
First, a color of each pixel is converted into a YUV color model, wherein Y denotes a brightness component, and U and V denote color-difference components.
For each pixel, the brightness component Y is separated. The brightness components Y of all pixels are then collected into a brightness channel Y.
To obtain a Fourier spectrum, the brightness channel is processed using the Fourier transform. To smooth the spectrum at the unit boundary, a Gaussian window is used. Details are searched for and enhanced using phase congruency analysis in the Fourier spectrum. Then, to obtain a new brightness channel Y', an inverse Fourier transform operation is performed.
The phase congruency analysis is now described. As is known, the values of a Fourier spectrum are complex numbers. A complex number is specified by an absolute value and an angle of deviation, that is, a phase. In other words, a complex number may be expressed as a 2D vector whose length corresponds to the absolute value and whose direction corresponds to the phase. The search for a detail is an operation of separating vectors oriented in one direction (within a specific divergence), and enhancing the detail is an operation of increasing the length of the retrieved vectors, that is, increasing the magnitude of the absolute value.
After the operations described above are performed, all processed units are combined such that their overlaps are smoothly blended using a Gaussian window. Next, for each pixel, the new brightness channel Y' and the initial components U and V are combined into a color model Y'UV. The color model Y'UV is converted into an RGB color model, and accordingly, the determined view of the scene may be acquired in the RGB color model.
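The per-pixel color conversion used in this pre-processing step can be sketched as follows; BT.601-style analog YUV coefficients are assumed here, since the patent does not specify a particular YUV variant:

```python
# Sketch of the per-pixel color conversion in the pre-processing step:
# RGB -> YUV (BT.601-style coefficients assumed), separating the
# brightness channel Y from the color-difference channels U and V,
# and the inverse conversion back to RGB.

def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # brightness component
    u = 0.492 * (b - y)                     # color-difference components
    v = 0.877 * (r - y)
    return y, u, v

def yuv_to_rgb(y, u, v):
    r = y + v / 0.877
    b = y + u / 0.492
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

# After enhancing Y -> Y', recombining Y' with the untouched U and V
# yields the enhanced pixel. A round trip with the unmodified Y simply
# recovers the input:
y, u, v = rgb_to_yuv(0.2, 0.5, 0.8)
r, g, b = yuv_to_rgb(y, u, v)
```

Only the Y channel is touched by the enhancement, so the colors (U, V) of the view are preserved exactly.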
FIG. 9 is a block diagram of a display system according to an embodiment of the disclosure.
Referring to FIG. 9, when a real or virtual scene is displayed, a display system 900 may display the scene such that a light field is provided that gives the user an experience approximating a real 3D effect.
The display system 900 may include a mobile electronic device 910 and a spatial light modulator 920. According to an embodiment of the disclosure, the display system 900 may further include an optical lens (not shown). However, according to an embodiment of the disclosure, the optical lens is not necessarily required as a separate component. The optical lens may be replaced by a medium having the same optical characteristics as the optical lens or may be included in the spatial light modulator 920.
The mobile electronic device 910 is a portable electronic device and may be implemented in various forms, such as a smartphone, a tablet computer, a personal digital assistant (PDA), and a portable multimedia player (PMP).
The mobile electronic device 910 may include a processor 911 and a display 912. Although FIG. 9 shows that the processor 911 is included in the mobile electronic device 910, this is not mandatory. According to an embodiment of the disclosure, the processor 911 may be located outside the mobile electronic device 910 and may control the mobile electronic device 910 and the spatial light modulator 920. For example, the processor 911 may be included in VR glasses or a VR helmet, which is a case in which the mobile electronic device 910 and the spatial light modulator 920 are accommodated.
The display 912 provides light to display a scene. According to an embodiment of the disclosure, the display 912 may include a liquid crystal display mounted in the mobile electronic device 910. For example, the display 912 may include a backlight of the mobile electronic device 910. In addition, the display 912 may include a liquid crystal of the mobile electronic device 910.
The processor 911 may control the mobile electronic device 910 and the spatial light modulator 920 to perform a display operation of the display system 900.
Although FIG. 9 shows that the spatial light modulator 920 is located outside the mobile electronic device 910, this is not mandatory. According to an embodiment of the disclosure, the spatial light modulator 920 may be included in the mobile electronic device 910 and modulate light provided from the display 912.
The processor 911 may receive a plurality of pieces of view information with respect to a scene to be displayed. According to an embodiment of the disclosure, the scene may be a virtual scene or a real scene. The plurality of pieces of view information are optical information of a scene, which has been acquired at a plurality of viewpoints. According to an embodiment of the disclosure, the plurality of pieces of view information may be a set of a plurality of pieces of view information acquired by photographing a real scene at a plurality of viewpoints. For example, the plurality of pieces of view information may be an intrinsic image acquired from a plurality of matched cameras having different viewpoints. According to an embodiment of the disclosure, the plurality of pieces of view information may be a set of a plurality of pieces of view information corresponding to a virtual scene formed using a rendering program. According to an embodiment of the disclosure, the processor 911 may form a plurality of pieces of view information corresponding to a virtual scene by itself.
The processor 911 may acquire adjustment information from the plurality of pieces of view information. The adjustment information may include information regarding transparency of the spatial light modulator 920 and light intensity of the display 912.
The light intensity of the display 912 indicates the intensity of light emitted by each pixel of the display 912 and may be varied under the control of the processor 911. The transparency of the spatial light modulator 920 indicates the optical influence of each pixel of the spatial light modulator 920 on light passing through the spatial light modulator 920 and may include color transparency.
According to an embodiment of the disclosure, the adjustment information may include a view matrix, that is, a matrix that includes each piece of view information of the plurality of pieces of view information and is generated based on a geometric parameter. For example, the processor 911 may generate the view matrix from the plurality of pieces of view information. The view matrix represents the light field of the scene to be displayed.
In addition, the processor 911 may perform light field factorization on the generated view matrix. As described with reference to FIG. 7, a light field indicating light passing through a certain pixel of the display 912 and a certain pixel of the spatial light modulator 920 may be represented by a function of the intensity of the display 912 and the transparency of the spatial light modulator 920. According to an embodiment of the disclosure, the processor 911 may factorize a given view matrix into a product of a matrix indicating the intensity of the display 912 and a matrix indicating the transparency of the spatial light modulator 920; this is called light field factorization.
According to an embodiment of the disclosure, the light field factorization may be achieved approximately. The light field factorization is described below. In the embodiment below, the matrix indicating the intensity of the display 912 is described as a row vector and the matrix indicating the transparency of the spatial light modulator 920 as a column vector, but this is only illustrative, and the technical features of the disclosure are not limited thereto. The processor 911 may factorize a view matrix into a product of various types of matrices.
According to an embodiment of the disclosure, the processor 911 may perform the light field factorization by using a weighted rank-1 residue iteration (WRRI) algorithm. The WRRI algorithm has a higher processing speed and a lower computation volume than a non-negative matrix factorization (NMF) algorithm, and thus the processor 911 may perform real-time processing at a higher speed by using the WRRI algorithm than by using the NMF algorithm.
In more detail, as described with reference to Equation 9, the processor 911 may calculate an optimized intensity of the display 912 and an optimized transparency of the spatial light modulator 920 through a Hadamard product with respect to a given light field by using the WRRI algorithm. According to an embodiment of the disclosure, the processor 911 may form a row vector indicating the intensity of the display 912, a column vector indicating the transparency of the spatial light modulator 920, and an adjustment matrix that is the product of the row vector and the column vector. The processor 911 may select the row vector and the column vector by using the WRRI algorithm such that the adjustment matrix is approximately the same as the view matrix.
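The rank-1 factorization described above can be sketched in NumPy. The alternating updates below are a common form of the WRRI scheme and use elementwise (Hadamard) products; the uniform weight matrix, iteration count, clipping, and toy view matrix are illustrative assumptions rather than the disclosure's exact implementation.

```python
import numpy as np

def wrri_rank1(L, W=None, iters=50, eps=1e-9):
    """Approximate a view matrix L by the outer product of a row vector a
    (display pixel intensities) and a column vector b (spatial light
    modulator transparencies) using WRRI-style alternating updates."""
    m, n = L.shape
    if W is None:
        W = np.ones_like(L)               # uniform weights (assumption)
    a, b = np.ones(m), np.ones(n)
    for _ in range(iters):
        # weighted least-squares update of each factor;
        # '*' is the elementwise (Hadamard) product
        a = (W * L) @ b / (W @ (b * b) + eps)
        b = (W * L).T @ a / (W.T @ (a * a) + eps)
        np.clip(a, 0.0, None, out=a)      # keep factors physically non-negative
        np.clip(b, 0.0, None, out=b)
    return a, b

# Toy light field that is exactly rank-1, so the fit is near-perfect.
L = np.outer([0.2, 0.5, 0.9], [0.1, 0.4, 0.8, 1.0])
a, b = wrri_rank1(L)
print(np.allclose(np.outer(a, b), L, atol=1e-3))  # → True
```

The adjustment matrix is then `np.outer(a, b)`; `a` would drive the display intensities and `b` the modulator transparencies.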
The processor 911 may adjust an intensity value of light emitted from the display 912 and a transparency value of the spatial light modulator 920, based on the adjustment information.
According to an embodiment of the disclosure, the processor 911 may adjust the intensity of the display 912 and the transparency of the spatial light modulator 920 based on a light field factorization result of the view matrix. For example, the processor 911 may form a row vector and a column vector whose product is an adjustment matrix approximately equal to the view matrix, and adjust the intensity of the display 912 and the transparency of the spatial light modulator 920 based on the row vector and the column vector, respectively. According to an embodiment of the disclosure, the intensity of each pixel of the display 912 may be adjusted in response to an intensity control signal provided from the processor 911. In addition, the transparency of each pixel of the spatial light modulator 920 may be adjusted in response to a transparency control signal provided from the processor 911.
The optical lens delivers, to the user, light which has passed through the display 912 and the spatial light modulator 920. By providing the light concentrated through the optical lens to the user, the display system 900 provides a light field that gives the user an experience close to a real three-dimensional (3D) effect.
FIG. 10 is a flowchart of a method of displaying a scene according to an embodiment of the disclosure.
Referring to FIG. 10, in operation S1010, a processor receives a plurality of pieces of view information corresponding to a scene. According to an embodiment of the disclosure, the scene may be a real scene or a virtual scene. The plurality of pieces of view information may be acquired by photographing the scene at different viewpoints. For example, the plurality of pieces of view information may be captured at viewpoints sequentially shifted by a predefined angle.
According to an embodiment of the disclosure, the plurality of pieces of view information may be a set of a plurality of pieces of view information acquired by photographing a real scene at a plurality of viewpoints. For example, the plurality of pieces of view information may be an intrinsic image acquired from a plurality of matched cameras having different viewpoints. According to an embodiment of the disclosure, the plurality of pieces of view information may be a set of a plurality of pieces of view information corresponding to a virtual scene formed using a rendering program. According to an embodiment of the disclosure, the processor may form a plurality of pieces of view information corresponding to a virtual scene by itself.
In operation S1020, the processor acquires adjustment information from the plurality of pieces of view information. The adjustment information may include information regarding transparency of a spatial light modulator and light intensity of a display. The light intensity of the display indicates the intensity of light emitted by each pixel of the display. The transparency of the spatial light modulator indicates the optical influence of each pixel of the spatial light modulator on light passing through the spatial light modulator and may include color transparency. According to an embodiment of the disclosure, the adjustment information may include a view matrix.
In operation S1030, the processor may control the transparency of the spatial light modulator and a light intensity value of the display based on the adjustment information. According to an embodiment of the disclosure, intensity of each pixel of the display may be adjusted in response to an intensity control signal provided from the processor. In addition, transparency of each pixel of the spatial light modulator may be adjusted in response to a transparency control signal provided from the processor.
Light which has passed through the display and the spatial light modulator may be delivered to a user through an optical lens. By providing the light concentrated through the optical lens to the user, the display method according to the disclosure may provide a light field that gives the user an experience close to a real 3D effect.
FIG. 11 is a flowchart of a method of displaying a scene according to an embodiment of the disclosure.
Referring to FIG. 11, in operation S1110, a processor receives a plurality of pieces of view information corresponding to a scene. According to an embodiment of the disclosure, the scene may be a real scene or a virtual scene. The plurality of pieces of view information may be acquired by photographing the scene at different viewpoints.
In operation S1120, the processor may acquire a view matrix included in adjustment information from the plurality of pieces of view information. For example, the processor may generate a view matrix, that is, a matrix that includes each piece of view information of the plurality of pieces of view information and is generated based on a geometric parameter. The view matrix represents the light field of the scene to be displayed.
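A minimal sketch of assembling such a view matrix, assuming the simplest layout in which each view is flattened into one column; the real mapping depends on the geometric parameters of the display stack (e.g., the display-to-modulator distance), which this sketch ignores.

```python
import numpy as np

def build_view_matrix(views):
    """Stack each view image as one column of the view matrix, so the
    matrix as a whole samples the light field: one row per pixel,
    one column per viewpoint (an illustrative layout assumption)."""
    return np.column_stack([v.ravel() for v in views])

# Three toy 2x2 "views" with constant brightness 0.1, 0.2, 0.3.
views = [np.full((2, 2), 0.1 * k) for k in range(1, 4)]
V = build_view_matrix(views)
print(V.shape)  # → (4, 3)
```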
In operation S1130, the processor may factorize the given view matrix into a product of a matrix indicating light intensity of a display included in a mobile electronic device and a matrix indicating transparency of a spatial light modulator.
According to an embodiment of the disclosure, the processor may perform the factorization by using a WRRI algorithm. In more detail, as described with reference to Equation 9, the processor may calculate an optimized light intensity of the display and an optimized transparency of the spatial light modulator through a Hadamard product with respect to a given light field by using the WRRI algorithm.
In operation S1140, the processor may control the transparency of the spatial light modulator and the light intensity of the display based on a result of the factorization. For example, the processor may factorize the view matrix into a row vector and a column vector, control the transparency of the spatial light modulator based on the column vector, and control the light intensity of the display based on the row vector. According to an embodiment of the disclosure, the transparency of each pixel of the spatial light modulator may be adjusted in response to a transparency control signal provided from the processor, and the intensity of each pixel of the display may be adjusted in response to an intensity control signal provided from the processor.
A display system may display a scene to a user based on the operations described above. According to an embodiment of the disclosure, light emitted from the display based on the adjusted light intensity passes through the spatial light modulator having the adjusted transparency and is delivered to the user through an optical lens. By providing the light concentrated through the optical lens to the user, the display system may provide a light field that gives the user an experience close to a real 3D effect.
FIG. 12 is a flowchart of a method of displaying a scene according to an embodiment of the disclosure.
Referring to FIG. 12, a display system may perform an enhancing processing operation prior to acquiring adjustment information. The display system may provide a relatively clear and realistic experience to a user by using the enhancing processing operation to enhance details of a view matrix.
In operation S1210, a processor receives a plurality of pieces of view information corresponding to a scene. According to an embodiment of the disclosure, the scene may be a real scene or a virtual scene. The plurality of pieces of view information may be an intrinsic image acquired from a plurality of matched cameras having different viewpoints. Alternatively, the processor may form a plurality of pieces of view information corresponding to a virtual scene by itself by using a rendering program.
In operation S1220, the processor performs the enhancing processing operation on the pieces of view information. The enhancing processing operation enhances only details while keeping the color information of the pieces of view information unchanged. According to an embodiment of the disclosure, for the enhancing processing operation, the processor may separate only a brightness channel from each piece of view information and perform a processing operation on the brightness channel. According to an embodiment of the disclosure, the processor may use a Fourier transform and phase congruency analysis for the enhancing processing operation.
In operation S1230, the processor may generate adjustment information. According to an embodiment of the disclosure, the processor may generate a view matrix, that is, a matrix that includes each piece of view information of the plurality of pieces of view information and is generated based on a geometric parameter. The view matrix represents the light field of the scene to be displayed.
According to an embodiment of the disclosure, the processor may factorize the enhancing-processed view matrix into a product of vectors. For example, the processor may factorize the enhancing-processed view matrix into a product of a matrix indicating light intensity of a display included in a mobile electronic device and a matrix indicating transparency of a spatial light modulator.
According to an embodiment of the disclosure, the processor may perform the factorization by using a WRRI algorithm. In more detail, as described with reference to Equation 9, the processor may calculate an optimized intensity of the display and an optimized transparency of the spatial light modulator through a Hadamard product with respect to a given light field by using the WRRI algorithm.
In operation S1240, the processor may control the transparency of the spatial light modulator and a light intensity value of the display based on the adjustment information. For example, the processor may factorize the view matrix into a column vector and a row vector, control the transparency of the spatial light modulator based on the column vector, and control the light intensity of the display based on the row vector. According to an embodiment of the disclosure, the transparency of each pixel of the spatial light modulator may be adjusted in response to a transparency control signal provided from the processor, and the intensity of each pixel of the display may be adjusted in response to an intensity control signal provided from the processor.
The display system may display a scene to the user based on the operations described above. According to an embodiment of the disclosure, light emitted from the display based on the adjusted light intensity passes through the spatial light modulator having the adjusted transparency and is delivered to the user through an optical lens. By providing the light concentrated through the optical lens to the user, the display system may provide a light field that gives the user an experience close to a real 3D effect.
FIG. 13 is a flowchart for describing the enhancing processing operation in more detail according to an embodiment of the disclosure.
Referring to FIG. 13, in operation S1221, a processor extracts a brightness channel of the view information on which the enhancing processing operation is to be performed.
According to an embodiment of the disclosure, the processor may segment the view information into a plurality of units which overlap each other, to extract the brightness channel. Each unit may include a predefined number of pixels.
According to an embodiment of the disclosure, the processor may convert a color space model of the view information into a color space model having a brightness channel to extract the brightness channel. For example, the processor may convert the color space model of the view information into a YUV color space model or a YIQ color space model. The embodiment below illustrates conversion into the YUV color space model; however, the technical features of the disclosure are not limited to the YUV color space and may also be applied to other color spaces having a brightness channel.
In a YUV color space, a Y channel indicates information regarding brightness, and U and V channels indicate information regarding colors. For example, the U channel is a value obtained by subtracting a brightness component from a blue (B) channel of an RGB color space, and the V channel is a value obtained by subtracting the brightness component from a red (R) channel.
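The channel relations above can be written out directly. The scale factors in this sketch (0.492 for U, 0.877 for V) are the conventional BT.601-style constants and are an assumption; the text itself specifies only the subtraction structure.

```python
def rgb_to_yuv(r, g, b):
    """Convert normalized RGB to YUV (BT.601-style coefficients assumed).
    Y carries brightness; U and V are scaled differences of the blue and
    red channels from the brightness, as described above."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)   # blue channel minus brightness, scaled
    v = 0.877 * (r - y)   # red channel minus brightness, scaled
    return y, u, v

def yuv_to_rgb(y, u, v):
    """Inverse conversion, used after the brightness channel Y has been
    enhanced to Y'."""
    b = u / 0.492 + y
    r = v / 0.877 + y
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

# Round trip on a sample color.
y, u, v = rgb_to_yuv(0.8, 0.4, 0.2)
r, g, b = yuv_to_rgb(y, u, v)
print(round(r, 6), round(g, 6), round(b, 6))  # → 0.8 0.4 0.2
```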
According to an embodiment of the disclosure, the processor may extract a Y component, that is, a brightness component, of each unit of the view information. According to an embodiment of the disclosure, the processor may multiplex the Y components of the respective units into a Y channel, that is, a brightness channel of a view.
In operation S1222, the processor may perform Fourier transform on the brightness component or the brightness channel. The processor acquires a Fourier spectrum of the brightness component or the brightness channel through the Fourier transform. According to an embodiment of the disclosure, the processor may use a Gaussian window to smooth a boundary part of the spectrum.
In operation S1223, the processor performs a phase congruency analysis on the acquired Fourier spectrum. The processor searches the Fourier spectrum for information regarding details through the phase congruency analysis. According to an embodiment of the disclosure, the processor may search for the information regarding details through an operation of separating complex vectors oriented in a specific direction in the Fourier spectrum.
In operation S1224, the processor performs a rebalance spectrum operation based on the retrieved information regarding details. The rebalance spectrum operation enhances the retrieved details. According to an embodiment of the disclosure, the processor may enhance the details through an operation of increasing the length, that is, the absolute value, of the retrieved complex vectors.
In operation S1225, the processor performs an inverse Fourier transform on the brightness component or the brightness channel on which the rebalance spectrum operation has been completed. The processor acquires an enhanced new brightness component or brightness channel Y' through the inverse Fourier transform.
In operation S1226, the enhanced information is output. According to an embodiment of the disclosure, the processor may combine the information regarding all of the processed units by using a Gaussian window such that the overlaps are smoothly blended.
The processor combines the new brightness channel Y' and the initial color channels U and V into a Y'UV color space model. The Y'UV color space model is converted into an RGB color space model, and accordingly, enhanced view information of the scene may be acquired from the RGB color space model.
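A condensed sketch of this pipeline for one brightness channel, assuming a uniform boost of all non-DC Fourier components in place of the phase-congruency-guided selection of detail vectors (which is considerably more involved), and omitting the per-unit Gaussian windowing:

```python
import numpy as np

def enhance_brightness(y, gain=1.5):
    """Transform the brightness channel to the Fourier domain, lengthen
    the non-DC (detail-carrying) complex vectors, and transform back.
    Uniformly boosting all non-DC frequencies is a simplifying
    assumption; `gain` is an illustrative parameter."""
    spectrum = np.fft.fft2(y)
    mask = np.ones_like(spectrum)
    mask[0, 0] = 0                               # leave the DC (mean) term alone
    spectrum += (gain - 1.0) * mask * spectrum   # rebalance: scale detail vectors
    y_enhanced = np.real(np.fft.ifft2(spectrum))
    return np.clip(y_enhanced, 0.0, 1.0)         # keep brightness in range

y = np.random.default_rng(0).random((8, 8))      # toy brightness channel
y2 = enhance_brightness(y)
print(y2.shape)  # → (8, 8)
```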
FIG. 14 is a flowchart of a method of displaying a scene according to another embodiment of the disclosure.
Referring to FIG. 14, a display system may perform anti-aliasing processing based on adjustment information. The display system may provide a relatively clear and realistic experience to a user by using an anti-aliasing processing operation to prevent a pixel staircase phenomenon and edge distortion.
In operation S1410, a processor receives a plurality of pieces of view information corresponding to a scene. According to an embodiment of the disclosure, the scene may be a real scene or a virtual scene. The plurality of pieces of view information may be an intrinsic image acquired from a plurality of matched cameras having different viewpoints. Alternatively, the processor may form a plurality of pieces of view information corresponding to a virtual scene by itself by using a rendering program.
In operation S1420, the processor may acquire adjustment information from the plurality of pieces of view information. According to an embodiment of the disclosure, the processor may generate a view matrix, that is, a matrix that includes each piece of view information of the plurality of pieces of view information and is generated based on a geometric parameter. The view matrix represents the light field of the scene to be displayed.
In operation S1430, the processor may perform the anti-aliasing processing operation based on the adjustment information.
According to an embodiment of the disclosure, the processor may factorize the view matrix included in the adjustment information into a product of a matrix indicating light intensity of a display included in a mobile electronic device and a matrix indicating transparency of a spatial light modulator.
According to an embodiment of the disclosure, the processor may perform the factorization by using a WRRI algorithm. In more detail, as described with reference to Equation 9, the processor may calculate an optimized intensity of the display and an optimized transparency of the spatial light modulator through a Hadamard product with respect to a given light field by using the WRRI algorithm.
The processor may perform the anti-aliasing processing operation on the view matrix. According to an embodiment of the disclosure, the processor may perform the anti-aliasing processing operation by using pixel barycentric coordinates. According to an embodiment of the disclosure, the processor may perform the anti-aliasing processing operation in an operation of performing light field factorization on the view matrix. For example, the processor may perform the anti-aliasing processing operation by calculating and using a weighted matrix in the operation of performing light field factorization.
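One way to read "calculating and using a weighted matrix" in the factorization is that per-sample weights W enter the WRRI updates directly, so that downweighted samples (e.g., near pixel edges) constrain the factors less. The border weights below are a toy stand-in for weights derived from pixel barycentric coordinates, which this sketch does not compute:

```python
import numpy as np

def weighted_rank1_step(L, W, a, b, eps=1e-9):
    """One WRRI-style update in which the weight matrix W controls how
    strongly each light-field sample influences the factors; folding
    edge-aware weights into the factorization is what performs the
    anti-aliasing in this reading."""
    a = (W * L) @ b / (W @ (b * b) + eps)
    b = (W * L).T @ a / (W.T @ (a * a) + eps)
    return a, b

L = np.outer([0.3, 0.6, 0.9], [0.2, 0.5, 1.0])   # toy rank-1 light field
W = np.ones_like(L)
W[0, :] = W[-1, :] = 0.5                          # downweight border samples (toy choice)
a, b = np.ones(3), np.ones(3)
for _ in range(50):
    a, b = weighted_rank1_step(L, W, a, b)
print(np.allclose(np.outer(a, b), L, atol=1e-3))  # → True
```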
In operation S1440, the processor may control transparency of a spatial light modulator and light intensity of a display based on the adjustment information. According to an embodiment of the disclosure, the processor may factorize the view matrix into a column vector and a row vector, control the transparency of the spatial light modulator based on the column vector, and control the light intensity of the display based on the row vector. According to an embodiment of the disclosure, the transparency of each pixel of the spatial light modulator may be adjusted in response to a transparency control signal provided from the processor, and the light intensity of each pixel of the display may be adjusted in response to an intensity control signal provided from the processor.
The display system may display a scene to the user based on the operations described above. According to an embodiment of the disclosure, light emitted from the display based on the adjusted light intensity passes through the spatial light modulator having the adjusted transparency and is delivered to the user through an optical lens. By providing the light concentrated through the optical lens to the user, the display system may provide a light field that gives the user an experience close to a real 3D effect.
FIG. 15 is a flowchart of a method of displaying a scene according to an embodiment of the disclosure.
Referring to FIG. 15, a display system may perform an enhancing processing operation and an anti-aliasing processing operation on pieces of view information. The display system may provide a relatively clear and realistic experience to a user by using the enhancing processing operation and the anti-aliasing processing operation to enhance details of the views and to prevent a pixel staircase phenomenon and edge distortion.
In operation S1510, a processor receives a plurality of pieces of view information corresponding to a scene. According to an embodiment of the disclosure, the scene may be a real scene or a virtual scene.
In operation S1520, the processor performs the enhancing processing operation on the pieces of view information. The enhancing processing operation enhances only details while keeping the color information of the pieces of view information unchanged. According to an embodiment of the disclosure, for the enhancing processing operation, the processor may separate only a brightness channel from each piece of view information and perform a processing operation on the brightness channel. According to an embodiment of the disclosure, the processor may use a Fourier transform and phase congruency analysis for the enhancing processing operation.
In operation S1530, the processor may acquire adjustment information from the plurality of pieces of view information. According to an embodiment of the disclosure, the processor may generate a view matrix, that is, a matrix that includes each piece of view information of the plurality of pieces of view information and is generated based on a geometric parameter. The view matrix represents the light field of the scene to be displayed.
In operation S1540, the processor may perform the anti-aliasing processing operation based on the adjustment information.
According to an embodiment of the disclosure, the processor may factorize the view matrix included in the adjustment information into a product of a matrix indicating light intensity of a display included in a mobile electronic device and a matrix indicating transparency of a spatial light modulator.
According to an embodiment of the disclosure, the processor may perform the factorization by using a WRRI algorithm. In more detail, as described with reference to Equation 9, the processor may calculate an optimized intensity of the display and an optimized transparency of the spatial light modulator through a Hadamard product with respect to a given light field by using the WRRI algorithm.
The processor may perform the anti-aliasing processing operation on the view matrix. According to an embodiment of the disclosure, the processor may perform the anti-aliasing processing operation by using pixel barycentric coordinates. According to an embodiment of the disclosure, the processor may perform the anti-aliasing processing operation in an operation of performing light field factorization on the view matrix. For example, the processor may perform the anti-aliasing processing operation by calculating and using a weighted matrix in the operation of performing light field factorization.
In operation S1550, the processor may control transparency of a spatial light modulator and light intensity of a display based on the adjustment information. According to an embodiment of the disclosure, the processor may factorize the view matrix into a column vector and a row vector, control the transparency of the spatial light modulator based on the column vector, and control the light intensity of the display based on the row vector. According to an embodiment of the disclosure, the transparency of each pixel of the spatial light modulator may be adjusted in response to a transparency control signal provided from the processor, and the light intensity of each pixel of the display may be adjusted in response to an intensity control signal provided from the processor.
The display system may display a scene to the user based on the operations described above. According to an embodiment of the disclosure, light emitted from the display based on the adjusted light intensity passes through the spatial light modulator having the adjusted transparency and is delivered to the user through an optical lens. By providing the light concentrated through the optical lens to the user, the display system may provide a light field that gives the user an experience close to a real 3D effect.
The technical features of the disclosure have been described with reference to the above-described embodiments and the accompanying drawings. It would be obvious to those of ordinary skill in the art that the technical features of the disclosure may be modified and implemented in other embodiments without departing from the technical features of the disclosure. Therefore, the embodiments disclosed in the specification and the accompanying drawings should be understood in an illustrative sense only and not for the purpose of limitation. A component expressed in the singular does not exclude a plurality of such components unless defined otherwise.
The disclosed embodiments may be implemented in a form of a non-transitory computer-readable recording medium configured to store computer-executable instructions and data. The instructions may be stored in a form of program codes and may perform, when executed by a processor, a certain operation by generating a certain program module. In addition, the instructions may perform certain operations of the disclosed embodiments when executed by the processor.
Certain aspects of the disclosure can also be embodied as computer readable code on a non-transitory computer readable recording medium. A non-transitory computer readable recording medium is any data storage device that can store data which can be thereafter read by a computer system. Examples of the non-transitory computer readable recording medium include a Read-Only Memory (ROM), a Random-Access Memory (RAM), Compact Disc-ROMs (CD-ROMs), magnetic tapes, floppy disks, and optical data storage devices. The non-transitory computer readable recording medium can also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion. In addition, functional programs, code, and code segments for accomplishing the disclosure can be easily construed by programmers skilled in the art to which the disclosure pertains.
At this point it should be noted that the various embodiments of the disclosure as described above typically involve the processing of input data and the generation of output data to some extent. This input data processing and output data generation may be implemented in hardware or software in combination with hardware. For example, specific electronic components may be employed in a mobile device or similar or related circuitry for implementing the functions associated with the various embodiments of the disclosure as described above. Alternatively, one or more processors operating in accordance with stored instructions may implement the functions associated with the various embodiments of the disclosure as described above. If such is the case, it is within the scope of the disclosure that such instructions may be stored on one or more non-transitory processor readable mediums. Examples of the processor readable mediums include a ROM, a RAM, CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices. The processor readable mediums can also be distributed over network coupled computer systems so that the instructions are stored and executed in a distributed fashion. In addition, functional computer programs, instructions, and instruction segments for accomplishing the disclosure can be easily construed by programmers skilled in the art to which the disclosure pertains.
It should be understood that embodiments described herein should be considered in a descriptive sense only and not for purposes of limitation. Descriptions of features or aspects within each embodiment should typically be considered as available for other similar features or aspects in other embodiments.
While the disclosure has been shown and described with reference to various embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents.

Claims (15)

  1. A system for displaying an image in units of a scene, the system comprising:
    a display configured to emit light;
    a spatial light modulator configured to modulate input light based on a transparency value; and
    at least one processor configured to:
    acquire adjustment information comprising transparency of the spatial light modulator and light intensity of the display from a plurality of pieces of view information corresponding to the scene, and
    adjust an intensity value of the light emitted from the display and the transparency value of the spatial light modulator based on the adjustment information,
    wherein the plurality of pieces of view information are optical information of the scene, the optical information having been acquired at a plurality of viewpoints.
  2. The system of claim 1,
    wherein the adjustment information comprises a view matrix factorized to a row vector and a column vector, and
    wherein the at least one processor is further configured to:
    control the intensity value of the light based on the row vector, and
    control the transparency value of the spatial light modulator based on the column vector.
  3. The system of claim 2, wherein the at least one processor is further configured to factorize the view matrix to the row vector and the column vector by a weighted rank-1 residue iteration (WRRI) method.
  4. The system of claim 3, wherein the at least one processor is further configured to:
    generate a weighted matrix based on barycentric coordinates of the view matrix, and
    factorize the view matrix, on which anti-aliasing processing has been performed by using the weighted matrix, by the WRRI method.
  5. The system of claim 2,
    wherein the at least one processor is further configured to form the view matrix based on a geometric parameter, and
    wherein the geometric parameter comprises a distance between the display and the spatial light modulator.
  6. The system of claim 2, wherein the at least one processor is further configured to:
    perform an enhancing processing operation on the plurality of pieces of view information by extracting a brightness channel of the plurality of pieces of view information, and
    acquire the view matrix from the enhancing-processed plurality of pieces of view information.
  7. The system of claim 6, wherein the at least one processor is further configured to:
    segment the plurality of pieces of view information into a plurality of units which overlap each other,
    extract first brightness components from the plurality of units,
    generate second brightness components by retrieving and enhancing detail vectors based on the first brightness components, and
    perform the enhancing processing operation by using the second brightness components.
  8. The system of claim 2, further comprising:
    a mobile electronic device comprising the display and the at least one processor; and
    a mounting device configured to mount the mobile electronic device to a head of a user,
    wherein the mounting device is further configured to accommodate the mobile electronic device and the spatial light modulator therein.
  9. The system of claim 8, wherein the mounting device comprises at least one of a virtual reality (VR) helmet or VR glasses.
  10. The system of claim 8,
    wherein the plurality of pieces of view information are acquired using a plenoptic camera and stored in a memory of the mobile electronic device, and
    wherein the at least one processor is further configured to access the memory to extract the plurality of pieces of view information.
  11. A scene display method of displaying an image in units of a scene, the method comprising:
    receiving a plurality of pieces of view information corresponding to the scene;
    acquiring, from the plurality of pieces of view information, adjustment information comprising intensity of light emitted from a display and transparency of a spatial light modulator configured to modulate the light; and
    adjusting an intensity value of the light emitted from the display and a transparency value of the spatial light modulator, based on the adjustment information,
    wherein the plurality of pieces of view information are optical information of the scene, the optical information having been acquired at a plurality of viewpoints.
  12. The method of claim 11,
    wherein the acquiring of the adjustment information comprises acquiring a view matrix to be factorized to a row vector and a column vector, from the plurality of pieces of view information corresponding to the scene, and
    wherein the adjusting of the intensity value of the light emitted from the display and the transparency value of the spatial light modulator, based on the adjustment information, comprises controlling the intensity value of the light based on the row vector and controlling the transparency value of the spatial light modulator based on the column vector.
  13. The method of claim 12,
    wherein the factorizing of the view matrix comprises factorizing the view matrix by a weighted rank-1 residue iteration (WRRI) method, and
    wherein the factorizing of the view matrix by the WRRI method comprises generating a weighted matrix based on barycentric coordinates of the view matrix and factorizing the view matrix, on which anti-aliasing processing has been performed by using the weighted matrix, by the WRRI method.
  14. The method of claim 13, further comprising performing an enhancing processing operation on the plurality of pieces of view information by extracting a brightness channel of the plurality of pieces of view information,
    wherein the acquiring of the view matrix from the plurality of pieces of view information comprises acquiring the view matrix from the enhancing-processed plurality of pieces of view information.
  15. The method of claim 14, wherein the performing of the enhancing processing operation comprises:
    segmenting the plurality of pieces of view information into a plurality of units which overlap each other;
    extracting first brightness components from the plurality of units;
    generating second brightness components by retrieving and enhancing detail vectors based on the first brightness components; and
    performing the enhancing processing operation by using the second brightness components.
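Claims 2 to 4 and 12 to 13 recite factorizing a view matrix into a row vector that controls the light intensity of the display and a column vector that controls the transparency of the spatial light modulator, by a weighted rank-1 residue iteration (WRRI) method. For orientation only, the rank-1 weighted factorization underlying such a scheme can be sketched in plain Python; the function name, the 2x2 matrix values, and the uniform weight matrix below are illustrative assumptions, not the claimed implementation, which would operate on full-resolution view data and a weighted matrix derived from barycentric coordinates (claim 4).

```python
def wrri_rank1(V, W, iters=20):
    """Weighted rank-1 factorization V ~ outer(u, v), minimizing
    sum_ij W[i][j] * (V[i][j] - u[i] * v[j])**2 with u, v >= 0.

    Each pass solves the weighted least-squares optimum for u with v
    held fixed (and vice versa), clipped at zero: the rank-1 special
    case of weighted rank-one residue iteration.
    """
    m, n = len(V), len(V[0])
    u, v = [1.0] * m, [1.0] * n
    for _ in range(iters):
        for i in range(m):
            num = sum(W[i][j] * V[i][j] * v[j] for j in range(n))
            den = sum(W[i][j] * v[j] ** 2 for j in range(n))
            u[i] = max(0.0, num / den) if den > 0.0 else 0.0
        for j in range(n):
            num = sum(W[i][j] * V[i][j] * u[i] for i in range(m))
            den = sum(W[i][j] * u[i] ** 2 for i in range(m))
            v[j] = max(0.0, num / den) if den > 0.0 else 0.0
    return u, v


# Hypothetical 2x2 "view matrix" with a uniform weight matrix; in the
# claimed system u would set per-pixel display intensities and v would
# set per-cell modulator transparencies.
V = [[3.0, 4.0], [6.0, 8.0]]
W = [[1.0, 1.0], [1.0, 1.0]]
u, v = wrri_rank1(V, W)
```

Because this V is exactly rank 1, the sweep recovers it: each product u[i] * v[j] converges to V[i][j]. Substituting an anti-aliasing weight matrix (as in claim 4) for the uniform W changes only the weighting of the residual, not the structure of the iteration.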
PCT/KR2018/009072 2017-08-15 2018-08-09 System and method for displaying real or virtual scene WO2019035600A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP18845492.0A EP3615988B1 (en) 2017-08-15 2018-08-09 System and method for displaying real or virtual scene

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
RU2017129073 2017-08-15
RU2017129073A RU2665289C1 (en) 2017-08-15 2017-08-15 System of displaying of real or virtual scene and method of functioning thereof
KR1020180077317A KR102561264B1 (en) 2017-08-15 2018-07-03 System for displaying a real or virtual scene and method of operating thereof
KR10-2018-0077317 2018-07-03

Publications (1)

Publication Number Publication Date
WO2019035600A1 true WO2019035600A1 (en) 2019-02-21

Family

ID=65361180

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/009072 WO2019035600A1 (en) 2017-08-15 2018-08-09 System and method for displaying real or virtual scene

Country Status (2)

Country Link
US (1) US10585286B2 (en)
WO (1) WO2019035600A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110164366A * 2019-04-22 2019-08-23 Lenovo (Beijing) Co., Ltd. Information processing method and electronic device

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
US11300793B1 (en) * 2020-08-20 2022-04-12 Facebook Technologies, Llc. Systems and methods for color dithering
EP4202910A4 (en) * 2020-11-18 2024-03-27 Samsung Electronics Co Ltd Stacked display device and control method thereof

Citations (5)

Publication number Priority date Publication date Assignee Title
US20140168783A1 (en) * 2012-07-02 2014-06-19 Nvidia Corporation Near-eye microlens array displays
WO2014155288A2 (en) * 2013-03-25 2014-10-02 Ecole Polytechnique Federale De Lausanne (Epfl) Method and apparatus for head worn display with multiple exit pupils
JP2015504616A (en) * 2011-09-26 2015-02-12 Microsoft Corporation Video display modification based on sensor input for a see-through, near-eye display
KR20150105941A (en) * 2013-01-10 2015-09-18 Sony Corporation Image display device, image generating device, and transparent spatial light modulating device
US20150310789A1 (en) * 2014-03-18 2015-10-29 Nvidia Corporation Superresolution display using cascaded panels

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
US8570634B2 (en) 2007-10-11 2013-10-29 Nvidia Corporation Image processing of an incoming light field using a spatial light modulator
US8488246B2 (en) 2010-02-28 2013-07-16 Osterhout Group, Inc. See-through near-eye display glasses including a curved polarizing film in the image source, a partially reflective, partially transmitting optical element and an optically flat film
US10180572B2 (en) 2010-02-28 2019-01-15 Microsoft Technology Licensing, Llc AR glasses with event and user action control of external applications
US9146403B2 (en) 2010-12-01 2015-09-29 Massachusetts Institute Of Technology Content-adaptive parallax barriers for automultiscopic display
US8848006B2 (en) 2012-01-25 2014-09-30 Massachusetts Institute Of Technology Tensor displays
JP6232763B2 (en) 2013-06-12 2017-11-22 セイコーエプソン株式会社 Head-mounted display device and method for controlling head-mounted display device
US9551873B2 (en) 2014-05-30 2017-01-24 Sony Interactive Entertainment America Llc Head mounted device (HMD) system having interface with mobile computing device for rendering virtual reality content
US10261579B2 (en) 2014-09-01 2019-04-16 Samsung Electronics Co., Ltd. Head-mounted display apparatus
KR102230076B1 (en) 2014-09-01 2021-03-19 삼성전자 주식회사 Head-mounted display apparatus


Non-Patent Citations (1)

Title
See also references of EP3615988A4 *


Also Published As

Publication number Publication date
US20190056594A1 (en) 2019-02-21
US10585286B2 (en) 2020-03-10

Similar Documents

Publication Publication Date Title
WO2016190489A1 (en) Head mounted display and control method therefor
WO2013081429A1 (en) Image processing apparatus and method for subpixel rendering
WO2019147021A1 (en) Device for providing augmented reality service, and method of operating the same
WO2019035600A1 (en) System and method for displaying real or virtual scene
WO2015037796A1 (en) Display device and method of controlling the same
EP3615988A1 (en) System and method for displaying real or virtual scene
WO2017065517A1 (en) 3d display apparatus and control method thereof
WO2020101420A1 (en) Method and apparatus for measuring optical characteristics of augmented reality device
WO2016021925A1 (en) Multiview image display apparatus and control method thereof
WO2022108321A1 (en) Display device and control method thereof
WO2019035582A1 (en) Display apparatus and server, and control methods thereof
WO2018182159A1 (en) Smart glasses capable of processing virtual object
EP3225025A1 (en) Display device and method of controlling the same
WO2016163783A1 (en) Display device and method of controlling the same
WO2022108001A1 (en) Method for controlling electronic device by recognizing motion at edge of field of view (fov) of camera, and electronic device therefor
WO2021242008A1 (en) Electronic device and operating method thereof
WO2014007414A1 (en) Terminal for increasing visual comfort sensation of 3d object and control method thereof
WO2021230646A1 (en) System and method for depth map recovery
WO2017179912A1 (en) Apparatus and method for three-dimensional information augmented video see-through display, and rectification apparatus
WO2017146314A1 (en) Hologram output method using display panel and glassless multi-view lenticular sheet, and three-dimensional image generation method and output method using two display panels to which lenticular sheet is attached
WO2021133139A1 (en) Electronic apparatus and control method thereof
WO2013094786A1 (en) Electronic device having 3-dimensional display and method of operating thereof
WO2016175418A1 (en) Display device and control method therefor
WO2024072000A1 (en) Electronic device for displaying 3d image and operation method thereof
WO2013077648A1 (en) Display apparatus and display method thereof

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 18845492

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018845492

Country of ref document: EP

Effective date: 20191126

NENP Non-entry into the national phase

Ref country code: DE