KR101869912B1 - Foveated image displaying apparatus, foveated image displaying method by the same and storage media storing the same - Google Patents
- Publication number
- KR101869912B1 (application KR1020170179619A)
- Authority
- KR
- South Korea
- Prior art keywords
- image
- rendering
- center
- line
- resolution
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/06—Ray-tracing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/111—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
- H04N13/117—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/366—Image reproducers using viewer tracking
- H04N13/383—Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Graphics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
A foveated image display device includes an image display unit for displaying an image, a line-of-sight detecting unit for detecting a user's line of sight toward the image, and a foveated image generating unit for variably generating a foveated image based on the line-of-sight position in the image.
Description
The present invention relates to three-dimensional image technology, and more particularly, to a foveated image display device capable of efficiently generating a three-dimensional image in consideration of human cognitive factors, a foveated image display method performed thereby, and a recording medium storing the method.
Generally, a ray tracing technique generates a plurality of rays for each pixel and tracks their paths for optical calculations such as reflection, refraction and transmission in the process of generating a three-dimensional image in real time. This calculation process can realize high image quality, but it may involve a large amount of ray tracing, resulting in high computational cost and lower image processing speed.
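The growth in per-pixel work that this passage alludes to can be made concrete with a small back-of-the-envelope sketch; the sample count and branching factor below are illustrative assumptions, not figures from the patent:

```python
def rays_per_pixel(depth, samples=4, branching=2):
    """Worst-case number of rays traced for one pixel when every hit
    spawns `branching` secondary rays (e.g. one reflection and one
    refraction ray) up to the given recursion depth."""
    # Geometric series: samples * (1 + b + b^2 + ... + b^depth)
    return samples * sum(branching ** d for d in range(depth + 1))

# Deepening the recursion quickly multiplies the per-pixel work:
print(rays_per_pixel(depth=2))  # 4 * (1 + 2 + 4) = 28
print(rays_per_pixel(depth=4))  # 4 * 31 = 124
```

This exponential worst case is why restricting full-quality ray tracing to a small gaze-centered region saves so much work.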
Korean Patent Laid-Open No. 10-2017-0124091 (Nov. 9, 2017) relates to graphics processing systems and describes a method of operating a tile-based graphics processor, comprising: preparing, for a scene to be rendered at a first resolution, lists of graphics geometry to be processed for each sub-region of a set of plural images to be rendered; and rendering each render tile of each image using the geometry lists prepared for those image sub-regions.
Korean Patent Laid-Open Publication No. 10-2016-0130258 (Mar. 23, 2015) relates to a computer graphics system including a graphics processing unit (GPU) having a pixel shader and a texture unit. The pixel shader is configured to receive or generate one or more sets of texture coordinates for each pixel sample location; the pixel shader and texture unit compute texture-space gradient values for one or more primitives and apply per-unit gradient scale factors configured to modify the gradient values so as to switch smoothly between regions of the display device having different pixel resolutions.
An embodiment of the present invention provides a foveated image display apparatus capable of efficiently generating a three-dimensional image in consideration of human cognitive factors, a foveated image display method performed thereby, and a recording medium storing the method.
An embodiment of the present invention provides a foveated image display device capable of reducing the rendering penalty, in accordance with human cognitive factors, by tracking the user's gaze in real time and rendering the center region and the non-center region at different image qualities, a foveated image display method performed thereby, and a recording medium storing the method.
An embodiment of the present invention dynamically determines a rendering area according to the user's line of sight and an object positioned in that line of sight, so that a foveated image can be variably generated according to the situation in the image; it provides a foveated image display device, a foveated image display method performed thereby, and a recording medium storing the method.
An embodiment of the present invention provides a foveated image display device capable of adjusting the rendering area to reflect the user's visual acuity, so as to provide a foveated image optimized for the user's characteristics, a foveated image display method performed thereby, and a recording medium storing the method.
Among the embodiments, the foveated image display device includes an image display unit for displaying an image, a line-of-sight detecting unit for detecting the user's line of sight toward the image, and a foveated image generating unit for variably generating a foveated image based on the line-of-sight position in the image.
The foveated image generating unit may set the line-of-sight position in the image as a center point and set a center area generated based on the center point.
The foveated image generating unit may render the center area at a first resolution through ray tracing and render the remaining area at a second resolution to generate the foveated image.
The foveated image generating unit may render a first portion of the remaining area as partial blocks having the second resolution and interpolate and render a second portion of the remaining area from a plurality of adjacent partial blocks.
The foveated image generating unit may perform blending on the boundary between the center area and the remaining area composed of the first and second portions.
The foveated image generating unit may determine the size of the center area on the basis of the importance of the object at the center point.
The line-of-sight detecting unit may measure the user's visual acuity before displaying the image.
The foveated image generating unit may determine the size of the center area or the second resolution based on the measured visual acuity of the user.
The foveated image generating unit may track the change of the line-of-sight position in the image to vary the center point.
The foveated image generating unit may perform ray tracing in units of pixels based on previously stored geometry data to generate the foveated image.
The foveated image display device may be implemented as an HMD (Head Mounted Display).
In embodiments, a foveated image display method is performed by a foveated image display device and comprises the steps of: (a) displaying an image; (b) detecting a user's gaze viewing the image; and (c) variably generating a foveated image based on the line-of-sight position in the image.
The step (c) may include setting the line-of-sight position in the image as a center point and setting a center area generated based on the center point.
The step (c) may include rendering the center area at a first resolution through ray tracing and rendering the remaining area at a second resolution to generate the foveated image.
The step (c) may further include rendering a first portion of the remaining area as partial blocks having the second resolution and interpolating and rendering a second portion of the remaining area from a plurality of adjacent partial blocks.
Among the embodiments, a recording medium on which a computer program for the foveated image display method is recorded implements (a) a function of displaying an image, (b) a function of detecting a user's gaze viewing the image, and (c) a function of variably generating a foveated image based on the line-of-sight position in the image.
The foveated image display apparatus according to an embodiment of the present invention, the foveated image display method performed by the apparatus, and the recording medium storing the method can efficiently generate a three-dimensional image in consideration of human cognitive factors.
The foveated image display device according to an embodiment of the present invention, the foveated image display method performed by it, and the recording medium storing the method track the user's gaze in real time and render the center and non-center regions at different image qualities, making it possible to reduce the rendering penalty in accordance with human cognitive factors.
The foveated image display device according to an embodiment of the present invention, the foveated image display method performed by it, and the recording medium storing the method dynamically determine the rendering area according to the user's line of sight and the object positioned in it, making it possible to variably generate a foveated image according to the situation in the image.
The foveated image display device, the foveated image display method, and the recording medium storing the method according to an embodiment of the present invention can adjust the rendering area by reflecting the user's visual acuity, so that a foveated image optimized for the user's characteristics can be provided.
FIG. 1 is a block diagram illustrating a foveated image display device according to an embodiment of the present invention.
FIG. 2 is a view for explaining an exemplary procedure of setting a center area by the foveated rendering module of FIG. 1.
FIG. 3 is a view for explaining in more detail an embodiment of the interpolated rendering performed by the foveated rendering module of FIG. 1.
FIG. 4 is a diagram illustrating a ray tracing process performed by the ray tracing module shown in FIG. 1.
FIG. 5 is a view for explaining an acceleration structure and geometry data used in the ray tracing process.
FIG. 6 is a detailed block diagram of the foveated rendering module shown in FIG. 1.
FIG. 7 is a flowchart illustrating a process of generating a foveated image by the foveated image display device of FIG. 1.
The description of the present invention is merely an example for structural or functional explanation, and the scope of the present invention should not be construed as being limited by the embodiments described in the text. That is, the embodiments may be variously modified and may take various forms, and the scope of the present invention should be understood to include equivalents capable of realizing the technical idea. Also, since a specific embodiment need not include all of the stated objects or effects, or only such effects, the scope of the present invention should not be construed as being limited by them.
Meanwhile, the meaning of the terms described in the present application should be understood as follows.
The terms "first "," second ", and the like are intended to distinguish one element from another, and the scope of the right should not be limited by these terms. For example, the first component may be referred to as a second component, and similarly, the second component may also be referred to as a first component.
It is to be understood that when an element is referred to as being "connected" to another element, it may be directly connected to the other element, but there may be other elements in between. On the other hand, when an element is referred to as being "directly connected" to another element, it should be understood that there are no other elements in between. On the other hand, other expressions that describe the relationship between components, such as "between" and "between" or "neighboring to" and "directly adjacent to" should be interpreted as well.
It is to be understood that the singular " include " or "have" are to be construed as including the stated feature, number, step, operation, It is to be understood that the combination is intended to specify that it does not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or combinations thereof.
In each step, the identification code (e.g., a, b, c, etc.) is used for convenience of explanation, the identification code does not describe the order of each step, Unless otherwise stated, it may occur differently from the stated order. That is, each step may occur in the same order as described, may be performed substantially concurrently, or may be performed in reverse order.
The present invention can be embodied as computer-readable code on a computer-readable recording medium, and the computer-readable recording medium includes all kinds of recording devices that store data readable by a computer system. Examples of the computer-readable recording medium include ROM, RAM, CD-ROM, magnetic tape, floppy disks, optical data storage, and the like. The computer-readable recording medium may also be distributed over network-connected computer systems so that the computer-readable code is stored and executed in a distributed manner.
All terms used herein, unless otherwise defined, have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Commonly used terms as defined in dictionaries should be interpreted as consistent with their meaning in the context of the related art, and cannot be interpreted as having an ideal or overly formal meaning unless explicitly so defined in the present application.
FIG. 1 is a block diagram illustrating a foveated image display device according to an embodiment of the present invention.
Referring to FIG. 1, the foveated image display device 100 includes an image display unit 110, a line-of-sight detecting unit, a foveated image generating unit 130 and a buffer unit 140.
The image display unit 110 displays an image to the user.
The line-of-sight detecting unit detects the user's line of sight toward the displayed image.
In one embodiment, the line-of-sight detecting unit may measure the user's visual acuity before the image is displayed.
The foveated image generating unit 130 variably generates a foveated image based on the line-of-sight position in the image.
The foveated image generating unit 130 includes a foveated rendering module 134 and a ray tracing module.
The line-of-sight detecting unit provides the detected gaze position to the foveated image generating unit 130.
In one embodiment, the line-of-sight detecting unit may track the gaze position in real time while the image is displayed.
The foveated rendering module 134 may set the line-of-sight position in the image as the center point 210 and set the center area 220 generated based on the center point 210.
FIG. 2 is a view for explaining an exemplary procedure of setting a center area by the foveated rendering module of FIG. 1.
Referring to FIG. 2, the foveated rendering module 134 may set a center point 210 at the detected line-of-sight position within the rendering region 200.
The foveated rendering module 134 may generate a center area 220 around the center point 210.
The foveated rendering module 134 may treat the region outside the center area 220 as the non-center area 230.
The foveated rendering module 134 may set the first resolution for the center area 220 and the second, lower resolution for the non-center area 230.
In one embodiment, the foveated rendering module 134 divides at least one of the remaining regions other than the center area 220 into partial blocks and renders only some of them directly.
In one embodiment, the foveated rendering module 134 may calculate a center area index and a sampling adjustment index based on Equation 1 below, set the center area 220 accordingly, and adjust the sampling rate of the non-center area 230.
[Equation 1]
Here, m denotes the area difference index, and Δd_X and Δd_Y denote the differences in the x and y coordinates, respectively, between the coordinates of the point concerned and the position coordinates of the center point 210.
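As an illustration of the gaze-centered partitioning described above, the sketch below builds a boolean foveation mask from a gaze point. The circular shape, the single radius parameter, and all names are assumptions made for illustration, since Equation 1 itself is not reproduced in this text:

```python
def foveation_mask(width, height, gaze_x, gaze_y, radius):
    """Boolean mask over the rendering region: True inside the circular
    center area around the gaze point, False in the remaining area.
    The circular shape and single radius are illustrative assumptions;
    the text only requires a center area derived from the center point."""
    return [[(x - gaze_x) ** 2 + (y - gaze_y) ** 2 <= radius ** 2
             for x in range(width)]
            for y in range(height)]

mask = foveation_mask(640, 480, gaze_x=320, gaze_y=240, radius=100)
# Pixels where mask[y][x] is True would be ray traced at the first
# (full) resolution; the rest at the second (reduced) resolution.
```

A real implementation would recompute this mask each time the tracked gaze position changes.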
The foveated rendering module 134 may request the ray tracing module to render the image according to these settings.
In one embodiment, the foveated rendering module 134 may request the ray tracing module to render the center area 220 at the first resolution through ray tracing.
In another embodiment, the foveated rendering module 134 may request rendering in which ray tracing is performed on a pixel-by-pixel basis according to the sampling rate set for the non-center area 230.
In another embodiment, the foveated rendering module 134 normally performs the predetermined full ray tracing process, through the ray tracing module, only for the center area 220.
The foveated rendering module 134 may divide the image to be rendered into a plurality of blocks based on the center area 220.
The foveated rendering module 134 may store the rendered blocks in the buffer unit 140.
FIG. 3 is a view for explaining in more detail an embodiment of the interpolated rendering performed by the foveated rendering module of FIG. 1.
Referring to FIG. 3(a), the foveated rendering module 134 renders the first portion of the remaining region as partial blocks having the second resolution.
In FIGS. 3(b) to 3(c), the foveated rendering module 134 may interpolate and render the second portion of the remaining region from a plurality of adjacent partial blocks.
The foveated rendering module 134 may perform blending on the boundary between the center area 220 and the remaining area composed of the first and second portions.
The foveated rendering module 134 may store the blocks generated through the above steps in the buffer unit 140.
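The partial-block interpolation and boundary blending steps above can be sketched as follows. The per-pixel averaging of two neighbouring blocks and the linear alpha ramp are simplified stand-ins for whatever interpolation and blending functions an actual implementation would use:

```python
def interpolate_block(left, right):
    """Fill a skipped peripheral block as the per-pixel average of two
    rendered neighbour blocks -- a simplified stand-in for interpolating
    the second portion from a plurality of adjacent partial blocks."""
    return [[(a + b) / 2 for a, b in zip(row_l, row_r)]
            for row_l, row_r in zip(left, right)]

def blend(center_value, periphery_value, alpha):
    """Linear blend across the center/periphery boundary band; alpha
    runs from 1.0 at the inner edge of the band to 0.0 at the outer."""
    return alpha * center_value + (1 - alpha) * periphery_value

dark = [[0.0, 0.0], [0.0, 0.0]]    # rendered partial block
bright = [[1.0, 1.0], [1.0, 1.0]]  # rendered partial block
print(interpolate_block(dark, bright))  # [[0.5, 0.5], [0.5, 0.5]]
print(blend(1.0, 0.0, alpha=0.25))      # 0.25
```

Blending over a narrow band hides the otherwise visible seam between the full-resolution center and the coarser periphery.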
In one embodiment, the foveated rendering module 134 may determine the size of the center area 220 based on the importance of the object at the center point 210.
In another embodiment, the foveated rendering module 134 may determine the size of the center area 220 or the second resolution based on the measured visual acuity of the user.
In one embodiment, the foveated rendering module 134 may vary the center point 210 by tracking the change of the line-of-sight position in the image.
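A minimal sketch of acuity-driven sizing follows; the proportional mapping and the clamp range are illustrative assumptions, since the text only states that the size of the center area (or the second resolution) depends on the measured acuity:

```python
def center_radius(base_radius, acuity, reference_acuity=1.0):
    """Scale the foveal radius with the measured visual acuity.
    The proportional mapping and the 0.5x-1.5x clamp are illustrative
    assumptions, not a mapping taken from the patent."""
    scale = max(0.5, min(1.5, acuity / reference_acuity))
    return int(base_radius * scale)

print(center_radius(100, acuity=0.8))  # 80
print(center_radius(100, acuity=2.0))  # clamped: 150
```

Whether the radius should grow or shrink with acuity is itself a design choice; this sketch only shows where a per-user measurement would plug into the pipeline.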
The buffer unit 140 may store the blocks rendered by the foveated rendering module 134.
The image display unit 110 may display the foveated image assembled from the stored blocks.
FIG. 4 is a view for explaining a ray tracing process performed by the ray tracing module shown in FIG. 1, and FIG. 5 is a view for explaining an acceleration structure and geometry data used in the ray tracing process.
Referring to FIG. 4, the ray tracing module generates rays for the pixels to be rendered and traces their paths, including reflection, refraction and transmission, to compute the color of each pixel.
In FIG. 5, the ray tracing module traverses an acceleration structure built over the scene and reads the geometry data of the intersected primitives.
The ray tracing module may use previously stored geometry data for the intersection tests.
In one embodiment, the ray tracing module may perform ray tracing only for the pixels requested by the foveated rendering module 134.
In another embodiment, the ray tracing module may adjust the number of rays per pixel according to the sampling rate set by the foveated rendering module 134.
The ray tracing module may store the computed pixel values in the buffer unit 140.
In one embodiment, the foveated image generating unit 130 may perform ray tracing in units of pixels based on previously stored geometry data to generate the foveated image.
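Acceleration structures of the kind FIG. 5 refers to typically rely on a ray-versus-bounding-box test to skip whole groups of primitives. The slab test below is a standard formulation, shown here as background rather than as the patent's own method:

```python
def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    """Slab test: True if the ray (origin plus precomputed reciprocal
    direction components) intersects the axis-aligned bounding box in
    front of the origin.  A miss lets a traversal skip the whole
    subtree of geometry below that box."""
    tmin, tmax = float("-inf"), float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1, t2 = (lo - o) * inv, (hi - o) * inv
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmax >= max(tmin, 0.0)

# Ray along +x from the origin: hits a box directly ahead, misses one
# offset far off the ray's path.  (1e30 stands in for 1/0 direction.)
print(ray_hits_aabb((0, 0, 0), (1.0, 1e30, 1e30), (1, -1, -1), (2, 1, 1)))  # True
print(ray_hits_aabb((0, 0, 0), (1.0, 1e30, 1e30), (1, 5, -1), (2, 6, 1)))   # False
```

Precomputing the reciprocal direction once per ray keeps the per-node cost to a few multiplies and comparisons, which is what makes per-pixel ray tracing of the foveal region feasible in real time.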
FIG. 6 is a detailed block diagram of the foveated rendering module shown in FIG. 1.
The foveated rendering module 134 includes a determining region unit for receiving a gaze position from the line-of-sight detecting unit and determining the center area 220 based on it.
The determining region unit includes a gaze position translating unit for converting the gaze position received from the line-of-sight detecting unit into coordinates in the image to be rendered.
FIG. 7 is a flowchart illustrating a process of generating a foveated image by the foveated image display device of FIG. 1.
Referring to FIG. 7, the image display unit 110 displays an image, and the line-of-sight detecting unit detects the user's line of sight toward the image.
The foveated image generating unit 130 sets the detected line-of-sight position as the center point 210 and variably generates a foveated image based on it.
The image display unit 110 displays the generated foveated image.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit and scope of the present invention as defined by the following claims.
100: foveated image display device
110: image display unit 120:
130: foveated image generating unit 140: buffer unit
150:
200: rendering region 210: center point
220: center area 230: non-center area
Claims (16)
A line-of-sight detecting unit for measuring a user's visual acuity before displaying the image and detecting the user's line of sight looking at the image; and
a foveated image generating unit that variably generates a foveated image based on the line-of-sight position by setting the line-of-sight position in the image as a center point and setting a center area generated based on the center point.
Wherein the center area is rendered at a first resolution through ray tracing and the remaining area is rendered at a second resolution to generate the foveated image.
Wherein a first portion of the remaining area is rendered as partial blocks having the second resolution and a second portion of the remaining area is interpolated and rendered from a plurality of adjacent partial blocks.
Wherein blending is performed on the boundary between the center area and the remaining area composed of the first and second portions.
Wherein the size of the center area is determined based on the importance of the object at the center point.
Wherein the size of the center area or the second resolution is determined based on the measured visual acuity of the user.
Wherein the center point is varied by tracking a change of the line-of-sight position in the image.
Wherein the foveated image is generated by performing ray tracing in units of pixels based on previously stored geometry data.
Wherein the display device is implemented as an HMD (Head Mounted Display).
(a) displaying an image;
(b) measuring a user's visual acuity before the display of the image and detecting a user's gaze viewing the image; And
(c) setting a line-of-sight position in the image as a center point, setting a center area generated based on the center point, and variably generating a foveated image based on the line-of-sight position.
Setting a line-of-sight position in the image as a center point and setting a center area generated based on the center point.
Rendering the center area at a first resolution through ray tracing and rendering the remaining area at a second resolution to generate the foveated image.
Rendering a first portion of the remaining area as partial blocks having the second resolution and interpolating and rendering a second portion of the remaining area from a plurality of adjacent partial blocks.
(b) measuring a user's visual acuity before the display of the image and detecting a user's gaze viewing the image; And
(c) setting a line-of-sight position in the image as a center point, setting a center area generated based on the center point, and variably generating a foveated image based on the line-of-sight position; a recording medium on which a computer program for the foveated image display method is recorded.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020170179619A KR101869912B1 (en) | 2017-12-26 | 2017-12-26 | Foveated image displaying apparatus, foveated image displaying method by the same and storage media storing the same |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020170179619A KR101869912B1 (en) | 2017-12-26 | 2017-12-26 | Foveated image displaying apparatus, foveated image displaying method by the same and storage media storing the same |
Publications (1)
Publication Number | Publication Date |
---|---|
KR101869912B1 true KR101869912B1 (en) | 2018-06-21 |
Family
ID=62806569
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020170179619A KR101869912B1 (en) | 2017-12-26 | 2017-12-26 | Foveated image displaying apparatus, foveated image displaying method by the same and storage media storing the same |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101869912B1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20220085203A (en) * | 2020-12-15 | 2022-06-22 | 세종대학교산학협력단 | Ray tracing method and apparatus based on attention for dynamic scenes |
KR20220085204A (en) * | 2020-12-15 | 2022-06-22 | 세종대학교산학협력단 | Attention-based ray tracing method and apparatus for foveated rendering |
WO2023106838A1 (en) * | 2021-12-10 | 2023-06-15 | 세종대학교산학협력단 | Ray tracing picture quality control method according to camera movement, picture quality control device for performing same, and recording medium storing same |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20160130258A (en) | 2014-04-05 | 2016-11-10 | 소니 인터랙티브 엔터테인먼트 아메리카 엘엘씨 | Gradient adjustment for texture mapping for multiple render targets with resolution that varies by screen location |
US20170236252A1 (en) * | 2016-02-12 | 2017-08-17 | Qualcomm Incorporated | Foveated video rendering |
US20170287446A1 (en) * | 2016-03-31 | 2017-10-05 | Sony Computer Entertainment Inc. | Real-time user adaptive foveated rendering |
KR20170124091A (en) | 2016-04-29 | 2017-11-09 | 에이알엠 리미티드 | Graphics processing systems |
- 2017-12-26: Application KR1020170179619A filed; granted as KR101869912B1 (active, IP Right Grant)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20160130258A (en) | 2014-04-05 | 2016-11-10 | 소니 인터랙티브 엔터테인먼트 아메리카 엘엘씨 | Gradient adjustment for texture mapping for multiple render targets with resolution that varies by screen location |
US20170236252A1 (en) * | 2016-02-12 | 2017-08-17 | Qualcomm Incorporated | Foveated video rendering |
US20170287446A1 (en) * | 2016-03-31 | 2017-10-05 | Sony Computer Entertainment Inc. | Real-time user adaptive foveated rendering |
KR20170124091A (en) | 2016-04-29 | 2017-11-09 | 에이알엠 리미티드 | Graphics processing systems |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20220085203A (en) * | 2020-12-15 | 2022-06-22 | 세종대학교산학협력단 | Ray tracing method and apparatus based on attention for dynamic scenes |
KR20220085204A (en) * | 2020-12-15 | 2022-06-22 | 세종대학교산학협력단 | Attention-based ray tracing method and apparatus for foveated rendering |
WO2022131532A1 (en) * | 2020-12-15 | 2022-06-23 | 세종대학교산학협력단 | Foveated rendering-related concentration level-based ray tracing method and device |
WO2022131531A1 (en) * | 2020-12-15 | 2022-06-23 | 세종대학교산학협력단 | Concentration-based ray tracing method and device for dynamic scene |
KR102537319B1 (en) * | 2020-12-15 | 2023-05-26 | 세종대학교산학협력단 | Ray tracing method and apparatus based on attention for dynamic scenes |
KR102539910B1 (en) * | 2020-12-15 | 2023-06-05 | 세종대학교산학협력단 | Attention-based ray tracing method and apparatus for foveated rendering |
WO2023106838A1 (en) * | 2021-12-10 | 2023-06-15 | 세종대학교산학협력단 | Ray tracing picture quality control method according to camera movement, picture quality control device for performing same, and recording medium storing same |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3179447B1 (en) | Foveated rendering | |
CN112513712B (en) | Mixed reality system with virtual content warping and method of generating virtual content using the same | |
AU2021290369B2 (en) | Mixed reality system with color virtual content warping and method of generating virtual content using same | |
CN110431599B (en) | Mixed reality system with virtual content warping and method for generating virtual content using the same | |
US11663689B2 (en) | Foveated rendering using eye motion | |
KR101869912B1 (en) | Foveated image displaying apparatus, foveated image displaying method by the same and storage media storing the same | |
WO2017169273A1 (en) | Information processing device, information processing method, and program | |
US10699383B2 (en) | Computational blur for varifocal displays | |
US11032530B1 (en) | Gradual fallback from full parallax correction to planar reprojection | |
WO2023183716A1 (en) | Systems and methods for dynamically rendering images of three-dimensional data with varying detail to emulate human vision | |
KR20150054650A (en) | Method for rendering image and Image outputting device thereof | |
US20210358084A1 (en) | Upsampling low temporal resolution depth maps | |
KR20220030016A (en) | Play device and operating method of thereof | |
KR20230087952A (en) | Ray tracing image quality control method according to camera movement, image quality control apparatus performing the same, and recording medium storing the same | |
WO2017169272A1 (en) | Information processing device, information processing method, and program | |
US20230306676A1 (en) | Image generation device and image generation method | |
JP2015515058A (en) | Method and corresponding apparatus for representing participating media in a scene | |
KR102539910B1 (en) | Attention-based ray tracing method and apparatus for foveated rendering | |
TW202338749A (en) | Method for determining two-eye gaze point and host | |
KR101649188B1 (en) | Method of measuring 3d effect perception and apparatus for measuring 3d effect perception | |
KR20240053443A (en) | A method of measuring a head up display ghost image and an apparatus of measuring a head up display ghost image | |
CN115661408A (en) | Generating and modifying hand representations in an artificial reality environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
E701 | Decision to grant or registration of patent right | ||
GRNT | Written decision to grant |