KR101869912B1 - Foveated image displaying apparatus, foveated image displaying method by the same and storage media storing the same - Google Patents


Info

Publication number
KR101869912B1
KR101869912B1 (application number KR1020170179619A)
Authority
KR
South Korea
Prior art keywords
image
rendering
center
line
resolution
Prior art date
Application number
KR1020170179619A
Other languages
Korean (ko)
Inventor
박우찬
황임재
Original Assignee
세종대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 세종대학교 산학협력단 filed Critical 세종대학교 산학협력단
Priority to KR1020170179619A
Application granted granted Critical
Publication of KR101869912B1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/06: Ray-tracing
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106: Processing image signals
    • H04N13/111: Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H04N13/117: Transformation of image signals corresponding to virtual viewpoints, the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
    • H04N13/30: Image reproducers
    • H04N13/366: Image reproducers using viewer tracking
    • H04N13/383: Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

A foveated image display apparatus includes an image display unit for displaying an image, a line-of-sight detecting unit for detecting the user's line of sight toward the image, and a foveated image generating unit for variably generating a foveated image based on the line-of-sight position in the image.


Description

BACKGROUND OF THE INVENTION 1. Field of the Invention [0001] The present invention relates to a foveated image display apparatus, a foveated image display method performed by the same, and a recording medium storing the same.

The present invention relates to three-dimensional image technology, and more particularly, to a foveated image display apparatus capable of efficiently generating a three-dimensional image in consideration of human cognitive factors, a foveated image display method performed by the apparatus, and a recording medium storing the same.

Generally, a ray tracing technique generates a plurality of rays for each pixel and tracks their paths for optical calculations such as reflection, refraction, and transmission when generating a three-dimensional image in real time. This technique can achieve high image quality through such calculations, but the large number of ray tracing operations it involves can demand high computational power and lower the image processing speed.

Korean Patent Laid-Open No. 10-2017-0124091 (Sep. 11, 2017) relates to a graphics processing system with a tile-based graphics processor. When a set of images is to be generated for a scene, the method prepares, for the scene rendered at a first resolution, lists of the graphics geometry to be processed for each sub-region of the images to be rendered, and then renders the render tiles of each image using those geometry lists.

Korean Patent Laid-Open Publication No. 10-2016-0130258 (Mar. 23, 2015) relates to a computer graphics system including a graphics processing unit (GPU) with a pixel shader and a texture unit. The pixel shader receives or generates one or more sets of texture coordinates for each pixel sample location, and the texture unit calculates texture-space gradient values for one or more primitives and applies gradient scale factors that modify the gradient values so as to switch smoothly between regions of the display device having different pixel resolutions.

Korean Patent Publication No. 10-2017-0124091
Korean Patent Publication No. 10-2016-0130258

An embodiment of the present invention provides a foveated image display apparatus capable of efficiently generating a three-dimensional image in consideration of human cognitive factors, a foveated image display method performed by the apparatus, and a recording medium storing the method.

An embodiment of the present invention provides a foveated image display apparatus that tracks the user's gaze in real time and renders the center region and the non-center region at different image qualities, thereby reducing the processing penalty in accordance with human cognitive factors, together with a foveated image display method performed by the apparatus and a recording medium storing the method.

In an embodiment of the present invention, a rendering area is dynamically determined according to the user's line of sight and the object positioned in that line of sight, so that a foveated image can be variably generated according to the situation in the image; the embodiment provides a foveated image display apparatus, a foveated image display method, and a recording medium storing the method.

An embodiment of the present invention provides a foveated image display apparatus capable of adjusting the rendering area to reflect the user's visual acuity, thereby providing a foveated image optimized for the user's characteristics, a foveated image display method performed by the apparatus, and a recording medium storing the method.

Among the embodiments, the foveated image display apparatus includes an image display unit for displaying an image, a line-of-sight detecting unit for detecting the user's line of sight toward the image, and a foveated image generating unit for variably generating a foveated image based on the line-of-sight position in the image.

The foveated image generating unit may set the line-of-sight position in the image as a center point and set a center area generated based on the center point.

The foveated image generating unit may render the center area at a first resolution through ray tracing and render the remaining area at a second resolution to generate the foveated image.

The foveated image generating unit may render a first portion of the remaining area as partial blocks having the second resolution and interpolate a second portion of the remaining area from a plurality of adjacent partial blocks.

The foveated image generating unit may perform blending on the boundary between the center area and the remaining area composed of the first and second portions.

The foveated image generating unit may determine the size of the center area based on the importance of the object at the center point.

The line-of-sight detecting unit may measure the user's visual acuity before the image is displayed.

The foveated image generating unit may determine the size of the center area or the second resolution based on the measured visual acuity.

The foveated image generating unit may track changes of the line-of-sight position in the image and vary the center point accordingly.

The foveated image generating unit may perform ray tracing on a pixel-by-pixel basis based on previously stored geometry data to generate the foveated image.

The foveated image display apparatus may be implemented as an HMD (Head Mounted Display).

Among the embodiments, a foveated image display method is performed in a foveated image display apparatus and comprises the steps of: (a) displaying an image; (b) detecting the user's line of sight toward the image; and (c) variably generating a foveated image based on the line-of-sight position in the image.

Step (c) may include setting the line-of-sight position in the image as a center point and setting a center area generated based on the center point.

Step (c) may include rendering the center area at a first resolution through ray tracing and rendering the remaining area at a second resolution to generate the foveated image.

Step (c) may further include rendering a first portion of the remaining area as partial blocks having the second resolution and interpolating a second portion of the remaining area from a plurality of adjacent partial blocks.

Among the embodiments, a recording medium storing a computer program for the foveated image display method implements (a) a function of displaying an image, (b) a function of detecting the user's line of sight toward the image, and (c) a function of variably generating a foveated image based on the line-of-sight position in the image.

The foveated image display apparatus according to an embodiment of the present invention, the foveated image display method performed by the apparatus, and the recording medium storing the method can efficiently generate a three-dimensional image in consideration of human cognitive factors.

The foveated image display apparatus according to an embodiment of the present invention, the foveated image display method performed by the apparatus, and the recording medium storing the method track the user's gaze in real time and render the center region and the non-center region at different image qualities, making it possible to reduce the processing penalty in accordance with human cognitive factors.

The foveated image display apparatus according to an embodiment of the present invention, the foveated image display method performed by the apparatus, and the recording medium storing the method dynamically determine the rendering area according to the user's line of sight and the object positioned in it, making it possible to variably generate a foveated image according to the situation in the image.

The foveated image display apparatus according to an embodiment of the present invention, the foveated image display method performed by the apparatus, and the recording medium storing the method can adjust the rendering area to reflect the user's visual acuity, so that a foveated image optimized for the user's characteristics can be provided.

FIG. 1 is a block diagram illustrating a foveated image display apparatus according to an embodiment of the present invention.
FIG. 2 is a view for explaining an exemplary procedure of setting a center area by the foveated rendering module of FIG. 1.
FIG. 3 is a view for explaining in more detail an embodiment of the interpolated rendering performed by the foveated rendering module of FIG. 1.
FIG. 4 is a view illustrating a ray tracing process performed by the ray tracing module shown in FIG. 1.
FIG. 5 is a view for explaining the acceleration structure and geometry data used in the ray tracing process.
FIG. 6 is a detailed block diagram of the foveated rendering module shown in FIG. 1.
FIG. 7 is a flowchart illustrating a process of generating a foveated image by the foveated image display apparatus of FIG. 1.

The description of the present invention is merely an example for structural or functional explanation, and the scope of the present invention should not be construed as limited to the embodiments described in the text. That is, since the embodiments may be modified in various ways and take various forms, the scope of the present invention should be understood to include equivalents capable of realizing the technical idea. Furthermore, the objects and effects presented herein do not mean that a specific embodiment must include all of them or only such effects, and thus the scope of the present invention should not be understood as limited thereby.

Meanwhile, the meaning of the terms described in the present application should be understood as follows.

Terms such as "first" and "second" are intended to distinguish one element from another, and the scope of rights should not be limited by these terms. For example, a first component may be referred to as a second component, and similarly, a second component may also be referred to as a first component.

When an element is referred to as being "connected" to another element, it may be directly connected to the other element, but intervening elements may also be present. In contrast, when an element is referred to as being "directly connected" to another element, it should be understood that no intervening elements are present. Other expressions describing the relationship between components, such as "between" and "directly between" or "adjacent to" and "directly adjacent to", should be interpreted in the same way.

Singular expressions should be understood to include plural expressions unless the context clearly indicates otherwise. Terms such as "include" or "have" specify the presence of stated features, numbers, steps, operations, elements, components, or combinations thereof, and should be understood not to preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations thereof.

In each step, identification codes (e.g., a, b, c, etc.) are used for convenience of explanation; they do not describe the order of the steps, and the steps may occur in an order different from the stated order unless a specific order is clearly stated in context. That is, the steps may occur in the stated order, may be performed substantially concurrently, or may be performed in reverse order.

The present invention can be embodied as computer-readable code on a computer-readable recording medium, and the computer-readable recording medium includes all kinds of recording devices that store data readable by a computer system. Examples of the computer-readable recording medium include ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage. The computer-readable recording medium may also be distributed over network-connected computer systems so that the computer-readable code is stored and executed in a distributed manner.

Unless otherwise defined, all terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Commonly used, predefined terms should be interpreted consistently with their meaning in the context of the related art and cannot be interpreted as having an ideal or overly formal meaning unless explicitly so defined in the present application.

FIG. 1 is a block diagram illustrating a foveated image display apparatus according to an embodiment of the present invention.

Referring to FIG. 1, the foveated image display apparatus 100 may include an image display unit 110, a line-of-sight detection unit 120, a foveated image generation unit 130, a buffer unit 140, and a memory unit 150.

The image display unit 110 displays an image. In one embodiment, the image display unit 110 may be implemented as a display panel capable of displaying a three-dimensional image received from the foveated image generation unit 130.

The line-of-sight detection unit 120 detects the user's line of sight toward the image. In one embodiment, the line-of-sight detection unit 120 may track the user's eyes within a predetermined external detection area through a camera module and generate a gaze image in real time, so that the movement of the eyes of the user viewing the display can be detected in real time as a gaze image. In one embodiment, the line-of-sight detection unit 120 may be implemented as an IR (Infra-Red) camera capable of recognizing the shape and movement of the user's eyes.

In one embodiment, the line-of-sight detection unit 120 may measure the user's visual acuity before the image is displayed. For example, the line-of-sight detection unit 120 may be implemented as a camera module having a vision-measuring lens; when the foveated image display apparatus 100 starts operating, it may sense the pupil size from the user's eye captured in a predetermined measurement region and measure the user's visual acuity through a pupil inspection. As another example, the line-of-sight detection unit 120 may test visual acuity by executing a stored visual acuity test procedure, outputting a test image according to the procedure to the image display unit 110, and estimating the user's visual acuity from the responses. The operations associated with the user's visual acuity are described in more detail below.

The foveated image generation unit 130 variably generates a foveated image based on the line-of-sight position in the image. More specifically, the foveated image generation unit 130 calculates the line-of-sight position in the image corresponding to the user's line of sight detected by the line-of-sight detection unit 120, variably sets the center area according to the calculated line-of-sight position, and can generate a foveated image in which the center area is rendered at a reference resolution and the other area is rendered at a resolution lower than the reference resolution.

The foveated image generation unit 130 may include a gaze position tracking module 132, a foveated rendering module 134, and a ray tracing module 136.

The gaze position tracking module 132 can calculate the gaze position in the image corresponding to the detected user's gaze. In one embodiment, the gaze position tracking module 132 analyzes where the user's gaze falls in the displayed image based on the gaze image transmitted from the line-of-sight detection unit 120, calculates the corresponding position coordinates, and can transmit the calculated position coordinates to the foveated rendering module 134 as the gaze position.

In one embodiment, the gaze position tracking module 132 may track changes of the gaze position in the image to vary the center point 210. The gaze position tracking module 132 may calculate the changed gaze position immediately after a change in the user's gaze is received through the line-of-sight detection unit 120, provide it to the foveated rendering module 134, and request that the center point 210 be changed based on the changed gaze position. The center point 210 is described in more detail later.

The foveated rendering module 134 may set the gaze position in the image as the center point 210 and set the center area 220 generated based on the center point 210. This will be described in more detail with reference to FIG. 2.

FIG. 2 is a view for explaining an exemplary procedure of setting a center area by the foveated rendering module of FIG. 1.

Referring to FIG. 2, the foveated rendering module 134 may set a rendering region 200 that includes a center point 210, a center area 220, and a non-center area 230 based on the line of sight in the image.

The foveated rendering module 134 may generate a rendering region 200 including resolution information for each area of the image to be generated, and may set the position coordinates of the user's gaze received from the gaze position tracking module 132 as the center point 210 of the rendering region 200. In FIG. 2, for convenience, the center point 210 is shown in the middle of the image, but it is not limited thereto and may be at various positions within the image.

The foveated rendering module 134 may create a center area 220 based on the center point 210 and set it in the rendering region 200. In one embodiment, the foveated rendering module 134 may set as the center area 220 a position-coordinate area that includes the center point 210 and lies within a certain reference distance from the position coordinates of the center point 210; the area may be a circle, an ellipse, a polygon, or a combination of such shapes. In another embodiment, the foveated rendering module 134 may set as the center area 220 the position-coordinate area calculated by applying the position coordinates of the center point 210 to a pre-stored prediction algorithm for the user's visual perception range; for example, as the user's gaze approaches the center of the image, a position-coordinate area closer to a circle and having a larger area can be calculated as the center area 220.
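The mapping above, growing a center area around the gaze point within a reference distance, can be sketched as follows. This is a minimal illustration only: the circular shape, the function name, and the pixel-grid representation are assumptions, not the patent's implementation.

```python
import math

def center_region_mask(width, height, gaze_x, gaze_y, radius):
    """Return a per-pixel boolean mask that is True inside a circular
    center area grown around the gaze point (the center point 210)."""
    mask = [[False] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # A pixel belongs to the center area if it lies within the
            # reference distance (radius) from the gaze coordinates.
            if math.hypot(x - gaze_x, y - gaze_y) <= radius:
                mask[y][x] = True
    return mask

# Example: an 8x8 image with the gaze in the middle and a radius of 2.
mask = center_region_mask(8, 8, 4, 4, 2)
```

An elliptical or polygonal center area, as the text also allows, would only change the membership test inside the double loop.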

The foveated rendering module 134 may set the first resolution for the center area 220 in the rendering region 200 and the second resolution for the remaining area. Here, the first resolution means a resolution finer than the second resolution and, in one embodiment, can be expressed as a sampling rate.

In one embodiment, the foveated rendering module 134 may divide the remaining area other than the center area 220 into at least one non-center area 230 and, for each non-center area 230, set a resolution whose value decreases as the distance from the center area 220 increases. For example, the foveated rendering module 134 may set the part of the remaining area that is close to the center area 220 and within a first distance as a first non-center area 230a with the second resolution, the part within a second distance as a second non-center area 230b with a third resolution lower than the second resolution, and the remainder as a third non-center area 230c with a fourth resolution lower than the third resolution; the second through fourth resolutions may be obtained by repeatedly applying an attenuation factor, determined from the distance to the center point 210, to the first resolution.

In one embodiment, the foveated rendering module 134 may calculate a center-area size index and a sampling adjustment index based on Equation 1 below, set the center area 220 based on the center-area size index, and calculate the sampling rate of the second resolution set for the non-center area 230 based on the sampling adjustment index. Accordingly, the foveated rendering module 134 can set the center area 220 to an area smaller than the reference area as the user's gaze moves farther from the center of the image, and set the second resolution with a correspondingly lower sampling rate.

[Equation 1]

(Equation 1 appears as an image in the original publication and is not reproduced here.)

Here, m denotes the area difference index; Δd_X and Δd_Y denote the differences between the x and y coordinates of the center of the rendering region 200 and the position coordinates of the center point 210; and d_0 denotes the maximum straight-line distance at which the center point 210 can be located from the center coordinates. Also, a denotes the center-area size index, a_0 denotes a reference area that can be set by the designer or the user, s denotes the sampling adjustment index, and s_0 denotes the sampling rate of the first resolution.

The foveated rendering module 134 may request the ray tracing module 136 to render each area according to the per-area resolution information included in the rendering region 200.

In one embodiment, the foveated rendering module 134 may request the ray tracing module 136 to render the center area 220 at the first resolution through ray tracing and render the remaining area at the second resolution. For example, the foveated rendering module 134 may request ray-traced rendering at the set resolution for the center area 220, and, for each non-center area 230, apply a rendering technique matched to the resolution set for that area and request rendering at the corresponding resolution.

In another embodiment, the foveated rendering module 134 may request that ray tracing be performed and rendered on a pixel-by-pixel basis according to the sampling rates set for the center area 220 and the non-center area 230, respectively.

In another embodiment, the foveated rendering module 134 may request the ray tracing module 136 to render the center area 220 at the first resolution by performing the full predetermined ray tracing process, and to render the remaining area at the second resolution by omitting some steps of the tracing process and performing only the main steps.

The foveated rendering module 134 may divide the image to be rendered into a plurality of blocks based on the rendering region 200 and request rendering from the ray tracing module 136 on a block-by-block basis. In one embodiment, the foveated rendering module 134 may divide the blocks according to the center area 220 and the remaining area, or divide them according to a certain block size. In one embodiment, the foveated rendering module 134 determines, for each block, which area of the rendering region 200 the block corresponds to, based on the position of the block in the image and the areas calculated in the previous step. For example, if the block to be rendered corresponds to the center area 220, it may request the ray tracing module 136 to render using the relatively high sampling count associated with that area; otherwise, it may request rendering using a sampling count set to a relatively low value.
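The block-by-block decision just described, classify each tile by the area it falls in, then pick a sampling count, can be sketched as follows. The tile-center test, the fixed block size, and the two sampling counts are assumptions for illustration.

```python
import math

def block_sampling_plan(width, height, block, gaze, center_r,
                        high_samples=4, low_samples=1):
    """Divide the image into block x block tiles and choose a sampling
    count per tile: tiles whose center falls inside the center area get
    the high count, all other tiles the low count."""
    plan = {}
    for by in range(0, height, block):
        for bx in range(0, width, block):
            # Classify the tile by where its center lies in the
            # rendering region (center area vs. remaining area).
            cx, cy = bx + block / 2, by + block / 2
            in_center = math.hypot(cx - gaze[0], cy - gaze[1]) <= center_r
            plan[(bx, by)] = high_samples if in_center else low_samples
    return plan

# 64x64 image, 16-pixel blocks, gaze at the image center.
plan = block_sampling_plan(64, 64, 16, gaze=(32, 32), center_r=16)
```

Each entry of `plan` would then be forwarded to the ray tracing module as the per-block rendering request.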

The foveated image generation unit 130 may render the first portion 310 of the remaining area 230 other than the center area 220 as partial blocks having the second resolution, and interpolate the second portion 320 of the remaining area 230 from a plurality of adjacent partial blocks. This will be described in more detail with reference to FIG. 3.

FIG. 3 is a view for explaining in more detail an embodiment of the interpolated rendering performed by the foveated rendering module of FIG. 1.

Referring to FIG. 3(a), the foveated rendering module 134 may, as described above, request the ray tracing module 136 to render the remaining area 230 other than the center area 220 as partial blocks having the second resolution, and when the rendered pixels produced in response to the request are received from the ray tracing module 136, process them in block units. In this process, unfilled cells may occur depending on the difference between the first and second resolutions (for example, the difference in sampling count); thus the first portion 310 may comprise rendered pixels and the second portion 320 may comprise non-rendered pixels. For example, the foveated rendering module 134 may request the ray tracing module 136 to render the pixels of the first portion 310, and when the rendered pixels are generated in the first portion 310, the non-rendered pixels of the second portion 320 may remain blank.

Referring to FIGS. 3(b) and 3(c), the foveated rendering module 134 may interpolate the second portion 320 of the block-processed remaining area 230 from the plurality of adjacent partial blocks. In one embodiment, the foveated rendering module 134 may fill each non-rendered pixel of the second portion 320 through horizontal or vertical interpolation over the adjacent first portion 310. For example, in FIG. 3(b), the foveated rendering module 134 interpolates each non-rendered pixel of the second portion 320 using the pixel data values of the adjacent first portion 310 in the horizontal direction, and in the other example of FIG. 3(c), using the pixel data values of the adjacent first portion 310 in the vertical direction. In one embodiment, the foveated rendering module 134 may perform horizontal interpolation first and then vertical interpolation on the remaining non-rendered pixels to complete the pixels of the block-processed remaining area 230.
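The horizontal-then-vertical fill described above can be sketched on a small block where `None` marks a non-rendered pixel of the second portion. The neighbour-averaging rule is an assumed simplification of whatever interpolation kernel the patent's implementation uses.

```python
def interpolate_block(block):
    """Fill unrendered pixels (None) in a partial block by horizontal
    interpolation first, then vertical interpolation for what remains.
    Rendered pixels (the first portion) are left untouched."""
    h, w = len(block), len(block[0])
    out = [row[:] for row in block]
    # Horizontal pass: average the rendered left/right neighbours.
    for y in range(h):
        for x in range(w):
            if out[y][x] is None:
                left = out[y][x - 1] if x > 0 and block[y][x - 1] is not None else None
                right = block[y][x + 1] if x + 1 < w and block[y][x + 1] is not None else None
                vals = [v for v in (left, right) if v is not None]
                if vals:
                    out[y][x] = sum(vals) / len(vals)
    # Vertical pass for pixels still missing after the horizontal pass.
    for y in range(h):
        for x in range(w):
            if out[y][x] is None:
                up = out[y - 1][x] if y > 0 and out[y - 1][x] is not None else None
                down = out[y + 1][x] if y + 1 < h and out[y + 1][x] is not None else None
                vals = [v for v in (up, down) if v is not None]
                if vals:
                    out[y][x] = sum(vals) / len(vals)
    return out

# Corners and edge midpoints were rendered; the rest is interpolated.
filled = interpolate_block([[10, None, 20],
                            [None, None, None],
                            [30, None, 40]])
```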

The foveated rendering module 134 may perform blending on the boundary between the center area 220 and the remaining area 230 composed of the first and second portions 310 and 320. In one embodiment, the foveated rendering module 134 may perform filtering on the boundaries between the blocks processed on a block-by-block basis to reduce the difference in pixel data values across the boundaries; this can mitigate the Mach-band phenomenon, in which the boundary between regions is clearly perceived, so that the regions blend naturally.
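A minimal sketch of such boundary filtering, here a 3-tap horizontal average applied only near a vertical region boundary, is shown below. The filter width and tap count are assumptions; the point is that smoothing the step in pixel values reduces the perceived Mach band.

```python
def blend_boundary(image, boundary_x, width=2):
    """Soften a vertical boundary between two regions by replacing each
    pixel within `width` columns of the boundary with a 3-tap horizontal
    average, reducing the visible step at the region edge."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(max(1, boundary_x - width), min(w - 1, boundary_x + width)):
            out[y][x] = (image[y][x - 1] + image[y][x] + image[y][x + 1]) / 3
    return out

# A hard 100 -> 40 step between a high-quality and a low-quality region.
img = [[100, 100, 100, 40, 40, 40] for _ in range(2)]
smoothed = blend_boundary(img, boundary_x=3, width=1)
```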

The foveated rendering module 134 may store the blocks generated through the above steps in the memory unit 150, and when the work is completed for all blocks of the image to be rendered, generate the foveated image from them.

In one embodiment, the foveated rendering module 134 may determine the size of the center area 220 based on the importance of the object at the center point 210. For example, the foveated rendering module 134 may detect the object information displayed at the corresponding position based on the position coordinates of the center point 210, calculate the importance of the object from the object type (e.g., a person), the object size, and the object attributes in the primitive data (for example, whether the object is set as a main object), and thereby determine the size of the center area 220 more dynamically.

In another embodiment, the foveated rendering module 134 may determine the size of the center area 220 based on attributes relating to the object, the light source, and the user. For example, the foveated rendering module 134 may set the size of the center area 220 to the reference size or larger if the area includes an object of more than the reference importance or includes a light source of intensity greater than or equal to the reference intensity.

In one embodiment, the foveated rendering module 134 may determine the size of the center area 220 or the second resolution based on the measured visual acuity when the user's visual acuity has been measured by the line-of-sight detection unit 120. For example, if the user's visual acuity is better than the reference visual acuity, the foveated rendering module 134 may set the size of the center area 220 larger than the reference size so that a wider region of the image is rendered at high quality. In another example, if the user's visual acuity is worse than the reference visual acuity, the foveated rendering module 134 may set the second resolution lower than the first resolution by a ratio greater than the reference ratio, generating a lower-resolution, lower-quality image for the non-center area.
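The acuity-driven adjustment described above can be sketched as a simple scaling rule: better-than-reference vision widens the high-quality center area, weaker vision lowers the peripheral sampling rate. The linear scaling and all parameter names are assumptions for illustration.

```python
def adjust_foveation(visual_acuity, ref_acuity=1.0,
                     ref_center_size=100, ref_low_rate=1.0):
    """Scale the center-area size and the non-center sampling rate from
    a measured visual acuity: sharper vision enlarges the center area
    beyond the reference size, weaker vision reduces the peripheral
    (second-resolution) sampling rate."""
    if visual_acuity >= ref_acuity:
        center_size = ref_center_size * (visual_acuity / ref_acuity)
        low_rate = ref_low_rate
    else:
        center_size = ref_center_size
        low_rate = ref_low_rate * (visual_acuity / ref_acuity)
    return center_size, low_rate

sharp = adjust_foveation(1.5)   # better than the reference acuity
weak = adjust_foveation(0.5)    # worse than the reference acuity
```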

The ray tracing module 136 may perform rendering based on ray tracing. When a rendering request is received from the foveated rendering module 134, the ray tracing module 136 can perform ray tracing according to the resolution requested for each area and output the rendering result to the foveated rendering module 134, enabling the foveated rendering module 134 to complete the foveated image.

The ray tracing module 136 may perform ray tracing on a pixel-by-pixel basis based on pre-stored geometry data. This will be described in more detail with reference to FIGS. 4 and 5.

FIG. 4 is a view for explaining the ray tracing process performed by the ray tracing module shown in FIG. 1, and FIG. 5 is a view for explaining the acceleration structure and geometry data used in the ray tracing process.

Referring to FIG. 4, the ray tracing module 136 may generate a primary ray P from the position of the camera 410 and perform calculations to find the object 420 that the ray P meets. When the object met by the ray P is an object 420 having a refraction property or an object 431 or 432 having a reflection property, the ray tracing module 136 may generate a refraction ray F for a refraction effect and a reflection ray R for a reflection effect at the position where the ray meets the object, and may generate a shadow ray S in the direction of the light source 450. In one embodiment, the ray tracing module 136 may generate a shadow at the point where the shadow ray S originated if the shadow ray S meets another object 440.
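The geometric relationships among these rays can be sketched as follows; the vector math (reflection as d − 2(d·n)n, shadow ray pointing toward the light source) is standard ray tracing practice rather than text taken verbatim from the patent:

```python
# Minimal 3-vector helpers on plain tuples; no external libraries.
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scale(a, s):
    return tuple(x * s for x in a)

def reflection_ray(direction, normal):
    """Reflection ray R at a hit point: d - 2(d.n)n."""
    return sub(direction, scale(normal, 2 * dot(direction, normal)))

def shadow_ray(hit_point, light_pos):
    """Shadow ray S points from the hit point toward the light source."""
    return sub(light_pos, hit_point)
```

If the shadow ray intersects another object before reaching the light, the hit point is in shadow, matching the role of ray S in FIG. 4.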

Referring to FIG. 5, the ray tracing module 136 may perform ray tracing on an Acceleration Structure (AS) built from at least one piece of triangle information in the geometry data, and in one embodiment the acceleration structure can be constructed using a KD-tree. Here, the KD-tree is a type of spatial partitioning structure and can be used for a ray-triangle intersection test. For example, the KD-tree may include a box node 510, an inner node 520, and a leaf node 530. In one embodiment, the leaf node 530 may include a triangle list pointing to at least one piece of triangle information included in the geometry data. For example, the triangle information may include vertex coordinates, normal vectors, and texture coordinates for the three points of the triangle. In one embodiment, when the triangle information included in the geometry data is implemented as an array, the triangle list included in the leaf node may correspond to the array index. In one embodiment, the internal node 520 has a spatial region based on a bounding box, and the spatial region may be divided into two regions allocated to two lower nodes. That is, the internal node 520 consists of a divided region and the sub-trees of its two divided regions, while the leaf node 530 contains only a series of primitive data. The splitting plane that divides the space should be chosen at a point that minimizes the cost of finding the primitives an arbitrary ray will hit (for example, the number of node visits and the number of ray-primitive intersection tests), and the Surface Area Heuristic (SAH) can be used to find such a point.
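A minimal sketch of the SAH cost evaluated when choosing a split plane, assuming the common formulation (a traversal cost plus surface-area-weighted intersection costs); the cost constants are hypothetical, not taken from the patent:

```python
def sah_cost(sa_parent, sa_left, sa_right, n_left, n_right,
             c_traversal=1.0, c_intersect=2.0):
    """SAH cost of splitting a node:
    C_trav + SA(L)/SA(P) * N_L * C_isect + SA(R)/SA(P) * N_R * C_isect.
    The split plane minimizing this cost over candidate positions is chosen.
    """
    return (c_traversal
            + (sa_left / sa_parent) * n_left * c_intersect
            + (sa_right / sa_parent) * n_right * c_intersect)
```

A KD-tree builder evaluates this cost at candidate split positions along each axis and keeps the cheapest split, falling back to a leaf node when no split beats the cost of intersecting all primitives directly.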

The ray tracing module 136 may selectively perform at least part of the ray tracing process in accordance with the rendering request of the foveated rendering module 134.

In one embodiment, the ray tracing module 136 performs rendering through ray tracing only for the region that the foveated rendering module 134 requested to be ray traced, and for the remaining region performs rendering by applying, from among previously stored rendering techniques, the technique matched to the requested resolution.

In another embodiment, the ray tracing module 136 may perform rendering by carrying out the entire ray tracing process for the region requested to be rendered at the first resolution, and may perform rendering for the region requested to be rendered at the second resolution by omitting part of the ray tracing process and tracing only the primary rays. For example, for the central region 220, the ray tracing module 136 may trace and render all elements struck by any ray shot from the camera position, while for the non-central region it may trace only the primary element that the shot ray first collides with, omitting the tracing of secondary and higher-order elements.
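One way to sketch this per-region ray budget, with hypothetical flags standing in for the hit object's material properties: inside the central region all secondary rays are generated, while elsewhere only the primary ray is traced:

```python
def secondary_rays(in_center_region, reflective, refractive):
    """Return the list of ray types to trace for one hit point.

    Center region: full ray tracing (reflection/refraction/shadow rays).
    Non-center region: primary ray only, as described above.
    """
    rays = ["primary"]
    if in_center_region:
        if reflective:
            rays.append("reflection")
        if refractive:
            rays.append("refraction")
        rays.append("shadow")
    return rays
```

Skipping secondary rays in the periphery is where most of the speedup comes from, since each secondary ray triggers its own acceleration-structure traversal and intersection tests.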

The buffer unit 140 may buffer blocks constituting at least a part of the foveated image generated by the foveated image generation unit 130. When the above operations are completed for all regions of the image to be generated, the completed foveated image may be stored in the memory unit 150.

The memory unit 150 may store the foveated image received from the buffer unit 140 and, under the control of the foveated image generating unit 130 or a separate processor, may provide the stored foveated image to the image display unit 110. The memory unit 150 may store information to be processed in the foveated image display device 100 and may include geometry data containing primitive data, a dynamic acceleration structure, a static acceleration structure, and a storage area for the resulting image. In one embodiment, the memory unit 150 may be implemented as a volatile or non-volatile external memory.

In one embodiment, the foveated image display device 100 may be implemented as a Head Mounted Display (HMD).

FIG. 6 is a detailed block diagram of the foveated rendering module shown in FIG. 1.

The foveated rendering module 134 may include a Determining Region Unit that receives the gaze position from the gaze position tracking module 132 and calculates the center region 220 and the other region(s) of the image to be generated based on the gaze position, an Interpolation Unit that requests the ray tracing module 136 to render each region with a different number of samples and interpolates according to differences in pixel values, and a Deblock Filter Unit that mitigates the Mach band effect occurring at the boundaries between blocks. Each function is as described above.

The Determining Region Unit may include a Gaze Position Translating Unit that converts the gaze position received from the gaze position tracking module 132 into coordinates on the rendering region 200, and a Visual Region Calculation Unit that calculates the center region 220 and the other region(s) from the converted coordinates. At this time, the Visual Region Calculation Unit can create two or more other regions.
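A hedged sketch of these two steps, assuming a normalized gaze input and a circular center region (the patent does not fix the region shape or coordinate convention):

```python
def translate_gaze(gaze_norm, width, height):
    """Gaze Position Translating Unit: map a normalized gaze position
    (0..1, 0..1) to pixel coordinates on the rendering region."""
    gx, gy = gaze_norm
    return int(gx * width), int(gy * height)

def region_of(pixel, center, center_radius):
    """Visual Region Calculation Unit: classify a pixel as belonging to
    the center region (within center_radius of the gaze point) or not."""
    dx = pixel[0] - center[0]
    dy = pixel[1] - center[1]
    return "center" if dx * dx + dy * dy <= center_radius ** 2 else "other"
```

Supporting two or more "other" regions, as the Visual Region Calculation Unit allows, would amount to comparing the squared distance against several radius thresholds instead of one.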

FIG. 7 is a flowchart illustrating a process of generating a foveated image by the foveated image display device of FIG. 1.

Referring to FIG. 7, the image display unit 110 displays an image (step S710), the visual line detecting unit 120 detects the user's line of sight looking at the image (step S720), and the foveated image generating unit 130 generates a foveated image based on the gaze position in the image (step S730).

The foveated image display device 100 according to an embodiment of the present invention performs rendering with a large number of samples only for the region on which the user's eyes focus and with a small number of samples for the other regions, so that the overall ray tracing performance can be increased without any penalty perceptible to human cognition.

The foveated image display device 100 according to an embodiment of the present invention may also reduce the visual fatigue of the user by adjusting rendering to reflect the visual cognitive system based on the biological structure of human beings.

It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit and scope of the invention as defined by the following claims.

100: foveated image display device
110: image display unit 120: visual line detecting unit
130: foveated image generation unit 140: buffer unit
150: memory unit
200: rendering region 210: center point
220: center area 230: non-center area

Claims (16)

An image display unit for displaying an image;
A visual line detecting unit for measuring a user's visual acuity before displaying the image and detecting the user's line of sight looking at the image; And
A foveated image generation unit that sets a line-of-sight position in the image as a center point, sets a center area generated based on the center point, and variably generates a foveated image based on the line-of-sight position.
delete
The apparatus of claim 1, wherein the foveated image generating unit
renders the center region at a first resolution through ray tracing and renders the remaining region at a second resolution to generate the foveated image.
4. The apparatus according to claim 3, wherein the foveated image generating unit
renders a first portion of the remaining region as partial blocks having the second resolution and renders a second portion of the remaining region by interpolating a plurality of adjacent partial blocks.
5. The apparatus of claim 4, wherein the foveated image generating unit
performs blending on the boundary between the center area and the remaining area composed of the first and second portions.
The apparatus of claim 1, wherein the foveated image generating unit
determines the size of the center region based on the importance of the object at the center point.
delete
4. The apparatus according to claim 3, wherein the foveated image generating unit
determines the size of the center area or the second resolution based on the measured visual acuity of the user.
The apparatus of claim 1, wherein the foveated image generating unit
changes the center point by tracking a change of the line-of-sight position in the image.
The apparatus of claim 1, wherein the foveated image generating unit
generates the foveated image by performing ray tracing in units of pixels based on previously stored geometry data.
The apparatus according to claim 1,
wherein the display device is implemented as a Head Mounted Display (HMD).
A method for displaying a foveated image in a foveated image display device, the method comprising:
(a) displaying an image;
(b) measuring a user's visual acuity before the display of the image and detecting the user's gaze viewing the image; And
(c) setting a line-of-sight position in the image as a center point, setting a center area generated based on the center point, and variably generating a foveated image based on the line-of-sight position.
13. The method of claim 12, wherein step (c) comprises:
setting a line-of-sight position in the image as a center point and setting a center region generated based on the center point.
14. The method of claim 13, wherein step (c) comprises:
rendering the center region at a first resolution through ray tracing and rendering the remaining region at a second resolution to generate the foveated image.
15. The method of claim 14, wherein step (c) comprises:
rendering a first portion of the remaining region as partial blocks having the second resolution and rendering a second portion of the remaining region by interpolating a plurality of adjacent partial blocks.
(a) a function of displaying an image;
(b) a function of measuring a user's visual acuity before the display of the image and detecting the user's gaze viewing the image; And
(c) a function of setting a line-of-sight position in the image as a center point, setting a center area generated based on the center point, and variably generating a foveated image based on the line-of-sight position, recorded on a recording medium as a computer program for a foveated image display method.
KR1020170179619A 2017-12-26 2017-12-26 Foveated image displaying apparatus, foveated image displaying method by the same and storage media storing the same KR101869912B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020170179619A KR101869912B1 (en) 2017-12-26 2017-12-26 Foveated image displaying apparatus, foveated image displaying method by the same and storage media storing the same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020170179619A KR101869912B1 (en) 2017-12-26 2017-12-26 Foveated image displaying apparatus, foveated image displaying method by the same and storage media storing the same

Publications (1)

Publication Number Publication Date
KR101869912B1 true KR101869912B1 (en) 2018-06-21

Family

ID=62806569

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020170179619A KR101869912B1 (en) 2017-12-26 2017-12-26 Foveated image displaying apparatus, foveated image displaying method by the same and storage media storing the same

Country Status (1)

Country Link
KR (1) KR101869912B1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160130258A (en) 2014-04-05 2016-11-10 소니 인터랙티브 엔터테인먼트 아메리카 엘엘씨 Gradient adjustment for texture mapping for multiple render targets with resolution that varies by screen location
US20170236252A1 (en) * 2016-02-12 2017-08-17 Qualcomm Incorporated Foveated video rendering
US20170287446A1 (en) * 2016-03-31 2017-10-05 Sony Computer Entertainment Inc. Real-time user adaptive foveated rendering
KR20170124091A (en) 2016-04-29 2017-11-09 에이알엠 리미티드 Graphics processing systems

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20220085203A (en) * 2020-12-15 2022-06-22 세종대학교산학협력단 Ray tracing method and apparatus based on attention for dynamic scenes
KR20220085204A (en) * 2020-12-15 2022-06-22 세종대학교산학협력단 Attention-based ray tracing method and apparatus for foveated rendering
WO2022131532A1 (en) * 2020-12-15 2022-06-23 세종대학교산학협력단 Foveated rendering-related concentration level-based ray tracing method and device
WO2022131531A1 (en) * 2020-12-15 2022-06-23 세종대학교산학협력단 Concentration-based ray tracing method and device for dynamic scene
KR102537319B1 (en) * 2020-12-15 2023-05-26 세종대학교산학협력단 Ray tracing method and apparatus based on attention for dynamic scenes
KR102539910B1 (en) * 2020-12-15 2023-06-05 세종대학교산학협력단 Attention-based ray tracing method and apparatus for foveated rendering
WO2023106838A1 (en) * 2021-12-10 2023-06-15 세종대학교산학협력단 Ray tracing picture quality control method according to camera movement, picture quality control device for performing same, and recording medium storing same

Similar Documents

Publication Publication Date Title
EP3179447B1 (en) Foveated rendering
CN112513712B (en) Mixed reality system with virtual content warping and method of generating virtual content using the same
AU2021290369B2 (en) Mixed reality system with color virtual content warping and method of generating virtual content using same
CN110431599B (en) Mixed reality system with virtual content warping and method for generating virtual content using the same
US11663689B2 (en) Foveated rendering using eye motion
KR101869912B1 (en) Foveated image displaying apparatus, foveated image displaying method by the same and storage media storing the same
WO2017169273A1 (en) Information processing device, information processing method, and program
US10699383B2 (en) Computational blur for varifocal displays
US11032530B1 (en) Gradual fallback from full parallax correction to planar reprojection
WO2023183716A1 (en) Systems and methods for dynamically rendering images of three-dimensional data with varying detail to emulate human vision
KR20150054650A (en) Method for rendering image and Image outputting device thereof
US20210358084A1 (en) Upsampling low temporal resolution depth maps
KR20220030016A (en) Play device and operating method of thereof
KR20230087952A (en) Ray tracing image quality control method according to camera movement, image quality control apparatus performing the same, and recording medium storing the same
WO2017169272A1 (en) Information processing device, information processing method, and program
US20230306676A1 (en) Image generation device and image generation method
JP2015515058A (en) Method and corresponding apparatus for representing participating media in a scene
KR102539910B1 (en) Attention-based ray tracing method and apparatus for foveated rendering
TW202338749A (en) Method for determining two-eye gaze point and host
KR101649188B1 (en) Method of measuring 3d effect perception and apparatus for measuring 3d effect perception
KR20240053443A (en) A method of measuring a head up display ghost image and an apparatus of measuring a head up display ghost image
CN115661408A (en) Generating and modifying hand representations in an artificial reality environment

Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant