KR101227155B1 - Graphic image processing apparatus and method for realtime transforming low resolution image into high resolution image - Google Patents

Info

Publication number
KR101227155B1
Authority
KR
South Korea
Prior art keywords
image
graphic
frame buffer
function
rendering
Prior art date
Application number
KR1020110057554A
Other languages
Korean (ko)
Other versions
KR20120138185A (en)
Inventor
김성재
홍명엽
이동철
박범진
Original Assignee
주식회사 넥서스칩스
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 넥서스칩스 filed Critical 주식회사 넥서스칩스
Priority to KR1020110057554A
Publication of KR20120138185A
Application granted
Publication of KR101227155B1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 - Image signal generators
    • H04N13/275 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals

Abstract

A graphic image processing apparatus and method convert a low resolution graphic image into a high resolution graphic image in real time. The apparatus includes a first image generator that renders a 3D graphic model to generate a first image in a first frame buffer, and a second image generator that receives the rendered first image as a texture and renders it to generate, in a second frame buffer, a second image having a lower resolution than the first image. According to the graphic image processing apparatus and method, the staircase phenomenon and degradation of image quality that occur when outputting to a large display device can be eliminated.

Description

GRAPHIC IMAGE PROCESSING APPARATUS AND METHOD FOR REALTIME TRANSFORMING LOW RESOLUTION IMAGE INTO HIGH RESOLUTION IMAGE

Disclosed are a graphic image processing apparatus and method for converting a low resolution graphic image into a high resolution graphic image in real time. More specifically, disclosed is a technique by which a mobile terminal converts a low resolution general image or stereoscopic image into a high resolution general image or stereoscopic image in real time and outputs the high resolution image to a large TV or monitor.

Conventionally, when an image displayed on a mobile terminal such as a game terminal or a smartphone is output to a large display device such as a TV or a monitor, the image displayed on the mobile terminal is simply enlarged and output. When the image is simply enlarged and output on the large display device in this way, and the resolution of the mobile terminal's display is low, there has been a problem that the quality of the image output on the large display device is remarkably degraded and the stereoscopic effect is very poor.

In addition, when a mobile terminal and a large display device such as a 3D TV or a 3D monitor use the same stereoscopic image at the same resolution, the image may be output normally on the large display device, but there has been a problem that UI manipulation becomes abnormally difficult on the touch-oriented mobile terminal.

An object is to present a graphic image processing apparatus and method that convert a low resolution graphic image into a high resolution graphic image in real time for output to a large display device such as a 3D TV or a 3D monitor, while displaying a general image or a stereoscopic image on the display device of the mobile terminal.

In this case, the high resolution graphic image converted in real time is not a simple enlargement of the low resolution graphic image, but is rendered at high resolution using a graphics accelerator (GPU). A further object is to provide a graphic image processing apparatus and method that can thereby eliminate the staircase phenomenon and degradation of image quality.

According to an aspect, a graphic image processing apparatus for converting a low resolution graphic image into a high resolution graphic image in real time includes a first image generator configured to render a 3D graphic model to generate a first image in a first frame buffer, and a second image generator configured to receive the rendered first image as a texture and render it to generate, in a second frame buffer, a second image having a lower resolution than the first image.

According to another aspect, the apparatus may further include a function changing unit that changes at least one of the target buffer setting of a render target buffer specifying function and the viewport area setting of a viewport function, among the graphic processing functions, to a value corresponding to the first frame buffer or the second frame buffer.

According to another aspect, the apparatus may further include an image output unit configured to output the first image and the second image generated in the first frame buffer and the second frame buffer to the corresponding display apparatus, respectively.

In addition, according to an additional aspect, the 3D graphic model may include 3D graphic model information for generating a single view image or 3D graphic model information corresponding to left and right eyes for generating a stereoscopic image.

Meanwhile, when the 3D graphic model includes 3D graphic model information for generating an image of a single viewpoint, the function changing unit may change the rendering function among the graphic processing functions into a first rendering function and a second rendering function.

In this case, the first image generator may generate a first viewpoint image and a second viewpoint image in the first frame buffer by applying the first rendering function and the second rendering function to the 3D graphic model.

According to an aspect, a graphic image processing method for converting a low resolution graphic image into a high resolution graphic image in real time may include: generating a first image in a first frame buffer by rendering a 3D graphic model; and generating a second image having a lower resolution than the first image in a second frame buffer by receiving the generated first image as a texture and rendering it.

According to another aspect, the first image generating step may include changing one or more of the target buffer setting of a render target buffer specifying function and the viewport area setting of a viewport function, among the graphic processing functions, to a value corresponding to the first frame buffer.

According to another aspect, the second image generating step may include changing one or more of the target buffer setting of a render target buffer designation function and the viewport area setting of a viewport function, among the graphic processing functions, to a value corresponding to the second frame buffer.

According to an additional aspect, the graphic image processing method for converting a low resolution graphic image into a high resolution graphic image in real time may further include an image output step of outputting the first image and the second image generated in the first frame buffer and the second frame buffer to the corresponding display devices, respectively.

The first image generating step may further include changing the rendering function among the graphic processing functions into a first rendering function and a second rendering function when the 3D graphic model includes 3D graphic model information for generating an image of a single viewpoint, and may generate a first viewpoint image and a second viewpoint image in the first frame buffer by applying the first rendering function and the second rendering function to the 3D graphic model.

A graphic image processing apparatus and method may be provided that convert a low resolution general image or stereoscopic image into a high resolution general image or stereoscopic image in real time for output to a large display device such as a 3D TV or a 3D monitor, while displaying the low resolution general image or stereoscopic image on the display device of the mobile terminal.

In addition, the high resolution general or stereoscopic image converted in real time is not a simple enlargement of the low resolution image, but is rendered in real time at high resolution using a graphics accelerator (GPU). A graphic image processing apparatus and method can thus be provided that eliminate the staircase phenomenon and degradation of image quality.

FIG. 1 illustrates an example in which a low resolution graphic image and a high resolution graphic image are output to each display device according to an exemplary embodiment.
FIG. 2 is a schematic block diagram of a mobile terminal including a graphic image processing apparatus.
FIG. 3 is a block diagram of a graphic image processing apparatus for converting a low resolution graphic image into a high resolution graphic image in real time according to an exemplary embodiment.
FIG. 4 is a flowchart of a graphic image processing method of converting a low resolution graphic image into a high resolution graphic image in real time according to an exemplary embodiment.
FIG. 5 is a flowchart of a detailed procedure of generating a first image according to the embodiment of FIG. 4.
FIG. 6 is a flowchart of a detailed procedure of a second image generating step according to the embodiment of FIG. 4.

Specific details of other embodiments are included in the detailed description and the drawings. Advantages and features of the present invention, and methods of achieving the same, will become apparent with reference to the embodiments described below in detail in conjunction with the accompanying drawings. The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art; the invention is defined only by the scope of the claims. Like reference numerals refer to like elements throughout.

Hereinafter, a graphic image processing apparatus for converting a low resolution graphic image into a high resolution graphic image in real time and a method thereof will be described in detail with reference to the accompanying drawings.

FIG. 1 illustrates an example in which a low resolution graphic image and a high resolution graphic image are output to each display device according to an exemplary embodiment.

FIG. 1 illustrates a mobile terminal and a large display device connected to the mobile terminal through an external terminal so as to output the image displayed on the mobile terminal. The mobile terminal includes any device that outputs an image, such as a game terminal or a smartphone. The large display device is generally a TV or a monitor, but is not limited thereto and may include any device that supports a relatively higher resolution than the mobile terminal. In this case, a reference point for distinguishing low resolution from high resolution may be a difference of about a factor of two in the number of pixels. For example, the number of pixels at the high resolution of 1280 × 720 (921600) is 2.4 times that at the low resolution of 800 × 480 (384000). However, the present invention is not limited to cases where the pixel counts of the high resolution and the low resolution differ by a factor of two or more.

In addition, the mobile terminal and the large display device should each be understood as devices capable of outputting a general image (2D image) or a stereoscopic image. As illustrated in FIG. 1, the types of images output by the mobile terminal and the large display device may be combined in various ways according to the formats supported by the devices.

FIG. 2 is a schematic block diagram of a mobile terminal including a graphic image processing apparatus. As shown in FIG. 2, a mobile terminal such as a game terminal or a smartphone executes a graphic application.

As illustrated in FIG. 2, the graphic image processing apparatus 100 receives 3D graphic modeling data from a graphic application. The graphic image processing apparatus 100 performs a rendering operation on the input 3D graphic modeling data using a graphics processing unit (GPU) built into the mobile terminal, generates both a relatively high resolution image to be output to the large display device and a relatively low resolution image to be displayed on the mobile terminal, and outputs each image to the corresponding device.

The 3D graphic modeling data is information about a 3D object and may include geometry, viewpoint, texture mapping, lighting, and shading information. The 3D graphic modeling data input to the graphic image processing apparatus 100 may include data for generating a graphic image, that is, a general image of a single viewpoint or a stereoscopic image for the left and right eyes. The graphic image refers to an image generated in real time using real-time rendering technology, and may include both single-view images and multiview images for stereoscopic display.

Meanwhile, real-time rendering refers to the process of creating a raster graphics image in real time from 3D graphic model data using a computer program. Real-time rendering is used in a variety of fields, including architecture, video games, simulators, special effects, and design visualization. Because real-time rendering requires a great deal of computation, a graphics accelerator (GPU) is used to speed it up.

A graphics accelerator (GPU) is generally mounted in a terminal in the form of a hardware chip. To use the GPU attached to the terminal from the terminal's software platform, a graphics library for the corresponding accelerator is provided, and it is only through this library that the acceleration of the actual graphics hardware can be utilized.

In this case, the graphics library implements a real-time rendering graphics development API (application programming interface) commonly used in the industry, and controls the accelerator through a hardware graphics accelerator driver inside the library. The functions of the graphics library are divided into data setting functions, state setting functions, and rendering functions. Data setting functions set the model data (3D vertices, etc.), state setting functions set how rendering is performed, and rendering functions actually perform geometry processing and rasterization into the frame buffer.

Graphics libraries supporting the graphics accelerator on a given platform are provided by the graphics accelerator manufacturers. PC graphics libraries include DirectX and OpenGL. On embedded devices, libraries are mostly provided according to the OpenGL ES specification, and Direct3D Mobile is additionally provided on Windows. However, the graphics library is not limited to the libraries described herein, but means any kind of graphics library for using the GPU.
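
As a concrete illustration only (not part of the patent text), the three categories of library functions described above map naturally onto calls of an OpenGL ES 2.0 style library. In the sketch below, the attribute location and vertex array are assumed to be supplied by the application:

```c
#include <GLES2/gl2.h>

/* Minimal sketch of the three function categories, assuming a current GL context
 * and a bound shader program; positionLoc and vertices are hypothetical inputs. */
void draw_model(GLuint positionLoc, const GLfloat *vertices, GLsizei vertexCount)
{
    /* Data setting functions: supply the model data (3D vertices). */
    glVertexAttribPointer(positionLoc, 3, GL_FLOAT, GL_FALSE, 0, vertices);
    glEnableVertexAttribArray(positionLoc);

    /* State setting functions: configure how rendering is performed. */
    glEnable(GL_DEPTH_TEST);
    glViewport(0, 0, 1280, 720);

    /* Rendering function: geometry processing and rasterization into the frame buffer. */
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);
}
```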

FIG. 3 is a block diagram of a graphic image processing apparatus for converting a low resolution graphic image into a high resolution graphic image in real time according to an exemplary embodiment. Hereinafter, a graphic image processing apparatus for converting a low resolution graphic image into a high resolution graphic image in real time will be described in detail with reference to FIG. 3.

The graphic image processing apparatus 100 includes a first image generator 120 and a second image generator 130. The first image generator 120 renders the 3D graphic model to generate a first image in the first frame buffer 123.

More specifically, the first image generator 120 may include a geometry processor 121 and a raster processor 122.

In general, a process of processing a graphic using a 3D graphic accelerator may be roughly divided into a geometry processing process and a rasterization process.

First, the geometric processing process mainly refers to a process of transforming an object of a 3D coordinate system according to a viewpoint, performing lighting processing and shading, and projecting the image to a two-dimensional coordinate system. The geometrical process involves a significant amount of matrix and trigonometric operations, resulting in significant computational load. In the conventional 3D graphics processing method, the CPU performs this geometric processing, but recently, by performing the geometric processing in the 3D graphics accelerator, the computational load of the CPU is greatly reduced, thereby improving the performance of the entire system.

Raster processing is the process of determining the color value in the image of the 2D coordinate system and storing it in the frame buffer. Raster processing is subdivided into several sub-processes, including Triangle initialization, Edge walk, Span processing, Z-test, Alpha-blend, Color blend, and texture mapping.

The geometry processor 121, which performs the geometry process, performs geometry transformation on the 3D graphic model. Geometry transformation is the process of transforming the 3D graphic model using affine transformations, which include translation, scaling, and rotation of figures. Geometry transformation includes model-view transformation, lighting calculation, projection transformation, clip test, viewport transformation, and the like, for the 3D graphic model.

Model-view transformation is the process of converting the viewpoint of the 3D graphic model. The lighting calculation is the process of computing, for a given light source such as a point light source, a parallel light source, or a concentrated light source, the direction and amount of light projected onto the 3D graphic model, the direction and amount of reflected light, the ambient light, and so on. Projection transformation is the process of mapping the objects inside the camera's field of view (view frustum) to a cube; the projection transformation produces a perspective effect in which an object located near the camera appears enlarged. The clip test is the process of cutting away the parts of the 3D graphic model that fall outside the mapped cube and are therefore not visible, prior to viewport transformation. Viewport transformation is the process of converting the projection-transformed 3D graphic model into the on-screen coordinate system.

The geometry processor 121 converts the 3D graphic model into vector graphics through geometry transformation. Vector graphics are a way of mathematically expressing the position, length, and direction of a line. That is, the geometry processor 121 may perform geometry transformation by using a model view transformation function, an illumination calculation function, a projection transformation function, a clip test function, and a viewport transformation function that perform each process. In particular, the modelview transform function and the projection transform function may be represented by a 4 × 4 matrix that transforms the coordinates of the 3D graphic model. The geometry processor 121 may use a geometry engine that speeds up geometry transformation by hardware.
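
To make the 4 × 4 matrix representation mentioned above concrete, the following minimal sketch (an illustration, not the patent's implementation) applies a column-major 4 × 4 matrix, such as a model-view or projection matrix, to a vertex in homogeneous coordinates:

```c
/* Sketch: multiply a vertex in homogeneous coordinates (x, y, z, w) by a 4x4
 * transform stored column-major, as OpenGL-style libraries expect. */
void transform_vertex(const float m[16], const float in[4], float out[4])
{
    for (int row = 0; row < 4; ++row) {
        out[row] = m[0 * 4 + row] * in[0]
                 + m[1 * 4 + row] * in[1]
                 + m[2 * 4 + row] * in[2]
                 + m[3 * 4 + row] * in[3];
    }
}
```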

The raster processor 122, which performs the rasterization process, performs rasterization on the 3D graphic model that has been converted into vector graphics by the geometry transformation. Rasterization is the process of converting vector graphics into a pixel-pattern image. That is, the raster processor 122 performs the process of mapping the polygons of the 3D graphic model, which do not yet exist as pixels, onto the corresponding pixels of the screen.

A polygon is the smallest unit used to represent solid shapes in three-dimensional computer graphics, and is frequently used in 3D computer graphics and 3D CAD where fast calculation is required. It is a polyline in three-dimensional space whose start and end points are connected by lines, and it is used to represent the edges of an object by approximating a curve with straight segments through a few points on the curve. When drawing three-dimensional figures in three-dimensional computer graphics, the object surface is divided into small triangular polygons. By converting the divided polygons into numerical data, even the parts of the object that are not visible can be calculated and displayed on the screen as an image. When dividing an object's surface, triangles are used because they are easy to calculate, and triangular polygons can be combined to form quadrilateral polygons. One polygon represents one face, and the more polygons are used, the more detailed the object becomes, although the amount of calculation increases and takes longer.

The raster processor 122 uses a complex polygonal model to output a high quality image suitable for a relatively high resolution external display device.

The raster processor 122 performs rasterization on the 3D graphic model on which the geometry transformation has been performed and transfers the result to the first frame buffer 123, so that the first image is generated in real time in the first frame buffer 123. The raster processor 122 may use a raster engine that speeds up rasterization in hardware.

As described above, the first image generator 120 generates a high resolution image to be output to the large display device through a geometry process and a raster process on the input 3D graphic model.

When the 3D graphic model data is multiview data for generating a stereoscopic image, the rendering into the first frame buffer is divided per viewpoint, and the rendering process described above is repeated for each viewpoint to generate a stereoscopic image of multiple viewpoints.

The second image generator 130 receives the first image generated in the first frame buffer 123 by the first image generator 120 as a texture and renders it, generating in the second frame buffer 133 a second image having a lower resolution than the first image.

That is, in order to generate an image to be output to the relatively low resolution mobile terminal, the second image generator 130 receives the high resolution image generated by the first image generator 120 as a texture and renders it onto a simple polygon to generate the second image. Since a relatively low resolution image is being generated, there is no need to process complex polygons; a low resolution image can be generated quickly and easily by mapping the input high resolution texture onto one or two polygons. However, the simple polygon is not necessarily limited to one or two polygons and may vary depending on the resolution supported by the mobile terminal. In addition, when the GPU or the processor of the terminal supports scaled (reduction/enlargement) memory transfer, the contents of the high resolution first frame buffer 123 may be transferred to the low resolution second frame buffer 133 using that function.

Like the first image generator 120, the second image generator 130 may include a geometry processor 131 and a raster processor 132. The geometry processor 131 of the second image generator 130 performs geometry transformation. Like the raster processor 122 of the first image generator 120, the raster processor 132 performs rasterization by mapping the texture of the input high resolution first image onto a simple polygon, and generates the low resolution second image in the second frame buffer 133.
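
For illustration (assuming an OpenGL ES 2.0 environment, which the patent does not mandate), one common way to let a rendered image be consumed as a texture by a second pass is to back the first frame buffer with a texture, roughly as sketched below:

```c
#include <GLES2/gl2.h>

/* Sketch: create a texture-backed framebuffer object of the given size.
 * The image rendered into it can later be sampled through *colorTex.
 * Returns the FBO handle, or 0 if the framebuffer is incomplete. */
GLuint create_texture_fbo(GLsizei width, GLsizei height, GLuint *colorTex)
{
    GLuint fbo;

    glGenTextures(1, colorTex);
    glBindTexture(GL_TEXTURE_2D, *colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, *colorTex, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        return 0;
    return fbo;
}
```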

According to an additional aspect, the graphic image processing apparatus 100 may further include a function changing unit 110. The function changing unit 110 changes one or more of the target buffer setting of the render target buffer designation function and the viewport area setting of the viewport function, among the graphic processing functions, to a value corresponding to the first frame buffer 123 or the second frame buffer 133.

In general, in a mobile terminal that generates images by processing 3D graphic model data with a GPU, the default render target buffer is set to the buffer output to the terminal, and the viewport area is set to fit the terminal's display.

Unlike the conventional method of first generating a low resolution image suited to the terminal and then simply enlarging it to generate and output a high resolution image for an external display device, in the graphic image processing apparatus 100 the function changing unit 110 first changes, in order to generate a high resolution image suited to the external display device in the first frame buffer 123, the target buffer setting of the render target buffer designation function among the graphic processing functions to the first frame buffer 123 in which the high resolution output image is to be generated, and changes the viewport area setting of the viewport function to the viewport area for the external display device.

In addition, the rendering function of the first image generator 120 renders the 3D graphic model data using the set value to generate a high resolution first image in the first frame buffer 123.

Next, when generation of the first image by the first image generator 120 is completed, the function changing unit 110 changes the setting of the render target buffer designation function among the graphic processing functions to the second frame buffer 133, in order to generate a low resolution image suited to the mobile terminal, and changes the viewport area setting of the viewport function to the viewport area suited to the mobile terminal.

Next, the rendering function of the second image generator 130 generates the low resolution second image in the second frame buffer 133 by rendering the first image using the changed settings.
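
A minimal sketch of this two-pass flow follows (an illustration only, assuming an OpenGL ES 2.0 style library; the frame buffer handles, resolutions, and drawing helpers are placeholders rather than values from the patent):

```c
#include <GLES2/gl2.h>

/* Hypothetical helpers assumed to exist elsewhere in the application. */
extern void draw_3d_scene(void);                  /* renders the 3D graphic model     */
extern void draw_fullscreen_quad(GLuint texture); /* maps a texture onto 1-2 polygons */

/* Sketch: per-frame flow of the function changing unit and the two image generators. */
void render_frame(GLuint fboHigh, GLuint fboHighTex, GLuint fboLow)
{
    /* Pass 1: render target = first frame buffer, viewport = external display (e.g. 1280x720). */
    glBindFramebuffer(GL_FRAMEBUFFER, fboHigh);
    glViewport(0, 0, 1280, 720);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    draw_3d_scene();                               /* first image: high resolution */

    /* Pass 2: render target = second frame buffer, viewport = mobile terminal (e.g. 800x480). */
    glBindFramebuffer(GL_FRAMEBUFFER, fboLow);
    glViewport(0, 0, 800, 480);
    glClear(GL_COLOR_BUFFER_BIT);
    draw_fullscreen_quad(fboHighTex);              /* second image: first image used as texture */
}
```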

According to a further embodiment, the graphic image processing apparatus 100 may further include an image output unit 140. The image output unit 140 outputs the first image and the second image generated in the first frame buffer 123 and the second frame buffer 133 to the corresponding display apparatus, respectively. The image output unit 140 outputs the high resolution image generated in the first frame buffer 123 to the external large display device, and outputs the low resolution image generated in the second frame buffer 133 to the mobile terminal. In this case, the first frame buffer 123 may be designated as an output buffer of the external display device and output without copying.

Meanwhile, the first image generator 120 or the second image generator 130 may generate a normal image or a stereoscopic image to match the output image type supported by the corresponding output display apparatus, respectively.

For example, if both the mobile terminal and the large display device support only general images, then even if the input 3D graphic model data includes multiview image information for generating a stereoscopic image, the data of only one of the multiview images is used, and both the first image generator 120 and the second image generator 130 generate general images.

However, when either the mobile terminal or the large display device supports stereoscopic images, the first image generator 120 uses the multiview 3D graphic model data to generate a high resolution stereoscopic image (the first image). Next, if the mobile terminal supports stereoscopic images, the second image generator 130 receives the stereoscopic image (the first image) generated by the first image generator 120 as a texture and generates a low resolution stereoscopic image (the second image). If the mobile terminal supports only general images, the second image generator 130 receives either the left-eye or the right-eye portion of the first image as a texture and generates a general low resolution image.

The input 3D graphic model may include 3D graphic model information for generating an image of a single view or 3D graphic model information corresponding to left and right eyes for generating a stereoscopic image.

In this case, when the input 3D graphic model information is single view image information for generating a general image, the function changing unit 110 may change the rendering function among the graphic processing functions into a first rendering function and a second rendering function. The first image generator 120 generates a first viewpoint image and a second viewpoint image in the first frame buffer 123 by applying a first rendering function and a second rendering function to the 3D graphic model.

The first rendering function contains the model-view matrix value, projection matrix value, and viewport value for generating the first viewpoint image of the 3D graphic model, and the second rendering function contains the model-view matrix value, projection matrix value, and viewport value for generating the second viewpoint image of the 3D graphic model. That is, the first rendering function performs rendering after changing the model-view matrix value, projection matrix value, and viewport value so as to generate the first viewpoint image, and the second rendering function performs rendering after changing those values so as to generate the second viewpoint image.

The model-view matrix is a matrix value that changes the viewpoint of the 3D graphic model, the projection matrix is a matrix value that maps the 3D graphic model into a virtual space, and the viewport is a value that converts the 3D graphic model into the on-screen coordinate system. The viewport may also be understood to include a clip area.

The first view image refers to any one of a left eye image and a right eye image, and the second view image refers to the other. The stereoscopic image is displayed by combining the first viewpoint image and the second viewpoint image.

In this case, how the projection matrix value and the viewport value are changed may be determined by the format of the final stereoscopic image. In general, the projection matrix value and the viewport value may be determined for the side-by-side format (the left half of the full area holds the left-eye or right-eye image and the right half holds the other) or the top-down format (the upper half of the full area holds the left-eye or right-eye image and the lower half holds the other), and they may also be set for other divisions of the area.

That is, the first rendering function changes the model-view matrix so that the 3D graphic model is translated/rotated to the left-eye viewpoint position before geometry processing, producing the left-eye image, and the second rendering function changes it so that the 3D graphic model is translated/rotated to the right-eye viewpoint position, producing the right-eye image. At this time, the rotation can be aligned with the focus position. The focus position can be set by specifying an arbitrary point, or set automatically by finding the nearest point using the depth information of the rendering device.

The first rendering function or the second rendering function may change the projection matrix value so that the 3D graphic model is mapped to the cube from the viewpoint changed by the translation/rotation of the 3D graphic model and/or the camera position setting. In this case, the cube to which the 3D graphic model is mapped is the virtual space in which the left-eye image or the right-eye image is rendered. In addition, the first rendering function or the second rendering function may change the viewport so that the 3D graphic model is converted into the on-screen coordinate system for the left-eye view or the right-eye view, respectively. Both the projection matrix value and the viewport value may be changed, or only one of the two.
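
As an illustration of a side-by-side layout (a sketch only; the per-eye matrices, the helper functions, and the 1280 × 720 target size are assumptions, not values taken from the patent), the same model can be rendered twice with per-eye viewports covering the left and right halves of the first frame buffer:

```c
#include <GLES2/gl2.h>

/* Hypothetical helpers assumed to exist in the application. */
extern void set_modelview_projection(const float *mv, const float *proj); /* loads matrices into the shader */
extern void draw_3d_scene(void);

/* Sketch: side-by-side stereo rendering into a 1280x720 first frame buffer.
 * mvLeft/projLeft and mvRight/projRight correspond to the first and second
 * rendering functions (left-eye and right-eye viewpoints). */
void render_side_by_side(const float *mvLeft,  const float *projLeft,
                         const float *mvRight, const float *projRight)
{
    /* First rendering function: left-eye viewpoint, left half of the area. */
    glViewport(0, 0, 640, 720);
    set_modelview_projection(mvLeft, projLeft);
    draw_3d_scene();

    /* Second rendering function: right-eye viewpoint, right half of the area. */
    glViewport(640, 0, 640, 720);
    set_modelview_projection(mvRight, projRight);
    draw_3d_scene();
}
```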

FIG. 4 is a flowchart of a graphic image processing method of converting a low resolution graphic image into a high resolution graphic image in real time according to an exemplary embodiment. FIG. 5 is a flowchart of a detailed procedure of generating a first image according to the embodiment of FIG. 4. FIG. 6 is a flowchart of a detailed procedure of a second image generating step according to the embodiment of FIG. 4.

Hereinafter, a graphic image processing method for converting a low resolution graphic image into a high resolution graphic image in real time will be described with reference to FIGS. 4 to 6.

In the graphic image processing method according to the exemplary embodiment, first, a 3D graphic model is rendered to generate a first image in a first frame buffer (step S100).

Referring to FIG. 5, the step of generating the first image (step S100) is described in more detail. In the first image generating step (step S100), 3D graphic modeling data is input (step S110). The 3D graphic modeling data is information about a 3D object and may include geometry, viewpoint, texture mapping, lighting, and shading information. The 3D graphic modeling data input to the graphic image processing apparatus 100 may include data for a single-view image for generating a general image, or data for a multiview image of the left eye and the right eye for generating a stereoscopic image.

If the 3D graphic model includes 3D graphic model information for generating an image of a single view, the method may further include changing the rendering function among the graphic processing functions into a first rendering function and a second rendering function. When the rendering function has been changed into the first rendering function and the second rendering function in this way, the subsequent steps of the first image generating step (step S100) generate the first view image and the second view image in the first frame buffer by applying the first rendering function and the second rendering function to the 3D graphic model.

Next, one or more of the target buffer setting of the render target buffer specifying function and the viewport area setting of the viewport function among the graphic processing functions are changed to values corresponding to the first frame buffer (step S120). That is, in order to generate a high resolution image suited to the external display device in the first frame buffer 123, the function changing unit 110 changes the setting of the render target buffer designation function among the graphic processing functions to the first frame buffer 123, in which the high resolution image to be output to the external large display device is to be generated, and changes the viewport area setting of the viewport function to a viewport area suited to the high resolution external display device.

Next, geometry processing is performed on the 3D graphic model (step S130). The geometry processor 121 performing the geometry process performs geometry transformation on the 3D graphic model. The geometry processor 121 converts the 3D graphic model into vector graphics through geometry transformation. Vector graphics are a way of mathematically expressing the position, length, and direction of a line. That is, the geometry processor 121 may perform geometry transformation by using a model view transformation function, an illumination calculation function, a projection transformation function, a clip test function, and a viewport transformation function that perform each process. In particular, the modelview transform function and the projection transform function may be represented by a 4 × 4 matrix that transforms the coordinates of the 3D graphic model. The geometry processor 121 may use a geometry engine that speeds up geometry transformation by hardware.

Then, rasterization is performed on the 3D graphic model converted into vector graphics by the geometry transformation in the geometry processing step (step S140). Rasterization is the process of converting vector graphics into a pixel-pattern image. That is, the raster processor 122 maps the polygons of the 3D graphic model, which do not yet exist as pixels, onto the corresponding pixels of the screen.

A polygon is the smallest unit representing the 3D graphic model. The raster processor 122 uses a complex polygonal model to output a high quality image suitable for the relatively high resolution external display device.

Finally, a first image is generated in the first frame buffer 123 (step S150). That is, the raster processor 122 performs rasterization on the 3D graphic model on which the geometry transformation has been performed by the geometry processor 121 and transfers the result to the first frame buffer 123, so that the first image is generated in real time in the first frame buffer 123. The raster processor 122 may use a raster engine that speeds up rasterization in hardware.

Next, in the graphic image processing method, the first image generated in the first frame buffer in the first image generating step (step S100) is received as a texture and rendered, and a second image having a lower resolution than the first image is generated in the second frame buffer (step S200).

More specifically, referring to the second image generating step (step S200), first, at least one of the target buffer setting of the render target buffer specifying function and the viewport area setting of the viewport function among the graphic processing functions is changed to a value corresponding to the second frame buffer (step S210). That is, in order to generate a low resolution image suited to the mobile terminal in the second frame buffer 133, the function changing unit 110 changes the setting of the render target buffer designation function among the graphic processing functions to the second frame buffer, and changes the viewport area setting of the viewport function to the viewport area appropriate for the mobile terminal.

Next, the first image generated in the first frame buffer is received as a texture (step S220). The second image generator 130 receives a high resolution image generated by the first image generator 120 as a texture in order to generate an image to be output to a relatively low resolution mobile terminal.

Then, raster processing is performed on the simple polygon (step S230). That is, since the raster processor 132 generates a relatively low resolution image, it does not need to process complex polygons and can generate the low resolution image easily and quickly by mapping the input high resolution texture onto one or two polygons. However, the simple polygon is not necessarily limited to one or two polygons and may vary depending on the resolution supported by the mobile terminal.

Finally, the raster processor 132 generates a second image in the second frame buffer 133 (step S240). The raster processor 132 generates a second low resolution image in the second frame buffer 133 by performing rasterization by mapping a texture of the input high resolution first image to a simple polygon.
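
A minimal sketch of steps S220 through S240 follows (an illustration only, assuming an OpenGL ES 2.0 textured-quad shader program is already in use; the attribute and uniform locations are hypothetical):

```c
#include <GLES2/gl2.h>

/* Sketch: map the high resolution first image (highResTex) onto two triangles
 * covering the current render target, producing the low resolution second image.
 * posLoc, uvLoc, and samplerLoc belong to an assumed textured-quad shader. */
void draw_downscaled_quad(GLuint highResTex, GLint posLoc, GLint uvLoc, GLint samplerLoc)
{
    static const GLfloat quad[] = {
        /*   x,     y,    u,    v  */
        -1.0f, -1.0f, 0.0f, 0.0f,
         1.0f, -1.0f, 1.0f, 0.0f,
        -1.0f,  1.0f, 0.0f, 1.0f,
         1.0f,  1.0f, 1.0f, 1.0f,
    };

    /* Step S220: receive the first image as an input texture. */
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, highResTex);
    glUniform1i(samplerLoc, 0);

    glVertexAttribPointer(posLoc, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), quad);
    glVertexAttribPointer(uvLoc, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), quad + 2);
    glEnableVertexAttribArray(posLoc);
    glEnableVertexAttribArray(uvLoc);

    /* Steps S230-S240: rasterize the simple polygon into the second frame buffer. */
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
```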

Finally, the graphic image processing method outputs the first image and the second image generated in the first frame buffer and the second frame buffer, respectively, to the corresponding display apparatus (step S300). The image output unit 140 outputs the first image and the second image generated in the first frame buffer 123 and the second frame buffer 133 to the corresponding display apparatus, respectively.

The image output unit 140 outputs the high resolution image generated in the first frame buffer 123 to the external large display device, and outputs the low resolution image generated in the second frame buffer 133 to the mobile terminal. In this case, the first frame buffer 123 may be designated as an output buffer of the external display device and output without copying.

It will be understood by those skilled in the art that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. It is therefore to be understood that the above-described embodiments are illustrative in all aspects and not restrictive. The scope of the present invention is defined by the appended claims rather than the foregoing detailed description, and all changes or modifications derived from the meaning and scope of the claims and their equivalents should be construed as falling within the scope of the present invention.

100: graphic image processing apparatus 110: function changing unit
120: first image generation unit 130: second image generation unit
121, 131: geometry processing unit 122, 132: raster processing unit
123: first frame buffer 133: second frame buffer
140: video output unit

Claims (11)

  1. delete
  2. A first image generator configured to render a 3D graphic model and generate a first image in a first frame buffer;
    A second image generator configured to receive and render a first image generated in the first frame buffer as a texture to generate a second image having a lower resolution than the first image in a second frame buffer; And
    A function changing unit that changes one or more of a target buffer setting of a render target buffer specifying function and a viewport area setting of a viewport function among graphic processing functions to a value corresponding to the first frame buffer or the second frame buffer; the graphic image processing apparatus converting a low resolution graphic image into a high resolution graphic image in real time.
  3. A first image generator configured to render a 3D graphic model and generate a first image in a first frame buffer;
    A second image generator configured to receive and render a first image generated in the first frame buffer as a texture to generate a second image having a lower resolution than the first image in a second frame buffer; And
    A graphic output unit configured to output the first image and the second image generated in the first frame buffer and the second frame buffer to the corresponding display device, respectively; the graphic image processing apparatus converting the low resolution graphic image into a high resolution graphic image in real time.
  4. A first image generator configured to render a 3D graphic model and generate a first image in a first frame buffer; And
    And a second image generator configured to receive the first image generated in the first frame buffer as a texture and render the first image, and to generate a second image having a lower resolution than the first image in the second frame buffer.
    wherein the 3D graphic model includes 3D graphic model information for generating a single view image or 3D graphic model information corresponding to left and right eyes for generating a stereoscopic image.
  5. The apparatus of claim 4,
    further comprising, if the 3D graphic model includes 3D graphic model information for generating an image of a single view, a function changing unit for changing the rendering function among the graphic processing functions into a first rendering function and a second rendering function; the graphic image processing apparatus converting a low resolution graphic image into a high resolution graphic image in real time.
  6. The apparatus of claim 5, wherein the first image generator
    applies the first rendering function and the second rendering function to the 3D graphic model to generate a first view image and a second view image in the first frame buffer, respectively; the graphic image processing apparatus converting a low resolution graphic image into a high resolution graphic image in real time.
  7. delete
  8. A first image generating step of changing one or more of the target buffer setting of the render target buffer designation function and the viewport area setting of the viewport function among the graphic processing functions to a value corresponding to a first frame buffer, and rendering a 3D graphic model to generate a first image in the first frame buffer; and
    A second image generating step of generating a second image having a lower resolution than the first image in a second frame buffer by receiving and rendering the first image generated in the first frame buffer as a texture; a graphic image processing method for converting a low resolution graphic image into a high resolution graphic image in real time.
  9. The method of claim 8, wherein the generating of the second image comprises:
    Changing at least one of a target buffer setting of a render target buffer specifying function and a viewport area setting of a viewport function among graphic processing functions to a value corresponding to the second frame buffer; the graphic image processing method converting a low resolution graphic image into a high resolution graphic image in real time.
  10. Generating a first image in a first frame buffer by rendering a 3D graphic model;
    A second image generation step of generating a second image having a lower resolution than the first image in the second frame buffer by receiving and rendering the first image generated in the first frame buffer as a texture; And
    An image output step of outputting the first image and the second image generated in the first frame buffer and the second frame buffer to the corresponding display device, respectively; a graphic image processing method for converting a low resolution graphic image into a high resolution graphic image in real time.
  11. Generating a first image in a first frame buffer by rendering a 3D graphic model; And
    And a second image generation step of generating a second image having a lower resolution than the first image in the second frame buffer by receiving and rendering the first image generated in the first frame buffer as a texture.
    The first image generating step,
    When the 3D graphic model includes 3D graphic model information for generating an image of a single view, the rendering function among the graphic processing functions is changed into a first rendering function and a second rendering function, and a first view image and a second view image are generated in the first frame buffer, respectively, by applying the first rendering function and the second rendering function to the 3D graphic model; a graphic image processing method for converting a low resolution graphic image into a high resolution graphic image in real time.
KR1020110057554A 2011-06-14 2011-06-14 Graphic image processing apparatus and method for realtime transforming low resolution image into high resolution image KR101227155B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020110057554A KR101227155B1 (en) 2011-06-14 2011-06-14 Graphic image processing apparatus and method for realtime transforming low resolution image into high resolution image

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020110057554A KR101227155B1 (en) 2011-06-14 2011-06-14 Graphic image processing apparatus and method for realtime transforming low resolution image into high resolution image
PCT/KR2011/006818 WO2012173304A1 (en) 2011-06-14 2011-09-15 Graphical image processing device and method for converting a low-resolution graphical image into a high-resolution graphical image in real time

Publications (2)

Publication Number Publication Date
KR20120138185A KR20120138185A (en) 2012-12-24
KR101227155B1 true KR101227155B1 (en) 2013-01-29

Family

ID=47357276

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020110057554A KR101227155B1 (en) 2011-06-14 2011-06-14 Graphic image processing apparatus and method for realtime transforming low resolution image into high resolution image

Country Status (2)

Country Link
KR (1) KR101227155B1 (en)
WO (1) WO2012173304A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9898853B2 (en) 2014-09-01 2018-02-20 Samsung Electronics Co., Ltd. Rendering apparatus and method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004086643A (en) 2002-08-28 2004-03-18 Nippon Hoso Kyokai <Nhk> Data acquiring device for computer graphics and data acquiring method and data acquiring program for computer graphics
KR20050049623A (en) * 2003-11-22 2005-05-27 한국전자통신연구원 Apparatus and method for display of stereo-scopic video
JP2009163588A (en) 2008-01-09 2009-07-23 Hitachi Ltd Image signal processor, image resolution enhancement method, and image display device
JP2009169727A (en) * 2008-01-17 2009-07-30 Ricoh Co Ltd Image processing apparatus, image processing method and program

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997006512A2 (en) * 1995-08-04 1997-02-20 Microsoft Corporation Method and system for rendering graphical objects to image chunks and combining image layers into a display image
US6891535B2 (en) * 2001-03-16 2005-05-10 Mitsubishi Electric Research Labs, Inc. System and method for modeling graphics objects
CA2480081C (en) * 2002-03-22 2007-06-19 Michael F. Deering Scalable high performance 3d graphics
KR100571832B1 (en) * 2004-02-18 2006-04-17 삼성전자주식회사 Method and apparatus for integrated modeling of 3D object considering its physical features

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004086643A (en) 2002-08-28 2004-03-18 Nippon Hoso Kyokai <Nhk> Data acquiring device for computer graphics and data acquiring method and data acquiring program for computer graphics
KR20050049623A (en) * 2003-11-22 2005-05-27 한국전자통신연구원 Apparatus and method for display of stereo-scopic video
JP2009163588A (en) 2008-01-09 2009-07-23 Hitachi Ltd Image signal processor, image resolution enhancement method, and image display device
JP2009169727A (en) * 2008-01-17 2009-07-30 Ricoh Co Ltd Image processing apparatus, image processing method and program

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9898853B2 (en) 2014-09-01 2018-02-20 Samsung Electronics Co., Ltd. Rendering apparatus and method

Also Published As

Publication number Publication date
KR20120138185A (en) 2012-12-24
WO2012173304A1 (en) 2012-12-20

Similar Documents

Publication Publication Date Title
US9202303B2 (en) System and method for compositing path color in path rendering
EP2780891B1 (en) Tessellation in tile-based rendering
JP2008176788A (en) Three-dimensional graphics accelerator and its pixel distributing method
KR20100004119A (en) Post-render graphics overlays
US7307628B1 (en) Diamond culling of small primitives
US9747718B2 (en) System, method, and computer program product for performing object-space shading
US9754407B2 (en) System, method, and computer program product for shading using a dynamic object-space grid
US7292242B1 (en) Clipping with addition of vertices to existing primitives
JP6392370B2 (en) An efficient re-rendering method for objects to change the viewport under various rendering and rasterization parameters
US9129443B2 (en) Cache-efficient processor and method of rendering indirect illumination using interleaving and sub-image blur
JP6293921B2 (en) Changes in effective resolution due to screen position by changing rasterization parameters
US9607428B2 (en) Variable resolution virtual reality display system
US6664971B1 (en) Method, system, and computer program product for anisotropic filtering and applications thereof
TWI629663B (en) Computer graphics system and a graphics processing method
JPH10302079A (en) Solid texture mapping processor and three-dimensional image generating device using the processor
US8624894B2 (en) Apparatus and method of early pixel discarding in graphic processing unit
CN105404393A (en) Low-latency virtual reality display system
US7528831B2 (en) Generation of texture maps for use in 3D computer graphics
US20040085310A1 (en) System and method of extracting 3-D data generated for 2-D display applications for use in 3-D volumetric displays
US9836816B2 (en) Varying effective resolution by screen location in graphics processing by approximating projection of vertices onto curved viewport
KR101556835B1 (en) Stereoscopic conversion for shader based graphics content
US20090046098A1 (en) Primitive binning method for tile-based rendering
US10089790B2 (en) Predictive virtual reality display system with post rendering correction
JP6389274B2 (en) Adjusting gradients for texture mapping to non-orthogonal grids
CN101281656B (en) Method and apparatus for mapping texture onto 3-dimensional object model

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant
FPAY Annual fee payment

Payment date: 20160122

Year of fee payment: 4

FPAY Annual fee payment

Payment date: 20170123

Year of fee payment: 5

FPAY Annual fee payment

Payment date: 20180122

Year of fee payment: 6

LAPS Lapse due to unpaid annual fee