KR101661931B1 - Method and Apparatus For Rendering 3D Graphics - Google Patents


Publication number
KR101661931B1
Authority
KR
South Korea
Prior art keywords
data
area
next frame
frame
rendering
Prior art date
Application number
KR1020100013432A
Other languages
Korean (ko)
Other versions
KR20110093404A (en)
Inventor
장경자
정석윤
Original Assignee
삼성전자주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 삼성전자주식회사
Priority to KR1020100013432A
Priority to US12/860,479 (US8970580B2)
Publication of KR20110093404A
Application granted
Publication of KR101661931B1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Computing Systems (AREA)
  • Image Generation (AREA)
  • Software Systems (AREA)

Abstract

A 3D graphics rendering device and method therefor are provided.
The rendering unit may generate the rendering data of the current frame using the rendering data of the previous frame.
The update preparation unit may predict the screen area to be updated in the next frame by using the object information of the current frame and the rendering data of the current frame, or the object information of the next frame, and may extract the data to be rendered of the predicted area from the current frame or the next frame.
The rendering unit may render the extracted data to generate an area to be updated in the next frame.


Description

TECHNICAL FIELD

The present invention relates to a three-dimensional graphics rendering apparatus and method, and more particularly to a three-dimensional graphics rendering apparatus and method that improve rendering efficiency by predicting and rendering the area to be updated in the next frame.

Recently, devices that display three-dimensional graphics data on a screen have attracted attention. For example, the market for devices running a user interface (UI) application on a mobile device, an e-book application, or an application for simulating products in an Internet shopping mall is growing.

The applications described above require fast rendering. At the same time, most scenes displayed by such applications change only partially. For example, when a plurality of icons are arranged in a matrix on a mobile device, the user may move only one row or one column, while the icons in the remaining rows or columns do not change.

However, existing 3D rendering technology renders all the three-dimensional graphics data of a changed scene whenever the scene changes. Thus, in the case of the mobile device described above, when the user shifts the icons in one row to the left or right, the mobile device also re-renders the icons in the other rows that did not move. In other words, since all the three-dimensional graphics data are rendered every time the scene changes, the conventional method performs redundant rendering operations and requires a great deal of time and memory space.

Provided is a rendering unit for generating rendering data of a current frame using rendering data of a previous frame, and an update preparation unit for predicting the screen area to be updated in the next frame by using the object information of the objects constituting the current frame and the rendering data of the current frame, or the object information of the next frame, and for extracting the data to be rendered of the predicted area from the current frame or the next frame, wherein the rendering unit renders the extracted data to generate the area to be updated in the next frame.

The object information includes the object ID, the type of each object, data of each object, or change information of each object data.

The update preparation unit may include an update prediction unit which, if the object currently being processed is a dynamic change object whose coordinates, position, or rotation change in the next frame, predicts the area to be updated corresponding to that object from the rendering data of the current frame; and a data preparation unit which extracts and prepares the data to be rendered of the area to be updated from the current frame or the next frame.

If the object currently being processed exists in the current frame but not in the next frame, the update prediction unit removes that object from the objects belonging to the area to be updated, and the data preparation unit extracts the data to be rendered of the area to be updated, from which the disappearing object has been removed, from the current frame or the next frame.

The rendering unit rasterizes the data to be rendered extracted from the data preparation unit.

If the object currently being processed is a dynamic change object whose geometry data changes in the next frame, the update prediction unit calculates the region where the object will be located in the next frame using the change information of the object, and the data preparation unit extracts the data to be rendered of the calculated region from the current frame or the next frame.

The rendering unit performs geometry processing and rasterization using the data to be rendered extracted by the data preparation unit and the change information of the object currently being processed.

The change information of the object currently being processed is either transformation information or animation path information indicating the change between the current frame and the next frame.

The apparatus may further include a storage unit for storing the geometry-processed data.

The apparatus may further include an area distributor for tile-binning the geometry-processed data, classifying it by region, and outputting the geometry-processed data belonging to each classified region to the storage unit.

The update preparation unit may include an update prediction unit which, if the object currently being processed is a static change object whose color, texture, or lighting changes in the next frame, predicts the area to be updated corresponding to that object from the rendering data of the current frame; and a data preparation unit which extracts and prepares the data to be rendered of the area to be updated from the current frame or the next frame.

The update predicting unit searches the rendering data of the current frame for an area to which the static change object belongs and predicts the retrieved area as the area to be updated.

The rendering unit performs only lighting and rasterization, or only rasterization, on the data to be rendered extracted by the data preparation unit.

The data to be rendered may be processed by either an object-based rendering method, which renders on an object-by-object basis, or a region-based (tile-based) rendering method, which classifies the geometry-processed results of objects by region and renders on a region-by-region basis.

The update preparation unit may include an update prediction unit which treats an object that is not present in the current frame but is newly generated in the next frame as a dynamic change object and predicts the area to be updated in the next frame using the change information of the object of the next frame; and a data preparation unit which extracts and prepares the data to be rendered of the predicted area from the next frame.

The update prediction unit determines an object newly generated in the next frame to be a generation object and calculates the region where the generation object will be located in the next frame using the change information of the object of the next frame, and the data preparation unit extracts the data to be rendered of the calculated region from the next frame.

The rendering unit performs geometry processing and rasterization processing using the data to be rendered extracted from the data preparation unit and the change information of the object to be processed.

For the area corresponding to a fixed object, the rendering data of the previous frame is reused.

The update preparation unit compares the object to be processed in the next frame with the object processed in the current frame by using the object information and the scene information, and classifies the type of the object to be processed.

Provided is a method including: receiving a current frame and object information, which is information of the objects constituting the current frame; generating rendering data of the received current frame using rendering data of a previous frame; predicting the screen area to be updated in the next frame using the object information of the current frame and the rendering data of the current frame, or the object information of the next frame; extracting the data to be rendered of the area to be updated from the current frame or the next frame; and rendering the extracted data to generate the area to be updated of the next frame.

If the object currently being processed among the objects of the current frame is a dynamic change object whose coordinates, position, or rotation change, the predicting step may predict the area to be updated corresponding to that object from the rendering data of the current frame.

If the object currently being processed exists in the current frame but not in the next frame, the predicting step removes the object's data from the object data belonging to the area to be updated, and the extracting step extracts the data to be rendered of the area to be updated, from which the object has been removed, from the current frame or the next frame.

The rendering step rasterizes the extracted data to be rendered.

If the object currently being processed is a dynamic change object whose geometry data changes in the next frame, the predicting step calculates the region where the object will be located in the next frame using the change information of the object, and the extracting step extracts the data to be rendered of the calculated region from the current frame or the next frame.

The rendering step performs geometry processing and rasterization using the extracted data to be rendered and the change information of the object currently being processed.

If the object currently being processed in the current frame is a static change object whose color, texture, or lighting changes, the predicting step searches the rendering data of the current frame for the region to which the static change object belongs and predicts it as the area to be updated, and the extracting step extracts the data to be rendered of the area to be updated from the current frame or the next frame.

The rendering step may perform only lighting and rasterization, or only rasterization, on the extracted data to be rendered.

The predicting step determines that an object not present in the current frame but newly generated in the next frame is a dynamic change object and predicts the area to be updated in the next frame using the change information of the object of the next frame.

The predicting step determines an object newly generated in the next frame to be a generation object and calculates the region where that object will be located in the next frame using the change information of the object of the next frame, and the extracting step extracts the data to be rendered of the calculated region from the next frame.

The rendering step performs geometry processing and rasterization processing using the extracted data to be rendered and change information of the object to be processed.

For the area corresponding to a fixed object, the rendering data of the previous frame is reused.

The method may further include classifying the object to be processed by comparing the object to be processed in the next frame with the object processed in the current frame, using the object information and the scene information.

According to the proposed embodiment, the entire buffer is cleared only when the first frame is input, instead of clearing all buffers every time a new frame is input, which reduces the time required for buffer clearing.

In addition, according to the present invention, the buffer area to be cleared can be minimized by storing the buffered data by area.

According to the proposed embodiment, the area to be updated in the next frame is predicted, only the buffers of that area are cleared, and only the object data of the area to be updated is rendered, minimizing the time and computation required for buffer clearing and rendering and enabling rendering in a low-power environment.

According to the proposed embodiment, only the area to be updated is rendered, and the remaining area is displayed on the screen using the previous rendering result, so that the rendering area can be minimized and thus the rendering processing speed can be improved.

FIG. 1 is a block diagram illustrating a 3D graphics rendering apparatus according to an embodiment of the present invention.
FIG. 2 is a diagram explaining an embodiment of predicting an area to be updated in an area where a static change object exists.
FIG. 3 is a diagram explaining another embodiment of predicting an area to be updated in an area where a static change object exists.
FIG. 4 is a diagram explaining an embodiment of predicting an area to be updated in a dynamic change area containing a type B dynamic change object.
FIG. 5 is a diagram explaining an embodiment of predicting an area to be updated in a dynamic change area containing a type C dynamic change object.
FIG. 6 is a flowchart briefly explaining a rendering method according to the proposed embodiment.
FIG. 7 is a flowchart illustrating the process of rendering the first frame in the rendering method according to the present invention.
FIG. 8 is a flowchart illustrating the process of predicting an area to be updated using the current frame in the rendering method according to the present invention.
FIG. 9 is a flowchart illustrating the process of predicting an area to be updated using the current frame in a rendering method according to another embodiment of the present invention.
FIGS. 10 and 11 are flowcharts explaining the process of preparing data to be rendered in a dynamic change area according to the proposed embodiment.
FIG. 12 is a flowchart illustrating the process of preparing data to be rendered in a static change area according to an embodiment of the present invention.
FIG. 13 is a flowchart illustrating a rendering method for each area to be updated according to the proposed embodiment.
FIG. 14 is a flowchart explaining a method of classifying object types.

Hereinafter, the present invention will be described in detail with reference to the accompanying drawings.

1 is a block diagram illustrating a 3D graphics rendering apparatus according to an embodiment of the present invention.

Referring to FIG. 1, the rendering apparatus 100 may include an application unit 110, an update preparation unit 120, a rendering unit 130, an area distributor 140, and a memory 150.

First, in FIG. 1, a finely dotted line (for example, the dotted line between the object information generation unit and the object storage unit) represents the data flow of the first frame, the currently processed frame, or the next frame.

The dotted line (for example, the line between the update prediction unit and the object storage unit) shows the flow of the input data of the second frame, that is, the next frame to be processed. A dash-dotted line (for example, the line between the geometry processing unit and the first geometry storage unit) shows the flow of data input to the rendering unit for the second frame or the next frame to be processed.

A solid line (for example, between the geometry processing unit and the area distributor) shows the flow of output data from the rendering unit. A coarsely dotted line (for example, the line between the first color storage unit and the second color storage unit) shows the movement of data corresponding to the update area for updating.

The application unit 110 generates and outputs three-dimensional graphics data, and may output it, for example, in frame units. The object information generation unit 111 may generate object information, which is information on the objects constituting each frame. An object may be a shape displayed on the screen, such as a cylinder, a cube, or a sphere, or a polygon created using a modeling tool. The generated object information may be stored in the object storage unit 151 of the memory 150 on a frame-by-frame basis.

The object information may include at least one of the object ID, the object data, the type of the object, the change information of the object (geometry data change or color data change), and the data needed for color calculation. The object data includes, for example, the vertex coordinates of the objects constituting a frame and their normals, texture coordinates, colors, or textures. The object types include dynamic change objects, static change objects, and fixed objects.
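As an illustration only, the following C++ sketch groups the fields listed above into one record; the type and field names are assumptions made for this example, not terms from the patent.

    #include <vector>

    // Hypothetical per-vertex record (object data).
    struct Vertex {
        float position[3];   // vertex coordinates
        float normal[3];     // normal vector
        float texcoord[2];   // texture coordinates
        float color[4];      // per-vertex color
    };

    enum class ObjectType { Fixed, StaticChange, DynamicChange };

    // Hypothetical object information record covering the fields named above.
    struct ObjectInfo {
        int id;                          // object ID
        ObjectType type;                 // fixed / static change / dynamic change
        std::vector<Vertex> vertices;    // object data (geometry)
        float transform[16];             // change information: transformation matrix for the next frame
        bool colorChanges;               // change information: color data change flag
        int textureId;                   // data needed for color calculation
        float lightParams[4];            // data needed for color calculation
    };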

The scene information may include the number of frames, view change information, a scene graph, or a shader program.

A dynamic change object is an object to which transformation information, or an animation with a path, is applied, and may be an object whose coordinates, position, or rotation change in the next frame. An object whose color value also changes is likewise included among the dynamic change objects. A static change object may be an object whose coordinates, position, or rotation do not change but whose texture, color, or lighting changes in the next frame. A fixed object may be an object that remains unchanged in the next frame.

A generation object may be an object that is not present in the current frame but is added in the next frame. An extinction object may be an object that is in the current frame but disappears in the next frame. A generation object may become a fixed object, a dynamic change object, or a static change object in the next frame. Conversely, a fixed object, a dynamic change object, or a static change object may become an extinction object in the next frame.

The classification of object types may be performed in the application unit 110 or in a stage of the update preparation unit 120 preceding the update prediction unit 121. A method of classifying object types will be described in detail later with reference to FIG. 14.

The change information of an object may include the coordinate, position, or rotation information by which the object data changes in the next frame, and may be represented by transformation information (a matrix or vector) or an animation path. The animation path carries information about the movement path when the object is displayed at a different position in the next frame and can be expressed as (key, key value) pairs, where the key is a time and the key value is the value at that time (for example, a coordinate value). The change information may also include information for changing color values.
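As one possible reading of the (key, key value) representation, the sketch below linearly interpolates an animation path to obtain the coordinate value at a given time; the linear interpolation and all names are assumptions for illustration, not part of the patent.

    #include <array>
    #include <cstddef>
    #include <vector>

    struct PathKey {
        float key;                    // time
        std::array<float, 3> value;   // coordinate value at that time
    };

    // Evaluate the animation path at time t by linear interpolation between keys.
    std::array<float, 3> evaluatePath(const std::vector<PathKey>& path, float t) {
        if (path.empty()) return {0.0f, 0.0f, 0.0f};
        if (t <= path.front().key) return path.front().value;
        if (t >= path.back().key) return path.back().value;
        for (std::size_t i = 1; i < path.size(); ++i) {
            if (t <= path[i].key) {
                float a = (t - path[i - 1].key) / (path[i].key - path[i - 1].key);
                std::array<float, 3> out{};
                for (int k = 0; k < 3; ++k)
                    out[k] = (1.0f - a) * path[i - 1].value[k] + a * path[i].value[k];
                return out;
            }
        }
        return path.back().value;
    }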

The data necessary for color calculation may include texture information, lighting information, or variables or constants as data to change the color.

On the other hand, if the frame output from the application unit 110 is the first frame f1, the buffers in the memory 150 can be cleared by a controller (not shown).

The rendering unit 130 may generate rendering data by rendering application data, for example, 3D object data, forming the first frame f1. The rendering unit 130 may include a geometry processing unit 131 and a rasterizing unit 133 for rendering.

The geometry processing part may correspond to the geometry operations of a fixed pipeline or to the vertex shader of a programmable shader pipeline. The rasterization part may correspond to the rasterization stage of a fixed pipeline or to the fragment shader of a programmable shader pipeline. The 3D object data may be the vertex data of 3D triangles.

The geometry processing unit 131 can generate geometry object data by geometry-processing the first frame f1. The geometry processing unit 131 transforms, lights, and viewport-maps all the 3D objects of the first frame f1 to generate 2D geometry data, that is, 2D triangle data having depth values. Hereinafter, the geometry object data is referred to as geometry data.

The rasterizing unit 133 rasterizes the triangle vertex data, that is, the 2D triangle data with depth values input from the geometry processing unit 131, to calculate the depth value and the color value of each pixel of the first frame f1. The depth value of each pixel is determined by comparing the depths of the fragments; this depth comparison may be computed in the geometry processing unit or in the rasterizing unit. The rendered pixels are displayed on the display panel 160.
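As a simple illustration of the per-pixel depth comparison mentioned above, the sketch below keeps a fragment only if it is closer than what the depth buffer already holds; the buffer layout (RGBA, 4 floats per pixel) and the smaller-is-closer convention are assumptions for this example.

    #include <vector>

    // Depth-test one fragment at pixel (x, y); update depth and color buffers only if it is closer.
    void depthTestFragment(int x, int y, float fragDepth, const float fragColor[4],
                           std::vector<float>& depthBuffer, std::vector<float>& colorBuffer,
                           int width) {
        int idx = y * width + x;
        if (fragDepth < depthBuffer[idx]) {      // smaller depth assumed to be closer to the viewer
            depthBuffer[idx] = fragDepth;
            for (int c = 0; c < 4; ++c)          // RGBA color buffer, 4 floats per pixel (assumption)
                colorBuffer[idx * 4 + c] = fragColor[c];
        }
    }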

The area distributor 140 may tile-bin the geometry data generated by the geometry processing unit 131 and divide it into regions. The area distributor 140 may intersect the geometry data with the screen areas divided in tile units and classify the geometry data belonging to each area. Each classified screen area (hereinafter, 'area') and the geometry data located in it may be stored in the first geometry storage unit 152. The geometry data is divided and stored by area in order to minimize the area to be cleared when the next frame is rendered.
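A minimal sketch of the kind of tile binning the area distributor performs: each 2D triangle is tested against the tile-aligned screen regions it overlaps (here by bounding box) and recorded per region. The data layout and names are assumptions, not the patent's.

    #include <algorithm>
    #include <array>
    #include <vector>

    struct Tile {
        int x0, y0, x1, y1;            // screen-space bounds of this region
        std::vector<int> triangleIds;  // geometry data classified into this region
    };

    // Bin each 2D triangle (x0,y0,x1,y1,x2,y2) into every tile its bounding box overlaps.
    void binTriangles(const std::vector<std::array<float, 6>>& tris, std::vector<Tile>& tiles) {
        for (int t = 0; t < static_cast<int>(tris.size()); ++t) {
            const auto& v = tris[t];
            float minX = std::min({v[0], v[2], v[4]}), maxX = std::max({v[0], v[2], v[4]});
            float minY = std::min({v[1], v[3], v[5]}), maxY = std::max({v[1], v[3], v[5]});
            for (Tile& tile : tiles) {
                bool overlaps = !(maxX < tile.x0 || minX > tile.x1 ||
                                  maxY < tile.y0 || minY > tile.y1);
                if (overlaps) tile.triangleIds.push_back(t);
            }
        }
    }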

The memory 150 may store data generated in the rendering apparatus 100. Each of the storage units 151 to 158 of the memory 150 may be physically separate or located in one storage medium. The memory 150 can be logically divided as shown in FIG. 1.

The object storage unit 151 may store object information and scene information of each frame input from the application unit 110.

The first geometry storage unit 152 may store the geometry data of the current frame f1 distributed by the area distributor 140 for each area. The first geometry storage unit 152 may be a geometry buffer and may store the geometry data of the first frame f1 or the current frame. The information stored in the first geometry storage unit 152 may include the ID, coordinates, or size of each area, the 2D triangle data with depth values located in each area, the object data needed for lighting and rasterization, the IDs of the objects located in each area, the object types, and the change information of the objects for the next frame (transformation information, animation path information, or color data change information).

The first depth storage unit 153 may store the depth value of each pixel of the first frame f1 generated by the geometry processing unit or the rasterizing unit 133, and the first color storage unit 154 may store the color value of each pixel of the first frame f1. The first depth storage unit 153 may include a depth buffer for storing depth values, and the first color storage unit 154 a color buffer for storing color values. The first depth storage unit 153 and the first color storage unit 154 may store the depth and color values of the first frame f1, or of the current frame, used for predicting the area to be updated.

The update storage unit 155 may store information related to the area to be updated predicted by the update preparation unit 120, and the data to be rendered of that area. The information related to the area to be updated may include the ID, size, or coordinates of the area, the change of the area, the IDs of the objects belonging to the area to be updated, their object data, their change information, or the data needed for color calculation. The information of the area stored in the update storage unit 155 (in particular, the object data belonging to the area) can be used by the rendering unit 130 as the rendering data of the area to be updated. For example, the rendering data of the area to be updated may be any of the data of the first frame f1 or the current frame stored in the first geometry storage unit 152, the first depth storage unit 153, or the first color storage unit 154.

The second geometry storage unit 156 may store the geometry data of the second frame f2 or the next frame. The object data of the areas of the second frame f2 that have changed relative to the first frame f1 is newly processed by the geometry processing unit 131, while the areas in which no change has occurred reuse the geometry data of the first frame f1.

The second depth storage unit 157 may store the rasterization result (depth values) for the object data of the area to be updated, and the second color storage unit 158 may store the rasterization result (color values).

In addition, a stencil value storage unit (not shown) may hold a 1-byte value per pixel and may be used in raster operations together with the depth buffer storing the pixel depth values and the color buffer storing the color values.

The second frame can be displayed on the LCD panel by copying only the rendering result (e.g., the color values) of the update area onto the rendering result of the first frame.
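The copy described above could look roughly like the sketch below: only the pixels inside the predicted update regions are overwritten, while every other pixel keeps the first frame's result. Region and buffer layouts are assumptions for illustration.

    #include <cstdint>
    #include <vector>

    struct Region { int x0, y0, x1, y1; };   // inclusive screen-space bounds of an update area

    // Copy only the update regions' color values over the previously displayed frame.
    void copyUpdatedRegions(const std::vector<Region>& updated,
                            const std::vector<std::uint32_t>& newColors,
                            std::vector<std::uint32_t>& displayed, int width) {
        for (const Region& r : updated)
            for (int y = r.y0; y <= r.y1; ++y)
                for (int x = r.x0; x <= r.x1; ++x)
                    displayed[y * width + x] = newColors[y * width + x];
    }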

When the third frame f3 is input, the first geometry storage unit 152 may be updated with the geometry data of the second frame f2 stored in the second geometry storage unit 156, the first depth storage unit 153 may be updated with the depth values of the second frame f2 stored in the second depth storage unit 157, and the first color storage unit 154 may be updated with the color values of the second frame f2 stored in the second color storage unit 158.

Hereinafter, a process of predicting and rendering an area to be updated in the next frame using the current frame when a next frame is input will be described. The second frame f2 is used as the next frame, and the first frame f1 is used as the current frame.

The object storage unit 151 stores the object information of the objects constituting the current frame f1, and the first geometry storage unit 152, the first depth storage unit 153, and the first color storage unit 154 store the geometry data, depth values, and color values associated with the current frame f1.

The application unit 110 outputs the object information of the objects constituting the next frame f2, and the object storage unit 151 may store the object information of the next frame f2.

The update preparation unit 120 may predict the screen area (or tile) to be updated in the next frame f2 by using the object information of the current frame f1 and the rendering data of the current frame f1, or the object information of the next frame, and may extract the object data to be rendered of the predicted area from the current frame f1 or the next frame f2. To this end, the update preparation unit 120 may include an update prediction unit 121 and a data preparation unit 123.

The update prediction unit 121 can read the object information of the current frame f1 or the object information of the next frame stored in the object storage unit 151 and perform the prediction on a per-object basis. The update prediction unit 121 can predict the area to be updated according to the type of each object. The update prediction unit 121 may receive the ID, type, and data of the object to be processed, the data needed for color calculation, or the change information (including geometry data change information or color data change information) from the object storage unit 151, and the geometry data of the current frame f1 from the first geometry storage unit 152.

The geometry data of the input current frame f1 may include the ID, size, or coordinates of each area, the kind of area, the object data intersecting each area, the object IDs, the change information, or the data needed for color calculation.

The information of the area to be updated can be used as information for updating only the predicted area. The information of the area to be updated may include the ID, size or coordinates of the area, the type of area, object data intersecting with the area to be updated, object ID, change information, or data necessary for color calculation.

If the update prediction unit 121 determines that the object currently being processed among the objects of the current frame f1 is a fixed object, it determines that the area corresponding to that object is an area to be kept unchanged in the next frame f2 and moves on to predict the area to be updated for the next object.

In addition, since a fixed object does not change in the next frame f2, the geometry data, or the color and depth values, calculated in the current frame f1 can be reused when displaying on the display panel 160.

The types of object data to be updated in the area to be updated are classified into types A, B, and C in Table 2.

If it is determined that the object to be processed among the objects of the current frame f1 is a dynamic change object whose geometry data changes in the next frame f2, the update prediction unit 121 checks the change information of the object. If object change information is present, the update prediction unit 121 may determine that the object currently being processed is an object that will be displayed at another position in the next frame f2.

The update prediction unit 121 searches the geometry data of the current frame f1 for the region to which the object currently being processed (i.e., the dynamic change object) belongs, and searches the change information for the region where that object will be displayed in the next frame f2. The update prediction unit 121 may determine the found regions as areas to be updated and predict the determined areas from the geometry data of the current frame f1.

More specifically, if the change information of the object currently being processed is transformation information, the update prediction unit 121 can calculate the coordinates at which the object will be located in the next frame f2 using the transformation information. The update prediction unit 121 can calculate the intersecting area by projecting the calculated coordinates onto the current frame f1; the calculated intersecting area therefore becomes an area to be updated. The data to be rendered of the area to which the disappearing object belongs (for example, the area in which the dynamic change object is located in the current frame) corresponds to type B and is handled as described below.
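A rough sketch of that prediction step, assuming a column-major 4x4 transformation matrix and a square tile grid (both assumptions for this example): the object's vertices are transformed to their next-frame positions, projected to screen space, and the overlapped tiles are collected as the area to be updated.

    #include <algorithm>
    #include <array>
    #include <vector>

    // Return the IDs of the tiles the transformed object's bounding box overlaps.
    std::vector<int> predictUpdateTiles(const std::vector<std::array<float, 3>>& vertices,
                                        const float m[16],   // transformation to the next frame (column-major)
                                        int tileSize, int tilesPerRow) {
        if (vertices.empty()) return {};
        float minX = 1e30f, minY = 1e30f, maxX = -1e30f, maxY = -1e30f;
        for (const auto& v : vertices) {
            float x = m[0] * v[0] + m[4] * v[1] + m[8] * v[2] + m[12];
            float y = m[1] * v[0] + m[5] * v[1] + m[9] * v[2] + m[13];
            float w = m[3] * v[0] + m[7] * v[1] + m[11] * v[2] + m[15];
            if (w != 0.0f) { x /= w; y /= w; }   // assume the result is in screen space after the divide
            minX = std::min(minX, x); maxX = std::max(maxX, x);
            minY = std::min(minY, y); maxY = std::max(maxY, y);
        }
        std::vector<int> tiles;                  // the intersecting regions, i.e. the areas to be updated
        for (int ty = static_cast<int>(minY) / tileSize; ty <= static_cast<int>(maxY) / tileSize; ++ty)
            for (int tx = static_cast<int>(minX) / tileSize; tx <= static_cast<int>(maxX) / tileSize; ++tx)
                tiles.push_back(ty * tilesPerRow + tx);
        return tiles;
    }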

The data preparation unit 123 may extract the data to be rendered from the current frame f1 or the next frame f2 using the information of the intersecting area calculated by the update prediction unit 121. Since the information of the intersecting area concerns an object that exists in the current frame f1 but changes position, the data preparation unit 123 can calculate and extract the 3D object data by applying the transformation information to the coordinates of the geometry operation result of the current frame f1.

FIG. 5 is a diagram explaining an embodiment of predicting an area to be updated in an area where a type C dynamic change object exists. Referring to FIG. 5, the dynamic change object in the current frame f1 and the next frame f2 is a sphere, shown as E and F: the sphere E disappears in the next frame f2 and the sphere F appears. The change information of the object currently being processed is the information needed for the sphere E to move to the position of the sphere F. The disappearance of the sphere E has already been described using FIG. 4 as an example, so a detailed description is omitted.

Using the change information, the update prediction unit 121 determines that the sphere E of the current frame f1 will move to the regions 23, 24, 33, and 34 of the next frame f2, and may predict the regions 23, 24, 33, and 34 as the areas to be updated.

The data preparation unit 123 acquires the information of the predicted areas 23, 24, 33, and 34: the ID, size or coordinates, and type of each region, the object data intersecting the region, the color change information needed for color calculation, and the change information can be obtained from the rendered data of the current frame f1. The object data intersecting the areas may be the object data of the sphere E displayed in the current frame f1.

On the other hand, using the change information for the areas 23, 24, 33, and 34, the update prediction unit 121 can predict that the sphere F, although not present in the current frame f1, is added in the next frame f2. The data preparation unit 123 may extract the 3D object data located in the predicted areas 23, 24, 33, and 34 from the next frame f2 and prepare the data to be rendered. The prepared 3D object data may include the object's ID, type, change information, and the information needed for color calculation.

In the case of the above-mentioned type C dynamic change object, since the object data involves a position shift, a rotation change, or a coordinate change, and there may also be a color data change, the rendering unit 130 can transform, light, viewport-map, and rasterize the 3D object data (the data to be rendered).

On the other hand, when it is determined that the object currently being processed among the objects of the current frame f1 is a dynamic change object, the update prediction unit 121 determines that the area corresponding to that object is an area to be updated and can predict that area from the geometry data of the current frame f1.

More specifically, the update prediction unit 121 may receive the type of the object to be processed (for example, a dynamic change object) and the object data or change information from the object storage unit 151, and the geometry data of the current frame f1 from the first geometry storage unit 152. The object information (including vertex information) or object change information of the next frame may also be input.

In the next frame, if the vertex coordinate of the object is 0 or the object change information is 0, the object in the next frame may be a disappearing object.

If the vertex coordinates of the object in the next frame are not 0 and the object change information is not 0, the vertex coordinates at which the object will be located in the next frame f2 can be calculated using the transformation information of the object of the current frame. The update prediction unit 121 can calculate the intersecting area by projecting the calculated coordinates onto the current frame f1. If this area differs from the object's area in the current frame f1, the object currently being processed exists in the current frame f1 and will move in the next frame f2. In this case, the object data of the area where the object currently exists will disappear in the next frame, so the area to which the object belongs in the current frame may contain type B data.

When the vertex coordinates to be located in the next frame are compared with the vertex coordinates of the current frame, if the area obtained by projecting the coordinates is equal to, or includes, the object's area in the current frame f1, the area of the object to be processed may be a type C region and is therefore treated as a type C region.

Also, if the vertex coordinates to be located in the next frame are the same as the vertex coordinates in the current frame f1 and the projected area is the same, the object may be a static change object when the change information is color data change information. If there is neither a coordinate change nor a color change, the area is processed as a fixed object area.

Next, consider how to handle the areas of type B objects among the dynamic change objects. The geometry data of the current frame f1 may include the ID, size, or coordinates of each area, the kind of area, and the object IDs, object data, and transformation information (matrix) or animation path information of the objects intersecting each area. The data needed for color calculation may also include color change information, so both the case where the color data does not change and the case where it does change can be covered.

The update prediction unit 121 may search the geometry data of the current frame f1 for the area to which the object currently being processed belongs and predict the found area as the area to be updated. If the object currently being processed is a type B object, the update prediction unit 121 can change the object data set of the region by removing the data of the object currently being processed from the objects belonging to the area to be updated (which may include fixed objects or static or dynamic change objects).

The data preparation unit 123 may extract the data to be rendered of the area to be updated, from which the object being processed has been removed, from the current frame f1 or the next frame f2, using the information of the area to be updated predicted by the update prediction unit 121. The information of the area to be updated can be used as information for updating only the predicted area, and may include the ID, size, or coordinates of the area, the type of the area, the IDs of the objects intersecting the area to be updated, their object data, their change information, or the data needed for color calculation.

FIG. 4 is a diagram explaining an embodiment of predicting the area to be updated for a type B case in which an object disappears (for example, a dynamic change object disappears in the next frame). Referring to FIG. 4, the dynamic change object C in the current frame f1 is a sphere that disappears in the next frame f2.

If the object currently being processed in the current frame f1 is a dynamic change object, the update prediction unit 121 calculates the coordinate values at which the object will be located in the next frame f2 using the object transformation information. The update prediction unit 121 can calculate the intersecting area by projecting the calculated coordinate values onto the current frame f1. It determines that the object C exists in the current frame f1 and disappears in the next frame f2, and that the regions 31, 32, 41, and 42 to which the object C belongs are the areas to be updated. At this time, the update prediction unit 121 may change the object data set of the areas to be updated by removing the data of the object being processed from the object data belonging to the update areas 31, 32, 41, and 42.

The data preparation unit 123 extracts the data to be rendered from the geometry data of the current frame f1, using the information of the predicted areas 31, 32, 41, and 42, to prepare the data needed for rendering. The prepared data may include the ID, size or coordinates, and type of the areas to which the object belongs, the object data intersecting those areas, the color change information needed for color calculation, and the change information.

In the case of the object of type B, since there is no change in the geometry data of the object, the rendering unit 130 can rasterize the object data of each extracted area. Thus, geometry processing during the rendering process can be omitted.
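For illustration, the type B handling described above might reduce to something like the following: the disappearing object's ID is dropped from the region's object list, and only the raster step is re-run on the remaining, already geometry-processed data. The structure and function names here are placeholders, not the patent's.

    #include <algorithm>
    #include <vector>

    struct RegionData {
        int id;                      // region ID
        std::vector<int> objectIds;  // objects whose geometry intersects this region
    };

    // Remove the disappearing object from the region's object set (type B case).
    void prepareTypeB(RegionData& region, int disappearingObjectId) {
        region.objectIds.erase(
            std::remove(region.objectIds.begin(), region.objectIds.end(), disappearingObjectId),
            region.objectIds.end());
        // rasterizeRegion(region);  // placeholder: geometry processing is skipped, only rasterization runs
    }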

On the other hand, if it is determined that the object currently being processed among the objects of the current frame f1 is a static change object, the update prediction unit 121 determines that the area corresponding to that object is an area to be updated and can predict that area from the rendered data of the current frame f1.

More specifically, the type of the object (a static change object), the object ID, the object data, or the color change information may be input from the object storage unit 151 to the update prediction unit 121. In this case, since the object currently being processed is a static change object, the area to which it belongs may contain type B object data. The color change information of the object may be change information for directly changing the color value, for changing the texture image, or for changing the lighting component.

The update prediction unit 121 may search the geometry data of the current frame f1 for the area to which the object currently being processed (i.e., the static change object) belongs and predict the found area as the area to be updated.

The data preparation unit 123 may extract the data to be rendered of the predicted area to be updated from the current frame f1 or the next frame f2, using the information of the area to be updated.

FIG. 2 is a diagram explaining an embodiment of predicting an area to be updated in an area where a static change object exists. Referring to FIG. 2, the geometry data of the current frame f1 and the next frame f2 remains unchanged; however, the color value of the static change object changes between the current frame f1 and the next frame f2 because of lighting. If the object currently being processed in the current frame f1 is a static change object, the update prediction unit 121 determines that the regions 11, 12, 21, 22, 23, 30, 31, 32, 33, 40, 41, and 42 to which the object belongs are the areas to be updated.

The data preparation unit 123 may prepare the data to be rendered of the predicted areas 11, 12, 21, 22, 23, 30, 31, 32, 33, 40, 41, and 42 by extracting it from the geometry data of the current frame f1. The prepared data may include the ID, size or coordinates, and type of the areas to which the object belongs, the object data intersecting the areas, and the color change information.

In the case of a static change object, since there is no change in the geometric information of the object (no position shift, coordinate change, or rotation change), the rendering unit 130 can perform only lighting and rasterization on the extracted object data, so the transformation step of the rendering process can be omitted.

FIG. 3 is a diagram explaining another embodiment of predicting an area to be updated in an area where a static change object exists. Referring to FIG. 3, the color value of the static change object changes between the current frame f1 and the next frame f2 because of a texture or a material property. If the object currently being processed in the current frame f1 is a static change object, the update prediction unit 121 can predict that the regions 31, 32, 41, and 42 to which the object belongs are the areas to be updated.

The data preparation unit 123 can prepare the data to be rendered by extracting the data of the areas to be updated from the geometry data of the current frame f1, using the information of the predicted areas 31, 32, 41, and 42. The prepared data may include the ID, size or coordinates, and type of the areas to which the object belongs, the object data intersecting the areas, and the color change information.

In the case of the static change object, since there is no change in the geometry data of the object, the rendering unit 130 can rasterize the object data of each extracted region.

The rendering data of all the areas to be updated in the next frame f2, generated by the process described above, is stored in the second geometry storage unit 156, the second depth storage unit 157, and the second color storage unit 158. Part of the geometry data of the current frame f1 stored in the first geometry storage unit 152 may then be updated with the geometry data of the areas to be updated in the next frame f2 stored in the second geometry storage unit 156. Thus, the geometry data of the next frame f2 can be stored in the first geometry storage unit 152.

When rendering of all the areas to be updated is completed, the depth values of the current frame f1 stored in the first depth storage unit 153 and some of the color values stored in the first color storage unit 154 may be updated with the depth and color values of the areas to be updated in the next frame f2 stored in the second storage units. The depth values and color values of the next frame f2 can thereby be stored in the first depth storage unit 153 and the first color storage unit 154, respectively.

The data stored in the first geometry storage unit 152, the first depth storage unit 153, and the first color storage unit 154 can be reused for the rendering result of the next frame, and only the area to be updated needs to be rendered. Therefore, the time and the amount of computation required for rendering can be minimized.

Hereinafter, a rendering method of the 3D graphics rendering apparatus constructed as above will be described with reference to the drawings.

FIG. 6 is a flowchart briefly explaining a rendering method according to the proposed embodiment.

In step 610, a current frame and object information, which is information of the objects constituting the current frame, may be received.

In step 620, the rendering data of the received current frame may be generated using the rendering data of the previous frame.

In step 630, the screen area to be updated in the next frame may be predicted using the object information of the current frame and the rendering data of the current frame or the information of the object in the next frame.

In step 640, the data to be rendered of the predicted area may be extracted from the current frame or the next frame.

In operation 650, the extracted data may be rendered to generate the area to be updated in the next frame.

FIG. 7 is a flowchart illustrating the process of rendering the first frame in the rendering method according to the present invention. Hereinafter, the first frame is used as an example of the current frame and the second frame as an example of the next frame, but the application of the proposed embodiment is not limited thereto.

In step 710, when the object information of the current frame f1 is outputted from the application unit 110, the object storage unit 151 may store the object information of the current frame f1.

In step 720, all buffers (e.g., 152-158) of memory 150 may be cleared.

In step 730, the geometry processing unit 131 geometry-processes the 3D objects constituting the current frame f1 to generate geometry data, that is, 2D triangle data, and stores the generated geometry data in the first geometry storage unit 152. The generated geometry data may be divided into regions by the area distributor 140 and input to the first geometry storage unit 152.

In step 740, the rasterizing unit 133 rasterizes the 2D triangle data from step 730 to calculate the depth value and the color value of each pixel of the current frame f1, and the calculated depth and color values may be stored in the first depth storage unit 153 and the first color storage unit 154.

Meanwhile, according to the proposed embodiment, the rendering method can support both a region-based (tile-based) rendering method and an object-based rendering method.

FIG. 8 is a flowchart illustrating a process of predicting an area to be updated using a current frame among the rendering methods according to the present invention. Figure 8 relates to a method of rendering on a per-area basis.

The object information of each object of the current frame f1 stored in the object storage unit 151 and the geometry object data of the current frame f1 stored in the first geometry storage unit 152 may be input to the update prediction unit 121. The object information may include the object type, the object data, the color change information, or the geometry change information.

If it is determined in step 810 that the type of the object to be processed is not a fixed object, the update predicting unit 121 may check whether the object is a dynamic change object in step 820.

If the object is confirmed to be a dynamic change object, the update prediction unit 121 may predict the area to be updated corresponding to the dynamic change object in step 830.

If it is determined in step 840 that the object is a static change object, the update prediction unit 121 may predict an area to be updated corresponding to the static change object.

The information of the area to be updated predicted in steps 830 and 840 is information used to update only the predicted area, and includes the ID of the area, its size or coordinates, the type of the area, the object data intersecting the area to be updated, the change information, and the data needed for color calculation.

In step 850, the data preparation unit 123 may prepare the data to be rendered by merging the object data of the areas to be updated. Table 1 shows the types of objects merged in one area to be updated and an example of data to be prepared corresponding thereto.

[Table 1] Object types merged in one area to be updated, and the data to prepare for rendering:
1. Generation object (say, id N); a generated object may become a fixed object, a dynamic change object, or a static change object. Data to prepare: the added object data (the data with id N).
2. Fixed object (say, id N); for example, a fixed object created in an earlier frame. Data to prepare: none; the rendering results (geometry buffer, depth buffer, color buffer) corresponding to the fixed object (id N) are reused.
3. Static change object (say, id N); for example, the geometry operation result is the same as in the previous frame but the color has changed. Data to prepare: the geometry operation data corresponding to the static change object (id N) of the previous frame is used.
4. Dynamic change object (say, id N); for example, the geometry data changes in the next frame so that the object still covers its area in the previous frame or moves to another area. Data to prepare: the added dynamic change object data (the id N data).
5. Extinction object (an object, say id N, that exists in the previous frame but disappears in the next frame; it may have been a fixed object, a dynamic change object, or a static change object). Data to prepare: the object data (the id N data) is erased, and the data that was obscured by the erased object is rendered.

[Table 2] Types of object data to be rendered in a region, reuse of the previous frame's operation results, and the rendering computation required:
Type A: the geometry results, depth buffer, and color buffer can all be reused; no additional operations are required; corresponds to fixed object data.
Type B: only the geometry operation results can be reused; raster operations are required; corresponds to static change object data and extinction object data (the object data of the current-frame area of a dynamic change object).
Type C: only the geometry operation results can be reused, or computation from the geometry operation onward is necessary; raster operations, or geometry and raster operations, are required; corresponds to dynamic change object data (the object data belonging to the area where the object will be located in the next frame).
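As an orientation aid only, the sketch below maps the object types of Table 1 to the reuse and recomputation policy of Table 2; the enum and the commented-out stage calls are placeholders rather than the patent's actual interfaces.

    // Placeholder decision per object merged into one area to be updated.
    enum class MergedObjectType { Generation, Fixed, StaticChange, DynamicChange, Extinction };

    void prepareObjectForRegion(MergedObjectType t) {
        switch (t) {
        case MergedObjectType::Generation:     // add the new object's data: geometry + raster (type C)
        case MergedObjectType::DynamicChange:  // add the moved object's data: geometry + raster (type C)
            // addObjectData(); runGeometry(); runRaster();
            break;
        case MergedObjectType::Fixed:          // type A: reuse geometry, depth and color buffers as-is
            break;
        case MergedObjectType::StaticChange:   // type B: reuse geometry result, redo lighting/raster
            // reuseGeometryResult(); runLightingAndRaster();
            break;
        case MergedObjectType::Extinction:     // type B: erase the object's data, re-rasterize what it hid
            // removeObjectData(); runRaster();
            break;
        }
    }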

In step 860, if an area to be updated is predicted for all objects, the rendering unit 130 may render the data to be rendered of the area to be updated in step 870.

On the other hand, if prediction has not yet been performed for all objects in step 860, the process moves to step 880, where the next object is selected, and the prediction is repeated from step 810.

FIG. 9 is a flowchart illustrating a process of predicting an area to be updated using a current frame among the rendering methods according to another embodiment of the present invention. FIG. 9 shows a method of rendering in units of objects, and steps 800 to 840 in FIG. 8 and steps 900 to 940 in FIG. 9 are the same.

However, in operation 950, the rendering unit 130 may render data on an object-by-object basis in an area to be updated.

If rendering of all objects has not been completed in step 960, the process moves to step 970, where the next object is selected.

FIG. 10 is a flowchart illustrating a process of preparing data to be rendered in the dynamic change area according to the proposed embodiment.

In step 1010, the update predicting unit 121 can determine whether the object to be currently processed among the objects of the current frame f1 is a type B dynamic change object. A type B dynamic change object is an object that is in the previous frame but disappears in the next frame.

If it is confirmed as a type B dynamic change object, in step 1020, the update prediction unit 121 may search the geometry data of the current frame f1 for the area to which the object to be processed (dynamic change object) belongs.

In step 1030, the update prediction unit 121 may predict the searched area as an area to be updated.

In step 1040, the update predicting unit 121 may change an object set of an area to be updated by removing an object (dynamic change object) to be currently processed from object data belonging to an area to be updated.

In step 1050, the data preparing unit 123 may extract the data to be rendered of the area to be updated, from which the object to be processed is removed, from the current frame f1 or the next frame f2. The information of the area to be updated may include the ID of the area, the size or coordinates of the area, the type of the area, the object ID crossing the area to be updated, object data, change information, or data necessary for color calculation.

On the other hand, if the type determined in step 1010 is not B, the update prediction unit may proceed to step 1110.

FIG. 11 is a flowchart illustrating the process of preparing data to be rendered in a dynamic change area according to another embodiment of the present invention.

In step 1110, if the object currently being processed among the objects of the current frame f1 is type C, the update prediction unit 121 determines whether the change information of the object currently being processed is transformation information. A type C object is a dynamic change object that is not present in the previous frame but is generated in the next frame.

If it is transformation information, in step 1130 the update prediction unit 121 may calculate the coordinates at which the object will be located in the next frame f2 using the transformation information of the object.

In step 1140, the update predicting unit 121 may project an area of the calculated coordinates on the current frame f1 to confirm the intersecting area.

On the other hand, if the change information of the object is animation path information in step 1150, the update predicting unit 121 calculates the coordinates of the object in the next frame f2 using the key values of the animation path in step 1160.

In step 1170, the update predicting unit 121 can project an area of the calculated coordinates on the current frame f1 to confirm the intersecting area.

In step 1180, the update predicting unit 121 determines the identified intersecting area to be the area to be updated and predicts it from the geometry data of the current frame f1. The handling of an object that disappears in the next frame f2 has already been described with reference to FIG. 10, so a detailed description is omitted.

In step 1190, the data preparation unit 123 may prepare the data to be rendered of the area to be updated predicted in step 1180. The data preparing unit 123 may extract data to be rendered in the current frame f1 or the next frame f2 using the information of the intersecting area.

FIG. 12 is a flowchart illustrating the process of preparing data to be rendered in a static change area according to the proposed embodiment.

In step 1210, the update predicting unit 121 can determine whether the object to be currently processed among the objects of the current frame f1 is a type B static change object.

If the object is a type B static change object, in step 1220, the update prediction unit 121 may search the geometry data of the current frame f1 for the area to which the object to be processed (i.e., the static change object) belongs.

In step 1230, the update prediction unit 121 may predict the searched area as an area to be updated.

In step 1240, the data preparing unit 123 extracts data to be rendered of the area to be updated from the current frame f1 or the next frame f2 by using the information of the area to be updated.
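A compact sketch of steps 1210 to 1240, under the same caveat: the index structure below is hypothetical, and it simply assumes the stored geometry data of the current frame can be queried for the areas each object was binned into.

#include <unordered_map>
#include <vector>

// Hypothetical lookup built from frame f1's stored geometry data:
// object id -> ids of the screen areas (tiles) the object covers.
using GeometryIndex = std::unordered_map<int, std::vector<int>>;

struct ColorUpdateJob {
    int areaId;     // area predicted as "to be updated" (step 1230)
    int objectId;   // only this object's color-related data is re-extracted;
};                  // the area's geometry results are reused as-is.

// Steps 1220-1240: find the areas a type B static change object belongs to and
// prepare color-only render jobs for them.
std::vector<ColorUpdateJob> prepareStaticChangeUpdate(const GeometryIndex& currentFrame,
                                                      int staticObjectId) {
    std::vector<ColorUpdateJob> jobs;
    auto it = currentFrame.find(staticObjectId);
    if (it == currentFrame.end()) return jobs;       // object not present in frame f1
    for (int areaId : it->second)
        jobs.push_back({areaId, staticObjectId});
    return jobs;                                     // step 1240: extract data from f1 or f2
}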

FIG. 13 is a flowchart for specifying a rendering method of a region to be updated according to the proposed embodiment.

If it is determined in step 1310 that the object currently being processed is a fixed object, the rendering data corresponding to the area to which the fixed object belongs may be reused from the rendering data of the current frame f1 in step 1320 and displayed on the display panel 160.

On the other hand, if it is determined in step 1330 that the object currently being processed is a type C dynamic change object, in step 1340 the rendering unit 130 may subject the object data of the area to be updated, to which the object belongs and which is extracted from the next frame f2, to transformation, lighting, viewport mapping, and rasterization.

On the other hand, if it is determined in step 1350 that the object currently being processed is a type B dynamic change object, in step 1360 the rendering unit 130 may rasterize the object data of each extracted area without further geometry processing, since the geometry of the object data does not change. Geometry processing can thus be omitted from the rendering process.

On the other hand, if it is determined in step 1370 that the object currently being processed is a type B static change object, in step 1380 the rendering unit 130 may perform rendering and rasterization, or only rasterization, on the extracted data corresponding to the area to be updated to which the object belongs. The geometry data can be reused, and only the color values of the pixels are changed.

In step 1390, the results rendered in steps 1340, 1360, and 1380 may be stored in the second geometry storage unit 156 and the second depth color storage unit 156.
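Condensed into code, the branching of FIG. 13 amounts to a per-type dispatch such as the sketch below; the enum values and the three callbacks are illustrative names only, and the split into a geometry stage and a rasterization stage is the usual pipeline split that the description relies on.

#include <functional>

// Hypothetical object classification -- names are illustrative only.
enum class ObjectType { Fixed, DynamicGenerated, DynamicDisappearing, StaticChange };

struct AreaRenderOps {
    std::function<void()> reusePreviousResult;  // step 1320: reuse f1's rendering data
    std::function<void()> runGeometryStage;     // transform / lighting / viewport mapping
    std::function<void()> runRasterStage;       // rasterization (color/depth writes)
};

// Steps 1310-1380: decide how much of the pipeline must be re-run for the
// area to which the object belongs, depending on the object's type.
void renderAreaForObject(ObjectType type, const AreaRenderOps& ops) {
    switch (type) {
    case ObjectType::Fixed:                // nothing changed: reuse and display
        ops.reusePreviousResult();
        break;
    case ObjectType::DynamicGenerated:     // type C: full geometry + rasterization
        ops.runGeometryStage();
        ops.runRasterStage();
        break;
    case ObjectType::DynamicDisappearing:  // type B dynamic: geometry unchanged,
        ops.runRasterStage();              // so only rasterize the remaining data
        break;
    case ObjectType::StaticChange:         // type B static: reuse geometry,
        ops.runRasterStage();              // recompute only the pixel colors
        break;
    }
}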

FIG. 14 is a flowchart for explaining a method of classifying an object type. The classification of the object type may be performed in a separate block (not shown) of the application unit 110 or of the update preparation unit 120, at a stage prior to the update prediction unit 121.

If the frame input in step 1405 is not the first frame, the object to be processed in the next frame is compared in step 1410 with the object processed in the current frame. The data input in step 1405 includes object information and scene information (the number of frames, viewpoint information, a scene graph, or a shader program).

If it is determined in step 1415 that the ID of the object to be processed in the next frame is identical to the ID of the object processed in the current frame, it is determined in step 1420 whether the object to be processed disappears in the next frame. In step 1420, whether the object disappears is determined by checking whether the object data or the object transformation information of the next frame is 0.

If the object data of the next frame (e.g., the vertex coordinate values) is 0, or the object transformation information of the next frame is 0, the object is determined to disappear, and in step 1425 the object to be processed is determined and set as an object that disappears in the next frame.

On the other hand, if it is determined in step 1420 that the object does not disappear, it is determined in step 1430 whether there is a dynamic change in the object to be processed.

If it is determined in step 1430 that there is a dynamic change, the type of the object currently being processed is determined and set as a dynamic change object in step 1435.

The three cases judged as dynamic change objects are as follows.

First, if the object data of the next frame (e.g., the vertex coordinate values) is the same as the object data of the currently processed frame (e.g., the vertex coordinate values) but the geometric transformation information of the object data of the next frame is different from the geometric transformation information of the object data of the currently processed frame, the object to be processed is judged to be a dynamic change object.

Second, the object is likewise judged to be a dynamic change object if the object data of the next frame (e.g., the vertex coordinate values) is different from the object data of the currently processed frame (e.g., the vertex coordinate values) while the geometric transformation information of the object data of the next frame and that of the currently processed frame are the same.

Third, the object is judged to be a dynamic change object if the object data of the next frame (e.g., the vertex coordinate values) is different from the object data of the currently processed frame (e.g., the vertex coordinate values) and the geometric transformation information of the object data of the next frame is also different from the geometric transformation information of the object data of the currently processed frame.

On the other hand, if it is determined in step 1430 that there is no dynamic change in the object currently being processed, it is determined in step 1440 whether there is a static change in the object. That is, if the object data of the next frame (e.g., the vertex coordinate values) is the same as the object data of the currently processed frame (e.g., the vertex coordinate values), the geometric transformation information of the object data of the next frame is the same as that of the currently processed frame, but the color change information of the object data of the next frame and the color change information of the object data of the currently processed frame are not the same, it is judged that there is a static change.

Accordingly, in step 1445, the object to be processed is judged to be a static change object and set as such.

On the other hand, if the object to be processed in the next frame does not disappear in the next frame in step 1420, and the object has neither a dynamic change nor a static change, the corresponding processing is performed in step 1447.

Also, if it is determined in step 1440 that there is no static change in the object currently being processed, in step 1450 the object to be processed is determined and set as a fixed object. That is, if the object data of the next frame (e.g., the vertex coordinate values) is the same as the object data of the currently processed frame (e.g., the vertex coordinate values), the geometric transformation information of the object data of the next frame is the same as that of the currently processed frame, and the color change information of the object data of the next frame and the color change information of the object data of the currently processed frame are the same, the object currently being processed is determined to be a fixed object.

On the other hand, if the object IDs are different from each other in step 1415, it is determined in step 1455 whether the object to be processed in the next frame is an object that did not exist in the current processed frame, that is, a new object.

If it is determined as a new object, in step 1460, the type of object to be processed is set as a generation object.

If it is determined that the object is not a new object, in step 1465, another object of the next frame is set as an object to be processed, and the process proceeds to step 1410.

In addition, if the frame input in step 1405 is the first frame, the process proceeds to step 1460.
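Read as pseudocode, the comparisons of FIG. 14 reduce to a classification like the sketch below. The snapshot fields stand in for the vertex coordinate values, geometric transformation information, and color change information the flowchart compares, and the empty-container test is a simplification of the "data is 0" check of step 1420; none of these names come from the patent itself.

#include <vector>

// Hypothetical per-frame snapshot of one object -- field names are illustrative.
struct ObjectSnapshot {
    int id = -1;
    std::vector<float> vertices;   // object data (vertex coordinate values)
    std::vector<float> transform;  // geometric transformation information
    std::vector<float> color;      // color change information
};

enum class ObjectKind { Generated, Disappearing, DynamicChange, StaticChange, Fixed };

// Steps 1415-1460: compare the object of the next frame with the object of the
// currently processed frame and classify it.
ObjectKind classifyObject(const ObjectSnapshot& current, const ObjectSnapshot& next) {
    if (current.id != next.id)
        return ObjectKind::Generated;        // new object (step 1460)

    if (next.vertices.empty() || next.transform.empty())
        return ObjectKind::Disappearing;     // data or transform "is 0" (steps 1420/1425)

    const bool sameData      = (next.vertices  == current.vertices);
    const bool sameTransform = (next.transform == current.transform);
    const bool sameColor     = (next.color     == current.color);

    if (!sameData || !sameTransform)
        return ObjectKind::DynamicChange;    // the three dynamic cases (step 1435)
    if (!sameColor)
        return ObjectKind::StaticChange;     // geometry unchanged, color differs (step 1445)
    return ObjectKind::Fixed;                // everything unchanged (step 1450)
}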

In step 1470, the update preparation unit 120 performs update preparation using the objects set in steps 1425, 1435, 1445, 1450, and 1460.

In step 1475, the rendering unit 130 performs rendering on the objects for which update preparation has been completed. The rendered data is stored and updated in the memory 150, and is displayed on the display panel 160.

In step 1480, the above-described processing is repeated until the processing for all the objects is completed.

The methods according to embodiments of the present invention may be implemented in the form of program instructions that can be executed through various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be those specially designed and constructed for the present invention or may be those known and available to those skilled in the art of computer software.

While the invention has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Therefore, the scope of the present invention should not be limited to the described embodiments, but should be determined not only by the appended claims but also by their equivalents.

100: rendering device 110: application unit
120: update preparation unit 130: rendering unit
140: area distributor 150: memory

Claims (34)

A rendering unit for generating rendering data of a current frame using rendering data of a previous frame; And
an update preparation unit for predicting a screen area to be updated in a next frame by using object information of objects constituting the current frame and rendering data of the current frame or object information of a next frame, and for extracting data to be rendered of the predicted area from the current frame or the next frame,
The rendering unit generates the updated area of the next frame by rendering the extracted data,
The update preparation unit,
Determining a type of an object to be processed among the objects constituting the current frame to be one of a static object, a dynamic object, and a fixed object,
If the type of the object is a dynamic object, the rendering unit determines whether a new geometry operation for the object in the next frame should be performed or a geometry operation for the object can be reused in the next frame,
If the type of the object is a static object, the geometry operation for the object is reused in the next frame,
If the type of the object is a fixed object, the geometry operation for the object, the depth buffer, and the color buffer are reused in the next frame,
Wherein when a type of the object is a dynamic object or a static object, a new geometric operation is performed on the object in the next frame.
The method according to claim 1,
Wherein the object information includes the object ID, the type of each object, data of each object, or change information of each object data.
3. The method according to claim 1 or 2,
The update preparation unit,
An update predictor for predicting an area to be updated corresponding to an object to be currently processed from rendering data of the current frame if the object to be currently processed among the objects of the current frame is a dynamic change object whose coordinates, And
A data preparation unit for extracting and preparing data to be rendered of the area to be updated from the current frame or the next frame,
in the three-dimensional graphics rendering device.
The method of claim 3,
If the current object exists in the current frame and does not exist in the next frame, the update predictor removes the object to be processed from the object belonging to the area to be updated,
Wherein the data preparation unit extracts, from the current frame or the next frame, data to be rendered of an area to be updated in which the object to be processed is removed.
5. The method of claim 4,
Wherein the rendering unit rasterizes the data to be rendered extracted from the data preparation unit.
The method of claim 3,
If the current object to be processed is a dynamic change object whose geometry data changes in the next frame, calculates an area where the object to be currently processed is located in the next frame using change information of the object to be currently processed,
Wherein the data preparation unit extracts data to be rendered of the calculated area from the current frame or the next frame.
The method according to claim 6,
Wherein the rendering unit performs geometry processing and rasterization processing using data to be rendered extracted from the data preparing unit and change information of an object to be processed at present.
The method according to claim 6,
Wherein the change information of the object to be processed is one of transformation information or animation path information showing a change between the current frame and the next frame.
delete

The method according to claim 1,
A tile binning unit for tile-binning the geometrically processed data, classifying the geometrically processed data into regions, and outputting the geometrically processed data belonging to each of the classified regions to a storage unit,
in the three-dimensional graphics rendering apparatus.
3. The method according to claim 1 or 2,
The update preparation unit,
An update predicting unit for predicting an update area corresponding to an object to be currently processed from the rendering data of the current frame if the object to be currently processed among the objects of the current frame is a static change object whose color, texture, or lightness is changed; And
A data preparation unit for extracting and preparing data to be rendered of the area to be updated from the current frame or the next frame,
in the three-dimensional graphics rendering device.
12. The method of claim 11,
Wherein the update predicting unit searches the rendering data of the current frame for an area to which the static change object belongs and predicts the retrieved area as the area to be updated.
13. The method of claim 12,
Wherein the rendering unit performs rendering and rasterization, or only rasterization, on the data to be rendered extracted from the data preparation unit.
The method according to claim 1,
Wherein the data to be rendered is applied to one of an object-based rendering method, which renders on an object-by-object basis, and an area-based or tile-based rendering method, which stores the geometry-processed result of an object by area and renders the object by area.
3. The method according to claim 1 or 2,
The update preparation unit,
An update predicting unit for predicting an object that is not in the current frame but is newly generated in the next frame as a dynamic change object, and for predicting an area to be updated in the next frame using change information of the object of the next frame; And
And a data preparation unit for extracting and preparing data to be rendered of the predicted updated area from the next frame.
16. The method of claim 15,
Wherein the update predicting unit determines the newly generated object as a generation object in the next frame and calculates an area where the generation object is located in the next frame using the change information of the object of the next frame,
And the data preparation unit extracts data to be rendered of the calculated area from the next frame.
17. The method of claim 16,
Wherein the rendering unit performs geometry processing and rasterization processing using data to be rendered extracted from the data preparing unit and change information of the object to be processed.
The method according to claim 1,
And the area corresponding to the fixed object reuses the rendering data of the previous frame.
The method according to claim 1,
Wherein the update preparation unit compares an object to be processed in the next frame with an object processed in the current frame by using the object information and the scene information, and classifies the type of the object to be processed.
Receiving a current frame and object information, which is information of objects constituting the current frame;
Generating rendering data of the received current frame using rendering data of a previous frame;
Determining a type of an object to be processed among the objects constituting the current frame to be one of a static object, a dynamic object, and a fixed object;
Predicting a screen area to be updated in a next frame using the object information of the current frame and the rendering data of the current frame or object information of a next frame;
Extracting data to be rendered in the area to be updated from the current frame or the next frame; And
Rendering the extracted data to generate the area to be updated of the next frame
wherein,
If the type of the object is a dynamic object, the extracting step determines whether a new geometry operation for the object in the next frame should be performed or whether a geometry operation for the object can be reused in the next frame and,
If the type of the object is a static object, the geometry operation for the object is reused in the next frame,
If the type of the object is a fixed object, the geometry operation for the object, the depth buffer, and the color buffer are reused in the next frame,
Wherein when a type of the object is a dynamic object or a static object, a new geometric operation is performed on the object in the next frame.
21. The method of claim 20,
Wherein the predicting comprises:
A 3D graphics rendering method for predicting an area to be updated corresponding to an object to be processed from rendering data of the current frame if the object to be currently processed among the objects of the current frame is a dynamic transformation object whose coordinates, .
22. The method of claim 21,
Wherein the predicting comprises:
If the current object exists in the current frame and does not exist in the next frame, removes the object data from object data belonging to the area to be updated,
Wherein the extracting comprises:
And extracting data to be rendered of the area to be updated from which the object has been removed, from the current frame or the next frame.
23. The method of claim 22,
Wherein the rendering step rasterizes the extracted data to be rendered.
22. The method of claim 21,
Wherein the predicting comprises:
If the current object to be processed is a dynamic change object whose geometry data changes in the next frame, calculates a region where the object to be currently processed is located in the next frame using change information of the object to be currently processed,
Wherein the extracting comprises:
And extracting data to be rendered of the calculated area from the current frame or the next frame.
25. The method of claim 24,
Wherein the rendering comprises:
And performing geometry processing and rasterization processing using the extracted data to be rendered and change information of an object to be processed at present.
22. The method of claim 21,
Wherein the predicting comprises:
If the current object to be processed in the current frame is a static change object in which the color, texture, or lightness is changed, an area to which the static change object belongs is searched in the rendering data of the current frame, and the searched area is predicted as the area to be updated,
Wherein the extracting comprises:
And extracting data to be rendered of the area to be updated from the current frame or the next frame.
26. The method of claim 25,
Wherein the rendering comprises:
Rendering and rasterizing, or only rasterizing, the extracted data to be rendered.
21. The method of claim 20,
Wherein the predicting comprises:
Wherein the object to be updated in the next frame is determined as a dynamic change object and the region to be updated in the next frame is predicted using change information of the object in the next frame.
29. The method of claim 28,
Wherein the predicting comprises:
An area where the object to be processed is located in the next frame is calculated using the change information of the object of the next frame,
Wherein the extracting comprises:
And extracting data to be rendered of the calculated area from the next frame.
30. The method of claim 29,
Wherein the rendering comprises:
And performing geometry processing and rasterization processing using the extracted data to be rendered and change information of the object to be processed.
21. The method of claim 20,
Wherein the area corresponding to the fixed object reuses the rendering data of the previous frame.
21. The method of claim 20,
Comparing the object to be processed in the next frame with the object processed in the current frame by using the object information and the scene information, and classifying the type of the object to be processed
Further comprising the steps of:
32. A computer-readable recording medium recording a program for executing the method of any one of claims 20 to 32 in a computer.

The method according to claim 1,
The update preparation unit,
Wherein the type of the object is determined as one of a static object, a dynamic object, and a fixed object based on vertex coordinate values, geometric transformation information, and color change information of the object in the current frame and the next frame.
KR1020100013432A 2010-02-12 2010-02-12 Method and Apparatus For Rendering 3D Graphics KR101661931B1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020100013432A KR101661931B1 (en) 2010-02-12 2010-02-12 Method and Apparatus For Rendering 3D Graphics
US12/860,479 US8970580B2 (en) 2010-02-12 2010-08-20 Method, apparatus and computer-readable medium rendering three-dimensional (3D) graphics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020100013432A KR101661931B1 (en) 2010-02-12 2010-02-12 Method and Apparatus For Rendering 3D Graphics

Publications (2)

Publication Number Publication Date
KR20110093404A KR20110093404A (en) 2011-08-18
KR101661931B1 (en) 2016-10-10

Family

ID=44369342

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020100013432A KR101661931B1 (en) 2010-02-12 2010-02-12 Method and Apparatus For Rendering 3D Graphics

Country Status (2)

Country Link
US (1) US8970580B2 (en)
KR (1) KR101661931B1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200099117A (en) * 2018-09-04 2020-08-21 씨드로닉스(주) Method for acquiring movement attributes of moving object and apparatus for performing the same
KR20220143617A (en) * 2018-09-04 2022-10-25 씨드로닉스(주) Method for acquiring movement attributes of moving object and apparatus for performing the same
WO2023167396A1 (en) * 2022-03-04 2023-09-07 삼성전자주식회사 Electronic device and control method therefor

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120062563A1 (en) * 2010-09-14 2012-03-15 hi5 Networks, Inc. Pre-providing and pre-receiving multimedia primitives
US8711163B2 (en) * 2011-01-06 2014-04-29 International Business Machines Corporation Reuse of static image data from prior image frames to reduce rasterization requirements
US9384711B2 (en) * 2012-02-15 2016-07-05 Microsoft Technology Licensing, Llc Speculative render ahead and caching in multiple passes
KR101935494B1 (en) * 2012-03-15 2019-01-07 삼성전자주식회사 Grahpic processing device and method for updating grahpic edting screen
US9286122B2 (en) 2012-05-31 2016-03-15 Microsoft Technology Licensing, Llc Display techniques using virtual surface allocation
US9177533B2 (en) 2012-05-31 2015-11-03 Microsoft Technology Licensing, Llc Virtual surface compaction
US9230517B2 (en) 2012-05-31 2016-01-05 Microsoft Technology Licensing, Llc Virtual surface gutters
US9235925B2 (en) 2012-05-31 2016-01-12 Microsoft Technology Licensing, Llc Virtual surface rendering
US9153212B2 (en) * 2013-03-26 2015-10-06 Apple Inc. Compressed frame writeback and read for display in idle screen on case
US9400544B2 (en) 2013-04-02 2016-07-26 Apple Inc. Advanced fine-grained cache power management
US9261939B2 (en) 2013-05-09 2016-02-16 Apple Inc. Memory power savings in idle display case
US9870193B2 (en) * 2013-06-13 2018-01-16 Hiperwall, Inc. Systems, methods, and devices for animation on tiled displays
US9307007B2 (en) 2013-06-14 2016-04-05 Microsoft Technology Licensing, Llc Content pre-render and pre-fetch techniques
KR102116976B1 (en) * 2013-09-04 2020-05-29 삼성전자 주식회사 Apparatus and Method for rendering
KR102122454B1 (en) 2013-10-02 2020-06-12 삼성전자주식회사 Apparatus and Method for rendering a current frame using an image of previous tile
KR102101834B1 (en) * 2013-10-08 2020-04-17 삼성전자 주식회사 Image processing apparatus and method
KR20150042095A (en) * 2013-10-10 2015-04-20 삼성전자주식회사 Apparatus and Method for rendering frame by sorting processing sequence of draw commands
KR102147357B1 (en) * 2013-11-06 2020-08-24 삼성전자 주식회사 Apparatus and Method for managing commands
KR20150093048A (en) * 2014-02-06 2015-08-17 삼성전자주식회사 Method and apparatus for rendering graphics data and medium record of
US9940686B2 (en) * 2014-05-14 2018-04-10 Intel Corporation Exploiting frame to frame coherency in a sort-middle architecture
US9799091B2 (en) 2014-11-20 2017-10-24 Intel Corporation Apparatus and method for efficient frame-to-frame coherency exploitation for sort-last architectures
KR102327144B1 (en) 2014-11-26 2021-11-16 삼성전자주식회사 Graphic processing apparatus and method for performing tile-based graphics pipeline thereof
KR102317091B1 (en) * 2014-12-12 2021-10-25 삼성전자주식회사 Apparatus and method for processing image
KR102370617B1 (en) 2015-04-23 2022-03-04 삼성전자주식회사 Method and apparatus for processing a image by performing adaptive sampling
US10373286B2 (en) 2016-08-03 2019-08-06 Samsung Electronics Co., Ltd. Method and apparatus for performing tile-based rendering
KR102651126B1 (en) * 2016-11-28 2024-03-26 삼성전자주식회사 Graphic processing apparatus and method for processing texture in graphics pipeline
US10672367B2 (en) * 2017-07-03 2020-06-02 Arm Limited Providing data to a display in data processing systems
US10580106B2 (en) * 2018-02-28 2020-03-03 Basemark Oy Graphics processing method utilizing predefined render chunks
GB2585944B (en) * 2019-07-26 2022-01-26 Sony Interactive Entertainment Inc Apparatus and method for data generation
KR20190106852A (en) * 2019-08-27 2019-09-18 엘지전자 주식회사 Method and xr device for providing xr content
US11468627B1 (en) 2019-11-08 2022-10-11 Apple Inc. View dependent content updated rates

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100376207B1 (en) * 1994-08-15 2003-05-01 제너럴 인스트루먼트 코포레이션 Method and apparatus for efficient addressing of DRAM in video expansion processor
KR100682456B1 (en) * 2006-02-08 2007-02-15 삼성전자주식회사 Method and system of rendering 3-dimensional graphics data for minimising rendering area
US20070097138A1 (en) * 2005-11-01 2007-05-03 Peter Sorotokin Virtual view tree
US20100021060A1 (en) * 2008-07-24 2010-01-28 Microsoft Corporation Method for overlapping visual slices

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6195098B1 (en) 1996-08-02 2001-02-27 Autodesk, Inc. System and method for interactive rendering of three dimensional objects
KR100354824B1 (en) 1999-11-22 2002-11-27 신영길 A real-time rendering method and device using temporal coherency
JP2002015328A (en) 2000-06-30 2002-01-18 Matsushita Electric Ind Co Ltd Method for rendering user interactive scene of object base displayed using scene description
US7289131B2 (en) 2000-12-22 2007-10-30 Bracco Imaging S.P.A. Method of rendering a graphics image
KR100657962B1 (en) 2005-06-21 2006-12-14 삼성전자주식회사 Apparatus and method for displaying 3-dimension graphics
WO2008115195A1 (en) * 2007-03-15 2008-09-25 Thomson Licensing Methods and apparatus for automated aesthetic transitioning between scene graphs
KR100924122B1 (en) 2007-12-17 2009-10-29 한국전자통신연구원 Ray tracing device based on pixel processing element and method thereof
GB0810205D0 (en) * 2008-06-04 2008-07-09 Advanced Risc Mach Ltd Graphics processing systems

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100376207B1 (en) * 1994-08-15 2003-05-01 제너럴 인스트루먼트 코포레이션 Method and apparatus for efficient addressing of DRAM in video expansion processor
US20070097138A1 (en) * 2005-11-01 2007-05-03 Peter Sorotokin Virtual view tree
KR100682456B1 (en) * 2006-02-08 2007-02-15 삼성전자주식회사 Method and system of rendering 3-dimensional graphics data for minimising rendering area
US20100021060A1 (en) * 2008-07-24 2010-01-28 Microsoft Corporation Method for overlapping visual slices

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200099117A (en) * 2018-09-04 2020-08-21 씨드로닉스(주) Method for acquiring movement attributes of moving object and apparatus for performing the same
KR102454878B1 (en) * 2018-09-04 2022-10-17 씨드로닉스(주) Method for acquiring movement attributes of moving object and apparatus for performing the same
KR20220143617A (en) * 2018-09-04 2022-10-25 씨드로닉스(주) Method for acquiring movement attributes of moving object and apparatus for performing the same
KR102596388B1 (en) * 2018-09-04 2023-11-01 씨드로닉스(주) Method for acquiring movement attributes of moving object and apparatus for performing the same
WO2023167396A1 (en) * 2022-03-04 2023-09-07 삼성전자주식회사 Electronic device and control method therefor

Also Published As

Publication number Publication date
US20110199377A1 (en) 2011-08-18
KR20110093404A (en) 2011-08-18
US8970580B2 (en) 2015-03-03

Similar Documents

Publication Publication Date Title
KR101661931B1 (en) Method and Apparatus For Rendering 3D Graphics
US11657565B2 (en) Hidden culling in tile-based computer generated images
US11922534B2 (en) Tile based computer graphics
JP5847159B2 (en) Surface patch tessellation in tile-based rendering systems
KR102122454B1 (en) Apparatus and Method for rendering a current frame using an image of previous tile
US10032308B2 (en) Culling objects from a 3-D graphics pipeline using hierarchical Z buffers
KR101257849B1 (en) Method and Apparatus for rendering 3D graphic objects, and Method and Apparatus to minimize rendering objects for the same
US10229524B2 (en) Apparatus, method and non-transitory computer-readable medium for image processing based on transparency information of a previous frame
JP5634104B2 (en) Tile-based rendering apparatus and method
US8917281B2 (en) Image rendering method and system
JP4948273B2 (en) Information processing method and information processing apparatus
EP2728551B1 (en) Image rendering method and system
KR20160068204A (en) Data processing method for mesh geometry and computer readable storage medium of recording the same
JP7100624B2 (en) Hybrid rendering with binning and sorting of preferred primitive batches
KR20150042095A (en) Apparatus and Method for rendering frame by sorting processing sequence of draw commands
KR20150027638A (en) Apparatus and Method for rendering
JP2006113909A (en) Image processor, image processing method and image processing program

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant
FPAY Annual fee payment

Payment date: 20190814

Year of fee payment: 4