KR101661931B1 - Method and Apparatus For Rendering 3D Graphics - Google Patents
- Publication number
- KR101661931B1 (Application KR1020100013432A)
- Authority
- KR
- South Korea
- Prior art keywords
- data
- area
- next frame
- frame
- rendering
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- General Physics & Mathematics (AREA)
- Geometry (AREA)
- Computing Systems (AREA)
- Image Generation (AREA)
- Software Systems (AREA)
Abstract
A 3D graphics rendering device and method therefor are provided.
The rendering unit may generate the rendering data of the current frame using the rendering data of the previous frame.
The update preparation unit may predict the screen area to be updated in the next frame using the object information and rendering data of the current frame, or the object information of the next frame, and may extract the data to be rendered for the predicted area from the current frame or the next frame.
The rendering unit may render the extracted data to generate an area to be updated in the next frame.
Description
The present invention relates to a three-dimensional graphics rendering apparatus and method, and more particularly to an apparatus and method that improve rendering efficiency by predicting and rendering only the area to be updated in the next frame.
Recently, devices that display three-dimensional graphic data on a screen have attracted attention. For example, the market for devices running UI (User Interface) applications on mobile devices, e-book applications, and applications that simulate products in Internet shopping malls is growing.
The above-described applications require fast rendering. On the other hand, most scenes displayed by such applications change only partially. For example, when a plurality of icons are arranged in a matrix on a mobile device, the user may move only one row or one column, and the icons in the remaining rows or columns do not change.
However, existing 3D rendering technology renders all of the three-dimensional graphic data of a scene whenever the scene changes. Thus, in the mobile device described above, when the user shifts the icons in one row to the left or right, the device also re-renders the icons in the rows that did not move. In other words, since all three-dimensional graphic data are rendered every time the scene changes, the conventional method performs redundant work to render and display, and requires substantial time and memory.
Provided is a rendering apparatus including: a rendering unit for generating rendering data of a current frame using rendering data of a previous frame; and an update preparation unit for predicting a screen area to be updated in a next frame using object information of the objects constituting the current frame and the rendering data of the current frame, or object information of the next frame, and extracting from the current frame or the next frame the data to be rendered for the predicted area, wherein the rendering unit renders the extracted data to generate the area to be updated in the next frame.
The object information includes the object ID, the type of each object, data of each object, or change information of each object data.
The update preparation unit may include an update prediction unit that, if the object currently being processed is a dynamic change object whose coordinates, position, or rotation change in the next frame, predicts the area to be updated corresponding to that object from the rendering data of the current frame; and a data preparation unit that extracts and prepares the data to be rendered for the area to be updated from the current frame or the next frame.
If the object currently being processed exists in the current frame but not in the next frame, the update prediction unit removes that object from the objects belonging to the area to be updated, and the data preparation unit extracts the data to be rendered for the area to be updated, with the removed object excluded, from the current frame or the next frame.
The rendering unit rasterizes the data to be rendered extracted from the data preparation unit.
If the object currently being processed is a dynamic change object whose geometry data changes in the next frame, the update prediction unit calculates the region where the object will be located in the next frame using the object's change information, and the data preparation unit extracts the data to be rendered for the calculated region from the current frame or the next frame.
The rendering unit performs geometry processing and rasterization processing using the data to be rendered extracted from the data preparing unit and the change information of the object to be currently processed.
The change information of the object currently being processed is either transformation information or animation path information representing the change between the current frame and the next frame.
The apparatus may further include a storage unit for storing the geometry-processed data.
The apparatus may further include an area distributor that tile-bins the geometry-processed data, sorts it by region, and outputs the geometry-processed data belonging to each region to the storage unit.
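The tile-binning step performed by the area distributor can be sketched in code. This is an illustrative sketch, not the patented implementation; the tile size, the triangle representation, and the bounding-box test are all assumptions:

```python
# Hypothetical sketch of tile binning: each geometry-processed triangle is
# assigned to every screen tile its bounding box overlaps, so that later
# passes can render (or skip) one region at a time.

TILE = 16  # assumed tile size in pixels


def bin_triangles(triangles, width, height):
    """Map each tile (tx, ty) to the indices of triangles whose
    screen-space bounding box overlaps that tile.

    triangles: list of triangles, each a list of (x, y) vertices.
    """
    bins = {}
    for idx, tri in enumerate(triangles):
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        # Clamp the bounding box to the screen.
        x0, x1 = max(0, min(xs)), min(width - 1, max(xs))
        y0, y1 = max(0, min(ys)), min(height - 1, max(ys))
        if x1 < x0 or y1 < y0:  # triangle fully off-screen
            continue
        for ty in range(int(y0) // TILE, int(y1) // TILE + 1):
            for tx in range(int(x0) // TILE, int(x1) // TILE + 1):
                bins.setdefault((tx, ty), []).append(idx)
    return bins
```

With per-tile bins like this, only the tiles falling inside the predicted update area need to be re-rendered.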
The update preparation unit may include an update prediction unit that, if the object currently being processed is a static change object whose color, texture, or lighting changes in the next frame, predicts the area to be updated corresponding to that object from the rendering data of the current frame; and a data preparation unit that extracts and prepares the data to be rendered for the area to be updated from the current frame or the next frame.
The update predicting unit searches the rendering data of the current frame for an area to which the static change object belongs and predicts the retrieved area as the area to be updated.
The rendering unit performs only lighting and rasterization, or only rasterization, on the data to be rendered extracted by the data preparation unit.
The data to be rendered may be processed by either an object-based rendering method, which renders object by object, or a region-based (tile-based) rendering method, which sorts the geometry-processing results of objects by region and renders region by region.
The update preparation unit may include an update prediction unit that treats an object not present in the current frame but newly generated in the next frame as a dynamic change object and predicts the area to be updated in the next frame using the change information of the object in the next frame; and a data preparation unit that extracts and prepares the data to be rendered for the predicted area from the next frame.
The update prediction unit determines that an object newly generated in the next frame is a generated object and calculates the region where it will be located in the next frame using the change information of the object in the next frame, and the data preparation unit extracts the data to be rendered for the calculated region from the next frame.
The rendering unit performs geometry processing and rasterization processing using the data to be rendered extracted from the data preparation unit and the change information of the object to be processed.
For the area corresponding to a fixed object, the rendering data of the previous frame is reused.
The update preparation unit compares the object to be processed in the next frame with the object processed in the current frame by using the object information and the scene information, and classifies the type of the object to be processed.
Provided is a rendering method including: receiving a current frame together with object information, which is information on the objects constituting the current frame, and scene information; generating rendering data of the received current frame using rendering data of a previous frame; predicting the screen area to be updated in a next frame using the object information and rendering data of the current frame, or object information of the next frame; extracting the data to be rendered for the area to be updated from the current frame or the next frame; and rendering the extracted data to generate the area to be updated in the next frame.
If the object currently being processed is a dynamic change object whose coordinates, position, or rotation change among the objects of the current frame, the predicting step predicts the area to be updated corresponding to that object from the rendering data of the current frame.
If the object currently being processed exists in the current frame but not in the next frame, the predicting step removes that object's data from the object data belonging to the area to be updated, and the extracting step extracts the data to be rendered for the area, with the removed object excluded, from the current frame or the next frame.
The rendering step rasterizes the extracted data to be rendered.
If the object currently being processed is a dynamic change object whose geometry data changes in the next frame, the predicting step calculates the region where the object will be located in the next frame using its change information, and the extracting step extracts the data to be rendered for the calculated region from the current frame or the next frame.
The rendering step performs geometry processing and rasterization using the extracted data to be rendered and the change information of the object currently being processed.
If the object currently being processed in the current frame is a static change object whose color, texture, or lighting changes, the predicting step searches the rendering data of the current frame for the region to which the object belongs and predicts it as the area to be updated, and the extracting step extracts the data to be rendered for that area from the current frame or the next frame.
The rendering step performs only lighting and rasterization, or only rasterization, on the extracted data to be rendered.
The predicting step determines that an object not present in the current frame but newly generated in the next frame is a dynamic change object, and predicts the area to be updated in the next frame using the change information of the object in the next frame.
The predicting step determines that an object newly generated in the next frame is a generated object and calculates the region where it will be located in the next frame using its change information, and the extracting step extracts the data to be rendered for the calculated region from the next frame.
The rendering step performs geometry processing and rasterization processing using the extracted data to be rendered and change information of the object to be processed.
For the area corresponding to a fixed object, the rendering data of the previous frame is reused.
The method may further include classifying the object to be processed by comparing the object to be processed in the next frame with the object processed in the current frame, using the object information and the scene information.
According to the proposed embodiment, the entire buffer is cleared only when the first frame is input, rather than every time a new frame is input, which reduces the time spent clearing buffers.
In addition, according to the present invention, by storing buffer data on a per-area basis, the area that must be cleared can be minimized.
According to the proposed embodiment, the area to be updated in the next frame is predicted, only that area's buffers are cleared, and only the object data of that area is rendered, minimizing the time and computation required for buffer clearing and rendering and enabling rendering in a low-power environment.
According to the proposed embodiment, only the area to be updated is rendered, and the remaining area is displayed on the screen using the previous rendering result, so that the rendering area can be minimized and thus the rendering processing speed can be improved.
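The effect described above — clearing and re-rendering only the predicted update region while reusing the previous frame's result elsewhere — can be illustrated with a minimal sketch. The names and the per-pixel granularity are assumptions; a real implementation would operate on tiles or rectangular regions:

```python
# Illustrative sketch (not the patented implementation): carry the previous
# frame's color buffer forward, clear only the pixels in the predicted
# update region, and re-render only those pixels.

def next_color_buffer(prev_colors, update_region, render_pixel,
                      clear_color=0):
    """prev_colors: dict (x, y) -> color for the whole screen.
    update_region: set of (x, y) pixels predicted to change.
    render_pixel: callback producing the new color for one pixel.
    """
    colors = dict(prev_colors)        # reuse everything by default
    for p in update_region:
        colors[p] = clear_color       # clear only the update region
    for p in update_region:
        colors[p] = render_pixel(p)   # render only the update region
    return colors
```

Pixels outside `update_region` are never touched, which is where the time and power savings come from.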
FIG. 1 is a block diagram illustrating a 3D graphics rendering apparatus according to an embodiment of the present invention.
FIG. 2 is a diagram illustrating an embodiment of predicting an area to be updated in an area where a static change object exists.
FIG. 3 is a diagram illustrating another embodiment of predicting an area to be updated in an area where a static change object exists.
FIG. 4 is a diagram illustrating an embodiment of predicting an area to be updated in a dynamic change area containing a type B dynamic change object.
FIG. 5 is a diagram illustrating an embodiment of predicting an area to be updated in a dynamic change area containing a type C dynamic change object.
FIG. 6 is a flowchart briefly illustrating a rendering method according to the proposed embodiment.
FIG. 7 is a flowchart illustrating a process of rendering a first frame in the rendering method according to the present invention.
FIG. 8 is a flowchart illustrating a process of predicting an area to be updated using a current frame in the rendering method according to the present invention.
FIG. 9 is a flowchart illustrating a process of predicting an area to be updated using a current frame in a rendering method according to another embodiment of the present invention.
FIGS. 10 and 11 are flowcharts illustrating a process of preparing data to be rendered in a dynamic change area according to the proposed embodiment.
FIG. 12 is a flowchart illustrating a process of preparing data to be rendered in a static change area according to an embodiment of the present invention.
FIG. 13 is a flowchart illustrating a rendering method for each area to be updated according to the proposed embodiment.
FIG. 14 is a flowchart illustrating a method of classifying object types.
Hereinafter, the present invention will be described in detail with reference to the accompanying drawings.
FIG. 1 is a block diagram illustrating a 3D graphics rendering apparatus according to an embodiment of the present invention.
Referring to FIG. 1, the
First, in FIG. 1, a finely dotted line (for example, the line between the object information generating unit and the object storage unit) represents the data flow of the first frame, that is, the frame currently being processed.
A coarsely dotted line (for example, the line between the update prediction unit and the object storage unit) represents the flow of input data of the second frame, that is, the next frame to be processed. A dash-dotted line (for example, the line between the geometry processing unit and the first geometry storage unit) represents the flow of data input to the rendering unit for the second frame or the next frame to be processed.
A solid line (for example, between the geometry processing unit and the area distributor) represents the flow of output data from the rendering unit. A widely spaced dotted line (for example, the line between the first color storage unit and the second color storage unit) represents the movement of data corresponding to the update area.
The
The object information may include at least one of an object ID, object data, the type of the object, change information of the object (geometry data change or color data change), and data necessary for color calculation. The object data may include, for example, the coordinates of the vertices of an object constituting a frame, together with normals, texture coordinates, colors, or textures. The object types include dynamic change objects, static change objects, and fixed objects.
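As a rough illustration, the object information described above might be held in a record like the following. The field names and types are assumptions; the patent specifies only the contents, not a concrete layout:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical layout for the per-object information the patent describes:
# ID, data, type, change information, and data needed for color calculation.

@dataclass
class ObjectInfo:
    object_id: int
    object_type: str                 # 'dynamic' | 'static' | 'fixed'
    vertices: list                   # vertex coordinates of the object
    normals: Optional[list] = None
    tex_coords: Optional[list] = None
    color: Optional[tuple] = None
    change_info: Optional[object] = None   # matrix/vector or animation path
    # Data needed for color calculation: texture, lighting, variables, ...
    color_inputs: dict = field(default_factory=dict)
```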
The scene information may include the number of frames, view change information, or a scene graph or a shader program.
A dynamic change object is an object to which transformation information or a path animation is applied, that is, an object whose coordinates, position, or rotation change in the next frame; an object whose color value changes along with its geometry is also a dynamic change object. A static change object is an object whose coordinates, position, and rotation do not change but whose texture, color, material, or lighting changes in the next frame. A fixed object is an object that remains unchanged in the next frame.
A generated object is an object not present in the current frame but added in the next frame. An extinct object is an object present in the current frame that disappears in the next frame. A generated object may become a fixed object, a dynamic change object, or a static change object in the next frame; conversely, a fixed, dynamic change, or static change object may become an extinct object in the next frame.
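The lifecycle distinctions above (generated vs. extinct vs. persistent objects) reduce to a presence check on object IDs across the two frames. A minimal sketch, with all names assumed:

```python
# Hedged sketch: classify an object's lifecycle by whether its ID appears
# in the current frame, the next frame, or both.

def lifecycle(obj_id, current_ids, next_ids):
    """current_ids / next_ids: sets of object IDs present in each frame."""
    if obj_id not in current_ids and obj_id in next_ids:
        return 'generated'    # newly appears in the next frame
    if obj_id in current_ids and obj_id not in next_ids:
        return 'extinct'      # disappears in the next frame
    if obj_id in current_ids and obj_id in next_ids:
        return 'persistent'   # fixed, dynamic change, or static change object
    return 'absent'
```

A persistent object is then further classified as fixed, dynamic change, or static change by comparing its data between the two frames.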
The classification of object types may be performed by the application or by the update prediction unit of the update preparation unit. A method of classifying object types is described in detail later with reference to FIG. 14.
The change information of an object may include the coordinate, position, or rotation changes that the object's data undergoes in the next frame, and may be expressed as transformation information (a matrix or vector) or as an animation path. The animation path carries information about the movement path when the object is displayed at a different position in the next frame, and can be expressed as (key, key value) pairs, where the key is a time and the key value is the value at that time (for example, a coordinate value). The change information may also include color value changes.
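As an illustration of the (key, key value) representation, an animation path can be evaluated by interpolating between the keys surrounding a query time. Piecewise-linear interpolation is an assumption here — the patent does not fix an interpolation scheme — and scalar key values stand in for coordinates:

```python
# Hypothetical evaluation of an animation path given as (key, key value)
# pairs: key is a time, key value is the value (e.g. a coordinate) at that
# time. Values outside the key range clamp to the endpoints.

def sample_path(path, t):
    """path: list of (key, key_value) pairs sorted by key.
    Returns the linearly interpolated value at time t."""
    if t <= path[0][0]:
        return path[0][1]
    if t >= path[-1][0]:
        return path[-1][1]
    for (k0, v0), (k1, v1) in zip(path, path[1:]):
        if k0 <= t <= k1:
            a = (t - k0) / (k1 - k0)   # fraction of the way from k0 to k1
            return v0 + a * (v1 - v0)
```

For example, a path `[(0, 0.0), (1, 10.0)]` sampled at `t = 0.5` yields `5.0`.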
The data necessary for color calculation may include texture information, lighting information, or variables or constants as data to change the color.
On the other hand, if the frame output from the
The
The geometry processing part may correspond to the geometry stage of a fixed pipeline or to the vertex shader of a programmable shader pipeline. The rasterization part may correspond to the rasterization stage of a fixed pipeline or to the fragment shader of a programmable shader pipeline. The 3D object data may be the vertex data of 3D triangles.
The
The
The
The
The object storage unit 151 may store object information and scene information of each frame input from the
The first
The first
The
The second
The second
Alternatively, the stencil value store (not shown) may have a value of 1 byte per pixel and may be used for raster operations with a depth buffer storing the pixel depth value and a color buffer storing the color value.
The second frame can be displayed on the LCD panel by copying only the rendering result (for example, the color values) of the update area onto the rendering result of the first frame.
When the third frame f3 is input, the first
Hereinafter, a process of predicting and rendering an area to be updated in the next frame using the current frame when a next frame is input will be described. The second frame f2 is used as the next frame, and the first frame f1 is used as the current frame.
The first
The
The
The
The geometry data of the input current frame f1 may include the ID, size, or coordinates of each area, the kind of area, the object data intersecting each area, object IDs, change information, or the data necessary for color calculation.
The information of the area to be updated is used to update only the predicted area. It may include the ID, size, or coordinates of the area, the type of area, the object data intersecting the area to be updated, object IDs, change information, or the data necessary for color calculation.
If the
In addition, since the fixed object is not an object to be changed in the next frame f2, the geometric data or the color value and the depth value calculated in the current frame f1 can be reused when displayed on the
The types of object data to be updated in the area to be updated are classified into types A, B, and C in Table 2.
If it is determined that the object currently being processed is a dynamic change object among the objects of the current frame f1 and that its geometry data changes in the next frame f2, its change information is examined. If the object change information is present, the
The
More specifically, if the change information of the object to be processed at present is transformation information, the
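The prediction step for a moving object can be sketched as follows: apply the transformation information to the current vertices, then take the screen regions covered before and after the change, since both the vacated area (to erase the object) and the newly occupied area (to draw it) must be updated. The names and the bounding-box simplification are assumptions:

```python
# Hedged sketch of update-region prediction for a dynamic change object:
# transform the current vertices with the object's change information and
# return the bounding boxes of the object before and after the change.

def predict_update_regions(vertices, transform):
    """vertices: list of (x, y) screen-space points.
    transform: callable (x, y) -> (x, y) derived from the change information.
    Returns (current_bbox, next_bbox), each as (x0, y0, x1, y1)."""
    def bbox(pts):
        xs = [p[0] for p in pts]
        ys = [p[1] for p in pts]
        return (min(xs), min(ys), max(xs), max(ys))

    moved = [transform(x, y) for (x, y) in vertices]
    return bbox(vertices), bbox(moved)
```

Both returned boxes are then marked as areas to be updated in the next frame, while everything outside them reuses the previous rendering result.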
The
FIG. 5 is a diagram illustrating an embodiment of predicting an area to be updated in an area where a type C dynamic change object exists. Referring to FIG. 5, the type C dynamic change object in the current frame f1 and the next frame f2 is a sphere, shown as E and F: the sphere E disappears in the next frame f2 and the sphere F appears in its place. The change information of the object currently being processed is the information needed for the sphere E to move to the position of the sphere F. The disappearance of the sphere E was described above with reference to FIG. 4, so a detailed description is omitted.
The
The
On the other hand, the
In the case of the above-mentioned type C dynamic change object, since the object data has a position shift, a rotation change, or a coordinate change, and there may be a color data change, the
On the other hand, when it is determined that the current object to be processed is the dynamic change object among the objects of the current frame f1, the
More specifically, the
In the next frame, if the vertex coordinates of the object are 0, or the object change information is 0, the object may be an extinct object in the next frame.
If the vertex coordinates of the object are not 0 and the object change information is not 0, the vertex coordinates at which the object will be located in the next frame f2 can be calculated using the transformation information of the object in the current frame. The
When the vertex coordinates the object will occupy in the next frame are compared with its vertex coordinates in the current frame, if the area obtained by projecting the next-frame coordinates is equal to, or includes, the object's area in the current frame f1, the object being processed corresponds to a type C region and is treated as such.
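The region comparison described above can be sketched as a test on projected bounding boxes. This is an assumed simplification; the patent compares projected areas without fixing a concrete representation:

```python
# Hypothetical test on projected areas: compare the object's box in the
# next frame with its box in the current frame.

def region_relation(cur_box, next_box):
    """Boxes are (x0, y0, x1, y1). Returns 'same' if the areas coincide,
    'contains' if the next-frame area includes the current one,
    and 'moved' otherwise."""
    if next_box == cur_box:
        return 'same'
    nx0, ny0, nx1, ny1 = next_box
    cx0, cy0, cx1, cy1 = cur_box
    if nx0 <= cx0 and ny0 <= cy0 and nx1 >= cx1 and ny1 >= cy1:
        return 'contains'
    return 'moved'
```

In the patent's terms, `'same'` with only a color change suggests a static change object, `'same'` with no change a fixed object, and `'contains'` a type C region.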
Also, if the vertex coordinates to be occupied in the next frame are the same as those in the current frame f1 and the projected areas are the same, the object may be a static change object when its change information is color data change information. If there is neither a coordinate change nor a color change, the object is processed as a fixed-object area.
Let's look at how areas of type B dynamic change objects are handled. The geometry data of the current frame f1 may include the ID, size, or coordinates of each area, the kind of area, and the object IDs, object data, transformation information (matrix), or animation path information intersecting each area. The data necessary for color calculation may also include color change information, so both the case where the color data changes and the case where it does not can be covered.
The
The
FIG. 4 is a diagram illustrating an embodiment of predicting an area to be updated in the type B case involving a disappearing object (for example, a dynamic change object that disappears in the next frame). Referring to FIG. 4, the type B dynamic change object in the current frame f1 and the next frame f2 is a sphere, and it disappears in the next frame f2.
The
The
In the case of the object of type B, since there is no change in the geometry data of the object, the
On the other hand, if it is determined that the object to be processed currently among the objects of the current frame f1 is a static change object, the
More specifically, the type of the object (static change object), the object ID, the object data, or the color change information may be input from the object storage unit 151 to the
The
The
FIG. 2 is a diagram illustrating an embodiment of predicting an area to be updated in an area where a static change object exists. Referring to FIG. 2, the geometry data of the current frame f1 and the next frame f2 remain unchanged, but the color value of the static change object changes between the two frames due to lighting. If the object currently being processed in the current frame f1 is a static change object, the
The
In the case of the static change object, since there is no geometric information change (position shift, coordinate change or rotation change) of the object, the
FIG. 3 is a diagram illustrating another embodiment of predicting an area to be updated in an area where a static change object exists. Referring to FIG. 3, the color value of the static change object changes between the current frame f1 and the next frame f2 due to a texture or material property. The
The
In the case of the static change object, since there is no change in the geometry data of the object, the
The rendered data of all the areas to be updated in the next frame f2 generated by the above-described process are stored in the second
When rendering of all the areas to be updated is completed, the depth value of the current frame f1 stored in the first
The data stored in the first
Hereinafter, a rendering method of the 3D graphics rendering apparatus constructed as above will be described with reference to the drawings.
FIG. 6 is a flowchart briefly illustrating a rendering method according to the proposed embodiment.
In
In
In
In
In
FIG. 7 is a flowchart illustrating a process of rendering a first frame in the rendering method according to the present invention. Hereinafter, the first frame is used as an example of the current frame and the second frame as an example of the next frame; the application of the proposed embodiment is not limited thereto.
In
In
In
In
Meanwhile, according to the proposed embodiment, the rendering method can support both a region-based (tile-based) rendering method and an object-based rendering method.
FIG. 8 is a flowchart illustrating a process of predicting an area to be updated using a current frame among the rendering methods according to the present invention. Figure 8 relates to a method of rendering on a per-area basis.
The object information of each object of the current frame f1 stored in the object storage unit 151 and the geometry object data of the current frame f1 stored in the first
If it is determined in
If it is determined in
If it is determined in
The information of the area to be updated predicted in
In
Object types:
- Generated objects: may become fixed objects, dynamic change objects, or static change objects (for example, when a fixed object is first created).
- Static change objects: the geometry data result is the same as in the previous frame, but the color has changed.
- Dynamic change objects: the geometry data changes in the next frame so that the object's area includes the area in the previous frame or moves to another area.
- Extinct objects: an object in the previous frame (which may be a fixed, dynamic change, or static change object) disappears in the next frame.
- Extinct object data: the object data of the current-frame area among the dynamic change objects.
In
On the other hand, if it is not performed for all objects in
FIG. 9 is a flowchart illustrating a process of predicting an area to be updated using a current frame among the rendering methods according to another embodiment of the present invention. FIG. 9 shows a method of rendering in units of objects, and steps 800 to 840 in FIG. 8 and
However, in
If rendering of all objects is not completed in
FIG. 10 is a flowchart illustrating a process of preparing data to be rendered in the dynamic change area according to the proposed embodiment.
In
If it is confirmed as a type B dynamic change object, in
In
In
In
On the other hand, if the type determined in
FIG. 11 is a flowchart illustrating a process of preparing data to be rendered in the dynamic change area according to another embodiment of the present invention.
In
If it is the conversion information, in
In
On the other hand, if the change information of the object is information on the animation path in
In
In
In
FIG. 12 is a flowchart illustrating a process of preparing data to be rendered in the static change area according to the proposed embodiment.
In
If the object is a type B static change object, in
In
In
FIG. 13 is a flowchart illustrating a rendering method for each area to be updated according to the proposed embodiment.
If the current object to be processed is a fixed object in
On the other hand, if the current object to be processed is a type C dynamic change object in
On the other hand, if the object to be processed at present is a type B dynamic change object in
On the other hand, if the current object to be processed is a type B static change object in
In
FIG. 14 is a flowchart illustrating a method of classifying object types. The classification may be performed in a separate block (not shown) of the
If the input frame is not the first frame in
If it is determined in
If the object data of the next frame (for example, the vertex coordinate values) is 0, or the object transformation information of the next frame is 0, the object is determined to disappear in
On the other hand, if it is determined in
If it is determined in
The three cases judged to be dynamic change objects are as follows.
First, (2-1) the object data of the next frame (for example, the vertex coordinate values) is the same as the object data of the currently processed frame, but the geometric transformation information of the next frame differs from that of the currently processed frame; the object is then judged to be a dynamic change object.
Second, (2-2) the object data of the next frame (for example, the vertex coordinate values) differs from the object data of the currently processed frame, while the geometric transformation information of the two frames is the same.
Third, (2-3) the object data of the next frame (for example, the vertex coordinate values) differs from the object data of the currently processed frame, and the geometric transformation information of the two frames also differs.
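Under the assumption that object data and transformation information can be compared directly, the classification steps above can be sketched as a single function (all names hypothetical):

```python
# Illustrative classifier following the decision steps described above:
# objects with the same ID are compared between the current and next frame;
# IDs only in the next frame are generated objects, IDs only in the current
# frame are extinct objects.

def classify(obj_id, cur, nxt):
    """cur / nxt: dicts mapping object_id -> (vertices, transform, color).
    Returns 'generated', 'extinct', 'dynamic', 'static', or 'fixed'."""
    if obj_id not in cur:
        return 'generated'
    if obj_id not in nxt:
        return 'extinct'
    cv, ct, cc = cur[obj_id]
    nv, nt, nc = nxt[obj_id]
    if nv != cv or nt != ct:
        return 'dynamic'   # geometry data or transformation info changes
    if nc != cc:
        return 'static'    # only color data changes
    return 'fixed'         # nothing changes
```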
On the other hand, if it is determined in
Accordingly, in
On the other hand, if the object to be processed in the next frame does not disappear in the next frame in
Also, if it is determined in
On the other hand, if the object IDs are different from each other in
If it is determined as a new object, in
If it is determined that the object is not a new object, in
In addition, if the frame input in
In
In
In
The methods according to embodiments of the present invention may be implemented in the form of program instructions that can be executed through various computer means and recorded in a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be those specially designed and constructed for the present invention or may be available to those skilled in the art of computer software.
While the invention has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Therefore, the scope of the present invention should not be limited to the described embodiments, but should be defined by the appended claims and their equivalents.
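The per-object reuse policy described above (fixed objects reuse geometry results plus the depth and color buffers; static objects reuse only the geometry operation; dynamic objects require a new geometry operation) can be summarized as a small table-driven sketch. The enum and function names are illustrative assumptions, not identifiers from the patent.

```python
from enum import Enum, auto

class ObjType(Enum):
    FIXED = auto()    # nothing changes: reuse geometry, depth and color buffers
    STATIC = auto()   # appearance may change: reuse geometry operation only
    DYNAMIC = auto()  # geometry changes: perform a new geometry operation

def reuse_plan(obj_type: ObjType) -> dict:
    """Return which per-object results may be carried into the next frame,
    following the reuse rules stated in the description and claims."""
    if obj_type is ObjType.FIXED:
        return {"geometry": True, "depth_buffer": True, "color_buffer": True}
    if obj_type is ObjType.STATIC:
        return {"geometry": True, "depth_buffer": False, "color_buffer": False}
    # Dynamic objects: nothing is reused; geometry and rasterization are redone.
    return {"geometry": False, "depth_buffer": False, "color_buffer": False}

assert reuse_plan(ObjType.FIXED)["color_buffer"] is True
assert reuse_plan(ObjType.STATIC) == {"geometry": True, "depth_buffer": False, "color_buffer": False}
assert reuse_plan(ObjType.DYNAMIC)["geometry"] is False
```

The policy mirrors the intuition that the more of an object's state is unchanged between frames, the more of its previously rendered results can be reused.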
100: rendering device 110:
120: update preparation unit 130: rendering unit
140: area distributor 150: memory
Claims (34)
An update preparation unit that predicts a screen area to be updated in a next frame by using object information of objects constituting a current frame and rendering information of the current frame, or object information of the next frame, and extracts data to be rendered in the predicted area; and
a rendering unit, wherein:
The rendering unit generates the updated area of the next frame by rendering the extracted data,
The update preparation unit,
Determining a type of an object to be processed among the objects constituting the current frame to be one of a static object, a dynamic object, and a fixed object,
If the type of the object is a dynamic object, the rendering unit determines whether a new geometry operation for the object should be performed in the next frame or whether a geometry operation for the object can be reused in the next frame,
If the type of the object is a static object, the geometry operation for the object is reused in the next frame,
If the type of the object is a fixed object, the geometry operation for the object, the depth buffer, and the color buffer are reused in the next frame,
Wherein when a type of the object is a dynamic object or a static object, a new geometric operation is performed on the object in the next frame.
Wherein the object information includes the object ID, the type of each object, data of each object, or change information of each object data.
The update preparation unit,
An update predictor for predicting an area to be updated corresponding to an object to be currently processed from rendering data of the current frame, if the object to be currently processed among the objects of the current frame is a dynamic change object whose coordinates change in the next frame; and
A data preparation unit for extracting and preparing data to be rendered of the area to be updated from the current frame or the next frame,
A three-dimensional graphics rendering apparatus.
If the object to be currently processed exists in the current frame and does not exist in the next frame, the update predictor removes the object to be processed from the objects belonging to the area to be updated,
Wherein the data preparation unit extracts, from the current frame or the next frame, data to be rendered of an area to be updated in which the object to be processed is removed.
Wherein the rendering unit rasterizes the data to be rendered extracted from the data preparation unit.
Wherein, if the object to be currently processed is a dynamic change object whose geometry data changes in the next frame, the update predictor calculates an area where the object to be currently processed is located in the next frame using change information of the object to be currently processed,
Wherein the data preparation unit extracts data to be rendered of the calculated area from the current frame or the next frame.
Wherein the rendering unit performs geometry processing and rasterization processing using data to be rendered extracted from the data preparing unit and change information of an object to be processed at present.
Wherein the change information of the object to be processed is one of transformation information or animation path information showing a change between the current frame and the next frame.
A tile binning unit for tile-binning the geometrically processed data, classifying the geometrically processed data into regions, and outputting the geometrically processed data belonging to each of the classified regions to a storage unit,
A three-dimensional graphics rendering apparatus.
The update preparation unit,
An update predicting unit for predicting an area to be updated corresponding to an object to be currently processed from the rendering data of the current frame, if the object to be currently processed among the objects of the current frame is a static change object whose color, texture, or lightness changes; and
A data preparation unit for extracting and preparing data to be rendered of the area to be updated from the current frame or the next frame,
A three-dimensional graphics rendering apparatus.
Wherein the update predicting unit searches the rendering data of the current frame for an area to which the static change object belongs and predicts the retrieved area as the area to be updated.
Wherein the rendering unit performs geometry processing and rasterization, or rasterization only, on the data to be rendered extracted from the data preparation unit.
Wherein the data to be rendered is applied to one of an object-based rendering method, in which rendering is performed on an object-by-object basis, and an area-based or tile-based rendering method, in which a geometry-processed result of an object is divided into areas and rendered area by area.
The update preparation unit,
An update predicting unit for predicting an object newly generated in the next frame and not present in the current frame as a dynamic change object, and for predicting an area to be updated in the next frame using change information of an object of the next frame; and
And a data preparation unit for extracting and preparing data to be rendered of the predicted updated area from the next frame.
Wherein the update predicting unit determines the newly generated object as a generation object in the next frame and calculates an area where the generation object is located in the next frame using the change information of the object of the next frame,
And the data preparation unit extracts data to be rendered of the calculated area from the next frame.
Wherein the rendering unit performs geometry processing and rasterization processing using data to be rendered extracted from the data preparing unit and change information of the object to be processed.
And the area corresponding to the fixed object reuses the rendering data of the previous frame.
Wherein the update preparation unit compares an object to be processed in the next frame with an object processed in the current frame by using the object information and the scene information, and classifies the type of the object to be processed.
Generating rendering data of the received current frame using rendering data of a previous frame;
Determining a type of an object to be processed among the objects constituting the current frame to be one of a static object, a dynamic object, and a fixed object;
Estimating a screen area to be updated in a next frame using object information of the current frame and rendering information of the current frame or object information of a next frame;
Extracting data to be rendered in the area to be updated from the current frame or the next frame; And
Rendering the extracted data to generate the area to be updated of the next frame
wherein:
If the type of the object is a dynamic object, the extracting step determines whether a new geometry operation for the object should be performed in the next frame or whether a geometry operation for the object can be reused in the next frame,
If the type of the object is a static object, the geometry operation for the object is reused in the next frame,
If the type of the object is a fixed object, the geometry operation for the object, the depth buffer, and the color buffer are reused in the next frame,
Wherein when a type of the object is a dynamic object or a static object, a new geometric operation is performed on the object in the next frame.
Wherein the predicting comprises:
A 3D graphics rendering method for predicting an area to be updated corresponding to the object to be processed from rendering data of the current frame, if the object to be currently processed among the objects of the current frame is a dynamic change object whose coordinates change in the next frame.
Wherein the predicting comprises:
If the object to be currently processed exists in the current frame and does not exist in the next frame, the object data is removed from the object data belonging to the area to be updated,
Wherein the extracting comprises:
And extracting data to be rendered of the area to be updated from which the object has been removed, from the current frame or the next frame.
Wherein the rendering step rasterizes the extracted data to be rendered.
Wherein the predicting comprises:
If the object to be currently processed is a dynamic change object whose geometry data changes in the next frame, an area where the object to be currently processed is located in the next frame is calculated using change information of the object to be currently processed,
Wherein the extracting comprises:
And extracting data to be rendered of the calculated area from the current frame or the next frame.
Wherein the rendering comprises:
And performing geometry processing and rasterization processing using the extracted data to be rendered and change information of an object to be processed at present.
Wherein the predicting comprises:
If the object to be currently processed in the current frame is a static change object in which the color, texture, or lightness is changed, an area to which the static change object belongs is searched in the rendering data of the current frame, and the searched area is predicted as the area to be updated,
Wherein the extracting comprises:
And extracting data to be rendered of the area to be updated from the current frame or the next frame.
Wherein the rendering comprises:
And performing geometry processing and rasterization, or rasterization only, on the extracted data to be rendered.
Wherein the predicting comprises:
Wherein an object newly generated in the next frame and not present in the current frame is determined as a dynamic change object, and the area to be updated in the next frame is predicted using change information of the object of the next frame.
Wherein the predicting comprises:
An area where the object to be processed is located in the next frame is calculated using the change information of the object of the next frame,
Wherein the extracting comprises:
And extracting data to be rendered of the calculated area from the next frame.
Wherein the rendering comprises:
And performing geometry processing and rasterization processing using the extracted data to be rendered and change information of the object to be processed.
Wherein the area corresponding to the fixed object reuses the rendering data of the previous frame.
Further comprising the step of comparing an object to be processed in the next frame with an object processed in the current frame by using the object information and scene information, and classifying the type of the object to be processed.
The update preparation unit,
Wherein the type of the object is determined as one of a static object, a dynamic object, and a fixed object based on vertex coordinate values, geometric transformation information, and color change information of the object in the current frame and the next frame.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020100013432A KR101661931B1 (en) | 2010-02-12 | 2010-02-12 | Method and Apparatus For Rendering 3D Graphics |
US12/860,479 US8970580B2 (en) | 2010-02-12 | 2010-08-20 | Method, apparatus and computer-readable medium rendering three-dimensional (3D) graphics |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020100013432A KR101661931B1 (en) | 2010-02-12 | 2010-02-12 | Method and Apparatus For Rendering 3D Graphics |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20110093404A KR20110093404A (en) | 2011-08-18 |
KR101661931B1 true KR101661931B1 (en) | 2016-10-10 |
Family
ID=44369342
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020100013432A KR101661931B1 (en) | 2010-02-12 | 2010-02-12 | Method and Apparatus For Rendering 3D Graphics |
Country Status (2)
Country | Link |
---|---|
US (1) | US8970580B2 (en) |
KR (1) | KR101661931B1 (en) |
Families Citing this family (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120062563A1 (en) * | 2010-09-14 | 2012-03-15 | hi5 Networks, Inc. | Pre-providing and pre-receiving multimedia primitives |
US8711163B2 (en) * | 2011-01-06 | 2014-04-29 | International Business Machines Corporation | Reuse of static image data from prior image frames to reduce rasterization requirements |
US9384711B2 (en) * | 2012-02-15 | 2016-07-05 | Microsoft Technology Licensing, Llc | Speculative render ahead and caching in multiple passes |
KR101935494B1 (en) * | 2012-03-15 | 2019-01-07 | 삼성전자주식회사 | Grahpic processing device and method for updating grahpic edting screen |
US9286122B2 (en) | 2012-05-31 | 2016-03-15 | Microsoft Technology Licensing, Llc | Display techniques using virtual surface allocation |
US9177533B2 (en) | 2012-05-31 | 2015-11-03 | Microsoft Technology Licensing, Llc | Virtual surface compaction |
US9230517B2 (en) | 2012-05-31 | 2016-01-05 | Microsoft Technology Licensing, Llc | Virtual surface gutters |
US9235925B2 (en) | 2012-05-31 | 2016-01-12 | Microsoft Technology Licensing, Llc | Virtual surface rendering |
US9153212B2 (en) * | 2013-03-26 | 2015-10-06 | Apple Inc. | Compressed frame writeback and read for display in idle screen on case |
US9400544B2 (en) | 2013-04-02 | 2016-07-26 | Apple Inc. | Advanced fine-grained cache power management |
US9261939B2 (en) | 2013-05-09 | 2016-02-16 | Apple Inc. | Memory power savings in idle display case |
US9870193B2 (en) * | 2013-06-13 | 2018-01-16 | Hiperwall, Inc. | Systems, methods, and devices for animation on tiled displays |
US9307007B2 (en) | 2013-06-14 | 2016-04-05 | Microsoft Technology Licensing, Llc | Content pre-render and pre-fetch techniques |
KR102116976B1 (en) * | 2013-09-04 | 2020-05-29 | 삼성전자 주식회사 | Apparatus and Method for rendering |
KR102122454B1 (en) | 2013-10-02 | 2020-06-12 | 삼성전자주식회사 | Apparatus and Method for rendering a current frame using an image of previous tile |
KR102101834B1 (en) * | 2013-10-08 | 2020-04-17 | 삼성전자 주식회사 | Image processing apparatus and method |
KR20150042095A (en) * | 2013-10-10 | 2015-04-20 | 삼성전자주식회사 | Apparatus and Method for rendering frame by sorting processing sequence of draw commands |
KR102147357B1 (en) * | 2013-11-06 | 2020-08-24 | 삼성전자 주식회사 | Apparatus and Method for managing commands |
KR20150093048A (en) * | 2014-02-06 | 2015-08-17 | 삼성전자주식회사 | Method and apparatus for rendering graphics data and medium record of |
US9940686B2 (en) * | 2014-05-14 | 2018-04-10 | Intel Corporation | Exploiting frame to frame coherency in a sort-middle architecture |
US9799091B2 (en) | 2014-11-20 | 2017-10-24 | Intel Corporation | Apparatus and method for efficient frame-to-frame coherency exploitation for sort-last architectures |
KR102327144B1 (en) | 2014-11-26 | 2021-11-16 | 삼성전자주식회사 | Graphic processing apparatus and method for performing tile-based graphics pipeline thereof |
KR102317091B1 (en) * | 2014-12-12 | 2021-10-25 | 삼성전자주식회사 | Apparatus and method for processing image |
KR102370617B1 (en) | 2015-04-23 | 2022-03-04 | 삼성전자주식회사 | Method and apparatus for processing a image by performing adaptive sampling |
US10373286B2 (en) | 2016-08-03 | 2019-08-06 | Samsung Electronics Co., Ltd. | Method and apparatus for performing tile-based rendering |
KR102651126B1 (en) * | 2016-11-28 | 2024-03-26 | 삼성전자주식회사 | Graphic processing apparatus and method for processing texture in graphics pipeline |
US10672367B2 (en) * | 2017-07-03 | 2020-06-02 | Arm Limited | Providing data to a display in data processing systems |
US10580106B2 (en) * | 2018-02-28 | 2020-03-03 | Basemark Oy | Graphics processing method utilizing predefined render chunks |
GB2585944B (en) * | 2019-07-26 | 2022-01-26 | Sony Interactive Entertainment Inc | Apparatus and method for data generation |
KR20190106852A (en) * | 2019-08-27 | 2019-09-18 | 엘지전자 주식회사 | Method and xr device for providing xr content |
US11468627B1 (en) | 2019-11-08 | 2022-10-11 | Apple Inc. | View dependent content updated rates |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100376207B1 (en) * | 1994-08-15 | 2003-05-01 | 제너럴 인스트루먼트 코포레이션 | Method and apparatus for efficient addressing of DRAM in video expansion processor |
KR100682456B1 (en) * | 2006-02-08 | 2007-02-15 | 삼성전자주식회사 | Method and system of rendering 3-dimensional graphics data for minimising rendering area |
US20070097138A1 (en) * | 2005-11-01 | 2007-05-03 | Peter Sorotokin | Virtual view tree |
US20100021060A1 (en) * | 2008-07-24 | 2010-01-28 | Microsoft Corporation | Method for overlapping visual slices |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6195098B1 (en) | 1996-08-02 | 2001-02-27 | Autodesk, Inc. | System and method for interactive rendering of three dimensional objects |
KR100354824B1 (en) | 1999-11-22 | 2002-11-27 | 신영길 | A real-time rendering method and device using temporal coherency |
JP2002015328A (en) | 2000-06-30 | 2002-01-18 | Matsushita Electric Ind Co Ltd | Method for rendering user interactive scene of object base displayed using scene description |
US7289131B2 (en) | 2000-12-22 | 2007-10-30 | Bracco Imaging S.P.A. | Method of rendering a graphics image |
KR100657962B1 (en) | 2005-06-21 | 2006-12-14 | 삼성전자주식회사 | Apparatus and method for displaying 3-dimension graphics |
WO2008115195A1 (en) * | 2007-03-15 | 2008-09-25 | Thomson Licensing | Methods and apparatus for automated aesthetic transitioning between scene graphs |
KR100924122B1 (en) | 2007-12-17 | 2009-10-29 | 한국전자통신연구원 | Ray tracing device based on pixel processing element and method thereof |
GB0810205D0 (en) * | 2008-06-04 | 2008-07-09 | Advanced Risc Mach Ltd | Graphics processing systems |
2010
- 2010-02-12 KR KR1020100013432A patent/KR101661931B1/en active IP Right Grant
- 2010-08-20 US US12/860,479 patent/US8970580B2/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100376207B1 (en) * | 1994-08-15 | 2003-05-01 | 제너럴 인스트루먼트 코포레이션 | Method and apparatus for efficient addressing of DRAM in video expansion processor |
US20070097138A1 (en) * | 2005-11-01 | 2007-05-03 | Peter Sorotokin | Virtual view tree |
KR100682456B1 (en) * | 2006-02-08 | 2007-02-15 | 삼성전자주식회사 | Method and system of rendering 3-dimensional graphics data for minimising rendering area |
US20100021060A1 (en) * | 2008-07-24 | 2010-01-28 | Microsoft Corporation | Method for overlapping visual slices |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20200099117A (en) * | 2018-09-04 | 2020-08-21 | 씨드로닉스(주) | Method for acquiring movement attributes of moving object and apparatus for performing the same |
KR102454878B1 (en) * | 2018-09-04 | 2022-10-17 | 씨드로닉스(주) | Method for acquiring movement attributes of moving object and apparatus for performing the same |
KR20220143617A (en) * | 2018-09-04 | 2022-10-25 | 씨드로닉스(주) | Method for acquiring movement attributes of moving object and apparatus for performing the same |
KR102596388B1 (en) * | 2018-09-04 | 2023-11-01 | 씨드로닉스(주) | Method for acquiring movement attributes of moving object and apparatus for performing the same |
WO2023167396A1 (en) * | 2022-03-04 | 2023-09-07 | 삼성전자주식회사 | Electronic device and control method therefor |
Also Published As
Publication number | Publication date |
---|---|
US20110199377A1 (en) | 2011-08-18 |
KR20110093404A (en) | 2011-08-18 |
US8970580B2 (en) | 2015-03-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101661931B1 (en) | Method and Apparatus For Rendering 3D Graphics | |
US11657565B2 (en) | Hidden culling in tile-based computer generated images | |
US11922534B2 (en) | Tile based computer graphics | |
JP5847159B2 (en) | Surface patch tessellation in tile-based rendering systems | |
KR102122454B1 (en) | Apparatus and Method for rendering a current frame using an image of previous tile | |
US10032308B2 (en) | Culling objects from a 3-D graphics pipeline using hierarchical Z buffers | |
KR101257849B1 (en) | Method and Apparatus for rendering 3D graphic objects, and Method and Apparatus to minimize rendering objects for the same | |
US10229524B2 (en) | Apparatus, method and non-transitory computer-readable medium for image processing based on transparency information of a previous frame | |
JP5634104B2 (en) | Tile-based rendering apparatus and method | |
US8917281B2 (en) | Image rendering method and system | |
JP4948273B2 (en) | Information processing method and information processing apparatus | |
EP2728551B1 (en) | Image rendering method and system | |
KR20160068204A (en) | Data processing method for mesh geometry and computer readable storage medium of recording the same | |
JP7100624B2 (en) | Hybrid rendering with binning and sorting of preferred primitive batches | |
KR20150042095A (en) | Apparatus and Method for rendering frame by sorting processing sequence of draw commands | |
KR20150027638A (en) | Apparatus and Method for rendering | |
JP2006113909A (en) | Image processor, image processing method and image processing program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
E701 | Decision to grant or registration of patent right | ||
GRNT | Written decision to grant | ||
FPAY | Annual fee payment |
Payment date: 20190814 Year of fee payment: 4 |