WO1998053425A1 - Image processor and image processing method - Google Patents
- Publication number
- WO1998053425A1 WO1998053425A1 PCT/JP1998/002262 JP9802262W WO9853425A1 WO 1998053425 A1 WO1998053425 A1 WO 1998053425A1 JP 9802262 W JP9802262 W JP 9802262W WO 9853425 A1 WO9853425 A1 WO 9853425A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image processing
- polygon
- image
- information
- rearrangement
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/80—Shading
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
Definitions
- the present invention relates to an image processing device and an image processing method for a computer.
- In the field of computer graphics, in order to achieve high image-processing capability, the display screen is divided into small rectangular areas of fixed size (hereinafter referred to as "fragments"), and processing is performed for each of these fragments.
- (2) calculate the area covered by the polygon of interest within the fragment and fill it. For example, as shown in Fig. 17, for fragment F1, polygon A is filled first (Fig. 17(b)), then polygon B (Fig. 17(c)), and finally polygon C (Fig. 17(d)). This is repeated until the last polygon has been processed or the fragment is completely covered, and the same processing is carried out for all the fragments.
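The conventional per-fragment procedure above can be sketched as follows. This is a minimal illustration, not the patent's implementation: `fill_fragment`, the pixel-set polygon representation, and the dict-based buffer are all assumptions made for brevity.

```python
def fill_fragment(fragment_rect, polygons):
    """Conventional per-fragment fill: paint each polygon's covered pixels
    in order, so later polygons overwrite earlier ones (as in Fig. 17:
    A, then B, then C).

    fragment_rect: (x0, y0, w, h); polygons: list of sets of (x, y) pixel
    coordinates, one set per polygon (an illustrative representation).
    """
    x0, y0, w, h = fragment_rect
    buf = {}
    for pid, pixels in enumerate(polygons):
        for (x, y) in pixels:
            if x0 <= x < x0 + w and y0 <= y < y0 + h:
                buf[(x, y)] = pid  # last polygon painted wins
    return buf
```

Repeating this over every fragment of the screen is exactly the per-fragment loop the text describes.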
- the processing of the image element is performed by the above-described procedures (1) and (2).
- the present invention has been made to solve the above-mentioned problem: it enables efficient, high-speed processing of image elements in a computer graphics system, with the aim of realizing high-quality images.
- One way to achieve this is to divide the screen into small areas (fragments).
- the objective is to efficiently search for polygons included in the region of interest.
- the present invention provides an apparatus and a method for realizing a high-quality image at lower cost than a conventional system in a computer graphics system.
Disclosure of the invention
- An image processing apparatus is an image processing apparatus that divides a screen into a predetermined size and performs processing for each of the divided areas.
- The apparatus comprises first rearranging means for rearranging information of image components in a vertical direction with respect to a scanning line, second rearranging means for rearranging the information of the image components in a horizontal direction with respect to the scanning line, and an image processing unit for performing image processing based on the rearranged information of the image components.
- the first rearrangement unit performs rearrangement based on a minimum or maximum value of the image components in the vertical direction, and
- the second rearrangement means performs rearrangement based on a minimum or maximum value of the image components in the horizontal direction at the vertical coordinate of the area to be processed by the first rearrangement means.
- the first rearranging unit and the second rearranging unit perform link processing for linking the rearranged information of the image components to each other.
- the first rearrangement unit and the second rearrangement unit perform a link update process for invalidating an unnecessary part in an area corresponding to the image component.
- the image processing unit may perform image processing for each of the divided areas by dividing an object to be processed into an opaque polygon, a polygon with transparent pixels, and a translucent polygon.
- the processing is performed in the order of the opaque polygons, the polygons with transparent pixels, and the translucent polygons.
- An image processing method is directed to an image processing method in which a screen is divided into a predetermined size and processing is performed for each of the divided areas.
- first, sorting is performed based on the minimum value of the image components in the vertical direction with respect to the scanning line;
- then, at the vertical coordinate of the area to be processed, sorting is performed based on the minimum value of the image components in the horizontal direction;
- a link update process for invalidating unnecessary portions in the area corresponding to an image component is performed.
- in the image processing step, the object to be processed in each of the divided areas is divided into opaque polygons, polygons with transparent pixels, and translucent polygons,
- and the processing is performed in the order of the opaque polygons, the polygons with transparent pixels, and the translucent polygons.
- FIG. 1 is a schematic functional block diagram of an image processing apparatus according to Embodiment 1 of the present invention.
- FIG. 2 is a functional block diagram of the geometry processor of the image processing device according to the first embodiment of the present invention.
- FIG. 3 is a functional block diagram of a fill processor of the image processing device according to Embodiment 1 of the present invention.
- FIG. 4 is a functional block diagram of the texture processor of the image processing device according to the first embodiment of the present invention.
- FIG. 5 is a functional block diagram of a shading processor of the image processing device according to the first embodiment of the present invention.
- FIG. 6 is a flowchart showing the overall processing of the first embodiment of the present invention.
- FIG. 7 is an explanatory diagram of processing relating to the Y index buffer and the Y sort buffer according to the first embodiment of the present invention.
- FIG. 8 is an explanatory diagram of processing relating to the Y index buffer and the Y sort buffer according to the first embodiment of the present invention.
- FIG. 9 is an explanatory diagram of processing relating to the X index buffer and the X sort buffer according to Embodiment 1 of the present invention.
- FIG. 10 is an explanatory diagram of processing relating to the X index buffer and the X sort buffer according to the first embodiment of the present invention.
- FIG. 11 is a diagram showing an example of the fragment that first becomes valid for each polygon according to the first embodiment of the present invention.
- FIG. 12 is an explanatory diagram of the link update process according to the first embodiment of the present invention.
- FIG. 13 is an explanatory diagram of the link update processing according to the first embodiment of the present invention.
- FIG. 14 is a diagram illustrating the relationship between a fragment and polygons A, B, and C for explaining conventional processing.
- FIG. 15 is a diagram showing a fragment corresponding to polygon A for explaining the conventional processing.
- FIG. 16 is a diagram showing the contents of the buffer memory for explaining the conventional processing.
- FIG. 17 is an explanatory diagram of the painting process of the conventional method.
- Embodiment 1 of the present invention will be described.
- FIG. 1 is a schematic functional block diagram of an image processing apparatus according to Embodiment 1 of the present invention.
- reference numeral 1 denotes a central processing unit (CPU), which operates on an object in a virtual space, obtains information on the object, and performs various controls.
- Reference numeral 2 denotes a geometry processor, which performs high-speed geometric transformations (vector operations) such as polygon coordinate transformation, clipping, and perspective transformation in three-dimensional computer graphics, and luminance calculation.
- 2a is a polygon/material/light buffer memory (polygon/material/light buffer RAM), a buffer in which the valid polygon data, material data, and light data for one frame are stored when the geometry processor 2 performs its processing.
- a polygon is one of the polygonal faces that make up a solid in the virtual space.
- the breakdown of the data stored in the buffer memory 2a is as follows: polygon link information, coordinate information, and other attribute information.
- Reference numeral 3 denotes a fill processor, which performs hidden surface removal processing.
- the fill processor 3 paints the polygons within the area and obtains, for each pixel, the information of the polygon nearest to the viewpoint.
- Texture mapping is the process of creating an image by pasting a pattern (texture) defined separately from the shape on the surface of the object whose shape is defined.
- 4 a is a texture memory (texture RAM) in which a texture map to be processed by the texture processor 4 is stored.
- Shading is a method of rendering the shade of an object composed of polygons, taking into account the normal vector of each polygon, the position and color of the light source, the position of the viewpoint, and the direction of the line of sight.
- the shading processor 5 calculates the brightness of each pixel in the area.
- 5a is a frame buffer in which the image data of one screen is stored. The data is sequentially read from the frame buffer 5a, converted from digital data to an analog signal, and then supplied to a display (not shown) such as a CRT, a liquid-crystal display device, or a plasma display device.
- Reference numeral 6 denotes a program work / polygon buffer RAM for storing a program of the CPU 1 and commands (a polygon database, a display list, etc.) to the graphic processor.
- This buffer memory 6 is also the work memory of the CPU 1.
- The apparatus performs so-called rendering. In rendering, the areas are processed in order from the top left of the screen. In practice, the geometry stage places objects in virtual-space coordinates and perspective-transforms them onto the screen; rendering then creates the picture from the data defined in screen coordinates. The rendering process is repeated for each of the areas.
- FIG. 2 is a functional block diagram of the geometry processor 2.
- reference numeral 21 denotes a data dispatcher, which reads and analyzes commands from the buffer memory 6 and controls the vector engine 22 and the clipping engine 24 based on the analysis results to process the commands.
- the output data is output to the sort engine 27.
- Reference numeral 22 denotes a vector engine, which performs a vector operation.
- the vector data to be handled are stored in the vector register 23.
- Reference numeral 23 denotes a vector register, which stores the data on which the vector engine 22 operates.
- Reference numeral 24 denotes a clipping engine, which performs clipping.
- Reference numeral 25 denotes a Y sort index, which stores a Y index used when performing Y sorting by the sort engine 27.
- 26 is an X sort index (X sort INDEX), which stores the X index used when performing X sorting by the sort engine 27.
- Reference numeral 27 denotes a sort engine, which, by performing X sorting and Y sorting, searches the buffer memory 6 for the polygons included in the fragment of interest. The retrieved polygons are stored in the buffer memory 2a and also sent to the fill processor 3 for rendering. The sort engine 27 also controls the polygon TAG 28 and the polygon cache 34.
- a polygon TAG (polygon TAG) 28 is a buffer for storing the tag of the polygon cache 34.
- FIG. 3 is a functional block diagram of the fill processor 3.
- reference numeral 31 denotes a cache controller, which controls the material caches 42, 45, 51b, 52a, and 53a and a light cache 51a, all described later.
- Reference numeral 32 denotes a material TAG, which stores the tags of the later-described material caches 42, 45, 51b, 52a, and 53a.
- Reference numeral 33 denotes a light TAG, which is a buffer for storing the tag of the later-described light cache 51a.
- Reference numeral 34 denotes a polygon cache, which is a cache memory for storing polygon data.
- 35 is an initial parameter calculator, which calculates the initial values for the DDA (digital differential analyzer).
- Reference numeral 36 denotes a Z comparator array, which performs Z comparison between polygons for the hidden surface removal processing and records a polygon ID and internal division ratios t0, t1, t2.
- One Z comparator stores the data of a polygon: for example, polygon ID, iz, t0, t1, t2, window, stencil, and shadow.
- 37 is a vertex parameter buffer, which is a buffer for storing the parameters at the vertices of the polygon. It has a size of 64 polygons corresponding to the Z comparator array 36.
- Reference numeral 38 denotes an interpolator, which interpolates pixel parameters based on the calculation results t0, t1, t2, and iz of the Z comparator array 36 and the contents of the vertex parameter buffer 37.
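As a rough sketch of what such an interpolator computes, the internal division ratios t0, t1, t2 can be treated as barycentric weights over the three per-vertex values; the patent does not spell out the exact convention, so this weighting (and the function name) is an assumption.

```python
def interpolate_param(t0, t1, t2, v0, v1, v2):
    """Interpolate a per-vertex parameter (e.g. a texture coordinate or
    color channel) at a pixel from the internal division ratios t0, t1, t2
    produced per pixel by the Z comparator array, treating them as
    barycentric weights over the triangle's vertex values v0, v1, v2."""
    return t0 * v0 + t1 * v1 + t2 * v2
```

A perspective-correct variant would additionally divide by the interpolated iz, which the patent lists alongside t0-t2.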
- FIG. 4 is a functional block diagram of the texture processor 4.
- reference numeral 41 denotes a density calculator, which calculates the blend ratio for fog or depth cueing.
- Reference numeral 42 denotes a material cache, which stores data relating to depth information.
- Reference numeral 43 denotes a window register, which is a buffer for storing information about the window.
- An address generator 44 calculates an address on the texture map from the texture coordinates Tx, Ty and the LOD.
- Reference numeral 45 denotes a material cache, which stores data on materials.
- Reference numeral 46 denotes a TLMMI calculator (TLMMI: Tri-Linear MIP Map Interpolation), which performs trilinear interpolation.
- A mip map is a technique for anti-aliasing during texture mapping, that is, for removing jaggies from textures. It is based on the following principle: the color (brightness) of the portion of the object surface projected onto one pixel should be the average of the colors of the corresponding mapping area; otherwise the jaggies become noticeable and the quality of the texture drops markedly. On the other hand, computing that average each time imposes an excessive calculation load, which takes time or requires a high-speed processor. The mip map solves this.
- In the mip map, to simplify the calculation of the color (brightness) of the mapping area corresponding to one pixel, mapping data are prepared in advance at a series of sizes each differing by a factor of two. The size of any mapping area corresponding to one pixel then lies between some two of these power-of-two sizes, and the color of the corresponding mapping area is determined by comparing (interpolating between) those two data sets. For example, given a screen A at scale 1 and a screen B at scale 1/2, the pixels of A and B corresponding to each pixel of a screen C at scale 1/1.5 are obtained; the color of the pixel on screen C is then a color intermediate between the corresponding pixels of screens A and B.
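The bracketing-and-blending principle described above can be sketched as follows. This is an illustrative simplification, not the TLMMI calculator itself: it samples the nearest texel within each of the two bracketing mip levels and blends by the fractional LOD, whereas full trilinear filtering also bilinearly filters within each level; all names are assumptions.

```python
def trilinear_sample(mip_levels, u, v, lod):
    """Blend between the two power-of-two mip levels that bracket the
    pixel's footprint. mip_levels[i] is a square 2D list whose side halves
    at each level; u, v in [0, 1); lod is the fractional level of detail."""
    lo = max(0, min(int(lod), len(mip_levels) - 1))
    hi = min(lo + 1, len(mip_levels) - 1)
    frac = lod - lo if hi != lo else 0.0

    def sample(level):
        # Nearest-texel lookup within one level (bilinear in real TLMMI).
        n = len(mip_levels[level])
        x = min(int(u * n), n - 1)
        y = min(int(v * n), n - 1)
        return mip_levels[level][y][x]

    return (1.0 - frac) * sample(lo) + frac * sample(hi)
```

With lod = 0.5, the result lies halfway between the detailed and the coarse map, which is exactly the "intermediate color" behavior the screen A/B/C example describes.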
- 47 is a color converter, which performs color conversion for 4-bit texels.
- Reference numeral 48 denotes a color palette, which stores the color information used for 4-bit texels.
- the color palette 48 stores the colors used when writing graphics. The color that can be used for one pixel is determined according to the contents of the color palette 48.
- FIG. 5 is a functional block diagram of the shading processor 5.
- reference numeral 51 denotes an intensity processor, which performs an intensity calculation on the polygon after texture mapping.
- 51a is a light cache, which stores light information.
- 51b is a material cache, which stores information about the material: for example, shininess, material specular, and material emission.
- 51c is a window register, which stores information about the window: for example, screen center, focus, and scene ambient.
- Numeral 52 is a modulate processor, which associates polygon colors with texture colors and performs luminance modulation and fog processing.
- 52a is a material cache, which stores information on the material: for example, polygon color and texture mode.
- 52b is a window register, which is a buffer for storing information about the window: for example, fog color.
- Reference numeral 53 denotes a blend processor, which performs blending with the data on the color buffer 54 and writes the result to the color buffer 54.
- the blend processor 53 blends the current pixel color with the pixel color of the frame buffer based on a registered blend value, and writes the result into the frame buffer of the bank indicated by the write bank register.
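As one common instance of such blending, a source-over alpha blend looks like the following; the actual operation depends on the blend mode stored in the material cache 53a, so this function is only an assumed example.

```python
def blend(src_rgb, dst_rgb, alpha):
    """Source-over blend: mix the current pixel color (src) with the color
    already in the buffer (dst) by the blend factor alpha. One possible
    blend mode; the hardware selects the mode from the material cache."""
    return tuple(alpha * s + (1.0 - alpha) * d
                 for s, d in zip(src_rgb, dst_rgb))
```

Applied per translucent polygon in back-to-front Z order, this yields the translucency effect the processing order (ST5-ST6) is designed to enable.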
- 53a is a material cache, which stores information on the material, such as the blend mode.
- Numeral 54 denotes a color buffer, an 8 × 8 color buffer with the same size as a fragment. It has a double-bank structure.
- Reference numeral 55 denotes a plot processor which writes the data on the color buffer 54 to the frame buffer 5a.
- Reference numeral 56 denotes a bitmap processor, which performs bitmap processing.
- Reference numeral 57 denotes a display controller, which reads out data from the frame buffer 5a, supplies it to a DAC (Digital to Analog Converter), and displays it on a display (not shown).
- the polygons are not divided per fragment; instead, the polygons are rearranged in a vertical direction (hereinafter, Y direction) and a horizontal direction (hereinafter, X direction) with respect to the scanning line.
- through the process of rearranging polygons, it is possible to search for the polygons included in each fragment. By using this information to process the polygons for the fragment currently of interest, processing can be performed without dividing the polygons. Therefore, the processing time can be reduced, and the processing can be realized by a device with a small storage capacity.
- polygons are divided into opaque polygons, translucent polygons, and polygons with transparent pixels, and the translucent polygons are processed last. This makes it possible to process polygons with translucency.
- FIG. 6 shows the overall processing flow of the first embodiment of the present invention.
- the polygon data to be sent to the rendering unit are sorted by the polygon's minimum Y coordinate value and written to the buffer (ST1).
- the polygon data that match the Y coordinate of the fragment line currently of interest are sorted by the polygon's minimum X coordinate value within the Y area of interest (ST2).
- All opaque polygons that match the XY coordinates of the fragment currently of interest are rendered (ST3), and then all polygons with transparent pixels that match the XY coordinates of that fragment are rendered (ST4).
- the translucent polygons that match the XY coordinates of the fragment currently of interest are rendered in Z-coordinate order (ST6).
- the color data in the buffer are written to the frame buffer (ST7).
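The flow from ST1 through ST7 can be condensed into a short sketch. The polygon record fields (`ymin`, `xmin`, `z`, `kind`, `cells`) and the coverage test are illustrative assumptions, and "rendering" here just records the last polygon ID written per fragment; the real hardware Z-tests opaque polygons and blends translucent ones.

```python
def covers(polygon, frag):
    """Hypothetical coverage test: frag is an (x, y) fragment cell index
    and each polygon carries the set of cells it touches."""
    return frag in polygon["cells"]

def render_frame(polygons, fragments):
    """Per-frame Y sort (ST1), then per fragment: X sort (ST2), render
    opaque (ST3), transparent-pixel (ST4), and Z-ordered translucent
    polygons (ST5-ST6), then write out (ST7, modeled as a dict write)."""
    polygons = sorted(polygons, key=lambda p: p["ymin"])            # ST1
    frame = {}
    for frag in fragments:
        hit = sorted((p for p in polygons if covers(p, frag)),
                     key=lambda p: p["xmin"])                       # ST2
        for kind in ("opaque", "transparent_pixel"):                # ST3, ST4
            for p in (q for q in hit if q["kind"] == kind):
                frame[frag] = p["id"]
        for p in sorted((q for q in hit if q["kind"] == "translucent"),
                        key=lambda q: q["z"]):                      # ST5-ST6
            frame[frag] = p["id"]                                   # ST7
    return frame
```

Because translucent polygons are handled last, their blend reads a fully resolved opaque background, which is the point of the ordering.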
- the polygon data are first sorted by Y; then, before each row is processed, the sorted data are sorted by the minimum value of X within the Y area of interest. From this result, the position of the first fragment in which each polygon is processed can be determined, i.e., which fragment a polygon first falls in.
- each polygon is subjected to link update processing (described later) after the first fragment containing it has been processed: polygons that are unnecessary from the next fragment onward are removed from the link, and polygons that are still required are included in the information of the polygons contained in the next fragment. Therefore, by following all the links up to that point, all polygons falling within a fragment can be searched for. Even without dividing the polygons per fragment and creating new polygons, the polygon information included in each fragment can be read out and processed.
- link update processing refers to processing in which, after polygon processing is completed for the fragment currently of interest, it is checked whether the polygon becomes invalid from the next fragment onward; if it is invalid, the polygon is removed from the link.
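A much simplified stand-in for this link update might look like the following, where `xmax` (the rightmost fragment column a polygon can touch on the current line) is an assumed field used as the validity test.

```python
def update_link(link, frag_x):
    """Simplified link update after processing fragment column frag_x:
    polygons whose rightmost fragment column has been passed cannot appear
    in any later fragment on this line, so they are dropped from the link;
    the rest remain linked for the next fragment."""
    return [p for p in link if p["xmax"] > frag_x]
```

Calling this after each fragment keeps the link holding exactly the polygons still relevant further along the line, which is what lets the next fragment reuse the link instead of re-searching.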
- the rendering processing unit will be described below.
- polygons are divided into opaque polygons, polygons with transparent pixels, and translucent polygons.
- Figure 7 shows the structure of the Y index buffer (Y INDEX buffer) and the Y sort buffer (Y sort buffer).
- In the Y index buffer, the head address of the link of the polygon list whose minimum Y coordinate value Ymin falls in each fragment row is stored.
- the polygon data is stored in the Y sort buffer in the order of input.
- In the LINK Y parameter of the Y sort buffer, the address of the next polygon on the same line is stored. Therefore, to find the polygons that fall on a given line, one only needs to follow the links from the corresponding Y INDEX address to the polygon whose LINK Y is END.
- The example of FIG. 7 will be described. Looking at address 0 of the Y index buffer, for the top fragment line, the content is "EMPTY", so there is no polygon there; the same is true for the second line. Looking at address 2 of the Y index buffer, for the third line, the content is "ADR8", not "EMPTY". When ADR8 of the Y sort buffer is accessed, the content of its LINK Y is "ADR5", so ADR5 is accessed next. ADR3 and ADR1 are accessed in the same way. The link then ends, because the content of the LINK Y of ADR1 is "END". Processing proceeds in this manner for all the fragment lines.
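The traversal of Fig. 7 maps naturally onto a linked-list walk. The sketch below uses an assumed representation in which the Y sort buffer maps an address to a (polygon, LINK Y) pair, with END modeled as -1 and EMPTY as None.

```python
END, EMPTY = -1, None  # assumed encodings of the patent's END / EMPTY marks

def polygons_on_line(y_index, y_sort, line):
    """Follow the LINK Y chain for one fragment line, as in Fig. 7:
    start at y_index[line] and walk until a polygon whose LINK Y is END.
    y_sort maps address -> (polygon_data, next_address)."""
    found = []
    addr = y_index[line]
    while addr is not EMPTY:
        data, nxt = y_sort[addr]
        found.append(data)
        if nxt == END:
            break
        addr = nxt
    return found
```

With the Fig. 7 contents (line 2 heading to ADR8, then ADR5, ADR3, ADR1/END), the walk returns those four polygons in order, and an EMPTY line returns nothing.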
- when a new polygon is written, the LINK Y of its polygon data is set to "ADR8", the original Y INDEX value, and "ADR11", the address of the polygon data just written, is stored at address 2 of the Y index buffer. This state is shown in FIG. 8.
- FIG. 10 is a diagram for explaining rearrangement in the X direction.
- the reordering process in the X direction is the same as that in the Y direction.
- Xmin in that row is obtained, and the value is used to perform reordering.
- the fragment in which each polygon first becomes valid, that is, the first fragment of interest for that polygon, can thus be found. For example, as shown in FIG. 11, among the fragments with the smallest Y coordinate value, the fragment with the smallest X coordinate value is selected.
- in this way, the polygons are not divided per fragment; instead, the polygons are rearranged in the vertical and horizontal directions with respect to the scanning line.
- an image processing apparatus that divides a screen into a predetermined size and performs processing for each of the divided areas
- as described above, according to the present invention, the information of the image components is rearranged in the vertical direction with respect to the scanning lines and then in the horizontal direction with respect to the scanning lines, and image processing is performed based on the rearranged information of the image components.
- since, in the image processing for each of the divided areas, the object to be processed is divided into opaque polygons, polygons with transparent pixels, and translucent polygons, and the processing is performed in the order of the opaque polygons, the polygons with transparent pixels, and the translucent polygons, translucent polygons can be processed even when the polygons have texture data.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Image Generation (AREA)
- Image Processing (AREA)
- Image Input (AREA)
Description
Claims
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP98921774A EP0984392B1 (en) | 1997-05-22 | 1998-05-22 | Image processor and image processing method |
DE69824725T DE69824725D1 (de) | 1997-05-22 | 1998-05-22 | Bildprozessor und bildverarbeitungsverfahren |
KR19997010806A KR20010012841A (ko) | 1997-05-22 | 1998-05-22 | 화상 처리 장치 및 화상 처리 방법 |
US09/424,424 US6680741B1 (en) | 1997-05-22 | 1998-05-22 | Image processor and image processing method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP9/131831 | 1997-05-22 | ||
JP9131831A JPH10320573A (ja) | 1997-05-22 | 1997-05-22 | 画像処理装置及び画像処理方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1998053425A1 true WO1998053425A1 (fr) | 1998-11-26 |
Family
ID=15067125
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP1998/002262 WO1998053425A1 (fr) | 1997-05-22 | 1998-05-22 | Processeur d'image et procede correspondant |
Country Status (8)
Country | Link |
---|---|
US (1) | US6680741B1 (ja) |
EP (1) | EP0984392B1 (ja) |
JP (1) | JPH10320573A (ja) |
KR (1) | KR20010012841A (ja) |
CN (1) | CN1122945C (ja) |
DE (1) | DE69824725D1 (ja) |
TW (1) | TW375719B (ja) |
WO (1) | WO1998053425A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2355633A (en) * | 1999-06-28 | 2001-04-25 | Pixelfusion Ltd | Processing graphical data |
WO2005059829A1 (en) * | 2003-12-16 | 2005-06-30 | Nhn Corporation | A method of adjusting precision of image data which inter-locked with video signals throughput of a terminal and a system thereof |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2362552B (en) * | 1999-06-28 | 2003-12-10 | Clearspeed Technology Ltd | Processing graphical data |
US6728862B1 (en) | 2000-05-22 | 2004-04-27 | Gazelle Technology Corporation | Processor array and parallel data processing methods |
JP3466173B2 (ja) * | 2000-07-24 | 2003-11-10 | 株式会社ソニー・コンピュータエンタテインメント | 画像処理システム、デバイス、方法及びコンピュータプログラム |
JP3966832B2 (ja) | 2003-04-28 | 2007-08-29 | 株式会社東芝 | 描画処理装置、及び、描画処理方法 |
US7573599B2 (en) * | 2004-05-20 | 2009-08-11 | Primax Electronics Ltd. | Method of printing geometric figures |
US7519233B2 (en) * | 2005-06-24 | 2009-04-14 | Microsoft Corporation | Accumulating transforms through an effect graph in digital image processing |
WO2007064280A1 (en) | 2005-12-01 | 2007-06-07 | Swiftfoot Graphics Ab | Computer graphics processor and method for rendering a three-dimensional image on a display screen |
JP4621617B2 (ja) * | 2006-03-28 | 2011-01-26 | 株式会社東芝 | 図形描画装置、図形描画方法、及びプログラム |
JP5194282B2 (ja) * | 2007-07-13 | 2013-05-08 | マーベル ワールド トレード リミテッド | カラープリンタのためのレーザ振動ミラー支持体に関する方法および装置 |
CN101127207B (zh) * | 2007-09-26 | 2010-06-02 | 北大方正集团有限公司 | 一种提高灰度字形显示质量的方法及装置 |
JP2011086235A (ja) * | 2009-10-19 | 2011-04-28 | Fujitsu Ltd | 画像処理装置、画像処理方法および画像処理プログラム |
US20130300740A1 (en) * | 2010-09-13 | 2013-11-14 | Alt Software (Us) Llc | System and Method for Displaying Data Having Spatial Coordinates |
CN102572205B (zh) * | 2011-12-27 | 2014-04-30 | 方正国际软件有限公司 | 一种图像处理方法、装置及系统 |
CN113628102A (zh) * | 2021-08-16 | 2021-11-09 | 广东三维家信息科技有限公司 | 实体模型消隐方法、装置、电子设备及存储介质 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0371378A (ja) * | 1989-08-11 | 1991-03-27 | Daikin Ind Ltd | 透明感表示方法およびその装置 |
JPH03201081A (ja) * | 1989-12-28 | 1991-09-02 | Nec Corp | 画像生成装置 |
JPH04170686A (ja) * | 1990-11-05 | 1992-06-18 | Canon Inc | 画像処理装置 |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4885703A (en) * | 1987-11-04 | 1989-12-05 | Schlumberger Systems, Inc. | 3-D graphics display system using triangle processor pipeline |
-
1997
- 1997-05-22 JP JP9131831A patent/JPH10320573A/ja not_active Withdrawn
-
1998
- 1998-05-22 DE DE69824725T patent/DE69824725D1/de not_active Expired - Lifetime
- 1998-05-22 US US09/424,424 patent/US6680741B1/en not_active Expired - Fee Related
- 1998-05-22 TW TW087108115A patent/TW375719B/zh active
- 1998-05-22 WO PCT/JP1998/002262 patent/WO1998053425A1/ja active IP Right Grant
- 1998-05-22 CN CN98805330A patent/CN1122945C/zh not_active Expired - Fee Related
- 1998-05-22 EP EP98921774A patent/EP0984392B1/en not_active Expired - Lifetime
- 1998-05-22 KR KR19997010806A patent/KR20010012841A/ko not_active Application Discontinuation
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0371378A (ja) * | 1989-08-11 | 1991-03-27 | Daikin Ind Ltd | 透明感表示方法およびその装置 |
JPH03201081A (ja) * | 1989-12-28 | 1991-09-02 | Nec Corp | 画像生成装置 |
JPH04170686A (ja) * | 1990-11-05 | 1992-06-18 | Canon Inc | 画像処理装置 |
Non-Patent Citations (1)
Title |
---|
See also references of EP0984392A4 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2355633A (en) * | 1999-06-28 | 2001-04-25 | Pixelfusion Ltd | Processing graphical data |
WO2005059829A1 (en) * | 2003-12-16 | 2005-06-30 | Nhn Corporation | A method of adjusting precision of image data which inter-locked with video signals throughput of a terminal and a system thereof |
US7834874B2 (en) | 2003-12-16 | 2010-11-16 | Nhn Corporation | Method of improving the presentation of image data which inter-locked with video signals throughput of a terminal and a system thereof |
Also Published As
Publication number | Publication date |
---|---|
JPH10320573A (ja) | 1998-12-04 |
US6680741B1 (en) | 2004-01-20 |
CN1122945C (zh) | 2003-10-01 |
EP0984392A4 (en) | 2001-04-25 |
TW375719B (en) | 1999-12-01 |
KR20010012841A (ko) | 2001-02-26 |
DE69824725D1 (de) | 2004-07-29 |
CN1275225A (zh) | 2000-11-29 |
EP0984392B1 (en) | 2004-06-23 |
EP0984392A1 (en) | 2000-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6204856B1 (en) | Attribute interpolation in 3D graphics | |
JP2769427B2 (ja) | 一連のグラフィック・プリミティブ用のデータを処理するための方法 | |
US6961065B2 (en) | Image processor, components thereof, and rendering method | |
US6038031A (en) | 3D graphics object copying with reduced edge artifacts | |
US6771264B1 (en) | Method and apparatus for performing tangent space lighting and bump mapping in a deferred shading graphics processor | |
WO1998053425A1 (fr) | Processeur d'image et procede correspondant | |
US20120256942A1 (en) | Floating point computer system with blending | |
JP2001357410A (ja) | 別々に生成された3次元イメージを合成するグラフィックス・システム | |
US6057851A (en) | Computer graphics system having efficient texture mapping with perspective correction | |
US20050243101A1 (en) | Image generation apparatus and image generation method | |
WO1997005576A1 (en) | Method and apparatus for span and subspan sorting rendering system | |
KR20050030595A (ko) | 화상 처리 장치 및 그 방법 | |
WO1997005576A9 (en) | Method and apparatus for span and subspan sorting rendering system | |
JP4198087B2 (ja) | 画像生成装置および画像生成方法 | |
US6501481B1 (en) | Attribute interpolation in 3D graphics | |
JP3979162B2 (ja) | 画像処理装置およびその方法 | |
US6518969B2 (en) | Three dimensional graphics drawing apparatus for drawing polygons by adding an offset value to vertex data and method thereof | |
US7256796B1 (en) | Per-fragment control for writing an output buffer | |
US7372466B2 (en) | Image processing apparatus and method of same | |
JP3104643B2 (ja) | 画像処理装置及び画像処理方法 | |
US8576219B2 (en) | Linear interpolation of triangles using digital differential analysis | |
JP3209140B2 (ja) | 画像処理装置 | |
JPH10261095A (ja) | 画像処理装置及び画像処理方法 | |
JPH1131236A (ja) | ポリゴンデータのソート方法及びこれを用いた画像処理装置 | |
JPH05282428A (ja) | 3次元コンピュータグラフィクス用図形データ作成方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 98805330.6 Country of ref document: CN |
|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): CN KR US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
|
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 1998921774 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1019997010806 Country of ref document: KR |
|
WWP | Wipo information: published in national office |
Ref document number: 1998921774 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 09424424 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 1019997010806 Country of ref document: KR |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: 1019997010806 Country of ref document: KR |
|
WWG | Wipo information: grant in national office |
Ref document number: 1998921774 Country of ref document: EP |