CN102822871B - Demand-based texture rendering in a tile-based rendering system - Google Patents

Demand-based texture rendering in a tile-based rendering system

Info

Publication number
CN102822871B
CN102822871B CN201180014841.2A
Authority
CN
China
Prior art keywords
tile
texture
texturing
scene
dynamically
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201180014841.2A
Other languages
Chinese (zh)
Other versions
CN102822871A (en)
Inventor
J. W. Howson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Imagination Technologies Ltd
Original Assignee
Imagination Technologies Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Imagination Technologies Ltd filed Critical Imagination Technologies Ltd
Priority to CN201610652483.2A (published as CN106296790B)
Publication of CN102822871A
Application granted
Publication of CN102822871B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04Texture mapping

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

A method and apparatus are provided for shading and texturing a computer graphics image in a tile-based rendering system using a dynamically rendered texture. Screen-space geometry for the texture to be dynamically rendered is derived and passed to a tiling unit. Screen-space geometry for a scene that references the dynamically rendered texture is also derived and passed to the tiling unit. Using object data derived from the screen-space geometry, references to regions of the dynamically rendered texture that have not yet been rendered are detected. Those regions are then rendered on demand.

Description

Demand-based texture rendering in a tile-based rendering system
Technical field
The present invention relates to three-dimensional computer graphics rendering systems, and in particular to methods and apparatus associated with texture rendering in a tile-based rendering system.
Background of the invention
It is very common in real-time computer graphics to perform a render to a surface that is then used as a texture in a subsequent render, i.e. the rendered surface becomes a new, "dynamically rendered" texture. For example, in order to render a scene that includes reflections of the scene itself, the scene is first rendered to an environment texture map. This map is then used during the shading of objects to apply a reflection of the environment onto those objects. This can be used, for example, to display a reflection in a mirror.
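By way of illustration only (this sketch does not form part of the patent), the following C++ fragment models the render-to-texture pattern described above: a first pass fills an in-memory "environment" texture, and a second pass samples it while shading. All names, sizes and the toy gradient are hypothetical.

    #include <array>
    #include <cstdint>
    #include <cstdio>

    // Hypothetical 8x8 single-channel texture used as a render target.
    constexpr int kTexSize = 8;
    using Texture = std::array<std::array<uint8_t, kTexSize>, kTexSize>;

    // Pass 1: render the "environment" into the texture (here a toy gradient).
    Texture render_environment() {
        Texture env{};
        for (int y = 0; y < kTexSize; ++y)
            for (int x = 0; x < kTexSize; ++x)
                env[y][x] = static_cast<uint8_t>((x + y) * 16);
        return env;
    }

    // Pass 2: shade a pixel by sampling the dynamically rendered texture.
    uint8_t shade_pixel(const Texture& env, int u, int v) {
        return env[v % kTexSize][u % kTexSize];
    }

    int main() {
        Texture env = render_environment();               // texture is "dynamically rendered"
        std::printf("%d\n", int(shade_pixel(env, 3, 5))); // reflection lookup during shading
    }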
It is also common for modern computer graphics applications to use a technique known as shadow mapping in addition to environment mapping. Shadow mapping techniques render the depth of the objects in a scene, as seen from the point of view of a light source, to a texture map. During the subsequent rendering of the objects, these texture maps are used to determine whether each pixel is in shadow with respect to the light source. This is done by comparing the depth of the object pixel being rendered against the depth stored at the equivalent position in the "shadow map" texture. For example, if the depth of the object is greater than the depth in the shadow map, it lies behind another object with respect to the light source, and so the light source's radiance contribution should not be applied when rendering that pixel. The textures associated with shadow maps are typically large, e.g. 2048x2048 or bigger, and it is not uncommon for higher-quality renders to require larger texture dimensions still.
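The depth comparison just described can be stated in a few lines of code. The following is a hedged sketch, assuming the common convention that larger depths are further from the light; the structure and names are illustrative, not taken from the patent.

    #include <vector>

    // Hypothetical shadow map: one depth value per texel, rendered from the light.
    struct ShadowMap {
        int width = 2048, height = 2048;  // typical large size, per the text above
        std::vector<float> depth = std::vector<float>(2048 * 2048, 1.0f);
        float at(int x, int y) const { return depth[y * width + x]; }
    };

    // True when the pixel lies further from the light than the stored depth,
    // i.e. another surface sits between the pixel and the light source, so the
    // light's contribution should not be applied to this pixel.
    bool in_shadow(const ShadowMap& sm, int x, int y, float depth_from_light) {
        return depth_from_light > sm.at(x, y);
    }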
It should be noted that many variations on the above techniques exist; both are presented only as examples, and the scope of the present invention is not limited to these techniques.
In modern graphics applications, these renders to texture, and the subsequent reads from them, can consume a significant proportion of the memory bandwidth available for rendering a scene at an interactive frame rate. Further, it is not unusual for much of the data rendered to these textures never to be used subsequently. For example, Fig. 1 shows a previously rendered texture 100 subdivided into regions T0 to T23, and a triangle 120 onto which texture 100 is mapped. It can be seen that only tiles T3, T8, T9, T14 to T16, T21 and T22 of the previously rendered texture are required when rasterizing triangle 120 to satisfy its texture mapping.
Tile-based rendering systems are well known. These systems subdivide an image into a plurality of rectangular blocks, or tiles. The manner in which this is done, and the texturing and shading subsequently performed, is shown schematically in Fig. 2.
First, a primitive/command fetch unit 201 retrieves command and primitive data from memory and passes it to a geometry processing unit 202, which transforms the primitive and command data into screen space using well-known methods.
This data is then supplied to a tiling unit 203, which inserts the object data from the screen-space geometry into object lists for each of a set of defined rectangular regions, or tiles. The object list for each tile contains the primitives that exist wholly or partially in that tile. An object list exists for every tile on the screen, although some object lists may have no data in them. These object lists are fetched by a tile parameter fetch unit 205, which supplies them tile by tile to a hidden surface removal (HSR) unit 206; this removes surfaces that will not contribute to the final scene (usually because they are obscured by another surface). The HSR unit processes each primitive in the tile and passes only the data for visible pixels to a shading unit 208.
The shading unit takes the data from the HSR unit and uses it to fetch textures, using the texturing unit 210, and applies shading to each pixel within a visible object using well-known techniques. The shading unit then feeds the textured and shaded data to an on-chip tile buffer 212. Because the data is stored temporarily on chip in the tile buffer, the external memory bandwidth associated with that temporary storage is eliminated.
Once each tile has been textured and shaded, the resulting data is written to an external scene buffer 214.
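The per-tile flow of Fig. 2 can be summarised in code. This C++ sketch is purely illustrative of the data flow (HSR, then shading into an on-chip buffer, then a single write-out); the trivial stage implementations are assumptions, not the patent's hardware.

    #include <vector>

    struct Primitive { int x, y; };   // placeholder object-list entry
    struct Pixel { int x, y; };       // a pixel surviving hidden surface removal

    // Trivial stand-in for HSR: real hardware would cull obscured surfaces
    // and pass on only the visible pixels of each primitive.
    std::vector<Pixel> hidden_surface_removal(const std::vector<Primitive>& objects) {
        std::vector<Pixel> visible;
        for (const Primitive& o : objects) visible.push_back({o.x, o.y});
        return visible;
    }

    // Trivial stand-in for texturing and shading of one visible pixel.
    int shade(const Pixel& p) { return p.x + p.y; }

    void render_tile(const std::vector<Primitive>& object_list,
                     std::vector<int>& external_scene_buffer) {
        std::vector<int> on_chip_tile_buffer;  // no external traffic while shading
        for (const Pixel& p : hidden_surface_removal(object_list))
            on_chip_tile_buffer.push_back(shade(p));
        // Only once the whole tile is textured and shaded is it written out.
        external_scene_buffer.insert(external_scene_buffer.end(),
                                     on_chip_tile_buffer.begin(),
                                     on_chip_tile_buffer.end());
    }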
Summary of the invention
Preferred embodiments of the present invention provide a method and apparatus that enable a tile-based rendering system to rasterize rendered texture surfaces only where they will be used in a subsequent render. This is done by deferring the rasterization phase for those surfaces textured with dynamically rendered textures until the point at which they are referenced, the tiling phase described above having been completed for them. As a render of a scene that uses "demand-based texture rendering" (e.g. from texture 100 of Fig. 1) may reference only a small region of the texture within each tile, the system can render one or more texture tiles at the point at which they are referenced by the main render, using a small write-back cache. Due to the locality of reference, it is likely that the rendered data remains resident within the cache subsystem for as long as any tile demands it, significantly reducing the memory bandwidth associated with both the render to the texture and its subsequent use.
Brief description of the drawings
Preferred embodiments of the present invention will now be described in detail by way of example with reference to the accompanying drawings, in which:
Fig. 1 shows schematically how small regions of a previously rendered texture may subsequently be referenced when texturing an object;
Fig. 2 shows a schematic diagram of a well-known tile-based rendering system;
Fig. 3 shows the operation of demand-based texture rendering embodying the present invention;
Fig. 4 shows the modifications to a tile-based rendering system used for demand-based texture rendering in an embodiment of the present invention;
Fig. 5 shows the modifications made to the texture pipeline to implement demand-based texture rendering requests in an embodiment of the present invention.
Detailed description of the invention
Fig. 3 shows the operation of demand-based texture rendering in a tile-based system. An application 300 first generates the geometry 305 for the image that will be rendered to a texture, such as 100 in Fig. 1. This geometry is processed at 310, using well-known techniques, to generate screen-space geometry that is passed to a tiling unit 315, which generates the tiled screen-space parameters 320 for the tile-based rendering system. It should be noted that a valid-flag table 358 indicates the current state (rendered or not) of each tile in the texture. This table is typically stored in memory and contains a status flag for each tile indicating whether that tile has been rendered; all flags are initially cleared, indicating that no tiles have been rendered. Whenever a tile is rendered, the flag corresponding to that tile is set, as sketched below.
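A minimal sketch of such a per-tile valid-flag table, assuming one flag per tile and row-major tile indexing (both assumptions; the text above only requires a per-tile flag that starts cleared and is set when the tile is rendered):

    #include <vector>

    // Hypothetical valid-flag table 358: one flag per texture tile, all
    // cleared initially to indicate that no tile has been rendered yet.
    class TileValidFlags {
    public:
        TileValidFlags(int tiles_x, int tiles_y)
            : tiles_x_(tiles_x), flags_(tiles_x * tiles_y, false) {}

        bool is_rendered(int tx, int ty) const { return flags_[ty * tiles_x_ + tx]; }
        void mark_rendered(int tx, int ty)     { flags_[ty * tiles_x_ + tx] = true; }
        void clear(int tx, int ty)             { flags_[ty * tiles_x_ + tx] = false; }

    private:
        int tiles_x_;
        std::vector<bool> flags_;
    };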
The application then switches, at 330, to rendering the main scene by generating the main scene geometry 335, which is processed into screen space by a geometry processor 340 and tiled at 345 to produce an object list for each tile. The resulting main scene parameters 350 are then rasterized, one tile at a time, at 355. During the rasterization process, the rasterization hardware (not shown) detects regions of the dynamic texture, e.g. an environment map, that are requested but not present in texture memory. These correspond to texturing of the type discussed above with reference to Fig. 1. By reading the tile valid flags 358, the rasterizer determines whether each texture tile (as shown in Fig. 1) has already been rendered. When the rasterizer determines that a texture tile has not yet been rendered, it switches from rasterizing the main scene to rasterizing the parameters of the texture render 320 for the required tiles, corresponding to the regions that are needed. The subsequent rasterization process 375 generates texture image data, e.g. for tile T3 380 (from Fig. 1), which is written to a cache 360. The valid flag corresponding to the texture tile is then set, and at 385 the rasterization hardware switches back to rasterizing the main scene 355. This process is repeated for all regions of the dynamic texture that the scene is found to request, e.g. regions T3, T8, T9, T14, T15, T16, T21 and T22 of Fig. 1. It should be noted that the remaining regions of the dynamic texture are never rasterized, saving considerable memory bandwidth and processing cost.
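Reduced to code, the switching behaviour just described might look like the following hedged sketch, which reuses the hypothetical TileValidFlags class above; the tile-rendering call is a stub standing in for the context switch, not the patent's actual hardware.

    #include <cstdio>

    // Stub standing in for "switch context, rasterize one texture tile into
    // the cache, switch back" (375/385 in Fig. 3).
    void render_texture_tile_into_cache(int tx, int ty) {
        std::printf("rendering texture tile (%d,%d) on demand\n", tx, ty);
    }

    // Demand-driven fetch path: before the main-scene rasterizer samples a
    // dynamic texture, check the tile's valid flag; render the tile first if
    // it is missing, then mark it valid.
    void on_texture_fetch(TileValidFlags& flags, int tx, int ty) {
        if (!flags.is_rendered(tx, ty)) {
            render_texture_tile_into_cache(tx, ty);
            flags.mark_rendered(tx, ty);   // set the tile's valid flag 358
        }
        // The main-scene fetch now proceeds; the requested texel is present.
    }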
It should be noted that texture tile data written to the cache may either be written back to memory (not shown) or discarded when it is evicted from the cache 360. If the data is discarded and the tile (e.g. T3) is referenced again, it can be re-rendered as described above. This approach allows very large texture surfaces to be represented by only the memory associated with the tiled geometry parameters. Where texture data is discarded rather than written back to memory, the corresponding valid flag 358 is cleared to indicate that the texture tile is no longer present.
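The two eviction policies just mentioned (write back, or discard and clear the flag) can be sketched as below; the structures are again assumed for illustration and reuse the TileValidFlags sketch above.

    // Hypothetical eviction hook for the texture tile cache 360. A discarded
    // tile has its valid flag cleared, so a later reference re-renders it.
    enum class EvictionPolicy { WriteBack, Discard };

    void write_tile_to_memory(int /*tx*/, int /*ty*/) { /* assumed write-back */ }

    void on_tile_evicted(TileValidFlags& flags, int tx, int ty, EvictionPolicy p) {
        if (p == EvictionPolicy::WriteBack)
            write_tile_to_memory(tx, ty);  // data survives in external memory
        else
            flags.clear(tx, ty);           // tile gone: flag it "not rendered"
    }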
Fig. 4 shows a tile-based rendering system that implements demand-based rendering of textured surfaces as discussed above. It should be noted that the tiling/geometry processing units are the same as those shown in Fig. 2 and are not shown here. A tile parameter fetch unit 410 fetches the parameter lists for a tile, as in a normal tile-based rendering system, and passes the object data to a hidden surface removal (HSR) unit 420. Surfaces that will not contribute to the final scene (usually because they are obscured by another surface) are removed using well-known methods. The HSR unit processes each primitive within the tile and passes only the data for pixels lying on visible surfaces to a shading unit 430.
The shading unit 430 takes objects from the HSR unit 420 and applies shading and texturing to each pixel of each visible object using well-known techniques, which include issuing texturing requests to a texture sampling unit (TSU) 460.
The TSU is illustrated in Fig. 5. A texture addressing unit 500 takes texture sampling requests and calculates the X and Y addresses for each texture fetch using well-known methods. The X and Y addresses are passed to a tile address calculator 550, which determines the address of the tile within which the requested texture fetch lies. This calculation typically strips the low-order bits from the X and Y addresses to form tile X and Y addresses, combines these values (the tile Y address multiplied by the width of the texture in tiles, plus the tile X address), and adds a base address to form the address of the "tile valid flag" stored in the tile valid table (480 in Fig. 4), which is held in memory; it should be noted that other methods of address calculation could be used. The address of the valid flag is passed to a valid-flag fetch unit 560, which retrieves the specified flag from the valid-flag table in memory. This flag is then issued as a "not present" signal 570. It should be noted that the valid-flag fetch unit 560 may operate through a memory cache to improve its performance. The valid-flag fetch unit also passes the valid flag to an address translator 520, which converts the X, Y addresses into linear memory addresses using well-known methods. If the valid flag indicates that the texture tile is not present, the address translation unit stops processing. If the valid flag indicates that the texture tile is present, the address translation unit passes the calculated texture address to a texture cache unit 530, which retrieves the texture data from internal or external memory as required. The retrieved data is passed to a texture filtering unit 540, which filters the returned data using well-known techniques; the resulting filtered data is returned to the shading unit 430 of Fig. 4.
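The flag-address computation performed by the tile address calculator 550 can be made concrete as follows; the tile size, the row-major layout and one flag per address unit are assumptions (the text above notes that other address calculations could be used).

    #include <cstdint>

    // Hypothetical parameters: 32x32-texel tiles, 64 tiles per texture row.
    constexpr uint32_t kTileShift   = 5;   // log2 of the tile dimension
    constexpr uint32_t kTilesPerRow = 64;  // texture width in tiles (2048/32)

    // Strip the low-order bits of the texel X/Y address to form tile X and Y,
    // linearise, and add the table's base address to locate the valid flag.
    uint32_t valid_flag_address(uint32_t x, uint32_t y, uint32_t base) {
        uint32_t tile_x = x >> kTileShift;
        uint32_t tile_y = y >> kTileShift;
        return base + tile_y * kTilesPerRow + tile_x;
    }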
If the flag indicates that the texture tile has not yet been rasterized, the "not present" signal 570 is sent to a context switching unit (CSU) 400 in Fig. 4, which indicates that the system needs to switch to rendering the "missing" texture tile (i.e. the texture is not available). This occurs for dynamically rendered textures such as environment maps.
The CSU then instructs all units in the rasterizer to switch to rasterizing the requested texture (i.e. the missing texture tiles). It should be noted that the CSU may rasterize a single missing tile, or multiple tiles located in the region of the missing texture. The CSU 400 may be implemented as a hardware block, as a separate programmable processor/microcontroller, or by using an interrupt to the "host" processor of the device.
As the system rasterizes each tile indicated by the CSU, each completed texture tile is output by buffer 440 to memory via cache 470. Typically this cache will be of the well-known "write-back" type, so that when the tiles indicated by the CSU have been rasterized, the data is located locally within the cache. The CSU then updates the tile valid flags 480 for the rasterized tiles to indicate that they are present. The cache now holds those tiles of the dynamically mapped texture that were marked as requested by the render referencing the texture.
When rasterization of the requested tiles is complete, the CSU switches the rasterizer back to processing the original render and allows the address translator 520 of Fig. 5 to continue issuing texture addresses to the cache unit as described above. As the rasterized texture tiles are now located locally within the cache, any texture fetches for them can be serviced by cache accesses, reducing the memory bandwidth associated with those fetches, i.e. the dynamically mapped texture is served from the cache.
It should be noted that the cache may instead be a buffer memory that discards tiles, rather than writing them back to memory, when making room for new tile data. In that case, any subsequent reference to a discarded tile requires it to be rasterized again using the process described above.

Claims (4)

1. A method for shading and texturing a computer graphics image using a dynamically rendered texture in a tile-based rendering system, the method comprising the steps of:
deriving screen-space geometry for the texture to be dynamically rendered;
passing the screen-space geometry for the texture to a tiling unit to produce texture parameters for each of a plurality of texture tiles;
deriving screen-space geometry for a scene referencing the texture to be dynamically rendered;
passing the screen-space geometry for the scene to the tiling unit to produce an object list for each of a plurality of scene tiles;
rendering the scene tiles from the tiling unit using the object lists, the rendering including:
issuing texturing requests for the texture to a texture sampling unit;
calculating X and Y addresses for a texture fetch;
determining in which texture tile the texture fetch falls;
reading a corresponding valid flag for the texture tile from a table in memory to determine whether the texture tile has been rendered; and
if the valid flag indicates that the texture tile has not yet been rendered, dynamically rendering the texture data for the texture tile.
2. The method according to claim 1, further comprising: writing the dynamically rendered texture data to memory via a cache.
3. An apparatus for shading and texturing a computer graphics image using a dynamically rendered texture in a tile-based rendering system, the apparatus comprising:
means for deriving screen-space geometry for the texture to be dynamically rendered;
means for passing the screen-space geometry for the texture to a tiling unit to produce texture parameters for each of a plurality of texture tiles;
means for deriving screen-space geometry for a scene referencing the texture to be dynamically rendered;
means for passing the screen-space geometry for the scene to the tiling unit to produce an object list for each of a plurality of scene tiles;
means for rendering the scene tiles from the tiling unit using the object lists, the rendering including:
issuing texturing requests for the texture to a texture sampling unit;
calculating X and Y addresses for a texture fetch;
determining in which texture tile the texture fetch falls;
reading a corresponding valid flag for the texture tile from a table in memory to determine whether the texture tile has been rendered; and
if the valid flag indicates that the texture tile has not yet been rendered, dynamically rendering the texture data for the texture tile.
4. The apparatus according to claim 3, further comprising: means for writing the dynamically rendered texture data to memory via a cache.
CN201180014841.2A 2010-03-19 2011-03-18 Demand-based texture rendering in a tile-based rendering system Active CN102822871B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610652483.2A CN106296790B (en) 2010-03-19 2011-03-18 Method and apparatus for shading and texturing computer graphics images

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
GB1004676.1 2010-03-19
GB1004676.1A GB2478909B (en) 2010-03-19 2010-03-19 Demand based texture rendering in a tile based rendering system
PCT/GB2011/000385 WO2011114112A1 (en) 2010-03-19 2011-03-18 Demand based texture rendering in a tile based rendering system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201610652483.2A Division CN106296790B (en) 2010-03-19 2011-03-18 Method and apparatus for shading and texturing computer graphics images

Publications (2)

Publication Number Publication Date
CN102822871A CN102822871A (en) 2012-12-12
CN102822871B true CN102822871B (en) 2016-09-07

Family

ID=42228050

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201180014841.2A Active CN102822871B (en) 2010-03-19 2011-03-18 Demand-based texture rendering in a tile-based rendering system
CN201610652483.2A Active CN106296790B (en) 2010-03-19 2011-03-18 Method and apparatus for shading and texturing computer graphics images

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201610652483.2A Active CN106296790B (en) 2010-03-19 2011-03-18 Method and apparatus for shading and texturing computer graphics images

Country Status (5)

Country Link
US (1) US20110254852A1 (en)
EP (2) EP3144897B1 (en)
CN (2) CN102822871B (en)
GB (1) GB2478909B (en)
WO (1) WO2011114112A1 (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9069433B2 (en) * 2012-02-10 2015-06-30 Randall Hunt Method and apparatus for generating chain-link fence design
GB2506706B (en) 2013-04-02 2014-09-03 Imagination Tech Ltd Tile-based graphics
GB2520365B (en) 2013-12-13 2015-12-09 Imagination Tech Ltd Primitive processing in a graphics processing system
GB2520366B (en) * 2013-12-13 2015-12-09 Imagination Tech Ltd Primitive processing in a graphics processing system
CN103995684B (en) * 2014-05-07 2017-01-25 广州瀚阳工程咨询有限公司 Method and system for synchronously processing and displaying mass images under ultrahigh resolution platform
US9760968B2 (en) 2014-05-09 2017-09-12 Samsung Electronics Co., Ltd. Reduction of graphical processing through coverage testing
GB2526598B (en) 2014-05-29 2018-11-28 Imagination Tech Ltd Allocation of primitives to primitive blocks
GB2524120B (en) * 2014-06-17 2016-03-02 Imagination Tech Ltd Assigning primitives to tiles in a graphics processing system
GB2524121B (en) 2014-06-17 2016-03-02 Imagination Tech Ltd Assigning primitives to tiles in a graphics processing system
US9842428B2 (en) 2014-06-27 2017-12-12 Samsung Electronics Co., Ltd. Dynamically optimized deferred rendering pipeline
CN105321196A (en) * 2014-07-21 2016-02-10 上海羽舟网络科技有限公司 3D image processing method and system
US9232156B1 (en) 2014-09-22 2016-01-05 Freescale Semiconductor, Inc. Video processing device and method
GB2534567B (en) 2015-01-27 2017-04-19 Imagination Tech Ltd Processing primitives which have unresolved fragments in a graphics processing system
GB2546810B (en) 2016-02-01 2019-10-16 Imagination Tech Ltd Sparse rendering
GB2546811B (en) * 2016-02-01 2020-04-15 Imagination Tech Ltd Frustum rendering
GB201713052D0 (en) * 2017-08-15 2017-09-27 Imagination Tech Ltd Single pass rendering for head mounted displays
US10424074B1 (en) 2018-07-03 2019-09-24 Nvidia Corporation Method and apparatus for obtaining sampled positions of texturing operations
US10672185B2 (en) 2018-07-13 2020-06-02 Nvidia Corporation Multi-rate shading using replayed screen space tiles
US10950305B1 (en) * 2018-11-02 2021-03-16 Facebook Technologies, Llc Selective pixel output
CN110866965A (en) * 2019-11-14 2020-03-06 珠海金山网络游戏科技有限公司 Mapping drawing method and device for three-dimensional model
CN110990104B (en) * 2019-12-06 2023-04-11 珠海金山数字网络科技有限公司 Texture rendering method and device based on Unity3D
WO2022131949A1 (en) * 2020-12-14 2022-06-23 Huawei Technologies Co., Ltd. A device for performing a recursive rasterization


Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4225861A (en) * 1978-12-18 1980-09-30 International Business Machines Corporation Method and means for texture display in raster scanned color graphic
US5371839A (en) * 1987-02-27 1994-12-06 Hitachi, Ltd. Rendering processor
US6795072B1 (en) * 1999-08-12 2004-09-21 Broadcom Corporation Method and system for rendering macropixels in a graphical image
US6859209B2 (en) * 2001-05-18 2005-02-22 Sun Microsystems, Inc. Graphics data accumulation for improved multi-layer texture performance
US6914610B2 (en) * 2001-05-18 2005-07-05 Sun Microsystems, Inc. Graphics primitive size estimation and subdivision for use with a texture accumulation buffer
GB2452300B (en) * 2007-08-30 2009-11-04 Imagination Tech Ltd Predicated geometry processing in a tile based rendering system
US8174534B2 (en) * 2007-12-06 2012-05-08 Via Technologies, Inc. Shader processing systems and methods
GB0810205D0 (en) * 2008-06-04 2008-07-09 Advanced Risc Mach Ltd Graphics processing systems

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101176119A (en) * 2005-03-21 2008-05-07 高通股份有限公司 Tiled prefetched and cached depth buffer

Also Published As

Publication number Publication date
US20110254852A1 (en) 2011-10-20
GB2478909B (en) 2013-11-06
EP3144897B1 (en) 2020-01-01
EP2548177A1 (en) 2013-01-23
CN106296790A (en) 2017-01-04
EP2548177B1 (en) 2016-12-14
CN106296790B (en) 2020-02-14
GB201004676D0 (en) 2010-05-05
WO2011114112A1 (en) 2011-09-22
GB2478909A (en) 2011-09-28
EP3144897A1 (en) 2017-03-22
CN102822871A (en) 2012-12-12


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant