CN101281655A - Method for drawing hair-like patterns in the silhouette region with GPU acceleration - Google Patents

Method for drawing hair-like patterns in the silhouette region with GPU acceleration

Info

Publication number
CN101281655A
CN101281655A, CN200810113004A, CN100585636C
Authority
CN
China
Prior art keywords
hair
gpu
fin
texture
vertex
Prior art date
Legal status
Granted
Application number
CNA2008101130045A
Other languages
Chinese (zh)
Other versions
CN100585636C (en)
Inventor
Yang Gang (杨刚)
Wu Enhua (吴恩华)
Sun Hanqiu (孙汉秋)
Wang Wencheng (王文成)
Current Assignee
Institute of Software of CAS
Original Assignee
Institute of Software of CAS
Priority date
Filing date
Publication date
Application filed by Institute of Software of CAS
Priority to CN200810113004A (granted as CN100585636C)
Publication of CN101281655A
Application granted
Publication of CN100585636C
Status: Expired - Fee Related
Anticipated expiration recorded


Landscapes

  • Image Generation (AREA)

Abstract

The invention discloses a GPU-accelerated method for drawing hair-like patterns in the silhouette region of a model, belonging to the field of photorealistic computer graphics. The method comprises: generating a Fin texture that represents the hair-like pattern of the silhouette region, together with the corresponding tangent texture of that pattern; judging, in the vertex shader of the GPU, whether each vertex of the object model lies in the silhouette region; and drawing, in the pixel shader, the Fin slice of each edge in the silhouette region, producing the hair-like effect of the silhouette region. The patterns drawn by the invention can be used in fields such as computer simulation, virtual reality and electronic games.

Description

GPU-accelerated method for drawing hair-like patterns in the silhouette region
Technical field
The present invention relates to a method, accelerated by the GPU, for drawing hair-like patterns in the silhouette region of a model. More particularly, it relates to a method based on the textured-slice technique that uses the programmability and high processing throughput of the GPU to raise the speed of drawing hair-like patterns in the silhouette region. The invention belongs to the field of photorealistic computer graphics. The rendered hair-like patterns can be used in fields such as computer simulation, virtual reality and electronic games.
Background art
1.1 Realistic hair rendering
Realistic rendering of hair has always been a research focus in computer graphics, and many classes of hair representations have appeared. The most intuitive class represents every individual hair with geometric primitives (G. Miller. From Wire-Frame to Furry Animals [C] // Proceedings of Graphics Interface. Mahwah, NJ: Lawrence Erlbaum Associates, 1988: 138-146; A. LeBlanc, R. Turner and D. Thalmann. Rendering hair using pixel blending and shadow buffers [J]. The Journal of Visualization and Computer Animation, 1991, 2(1): 92-97; Y. Watanabe and Y. Suenaga. A trigonal prism-based method for hair image generation [J]. IEEE Computer Graphics and Applications, 1992, 12(1): 47-53; J. Berney and J. Redd. Stuart Little: A Tale of Fur, Costumes, Performance, and Integration: Breathing Real Life Into a Digital Character [OL]. SIGGRAPH 2000 Course Note #14). But because the number of individual hairs on a body surface is huge, these geometry-based methods can hardly reach high speeds and cannot satisfy the requirements of real-time rendering.
Differing from these explicit geometric methods, Kajiya et al. (J. T. Kajiya and T. L. Kay. Rendering Fur with Three Dimensional Textures [C] // Computer Graphics Proceedings, Annual Conference Series, ACM SIGGRAPH. New York: ACM Press, 1989: 271-280) proposed in 1989 representing hair with a volume texture, obtaining very lifelike results. However, because the method adopts ray casting as its rendering mode, it is very slow.
On the basis of the Kajiya method, Meyer et al. (A. Meyer and F. Neyret. Interactive Volumetric Textures [C] // Proceedings of Eurographics Workshop on Rendering '98. Vienna: Springer-Verlag, 1998: 157-168) approximated the volume-texture effect by blend-rendering multiple texture slices, which greatly improved rendering efficiency.
Based on this textured-slice technique of Meyer, Lengyel et al. (J. Lengyel. Real-time fur [C] // Proceedings of Eurographics Workshop on Rendering '00. Vienna: Springer-Verlag, 2000: 243-256; J. Lengyel, E. Praun, A. Finkelstein et al. Real-Time Fur over Arbitrary Surfaces [C] // Proceedings of ACM 2001 Symposium on Interactive 3D Graphics. New York: ACM Press, 2001: 227-232) achieved real-time rendering of short fur. They represent the furry surface of an object as a series of mesh layers parallel to the body surface, each layer mapped with a corresponding semi-transparent hair texture; blend-rendering these semi-transparent mesh layers produces the fur effect. The Lengyel method can generate lifelike fur in real time and has important practical value.
On the basis of the Lengyel method, Yang et al. (Yang Gang, Sun Hanqiu, Wang Wencheng, Wu Enhua. Realistic hair rendering based on GPU [J]. Journal of Software, 2006, 17(3): 577-586) proposed, for the characteristics of the textured-slice approach, a method for simulating hair self-shadowing, further enhancing the realism of the rendered hair. In addition, Yang et al. ("Real-time fur rendering with non-uniform texture layers", Yang Gang, Sun Hanqiu, Wang Wencheng, Wu Enhua. Journal of Computer-Aided Design & Computer Graphics, 2007, 19(4): 430-435) also proposed representing hair with non-uniform texture layers, improving the efficiency and flexibility of the representation.
However, the above methods that represent hair with textured slices suffer from poor hair appearance at the silhouette. Lengyel et al. (J. Lengyel, E. Praun, A. Finkelstein et al. Real-Time Fur over Arbitrary Surfaces [C] // Proceedings of ACM 2001 Symposium on Interactive 3D Graphics. New York: ACM Press, 2001: 227-232) overcame this problem by adding "Fin slices" on the silhouette edges of the object. But generating the Fin slices and transmitting their data consumes a large amount of time and seriously affects rendering efficiency. The present invention addresses exactly this problem and proposes using the GPU to accelerate the drawing of silhouette hair. The method applies equally to showing, in the silhouette region of a model, hair-like patterns similar to hair, such as lawns, vegetation and carpets.
The textured-slice technique and its problems are introduced in detail below, followed by a brief introduction to the programmability of the GPU.
1.2 Hair representation based on textured slices and its problems
Lengyel et al. (J. Lengyel, E. Praun, A. Finkelstein et al. Real-Time Fur over Arbitrary Surfaces [C] // Proceedings of ACM 2001 Symposium on Interactive 3D Graphics. New York: ACM Press, 2001: 227-232) and Yang et al. (Yang Gang, Sun Hanqiu, Wang Wencheng, Wu Enhua. Realistic hair rendering based on GPU [J]. Journal of Software, 2006, 17(3): 577-586; "Real-time fur rendering with non-uniform texture layers", Yang Gang, Sun Hanqiu, Wang Wencheng, Wu Enhua. Journal of Computer-Aided Design & Computer Graphics, 2007, 19(4): 430-435) all adopted the textured-slice technique to represent realistic fur. They represent the fur on the object surface with a series of semi-transparent texture layers parallel to the body surface. The main steps are shown in Fig. 1: in the preprocessing stage, a patch of geometric hairs is generated with a particle system (Fig. 1a), and this patch is sampled horizontally to generate multiple layers of semi-transparent 2D textures (called shell textures, Fig. 1b). In the rendering stage, multiple mesh layers parallel to the body surface are generated, each mapped with the shell texture of the corresponding level; alpha-blending these mesh layers in inside-to-outside order produces the fluffy effect. Each mesh layer is formed by offsetting the original model vertices along their normal directions (Fig. 1c).
With this textured-slice technique, a lifelike fur effect can be drawn quickly. But because gaps exist between the texture mesh layers, when the line of sight is nearly parallel to the surface mesh (which generally happens near the body silhouette) the gaps between the layers may become visible, making the hair at the silhouette look overly transparent and hurting the realism of the rendering. Lengyel overcomes this problem by adding quadrilateral texture slices (called Fin slices) on the silhouette edges. Each Fin slice stands perpendicular to the body surface and is mapped with a texture of a hair cluster (called the Fin texture), so it covers the gaps between the texture mesh layers and achieves a good rendering result. However, to generate the Fin slices, every edge of the object must undergo a silhouette-edge test before each draw; for every detected edge in the silhouette region, corresponding Fin slice data must be generated; and this Fin slice data must be transferred from the CPU into the GPU for drawing. This process often consumes 30% of the time of the whole hair-rendering pipeline and seriously affects rendering efficiency.
1.3 GPU programmability
In recent years the architecture of the graphics processing unit (GPU) has been updated continuously and its processing power keeps growing. Because the GPU adopts a parallel-processing architecture, its computing speed on graphics tasks far exceeds that of the CPU. More importantly, the GPU has become increasingly programmable, which lets the user move related computations into the GPU through programming. Two parts of the GPU are user-programmable: the vertex shader and the pixel shader. When an object is drawn, its geometric vertex data first enters the vertex shader, which the user can program to apply various per-vertex processing; the vertices processed by the vertex shader are rasterized into pixel data that enters the pixel shader, which the user can program to compute pixel colors and produce the desired rendering effect.
At present, using the programmability of the GPU to move computing tasks from the CPU into the GPU has become a very effective acceleration strategy. Yang et al. (cited above) already exploited the computing power of the GPU in the hair-rendering process: in their method the positions of the multi-layer mesh surfaces are computed in the vertex shader, and the hair lighting is computed in the pixel shader. The present invention further uses the processing power of the GPU to perform silhouette-edge detection and Fin slice generation, so as to improve the efficiency of hair rendering.
Summary of the invention
The objective of the invention is to solve the problems of low generation efficiency and slow rendering of silhouette hair (i.e. the hair represented by Fin slices) in existing hair-rendering methods based on textured slices, and to propose an efficient method for drawing silhouette hair. The method applies equally to hair-like patterns shown with Fin slices; therefore, the rendered object called "hair" hereinafter should be understood to include all "hair-like patterns" rendered with the same technique, such as lawns, fuzz and the like.
The inventive method uses the programmability and high processing power of the GPU to accelerate the computation and generation of the Fin slices, and thereby the drawing of the silhouette hair. The method moves silhouette-edge detection and Fin slice generation entirely into the GPU. On the one hand this makes full use of the powerful parallel computing capability of the GPU; on the other hand it removes the data-transfer burden from CPU to GPU. The processing speed of the Fin slices is thus greatly improved, and so is the efficiency of realistic hair rendering.
The silhouette of an object consists of the edges on the body surface that are tangent to the line of sight, i.e. the boundary between the front-facing and back-facing surfaces. The silhouette region of the object can be defined as the zone within a certain range around the silhouette.
To use Fin slices, every edge of the object model (composed of polygonal patches) must undergo a silhouette-region test before drawing; only a detected edge lying in the silhouette region needs a Fin slice generated for it. The common detection method is to compute the dot product between the view direction V and the normal direction N of the current edge; when the absolute value of this dot product is less than some threshold β, the edge is considered to lie in the silhouette region. The view direction V is the vector from the edge midpoint E_pos to the viewpoint Eye. This process can be expressed by the formula:
    |V* · N*| < β, where V = Eye − E_pos    (1)
In the formula, V* and N* denote the unit vectors of V and N respectively. The value of β can be specified by the user; in the inventors' experiments, β = 0.2 gives good results. For every detected edge belonging to the silhouette region, a quadrilateral Fin slice is generated. The Fin slice stands vertically on the current edge, and its height is the hair length at that edge.
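As an illustration, the test of formula (1) can be sketched as a small CPU-side reference routine (names and example data are hypothetical; in the invention this test runs in the vertex shader):

```python
import numpy as np

def in_silhouette_region(eye, edge_midpoint, edge_normal, beta=0.2):
    """Formula (1): the edge lies in the silhouette region when the unit
    view vector V* and the unit edge normal N* are nearly perpendicular."""
    v = eye - edge_midpoint                              # V = Eye - E_pos
    v_star = v / np.linalg.norm(v)                       # V*
    n_star = edge_normal / np.linalg.norm(edge_normal)   # N*
    return abs(float(np.dot(v_star, n_star))) < beta

eye = np.array([0.0, 0.0, 5.0])
mid = np.array([1.0, 0.0, 0.0])
print(in_silhouette_region(eye, mid, np.array([1.0, 0.0, 0.0])))  # grazing edge -> True
print(in_silhouette_region(eye, mid, np.array([0.0, 0.0, 1.0])))  # front-facing edge -> False
```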
In the method for Lengyel and Yang, the generation of the detection of above silhouette edge and Fin section is all carried out in CPU.And among the present invention, the inventor is by the suitable data transmission policies of design, the generation of the detection of silhouette edge and Fin section transferred to fully finished in the summit renderer among the GPU; And utilize the pixel renderer to finish the texture and the drafting of Fin section.Describe in detail below.
2.1 GPU-accelerated silhouette-edge detection and Fin slice generation
Although the vertex shader of the GPU can perform flexible per-vertex computation, it cannot directly be used to perform silhouette-edge detection and Fin slice generation. The reasons are: (1) generating the Fin slices requires adding a series of new vertex data to represent the newly generated quadrilateral patches, but the vertex shader has no capability to create new vertices; (2) the vertex shader processes isolated vertices one by one and has no notion of an "edge", so it cannot judge silhouette edges.
To overcome these two problems, the inventors generate one set of Fin slice data for every edge of the model in advance. As shown in Fig. 2, the Fin slice on edge e (= v1v2) is formed by four vertices V1, V2, V3, V4, so the Fin slice data of edge e consists of the data of these four vertices. Besides the vertex position coordinates and Fin texture coordinates, the data of each vertex also contains two pieces of "side information": the normal direction E_normal of edge e and the midpoint position E_pos of edge e. Here E_normal is computed as the sum of the normals N1 and N2 of the two faces adjacent to the edge, i.e. E_normal = N1 + N2. If the total number of edges of the current model is edgeNum, the Fin slice data of all edges comprises edgeNum*4 sets of vertex data in total. When drawing, all edgeNum*4 sets of vertex data are sent into the vertex shader for processing. In the vertex shader, formula (1) applied to the two side-information items (the edge normal and the edge midpoint) decides whether the edge owning the current vertex lies in the silhouette region. If it does, the vertex takes part in the subsequent rasterization and draw computation; otherwise the vertex is simply moved outside the view volume (after the projective transformation the z range of the view volume is [-1, 1], so it suffices to set the z coordinate of the vertex outside [-1, 1], for example setting the vertex position to (0, 0, -2)). A quadrilateral slice moved outside the view volume is clipped away automatically and undergoes no further per-pixel operations such as rasterization.
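The pre-generated per-edge data described above can be sketched as follows (a hedged illustration of one plausible 11-float vertex layout; function and variable names are assumptions, not the patent's actual data structures):

```python
import numpy as np

def fin_quad_data(v1, v2, n1, n2, hair_len, u0, u1):
    """Pre-generates the four-vertex Fin slice for edge e = (v1, v2).
    Every vertex record carries the two 'side infos' used by the vertex
    shader: the edge normal E_normal = N1 + N2 and the edge midpoint E_pos."""
    e_normal = n1 + n2                        # sum of the two adjacent face normals
    e_pos = 0.5 * (v1 + v2)                   # midpoint of the edge
    up = e_normal / np.linalg.norm(e_normal)  # extrusion direction for the fin
    corners = [(v1,                 (u0, 0.0)),   # bottom-left
               (v2,                 (u1, 0.0)),   # bottom-right
               (v2 + hair_len * up, (u1, 1.0)),   # top-right
               (v1 + hair_len * up, (u0, 1.0))]   # top-left
    # 11 floats per vertex: position(3) + texcoord(2) + edge normal(3) + midpoint(3)
    return [np.concatenate([p, uv, e_normal, e_pos]) for p, uv in corners]

quad = fin_quad_data(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                     np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0]),
                     hair_len=0.2, u0=0.0, u1=0.25)
print(len(quad), quad[0].size)  # -> 4 11
```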
This strategy is in effect equivalent to pre-generating all quadrilateral Fin slices before silhouette detection, with the vertex shader merely discarding the slices on non-silhouette edges; this bypasses the first problem. Attaching the "side information" to each vertex overcomes the second problem.
As stated above, the pre-generated Fin slice data comprises edgeNum*4 sets of vertex data in total, and every set contains the extra "side information". If these data had to be transferred to the GPU at every draw, considerable time would be taken. Fortunately, the Fin slice data are entirely static and never change; so the Vertex Buffer Object facility of the GPU can be used to keep this static data resident in video memory, eliminating the data-transfer burden completely.
Thus, in this method, the CPU no longer performs any silhouette-edge test. We only need to organize all the quadrilateral slice data in advance and deposit it in video memory; during drawing, the GPU handles everything. Compared with previous methods, the inventive method moves the silhouette-edge test into the GPU and removes both the real-time generation of Fin slices and the CPU-to-GPU data-transfer burden, which yields the acceleration. Table 1 in the embodiment compares the speed of the two approaches: in the generation and drawing of the Fins, this method is 10-15 times faster than the existing method.
2.2 Drawing the silhouette hair
After the Fin slices are generated, they can be drawn in the pixel shader. Besides the Fin texture, each Fin slice is also mapped with a "tangent texture" that records the hair tangent vectors, used in the hair lighting computation. As shown in Fig. 3, Fig. 3a is the Fin texture and Fig. 3b illustrates the hair tangent vectors; the value at each point of the tangent texture records the tangent vector of the hair at that point. Both the Fin texture and its tangent texture are generated in the preprocessing stage by sample-rendering geometric hairs. The Fin texture adopted by the invention is 512*128. When mapping the Fin texture onto a particular Fin slice, the whole strip need not be mapped: it suffices to choose a section of the Fin texture according to the width of that Fin slice. Note that in Fig. 3 the left and right ends of the Fin texture tile seamlessly, which guarantees that the hair texture is continuous across the Fin slices.
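A minimal sketch of choosing a section of the texture strip for one fin; the calibration constant `world_per_strip` (the world-space width the whole 512-texel strip is taken to cover) is an assumption introduced for illustration, not a value from the patent:

```python
def fin_texture_u_range(fin_width, world_per_strip=2.0, u_start=0.0):
    """Maps a fin of the given world-space width onto a proportional
    horizontal section [u_start, u_start + du] of the Fin texture strip.
    With repeat addressing, u may run past 1.0: the seamless left/right
    ends of the strip keep the wrapped hair pattern continuous."""
    du = fin_width / world_per_strip
    return (u_start, u_start + du)

print(fin_texture_u_range(0.5))                # -> (0.0, 0.25)
print(fin_texture_u_range(0.5, u_start=0.75))  # reaches the seam -> (0.75, 1.0)
```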
When drawing in the pixel shader, the hair color is first fetched from the Fin texture and the hair tangent of the current pixel is fetched from the tangent texture; the hair is then shaded according to formula (2). Formula (2) is the hair lighting model adopted by Lengyel et al. (J. Lengyel, E. Praun, A. Finkelstein et al. Real-Time Fur over Arbitrary Surfaces [C] // Proceedings of ACM 2001 Symposium on Interactive 3D Graphics. New York: ACM Press, 2001: 227-232).
    FurLighting = ka + kd·C_hair·(1 − (T·L)²)^(pd/2) + ks·(1 − (T·H)²)^(ps/2)    (2)
In the formula, T is the hair tangent and C_hair is the hair color; L is the light direction and H is the half vector bisecting the angle between the light and the line of sight; ka, kd and ks are the ambient, diffuse and specular colors; pd and ps are the diffuse and specular exponents.
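Formula (2) can be written as a small reference routine (a sketch; the coefficient values below are placeholders, not values from the patent):

```python
def fur_lighting(T, L, H, c_hair, ka=0.1, kd=0.7, ks=0.2, pd=2.0, ps=16.0):
    """Formula (2): FurLighting = ka + kd*C_hair*(1-(T.L)^2)^(pd/2)
                                     + ks*(1-(T.H)^2)^(ps/2),
    with T, L, H unit vectors (hair tangent, light direction, half vector)."""
    t_dot_l = sum(t * l for t, l in zip(T, L))
    t_dot_h = sum(t * h for t, h in zip(T, H))
    diffuse = max(0.0, 1.0 - t_dot_l ** 2) ** (pd / 2.0)
    specular = max(0.0, 1.0 - t_dot_h ** 2) ** (ps / 2.0)
    return [ka + kd * c * diffuse + ks * specular for c in c_hair]

# Light and half vector perpendicular to the tangent: full diffuse and specular.
print(fur_lighting((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0), (1.0, 0.5, 0.2)))
```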
When drawing, the transparency of the Fin slice is additionally multiplied by a regulating factor α = 1 − |V* · N*|, where V* is the unit view direction and N* is the unit normal of the current edge. The closer a Fin slice is to the silhouette, the closer its α is to 1; the farther away, the smaller α becomes. This makes the Fin slices fade out gradually from the silhouette toward the fringe of the silhouette region, alleviating the conflict in rendered color between the Fin slices and the textured slices, and producing a smoother rendering result. Fig. 5 shows the silhouette hair drawn with this method.
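The regulating factor can be sketched likewise (hypothetical helper; unit vectors are assumed):

```python
def fin_alpha(v_star, n_star):
    """Regulating factor alpha = 1 - |V* . N*|: near the silhouette the
    dot product approaches 0 so the fin stays opaque (alpha -> 1);
    away from the silhouette the fin fades out (alpha -> 0)."""
    return 1.0 - abs(sum(v * n for v, n in zip(v_star, n_star)))

print(fin_alpha((1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))  # on the silhouette -> 1.0
print(fin_alpha((1.0, 0.0, 0.0), (1.0, 0.0, 0.0)))  # edge facing the eye -> 0.0
```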
Description of drawings
Fig. 1. Principle sketch of the hair representation method using textured slices;
Fig. 1(a). Schematic geometric hairs;
Fig. 1(b). Sampling generates n shell texture layers, n = 16;
Fig. 1(c). Generating the multi-layer mesh surfaces that represent the hair;
Fig. 1(d). Adding quadrilateral Fin slices on the model mesh surface;
Here, 1 - original model mesh; 2 - outer mesh formed by offsetting vertices; 3 - Fin slice perpendicular to the model surface;
Fig. 2. A Fin slice and its associated data;
Fig. 3. The Fin texture and the hair tangent vectors;
Fig. 3(a). Schematic of the Fin texture;
Fig. 3(b). Schematic of the hair tangent vectors;
Here, 4 - hair tangent vector at point A; 5 - hair tangent vector at point B;
Fig. 4. Generating the hair textures from the sample hair;
Fig. 4(a). Sample hair generated with the particle method;
Fig. 4(b). The generated multi-layer hair textures; from left to right, the 0th, 4th, 9th and 14th layers; the upper row shows the color (RGB components) of the hair texture, the lower row the corresponding alpha components;
Fig. 5. Hair effects drawn by the embodiment of the invention;
Fig. 5(a). Hair rendering of the ring model;
Fig. 5(b). Hair rendering of the rabbit model;
Fig. 5(c). Hair rendering of the camel model;
In each group, the left image is the result without Fins (the hair at the silhouette is overly transparent and the model silhouette often shows through); the middle image shows the drawn Fin hair alone; the right image is the result after adding the Fins.
Embodiment
The inventive method is further described below in conjunction with an embodiment.
The microcomputer of this embodiment is configured with a P4 3.0 GHz CPU, 1 GB main memory, and a GeForce 6800GT graphics card with 256 MB video memory. The hair object is drawn according to the following steps:
Preprocessing stage:
(1) Generate the multi-layer hair textures according to the method shown in Fig. 1.
Concretely, a patch of sample hair is first generated above a square area using a particle system; this hair volume is then sliced and rendered along the square's normal direction to produce the multi-layer semi-transparent hair textures (called shell textures). A shell texture contains the four RGBA components: RGB is the color value and A the opacity. Fig. 4(a) shows a sample hair patch generated with the particle-system method; sampling this patch generates 16 layers of hair texture, of which only four are shown in Fig. 4(b).
(2) Generate the Fin texture and the "tangent texture" of the Fin.
Sampling the sample hair of Fig. 4(a) from the side generates the Fin texture of the hair and its "tangent texture" (as shown in Fig. 3).
(3) Generate the texture coordinates of the model surface.
In order to map the square shell textures onto the model surface, texture coordinates must be set for each patch vertex of the model surface. The invention adopts the Lapped Textures method (E. Praun, A. Finkelstein, and H. Hoppe. Lapped textures [C] // Proceedings of SIGGRAPH 2000, Computer Graphics, Annual Conference Series, 2000: 465-470) to generate the model's texture coordinates, so that the shell textures can be mapped evenly and seamlessly onto the model surface.
(4) Generate one set of Fin slice data for every edge of the model, and use the Vertex Buffer Object facility of the GPU to keep this data resident in video memory (as described in section 2.1).
Rendering stage:
(1) Use the GPU to accelerate the drawing of the silhouette hair.
This is the main task of the inventive method. From the pre-generated Fin slice data, the vertex shader of the GPU performs the silhouette-edge detection and the Fin slice generation (as described in section 2.1), and the pixel shader performs the drawing of the silhouette hair (as described in section 2.2).
(2) Blend-render the multi-layer mesh surfaces to draw the hair on the body surface.
Multiple mesh layers parallel to the body surface are generated, each mapped with the shell texture of the corresponding level, and these mesh layers are alpha-blended in inside-to-outside order to generate the hair effect. Each mesh layer is formed by offsetting the original model vertices along their normal directions (as shown in Fig. 1c).
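The per-layer vertex offset described here can be sketched as follows (a reference sketch with hypothetical names; in the embodiment this computation runs in the vertex shader):

```python
def shell_vertex(p, n_unit, layer, num_layers, fur_length):
    """Offsets base-mesh vertex p along its unit normal to build shell
    layer `layer` (layer 0 is the base surface, num_layers-1 the fur tip)."""
    t = fur_length * layer / (num_layers - 1)  # offset distance of this layer
    return tuple(pi + t * ni for pi, ni in zip(p, n_unit))

base = (1.0, 2.0, 3.0)
normal = (0.0, 0.0, 1.0)
print(shell_vertex(base, normal, 0, 16, 0.5))   # base layer -> (1.0, 2.0, 3.0)
print(shell_vertex(base, normal, 15, 16, 0.5))  # outermost layer -> (1.0, 2.0, 3.5)
```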
In the concrete implementation, the inventors use the vertex shader of the GPU to compute the offset positions of the multi-layer mesh vertices, assisting the generation of the multi-layer mesh; then, when drawing the multi-layer mesh, the pixel shader computes the hair lighting and performs the alpha blending of the layers. For the detailed process, see the references (Yang Gang, Fei Guangzheng, Wu Enhua. Real-time generation of spotted fur [J]. Journal of Computer-Aided Design & Computer Graphics, 2004, 16(9): 1244-1249; Yang Gang, Sun Hanqiu, Wang Wencheng, Wu Enhua. Realistic hair rendering based on GPU [J]. Journal of Software, 2006, 17(3): 577-586).
The above is the complete procedure for drawing the hair of an object. The method of the invention accelerates the silhouette-edge detection and silhouette-hair drawing part of this procedure.
Following this procedure, the embodiment selects three models for hair-rendering experiments. Table 1 reports the rendering times of the three models and compares the rendering efficiency of this method with the existing method. The rendering results of the three models are shown in Fig. 5.
Table 1
  Model                                        Ring (annulus)   Rabbit      Camel
  Number of faces / edges                      576/864          3065/4599   4072/6112
  Fin processing time with GPU (ms)            0.083            0.243       0.296
  Fin processing time without GPU (ms)         1.204            2.964       3.525
  Overall rendering-efficiency improvement     17%              22%         24%
  Space taken by the extra vertex data (KB)    148.5            790.5       1050.5
The "Fin processing time" in the third and fourth rows of Table 1 is the total time per frame spent on everything related to the Fins: the silhouette-edge detection, the transfer of the Fin slice data, and the drawing of the Fins. Comparing these two rows shows that the inventive method processes the Fins 10-15 times faster than the previous method. The fifth row gives the improvement in overall hair-rendering efficiency brought by the inventive method, computed as (FPS_new − FPS_old)/FPS_old, where FPS_new is the overall frame rate with the inventive method and FPS_old is the frame rate of the existing method.
It is worth explaining that, relative to the prior art, the only cost of the inventive method is an increase in video-memory occupancy. The inventive method keeps the Fin slice data of all edges resident in video memory, whereas previous methods only provide the Fin data of the edges inside the silhouette region. The sixth row of Table 1 reports the video memory occupied by the Fin slice data under the inventive method. If the total number of edges of the model is edgeNum, the Fin slice data of all edges comprises edgeNum*4 sets of vertex data, and every set contains the vertex position, the Fin texture coordinates of the vertex, the edge normal, and the edge midpoint position. A position or a normal needs three floats to store; the texture coordinates need two floats. The total space occupancy is therefore edgeNum × 4 × (3×3 + 2) floats × 4 bytes = edgeNum × 176 bytes. As the table shows, relative to the ever-growing video memory of today, this occupancy is affordable.
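The space estimate can be checked against the sixth row of Table 1 with a few lines of arithmetic:

```python
def fin_vbo_kb(edge_num, floats_per_vertex=11, verts_per_edge=4, bytes_per_float=4):
    """edgeNum x 4 vertices x (3+2+3+3)=11 floats x 4 bytes = edgeNum x 176 bytes."""
    return edge_num * verts_per_edge * floats_per_vertex * bytes_per_float / 1024.0

# Reproduces the 148.5 / 790.5 / 1050.5 KB figures of Table 1.
for name, edges in (("Ring", 864), ("Rabbit", 4599), ("Camel", 6112)):
    print(name, round(fin_vbo_kb(edges), 1), "KB")
```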
Therefore the inventive method faces no problem of insufficient video memory preventing its implementation.
The present invention proposes a method that uses the GPU to accelerate the drawing of silhouette hair. By cleverly attaching "side information" to the vertex data, the method moves the silhouette-edge detection into the GPU; by storing the Fin slice data in advance, it bypasses the vertex shader's inability to create new vertices and removes the CPU-to-GPU data-transfer burden. The method accelerates the processing of silhouette hair and effectively improves the overall efficiency of hair rendering.
It is worth emphasizing that silhouette-edge detection is a classic problem in graphics processing. It is needed not only when drawing silhouette hair, but also when rendering a model in non-photorealistic styles (M. Lee, A. K. Michael, et al. Real-Time Nonphotorealistic Rendering [C] // Proceedings of SIGGRAPH '97, 1997: 415-420), and in many other graphics problems such as shadow generation and occlusion culling. The GPU-accelerated silhouette-edge detection of the present invention can be applied directly in all graphics procedures that need silhouette-edge detection, and is therefore of general significance.

Claims (9)

1. A GPU-accelerated method for drawing hair-like patterns in the silhouette region, comprising the following steps:
a) generating a Fin texture representing the hair-like pattern of the silhouette region, and the corresponding tangent texture of the hair-like pattern;
b) judging, in the vertex shader of the GPU, whether each vertex of the object model lies in the silhouette region;
c) drawing, in the pixel shader of the GPU, the Fin slice of each edge in the silhouette region, generating the hair-like pattern effect of the silhouette region.
2. the method for claim 1 is characterized in that, whether each point of the described judgment object model of step b) is in contour area is realized by following method:
Four summits that are the Fin section at place, every limit of object model in CPU generate a cover vertex data, described vertex data comprises: the position coordinates on current summit and Fin texture coordinate, when normal direction information in front, point midway information and viewpoint position information;
Above-mentioned vertex data is sent to the summit renderer of GPU;
In the renderer of summit, whether judge the absolute value of working as the dot product between direction of visual lines in front and the normal direction less than preset threshold, if, think that then the point when the place, front is in contour area, if not, think that then the point when the place, front is not in contour area;
Described direction of visual lines is for arriving the vector of viewpoint when mid point in front.
3. The method of claim 2, wherein the Vertex Buffer Object facility of the GPU is used to keep the vertex data resident in video memory.
4. The method of claim 1, wherein the Fin texture and the corresponding tangential hair texture of step a) are generated by drawing geometric hairs.
5. The method of claim 2, wherein the normal information of the current edge is the sum of the normals of the two faces adjoining the current edge.
6. The method of claim 2, wherein the preset threshold is 0.2.
7. The method of claim 1, wherein the GPU adopts a parallel-processing architecture and is programmable.
8. The method of claim 1, wherein the model of the GPU is GeForce 6800GT.
9. The method of claim 1, wherein the hair-like pattern is hair, grass, or fine fur.
CN200810113004A 2008-05-27 2008-05-27 Method for drafting pelage-shaped pattern in profile region with accelerative GPU Expired - Fee Related CN100585636C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200810113004A CN100585636C (en) 2008-05-27 2008-05-27 Method for drafting pelage-shaped pattern in profile region with accelerative GPU

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200810113004A CN100585636C (en) 2008-05-27 2008-05-27 Method for drafting pelage-shaped pattern in profile region with accelerative GPU

Publications (2)

Publication Number Publication Date
CN101281655A true CN101281655A (en) 2008-10-08
CN100585636C CN100585636C (en) 2010-01-27

Family

ID=40014100

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200810113004A Expired - Fee Related CN100585636C (en) 2008-05-27 2008-05-27 Method for drafting pelage-shaped pattern in profile region with accelerative GPU

Country Status (1)

Country Link
CN (1) CN100585636C (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102339475A (en) * 2011-10-26 2012-02-01 浙江大学 Rapid hair modeling method based on surface grids
CN102568032A (en) * 2010-12-09 2012-07-11 中国科学院软件研究所 Method for generation of reality surface deformation model
CN106575445A (en) * 2014-09-24 2017-04-19 英特尔公司 Furry avatar animation
CN113822981A (en) * 2020-06-19 2021-12-21 北京达佳互联信息技术有限公司 Image rendering method and device, electronic equipment and storage medium
CN114202607A (en) * 2020-09-16 2022-03-18 电子技术公司 System and method for hair rasterization

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1667652A (en) * 2005-04-15 2005-09-14 北京大学 Vision convex hull accelerated drafting method based on graph processing chips
US8031936B2 (en) * 2006-03-20 2011-10-04 Accenture Global Services Limited Image processing system for skin detection and localization

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102568032A (en) * 2010-12-09 2012-07-11 中国科学院软件研究所 Method for generation of reality surface deformation model
CN102339475A (en) * 2011-10-26 2012-02-01 浙江大学 Rapid hair modeling method based on surface grids
CN102339475B (en) * 2011-10-26 2014-01-29 浙江大学 Rapid hair modeling method based on surface grids
CN106575445A (en) * 2014-09-24 2017-04-19 英特尔公司 Furry avatar animation
CN106575445B (en) * 2014-09-24 2021-02-05 英特尔公司 Fur avatar animation
CN113822981A (en) * 2020-06-19 2021-12-21 北京达佳互联信息技术有限公司 Image rendering method and device, electronic equipment and storage medium
CN113822981B (en) * 2020-06-19 2023-12-12 北京达佳互联信息技术有限公司 Image rendering method and device, electronic equipment and storage medium
CN114202607A (en) * 2020-09-16 2022-03-18 电子技术公司 System and method for hair rasterization

Also Published As

Publication number Publication date
CN100585636C (en) 2010-01-27

Similar Documents

Publication Publication Date Title
Raskar et al. Image precision silhouette edges
CN108510577B (en) Realistic motion migration and generation method and system based on existing motion data
CN100585636C (en) Method for drafting pelage-shaped pattern in profile region with accelerative GPU
AU2006236289A1 (en) Techniques and workflows for computer graphics animation system
Nießner et al. Real‐time rendering techniques with hardware tessellation
Li et al. Vox-surf: Voxel-based implicit surface representation
US9905045B1 (en) Statistical hair scattering model
Bruckner et al. Hybrid visibility compositing and masking for illustrative rendering
CN1776747A (en) GPU hardware acceleration based body drawing method for medical image
Barringer et al. High-quality curve rendering using line sampled visibility
Ryder et al. Survey of real‐time rendering techniques for crowds
CN107689076A (en) A kind of efficient rendering intent during the cutting for system of virtual operation
Hanniel et al. Direct rendering of solid CAD models on the GPU
CN106803278B (en) Virtual character semi-transparent layered sorting method and system
CN110738719A (en) Web3D model rendering method based on visual range hierarchical optimization
CN104915992A (en) A real-time shadow volume rendering method based on femur CT images
Hajagos et al. Fast silhouette and crease edge synthesis with geometry shaders
Speer An updated cross-indexed guide to the ray-tracing literature
Duan et al. Optimization of multi-view texture mapping for reconstructed 3D model
US9646412B1 (en) Pre-sorted order independent transparency
Li An automatic rendering method of line strokes for Chinese landscape painting
Trapp et al. Occlusion management techniques for the visualization of transportation networks in virtual 3D city models
Lin et al. A feature-adaptive subdivision method for real-time 3D reconstruction of repeated topology surfaces
Tavares et al. Efficient approximate visibility of point sets on the GPU
Apaza-Agüero et al. Parameterization and appearance preserving on cubic cells for 3d digital preservation of cultural heritage

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100127

Termination date: 20150527

EXPY Termination of patent right or utility model