WO2017198104A1 - Method and apparatus for processing a particle system (一种粒子系统的处理方法及装置)

Method and apparatus for processing a particle system (一种粒子系统的处理方法及装置)

Info

Publication number
WO2017198104A1
WO2017198104A1 (PCT/CN2017/083917)
Authority
WO
WIPO (PCT)
Prior art keywords
particle
particle system
target
information
target particle
Prior art date
Application number
PCT/CN2017/083917
Other languages
English (en)
French (fr)
Inventor
马晓霏
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority to KR1020187018114A (patent KR102047615B1)
Publication of WO2017198104A1
Priority to US16/052,265 (patent US10699365B2)

Classifications

    • G - PHYSICS
      • G06 - COMPUTING; CALCULATING OR COUNTING
        • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T13/00 - Animation
          • G06T1/00 - General purpose image data processing
            • G06T1/20 - Processor architectures; Processor configuration, e.g. pipelining
          • G06T11/00 - 2D [Two Dimensional] image generation
          • G06T15/00 - 3D [Three Dimensional] image rendering
            • G06T15/005 - General purpose rendering architectures
          • G06T2210/00 - Indexing scheme for image generation or computer graphics
            • G06T2210/56 - Particle system, point based geometry or rendering
        • G06F - ELECTRIC DIGITAL DATA PROCESSING
          • G06F9/00 - Arrangements for program control, e.g. control units
            • G06F9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
              • G06F9/46 - Multiprogramming arrangements
                • G06F9/48 - Program initiating; Program switching, e.g. by interrupt
                  • G06F9/4806 - Task transfer initiation or dispatching
                    • G06F9/4843 - Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
                      • G06F9/485 - Task life-cycle, e.g. stopping, restarting, resuming execution

Definitions

  • the present application relates to the field of computer graphics technology, and in particular, to a method and an apparatus for processing a particle system.
  • Particle systems are now commonly used to present irregular fuzzy objects of this kind, since a particle system can simulate a complex motion system.
  • In existing approaches, particle position updates and death detection in a particle system are handled by the CPU (Central Processing Unit), and the GPU (Graphics Processing Unit) displays the particles of the particle system according to the CPU's processing result. This consumes a large amount of CPU time, and while the CPU is processing, the GPU must remain in a lock-wait state until the CPU completes particle generation and position updates before the GPU can display the data updated by the CPU, resulting in low processing efficiency.
  • CPU: Central Processing Unit
  • GPU: Graphics Processing Unit
  • the embodiment of the present application provides a method and a device for processing a particle system, which can improve the processing efficiency of the particle system.
  • the embodiment of the present application provides a method for processing a particle system, including:
  • receiving overall attribute information of the target particle system sent by the CPU, wherein the overall attribute information of the target particle system includes a particle display range, a particle life cycle range, a particle velocity range, and a generation time;
  • generating particles of the target particle system according to the overall attribute information of the target particle system and initializing particle attributes of each particle of the target particle system, wherein the particle attributes of each particle include position information, velocity information, a life cycle, and a generation time; and
  • displaying each particle of the target particle system according to the particle attributes of each particle in the target particle system.
  • the embodiment of the present application further provides an apparatus for processing a particle system, including:
  • the memory stores a plurality of instruction modules, including an overall attribute information receiving module, a particle attribute initialization module, and a particle display module; when these instruction modules are executed by the GPU, the following operations are performed:
  • the overall attribute information receiving module is configured to receive overall attribute information of a target particle system sent by the CPU;
  • the particle attribute initialization module is configured to generate particles of the target particle system according to the overall attribute information of the target particle system and to initialize particle attributes of each particle of the target particle system, wherein the particle attributes of each particle include position information, velocity information, a life cycle, and a generation time of the particle;
  • the particle display module is configured to display each particle of the target particle system according to particle properties of each particle in the target particle system.
  • Embodiments of the present application also provide a non-transitory machine-readable storage medium in which machine readable instructions are stored, the machine readable instructions being executable by a graphics processor GPU to perform the following operations:
  • receiving overall attribute information of a target particle system sent by the CPU, wherein the overall attribute information of the target particle system includes a particle display range, a particle life cycle range, a particle velocity range, and a generation time;
  • generating particles of the target particle system according to the overall attribute information of the target particle system and initializing particle attributes of each particle of the target particle system; and
  • Individual particles of the target particle system are displayed based on particle properties of individual particles in the target particle system.
  • In the embodiments of the present application, the GPU receives the overall attribute information of the particle system sent by the CPU, generates the particles, and performs display and life-cycle management on the generated particles.
  • Compared with existing approaches, the embodiments of the present application substantially reduce data transmission between the GPU and the CPU and reduce the number and frequency of times the GPU waits for CPU data transmission, thereby effectively improving the processing efficiency of the particle system.
  • FIG. 1 is a schematic flow chart of a method for processing a particle system in an embodiment of the present application
  • FIG. 2 is a schematic view showing a pattern effect displayed by the particle system in the embodiment of the present application.
  • FIG. 3 is a schematic diagram showing the text effect displayed by the particle system in the embodiment of the present application.
  • FIG. 4 is a schematic diagram showing a display effect of a particle system combined with a specific game scene in the embodiment of the present application;
  • FIG. 5 is a schematic flow chart of a processing method of a particle system in another embodiment of the present application.
  • FIG. 6 is a schematic flow chart of a method for processing a particle system in another embodiment of the present application.
  • FIG. 7 is a schematic flow chart of a method for processing a particle system in another embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of the buddy algorithm in an embodiment of the present application;
  • FIG. 9 is a schematic diagram of a particle system distribution interface in an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a processing device of a particle system in an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of a particle attribute update module in an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a color image generated by a CPU from a black and white image for transmission to a GPU in an embodiment of the present application;
  • FIG. 13 is a schematic structural diagram of a processing apparatus of a particle system in another embodiment of the present application.
  • The method and apparatus for processing a particle system in the embodiments of the present application may be implemented by a GPU in a computer system, or by a functional module in the computer system that implements a function similar to that of the GPU.
  • The GPU is used as an example in the embodiments of the present application.
  • When the implementation is described in the functional architecture of computer systems of other embodiments, other functional structural modules may be used to implement the corresponding steps in the embodiments of the present application.
  • FIG. 1 is a schematic flow chart of a method for processing a particle system in an embodiment of the present application. As shown in the figure, the method includes at least:
  • The particle system in the embodiments of the present application is a graphics display technique for effectively simulating irregular fuzzy objects or shapes; for example, one particle system simulates fireworks on the screen, and another particle system simulates a string of characters that change state on the screen, and so on.
  • the target particle system is a particle system for rendering a target texture.
  • An irregular object is defined as being composed of a large number of irregular, randomly distributed particles; each particle has a certain life cycle, and the particles constantly change position and keep moving, fully embodying the nature of irregular objects.
  • The CPU transfers the overall attribute data of the particle system to the GPU, where the overall attribute data includes the value ranges of the attributes of all particles in the particle system and does not need to include the attributes of any single particle, so the amount of transmitted data does not increase as the number of particles increases.
  • The overall attribute information of the particle system includes the particle display range (the shader's emission position and range), the particle life cycle range, the particle velocity range, and the generation time.
  • The CPU may pass the overall attribute information of the target particle system to a GPU constant register.
  • The overall attribute information may further include key frame data of the target particle system, or pattern information of the target particle system, which is used to initialize the particle attributes of each particle of the particle system or to subsequently update the particle attributes of each particle of the target particle system.
  • the key frame data of the target particle system includes a display object position, a change speed, or a display color corresponding to at least one key frame.
  • the pattern information of the target particle system carries initial pixel position information and generation time of each pixel.
  • the CPU may periodically send the overall attribute information of the target particle system to the GPU for the GPU to subsequently update the particle attributes of the respective particles of the target particle system.
  • S102 Generate particles of the target particle system according to overall attribute information of the target particle system and initialize particle attributes of respective particles of the target particle system.
  • the particle properties of the individual particles may include position information, velocity information, life cycle, and generation time of each particle.
  • The GPU may randomly determine the position of each particle within the particle display range according to the particle display range (the shader's emission position and range) in the overall attribute information of the target particle system, so that the generated particle positions are randomly distributed within the particle display range; it may randomly determine the life cycle of each particle within the particle life cycle range, so that the generated life cycles are randomly distributed within that range; it may randomly determine the velocity of each particle within the particle velocity range, so that the generated velocities are randomly distributed within that range; and it may randomly determine the generation time of each particle within the life cycle determined by the generation time in the overall attribute information, so that the generated generation times are randomly distributed within that period. A sketch of this random initialization follows below.
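  • For illustration only, the random initialization above can be sketched as follows in Python (run on the CPU here purely for readability; in the patent this work is done by a GPU shader). The dictionary keys and range layout are assumptions made for this example.

```python
import random

def init_particles(num_particles, overall):
    """Sketch: derive per-particle attributes from the overall attribute ranges."""
    (lo, hi) = overall['display_range']          # ((xmin, ymin, zmin), (xmax, ymax, zmax))
    particles = []
    for _ in range(num_particles):
        # position randomly distributed within the particle display range
        position = tuple(random.uniform(a, b) for a, b in zip(lo, hi))
        # life cycle randomly distributed within the particle life cycle range
        life = random.uniform(*overall['life_range'])
        # velocity with a random direction and a magnitude within the velocity range
        speed = random.uniform(*overall['speed_range'])
        direction = [random.uniform(-1.0, 1.0) for _ in range(3)]
        norm = sum(d * d for d in direction) ** 0.5 or 1.0
        velocity = tuple(speed * d / norm for d in direction)
        # generation time randomly distributed within the period set by the emit time
        birth = overall['emit_time'] + random.uniform(0.0, life)
        particles.append({'pos': position, 'velocity': velocity,
                          'life': life, 'birth': birth})
    return particles
```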
  • The GPU may save the position information and generation time of the generated particles in a position render texture (PosRT, position RT, where RT is a Render Target, i.e. an off-screen render texture): the RGB channels of PosRT record the position of each particle, and the alpha channel of PosRT records the generation time of each particle. The velocity information and life cycle of the generated particles are saved in a velocity render texture (VelocityRT): the RGB channels of VelocityRT record the velocity of each particle, and the alpha channel of VelocityRT records the life cycle of each particle.
  • The GPU can write the particle attributes of the particles into the position render texture and the velocity render texture through a shader used for generating particles.
  • Each RT may be in RGBA32F format, occupying 0.125 MB to 16 MB of memory, which corresponds to storing the particle attributes of 8,192 to 1,000,000 particles. A sketch of this texel layout follows below.
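  • As an illustration of the layout (not actual GPU code), the two render textures can be pictured as float32 RGBA arrays with one texel per particle; the array shapes and particle dictionary keys below are assumptions.

```python
import numpy as np

def pack_particles(particles, width, height):
    """Emulate PosRT and VelocityRT as RGBA32F arrays (one texel per particle)."""
    pos_rt = np.zeros((height, width, 4), dtype=np.float32)  # RGB = position, A = generation time
    vel_rt = np.zeros((height, width, 4), dtype=np.float32)  # RGB = velocity, A = life cycle
    for i, p in enumerate(particles[:width * height]):
        y, x = divmod(i, width)
        pos_rt[y, x, :3] = p['pos']
        pos_rt[y, x, 3] = p['birth']
        vel_rt[y, x, :3] = p['velocity']
        vel_rt[y, x, 3] = p['life']
    return pos_rt, vel_rt
```

  • At 16 bytes per RGBA32F texel, 8,192 texels occupy 0.125 MB and 1,000,000 texels occupy 16 MB, which matches the memory figures given above.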
  • Optionally, the GPU may initialize the particle attributes of each particle of the target particle system according to the key frame data of the target particle system.
  • The key frame data of the target particle system may include an initial display position, an initial change speed, or an initial display color, and the like.
  • Compared with determining the position information according to the particle display range in the overall attribute information, the GPU can determine the display position of each particle of the particle system more precisely according to the display object position of the initial key frame, without being limited by the shape of the shader's emission range.
  • Likewise, according to the change speed of the initial key frame, the GPU can more precisely determine the initial velocity information and display color of each particle of the particle system.
  • By sampling the position information saved in the position render texture PosRT and the velocity information saved in the velocity render texture VelocityRT, the GPU can draw the corresponding particles on the screen according to each particle's position and velocity information: the position information determines where the particle is drawn on the screen, and the velocity information determines the pose, direction, and subsequent updates of the particle display.
  • The GPU can display the particles through a shader used for displaying particles, which reads the position information of each particle from the position render texture, reads its velocity information from the velocity render texture, and draws the corresponding particle on the screen according to the obtained position and velocity information.
  • A shader is an editable program on the GPU that is used to implement image rendering in place of the fixed rendering pipeline.
  • Shaders include vertex shaders (Vertex Shader) and pixel shaders (Pixel Shader).
  • The vertex shader performs the calculation of the geometric relationships of the vertices, and the pixel shader calculates the color of each fragment. Because shaders are editable, sampling the render texture RT in the vertex shader, sampling colors in the pixel shader, and then displaying the corresponding particles makes it possible to achieve a wide variety of image effects without being limited by the graphics card's fixed rendering pipeline. A CPU-side sketch of the per-particle display logic follows below.
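  • The following CPU-side Python sketch only illustrates the per-particle logic that the display shader performs (sample the two textures, derive a draw position and an orientation from position and velocity); it is not shader code, and the trivial screen mapping is an assumption.

```python
import numpy as np

def build_draw_list(pos_rt, vel_rt, current_time):
    """Illustrative stand-in for the particle display shader."""
    draws = []
    height, width, _ = pos_rt.shape
    for y in range(height):
        for x in range(width):
            px, py, _pz, birth = pos_rt[y, x]      # RGB = position, A = generation time
            vx, vy, _vz, life = vel_rt[y, x]       # RGB = velocity, A = life cycle
            if life <= 0.0 or current_time < birth:
                continue                           # empty slot, or particle not yet emitted
            screen_pos = (px, py)                  # position decides where the particle is drawn
            angle = float(np.arctan2(vy, vx))      # velocity decides the sprite's orientation
            draws.append((screen_pos, angle))
    return draws
```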
  • FIG. 2 is a pattern effect displayed by a shader
  • FIG. 3 is a text effect displayed by a shader.
  • the pattern effect and the text effect may be black and white or color.
  • the display effect of the target particle system of the present application can be as shown in FIG. 4.
  • The particle display of the target particle system can be placed at the topmost layer of the game scene, that is, the other display objects of the game scene interface are drawn first, and the target particle system is displayed on the screen last.
  • The shader may display particles in a radiation mode or an aggregation mode. In the radiation mode, particles are radiated randomly at random velocities around the center of the shader's emitter position, so the degree of particle aggregation is highest in the initial state and the particles then gradually diverge.
  • The aggregation mode is also called the gravity mode: the shader emits particles randomly within a certain range, and gravity points set on a preset track or pattern on the screen pull the surrounding particles toward the track or pattern; the degree of particle aggregation is therefore very low in the initial state, and the particles gradually gather around the preset track or pattern to form the display effect of that track or pattern (see the sketch below).
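  • A minimal 2D sketch of the aggregation ("gravity") mode under assumed parameters: each frame, every particle is accelerated toward its nearest point on a preset target pattern, so the particles gradually gather around the pattern.

```python
def aggregate_step(particles, target_points, pull=2.0, dt=1.0 / 60.0):
    """One assumed update step of the aggregation mode (2D, CPU-side illustration)."""
    for p in particles:
        x, y = p['pos']
        # the nearest point of the preset track/pattern acts as the gravity point
        tx, ty = min(target_points, key=lambda t: (t[0] - x) ** 2 + (t[1] - y) ** 2)
        vx, vy = p['vel']
        vx += pull * (tx - x) * dt                 # accelerate toward the pattern
        vy += pull * (ty - y) * dt
        p['vel'] = (vx, vy)
        p['pos'] = (x + vx * dt, y + vy * dt)      # integrate position
```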
  • Since the GPU receives only the overall attribute information of the particle system from the CPU and the particle data is generated and displayed on the GPU itself, the embodiments of the present application substantially reduce data transmission between the GPU and the CPU and reduce the number and frequency of times the GPU waits for CPU data transmission, thereby effectively improving the processing efficiency of the particle system.
  • FIG. 5 is a schematic flow chart of a method for processing a particle system according to another embodiment of the present application. The method as shown in the figure includes at least:
  • The overall attribute information of the particle system includes the particle display range (the shader's emission position and range), the particle life cycle range, the particle velocity range, and the generation time.
  • The CPU can pass the overall attribute information of the target particle system to a GPU constant register.
  • the pattern information of the target particle system may include position information of each pixel in the image and a generation time of each pixel.
  • The CPU may write the pattern information (for example, a color picture) of the target particle system to a specified storage space, such as memory, a hard disk, or video memory, and the GPU loads the pattern information from the specified storage space.
  • Specifically, the CPU may generate a color image from a target black and white image by traversing each pixel in the black and white image one by one: when a pixel's color is greater than 0 (non-black), the RGB channels of one pixel in the color image record the position of that pixel, and the alpha channel of the same pixel records information such as its generation time and display time. In this way, the position and time information of every pixel whose color is greater than 0 is stored, in order, in the pixels of the color image.
  • The CPU then transmits the pattern information of the resulting color image to the GPU.
  • In the exemplary black and white picture shown in FIG. 12, the left side shows the RGB channels of the black and white picture and the right side shows its alpha channel; from the positions in the RGB channels and the time information in the alpha channel of the black and white image, the CPU generates the color image on the right, in which the color of each pixel is determined by the position of a non-zero pixel of the black and white image and the alpha channel of each pixel records information such as the generation time and display time of that pixel.
  • the black and white image may be an image of a text pattern.
  • The CPU may also generate the color image according to a 3D model image (3D mesh image), in which case the RGB channels of the pixels of the color image store the position coordinates of the vertices of the 3D model image.
  • The black and white image from which the CPU generates the color pattern may be a text pattern; since the image generated from a text pattern has a very small resolution (32*32 by default), it can be generated in real time. A sketch of this encoding step follows below.
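  • A rough Python sketch of this CPU-side encoding under stated assumptions: non-black pixels of the source black and white image are scanned in order, and for each one a texel of the output "color image" stores the pixel position in its RGB channels and a generation time in its alpha channel. The staggered timing and flat output layout are assumptions made for the example.

```python
import numpy as np

def encode_pattern(bw_image, start_time=0.0, interval=0.01):
    """bw_image: 2D array of grayscale values; returns an (N, 4) float32 array,
    one texel per non-black pixel (RGB = pixel position, A = generation time)."""
    texels = []
    t = start_time
    height, width = bw_image.shape
    for y in range(height):
        for x in range(width):
            if bw_image[y, x] > 0:                 # pixel color greater than 0 (non-black)
                texels.append((float(x), float(y), 0.0, t))
                t += interval                      # assumed: each pixel gets a later generation time
    return np.asarray(texels, dtype=np.float32).reshape(-1, 4)
```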
  • The GPU extracts the position information and generation time of each pixel from the pattern information, and initializes the particle attributes of each particle of the target particle system according to this information together with the overall attribute information of the target particle system.
  • In this way, the GPU can restore on the screen the original image corresponding to the pattern information, such as the target black and white image or the 3D model image described above.
  • The GPU in this embodiment thus generates and displays particles according to the overall attribute information and pattern information of the target particle system sent by the CPU, so that the display position and generation time of each particle of the particle system can be determined more precisely from the pixel positions and generation times extracted from the pattern information, achieving finer control of the particle display.
  • FIG. 6 is a schematic flow chart of a processing method of a particle system according to another embodiment of the present application. The method shown in the figure includes at least:
  • The overall attribute information of the particle system includes the particle display range (the shader's emission position and range), the particle life cycle range, the particle velocity range, and the generation time.
  • The CPU can pass the overall attribute information of the target particle system to a GPU constant register.
  • The overall attribute information may further include key frame data of the target particle system, or pattern information of the target particle system and the force state of the target particle system, which are used to initialize the particle attributes of each particle of the particle system or to subsequently update the particle attributes of each particle of the target particle system.
  • the key frame data of the target particle system includes a display object position, a change speed, or a display color corresponding to at least one key frame.
  • the pattern information of the target particle system carries initial pixel position information and generation time of each pixel.
  • the CPU may periodically send overall attribute information of the target particle system to the GPU for the GPU to subsequently update the particle attributes of the individual particles of the target particle system.
  • S302. Generate particles of the target particle system according to the overall attribute information of the target particle system and initialize particle attributes of the respective particles of the target particle system.
  • the particle properties of the individual particles may include position information, velocity information, life cycle, and generation time of each particle.
  • S304 Determine whether the particle is dead according to the life cycle and the generation time of the particle of the target particle system, and stop displaying the particle if the particle dies.
  • When initializing the particle attributes, the GPU in this embodiment records the generation time and life cycle of each particle, for example in the alpha channels of PosRT and VelocityRT. After the shader displays the particles, the elapsed time of each particle can be obtained from its generation time and the current time and compared with the particle's life cycle; if the elapsed time has reached or exceeded the life cycle, the particle can be determined to be dead, the dead particle is removed from the screen, and its display is stopped. A sketch of this test follows below.
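  • For illustration, the death test described above might look like the following sketch, with the generation time and life cycle read from the alpha channels of PosRT and VelocityRT; zeroing the life cycle as a "dead" marker is an assumption of this example, not the patent's exact mechanism.

```python
import numpy as np

def cull_dead(pos_rt, vel_rt, current_time):
    """Mark particles whose elapsed time has reached their life cycle as dead."""
    birth = pos_rt[..., 3]                   # alpha of PosRT = generation time
    life = vel_rt[..., 3]                    # alpha of VelocityRT = life cycle
    dead = (current_time - birth) >= life
    # stop displaying dead particles (here: zero the life cycle so the draw pass skips them)
    vel_rt[..., 3] = np.where(dead, 0.0, life)
    return dead
```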
  • Particle attributes can be divided into state-dependent particle attributes and state-independent particle attributes. A state-independent particle attribute is one that can be calculated by a closed-form function of only the particle's initial attributes and the current time, whereas updating a state-dependent particle attribute requires reading the particle attributes of the previous frame as input.
  • Updating state-dependent particle attributes requires a separate drawing step that saves the updated particle attributes to an RT, after which the updated particles are displayed through the shader.
  • The GPU does not have to update the particles every frame; the update period of the particles can be set as needed. For example, particles used to simulate objects far from the viewpoint can be updated every 2 or 3 frames, and so on.
  • Optionally, the GPU may update state-dependent particle attributes according to the force state of the target particle system. The force state of the target particle system may be processed by the CPU and then passed to the GPU; for example, the CPU may send the force state of the target particle system to the GPU together with the overall information of the target particle system that it transmits periodically.
  • the GPU may also update particle properties of individual particles of the target particle system based on key frame data of the target particle system.
  • The key frame data of the target particle system may include the display object position, change speed, or display color at the time corresponding to at least one key frame.
  • At the time corresponding to a key frame, the GPU may determine the position information of each particle of the target particle system according to the display object position, so that the particle positions are adjusted to be displayed at the display object position of the key frame; it may uniformly adjust the velocity information of each particle of the target particle system according to the change speed; and it may uniformly adjust the color of each particle of the target particle system according to the display object color, thereby achieving precise control of each particle of the target particle system (sketched below).
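  • One way to picture the key-frame adjustment is the sketch below; the key-frame field names and the snapping behaviour are assumptions made for this example, not the patent's exact procedure.

```python
def apply_keyframe(particles, keyframe, now, tolerance=1e-3):
    """At the time a key frame corresponds to, adjust all particles uniformly.

    `keyframe` is assumed to look like:
      {'time': 2.0, 'position': (x, y, z), 'speed_scale': 0.5, 'color': (r, g, b)}
    """
    if abs(now - keyframe['time']) > tolerance:
        return
    for p in particles:
        if 'position' in keyframe:           # move particles onto the display object position
            p['pos'] = keyframe['position']
        if 'speed_scale' in keyframe:        # uniformly adjust the velocity of every particle
            p['velocity'] = tuple(v * keyframe['speed_scale'] for v in p['velocity'])
        if 'color' in keyframe:              # uniformly adjust the display color
            p['color'] = keyframe['color']
```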
  • In the embodiments of the present application, the GPU receives the overall attribute information of the particle system sent by the CPU, generates the particles, and performs display and life-cycle management on the generated particles. Compared with existing approaches, this substantially reduces data transmission between the GPU and the CPU and reduces the number and frequency of times the GPU waits for CPU data transmission, thereby effectively improving the processing efficiency of the particle system.
  • FIG. 7 is a schematic flow chart of a method for processing a particle system according to another embodiment of the present application, where the method includes at least:
  • the CPU allocates a rendering texture resource to the target particle system.
  • Specifically, idle render texture resources can be managed by establishing multi-level ordered linked lists, and render texture resources are then allocated to the target particle system from the idle render texture resources according to the buddy algorithm.
  • The linked list of level n manages RT resources of size 1*2^n, that is, the size of the RT blocks managed by each level's linked list is twice that of the previous level.
  • An RT resource in the level-n linked list can be further divided into sub-blocks; for example, a 1*2^n block can be divided into two 1*2^(n-1) RT resources.
  • Taking RT resource blocks of size 4 as an example of managing allocation with the ordered linked lists: when the target particle system requires 4 RTs, the linked list whose block size is 4 is checked first. If there is a free block in that list, an RT resource block of size 4 can be allocated directly to the requester; otherwise the linked list of the next level (block size 8) is searched. If a free block exists in the linked list that manages RT resource blocks of size 8, that free RT resource block is split into two RT resource blocks of size 4, one of which is allocated to the target particle system while the other is added to the upper-level linked list, and so on.
  • When the target particle system releases an RT resource, if there is currently a free RT resource block of the same size as the RT resource block released by the target particle system, the two RT resource blocks of the same size are merged and moved into the next-level linked list. For example, if the target particle system releases an RT resource block of size 4 and another free RT resource block of size 4 is found, the two blocks can be merged into one RT resource block of size 8 and placed in the linked list that manages RT resource blocks of size 8, and so on. A sketch of this buddy-style allocation follows below.
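  • A minimal sketch of this buddy-style allocation over power-of-two block sizes is given below; block identities are reduced to offsets within one region starting at 0, and the interface details are assumptions made for illustration.

```python
class BuddyRTAllocator:
    """Simplified buddy allocator for power-of-two sized RT blocks."""

    def __init__(self, total_size):
        # one free list per block size; initially a single free block covering everything
        self.free = {total_size: [0]}                   # size -> list of free block offsets

    def alloc(self, size):
        s = size
        # find the smallest level that has a free block, doubling the size each level up
        while s not in self.free or not self.free[s]:
            s *= 2
            if s > max(self.free):
                raise MemoryError('no free RT block large enough')
        offset = self.free[s].pop()
        # split the block down to the requested size; each buddy goes back on its free list
        while s > size:
            s //= 2
            self.free.setdefault(s, []).append(offset + s)
        return offset

    def release(self, offset, size):
        buddy = offset ^ size                           # buddy block of the same size
        free_list = self.free.setdefault(size, [])
        if buddy in free_list:
            # merge with the free buddy into one block twice the size (next-level list)
            free_list.remove(buddy)
            self.release(min(offset, buddy), size * 2)
        else:
            free_list.append(offset)
```

  • For example, with a region of 16 RT slots, allocating 4 slots splits the 16-slot block into one free 8-slot block, one free 4-slot block, and the allocated 4-slot block; releasing that block later merges the buddies back until the full 16-slot block is free again.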
  • S402: Receive the overall attribute information of the target particle system sent by the CPU and the render texture resources allocated to the target particle system.
  • The particle system in the embodiments of the present application is a graphics display technique that is very effective for simulating irregular fuzzy objects or shapes; for example, one particle system simulates fireworks on the screen, and another particle system simulates a string of characters that change state on the screen, and so on.
  • irregular objects are defined as consisting of a large number of irregular, randomly distributed particles, each of which has a certain life cycle. They constantly change shape and keep moving, fully embodying irregular objects. nature.
  • The overall attribute information of the particle system includes the particle display range (the shader's emission range), the particle life cycle range, the particle velocity range, and the generation time.
  • the CPU passes the overall attribute data of the particle system to the GPU, and does not need to include the attributes of a single particle, and the transmitted data does not increase as the number of particles increases.
  • Optionally, the CPU may also include the maximum particle emission rate and the maximum lifetime of the target particle system in the overall attribute information sent to the GPU, and the GPU allocates render texture resources to the target particle system accordingly.
  • the particle attributes of the respective particles may include position information, speed information, life cycle, and generation time of each particle.
  • The GPU can save the position information and generation time of the generated particles in the position render texture (PosRT), where the RGB channels of PosRT record the particle positions and the alpha channel of PosRT records the particle generation times; the velocity information and life cycle of the generated particles are saved in the velocity render texture (VelocityRT), where the RGB channels of VelocityRT record the particle velocities and the alpha channel of VelocityRT records the particle life cycles.
  • By sampling the position information saved in the position render texture PosRT and the velocity information saved in the velocity render texture VelocityRT, the GPU can draw the corresponding particles on the screen according to each particle's position and velocity information: the position information determines where the particle is drawn on the screen, and the velocity information determines the pose, direction, and subsequent updates of the particle display.
  • The GPU can display the particles through a shader used for displaying particles, which reads the position information of each particle from the position render texture, reads its velocity information from the velocity render texture, and draws the corresponding particle on the screen according to the obtained position and velocity information.
  • A shader is an editable program on the GPU that is used to implement image rendering in place of the fixed rendering pipeline.
  • Shaders include vertex shaders (Vertex Shader) and pixel shaders (Pixel Shader).
  • The vertex shader performs the calculation of the geometric relationships of the vertices, and the pixel shader calculates the color of each fragment. Because shaders are editable, sampling the render texture RT in the vertex shader, sampling colors in the pixel shader, and then displaying the corresponding particles makes it possible to achieve a wide variety of image effects without being limited by the graphics card's fixed rendering pipeline.
  • FIG. 2 is a pattern effect displayed by a shader
  • FIG. 3 is a text effect displayed by a shader.
  • the pattern effect and the text effect may be black and white or color.
  • S405 Determine whether the particle is dead according to the life cycle and the generation time of the particle of the target particle system, and stop displaying the particle if the particle dies.
  • Specifically, the generation time and life cycle of each particle are recorded in the alpha channels of PosRT and VelocityRT. The elapsed time of each particle is obtained from its generation time and the current time and compared with the particle's life cycle; if the elapsed time has reached or exceeded the life cycle, the particle can be determined to be dead, and the dead particle is removed from the screen and its display is stopped.
  • S406: If the particle is still within its life cycle, calculate the change amounts of the state-dependent particle attributes of the particles in the target particle system according to the force state of the target particle system, and save the attribute change amounts in a temporary render texture.
  • Particle attributes can be divided into state-dependent and state-independent particle attributes; a state-independent particle attribute is one that can be calculated by a closed-form function of only the particle's initial attributes and the current time.
  • The GPU can calculate the change amounts of the state-dependent particle attributes of the particles in the target particle system according to the force state of the target particle system, and save the attribute change amounts in the temporary render texture.
  • the attribute change amount includes the position change amount and the speed change amount.
  • The force state of the target particle system may be processed by the CPU and then passed to the GPU; for example, the CPU may send the force state of the target particle system to the GPU together with the overall information of the target particle system that it transmits periodically.
  • Assume that the position information saved in the position render texture before the update is u1, the calculated position increment is u, the velocity information saved in the velocity render texture before the update is v1, and the calculated velocity increment is v; u and v are saved in the temporary render texture.
  • The update process can be as shown in FIG. 8. The gray area is the core of the RT-saving algorithm: because the temporary render texture (TempRT) saves only the increments, the saved result of the previous frame does not need to be read, and TempRT can be released after the update processing. Compared with the classic algorithm, two RTs are saved, with two Add-to Pass operations used to superimpose the increments. A sketch of this increment-based update follows below.
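  • The increment-based update can be pictured with the sketch below, where NumPy arrays stand in for the render textures and a constant force is an assumed stand-in for the force state; only the increments live in the temporary buffers, which are then superimposed onto PosRT and VelocityRT, so no second position/velocity texture pair is needed.

```python
import numpy as np

def update_with_temp_rt(pos_rt, vel_rt, force, dt):
    """Compute increments (u, v) into temporary buffers, then add them in place."""
    force = np.asarray(force, dtype=np.float32)
    temp_u = vel_rt[..., :3] * dt                                # position increment u
    temp_v = np.broadcast_to(force * dt, vel_rt[..., :3].shape)  # velocity increment v
    pos_rt[..., :3] += temp_u                                    # "Add to" pass 1: u1 <- u1 + u
    vel_rt[..., :3] += temp_v                                    # "Add to" pass 2: v1 <- v1 + v
    # temp_u / temp_v (the TempRT stand-ins) can be released after the update;
    # the previous frame's values never had to be copied into a second RT pair
    return pos_rt, vel_rt
```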
  • In the embodiments of the present application, the GPU receives the overall attribute information of the particle system sent by the CPU, generates the particles, and performs display and life-cycle management on the generated particles. Compared with existing approaches, this substantially reduces data transmission between the GPU and the CPU and reduces the number and frequency of times the GPU waits for CPU data transmission, thereby effectively improving the processing efficiency of the particle system.
  • When processing particle attribute updates, the CPU-based approach usually needs two or more pairs of RTs to save the position information and velocity information of both the current frame and the previous frame of the particles. In the embodiments of the present application, when the GPU processes a particle attribute update it only needs to save the increments of the velocity information and position information, so at least one position texture and one velocity texture are saved at the cost of adding only one temporary texture, which can be released after the update processing. Especially when a large number of particle systems are processed, this saves a large amount of memory resources.
  • the device includes at least an overall attribute information receiving module 810, a particle attribute initializing module 820, and a particle display module 830.
  • the overall attribute information receiving module 810 is configured to receive overall attribute information of the target particle system sent by the CPU.
  • The particle system in the embodiments of the present application is a graphics display technique for effectively simulating irregular fuzzy objects or shapes; for example, one particle system simulates fireworks on the screen, and another particle system simulates a string of characters that change state on the screen, and so on.
  • an irregular object is defined as consisting of a large number of irregular, randomly distributed particles, each of which has a certain life cycle. They constantly change position and keep moving, fully embodying irregular objects. nature.
  • the CPU passes the overall attribute data of the particle system to the GPU, and does not need to include the attributes of a single particle, and the transmitted data does not increase as the number of particles increases.
  • The overall attribute information of the particle system includes the particle display range (the shader's emission position and range), the particle life cycle range, the particle velocity range, and the generation time.
  • The CPU can pass the overall attribute information of the target particle system to a GPU constant register.
  • The overall attribute information may further include key frame data of the target particle system, or pattern information of the target particle system, which is used to initialize the particle attributes of each particle of the particle system or to subsequently update the particle attributes of each particle of the target particle system.
  • the key frame data of the target particle system includes a display object position, a change speed, or a display color corresponding to at least one key frame.
  • the pattern information of the target particle system carries initial pixel position information and generation time of each pixel.
  • the CPU may periodically send the overall attribute information of the target particle system to the GPU for the GPU to subsequently update the particle attributes of the respective particles of the target particle system.
  • the particle attribute initialization module 820 is configured to generate particles of the target particle system according to the overall attribute information of the target particle system and initialize particle attributes of the respective particles of the target particle system.
  • The particle attribute initialization module 820 can randomly determine the position of each particle within the particle display range according to the particle display range (the shader's emission position and range) in the overall attribute information of the target particle system, so that the generated particle positions are randomly distributed within the particle display range; it can randomly determine the life cycle of each particle within the particle life cycle range, so that the generated life cycles are randomly distributed within that range; it can randomly determine the velocity of each particle within the particle velocity range, so that the generated velocities are randomly distributed within that range; and it can randomly determine the generation time of each particle within the life cycle determined by the generation time in the overall attribute information, so that the generated generation times are randomly distributed within that period.
  • the particle attribute initialization module 820 is specifically configured to:
  • The particle attribute initialization module 820 may save the position information and generation time of the generated particles in a position render texture (PosRT, position RT, where RT is a Render Target, i.e. an off-screen render texture): the RGB channels of PosRT record the position of each particle, and the alpha channel of PosRT records the generation time of each particle. The velocity information and life cycle of the generated particles are saved in a velocity render texture (VelocityRT): the RGB channels of VelocityRT record the velocity of each particle, and the alpha channel of VelocityRT records the life cycle of each particle.
  • the particle property initialization module 820 can write particle properties of the particles into the position rendering texture and the velocity rendering texture through a Shader for generating particles.
  • Each RT may be in RGBA32F format, occupying 0.125 MB to 16 MB of memory, which corresponds to storing the particle attributes of 8,192 to 1,000,000 particles.
  • Optionally, the GPU may initialize the particle attributes of each particle of the target particle system according to the key frame data of the target particle system.
  • The key frame data of the target particle system may include an initial display position, an initial change speed, or an initial display color, and the like.
  • Compared with determining the position information according to the particle display range in the overall attribute information, the GPU can determine the display position of each particle of the particle system more precisely according to the display object position of the initial key frame, without being restricted by the shape of the shader's emission range.
  • Likewise, according to the change speed of the initial key frame, the GPU can more precisely determine the initial velocity information and display color of each particle of the particle system.
  • The particle display module 830 is configured to display the particles of the target particle system through a shader according to the particle attributes of each particle in the target particle system.
  • The particle display module 830 can draw the corresponding particles on the screen according to each particle's position and velocity information by sampling the position information saved in the position render texture PosRT and the velocity information saved in the velocity render texture VelocityRT: the position information determines where the particle is drawn on the screen, and the velocity information determines the pose, direction, and subsequent updates of the particle display.
  • The particle display module 830 can display the particles through a shader used for displaying particles, which reads the position information of each particle from the position render texture, reads its velocity information from the velocity render texture, and draws the corresponding particle on the screen according to the obtained position and velocity information.
  • A shader is an editable program on the GPU that is used to implement image rendering in place of the fixed rendering pipeline.
  • Shaders include vertex shaders (Vertex Shader) and pixel shaders (Pixel Shader).
  • The vertex shader performs the calculation of the geometric relationships of the vertices, and the pixel shader calculates the color of each fragment. Because shaders are editable, sampling the render texture RT in the vertex shader, sampling colors in the pixel shader, and then displaying the corresponding particles makes it possible to achieve a wide variety of image effects without being limited by the graphics card's fixed rendering pipeline.
  • FIG. 2 is a pattern effect displayed by a shader
  • FIG. 3 is a text effect displayed by a shader.
  • the pattern effect and the text effect may be black and white or color.
  • the display effect of the target particle system of the present application may be as shown in FIG. 4 in combination with a specific game scenario.
  • Optionally, the particle display of the target particle system may be placed at the topmost layer of the game scene, that is, the other display objects of the game scene interface are drawn first, and the target particle system is displayed on the screen last.
  • The shader may display particles in a radiation mode or an aggregation mode. In the radiation mode, particles are radiated randomly at random velocities around the center of the shader's emitter position, so the degree of particle aggregation is highest in the initial state and the particles then gradually diverge.
  • The aggregation mode is also called the gravity mode: the shader emits particles randomly within a certain range, and gravity points set on a preset track or pattern on the screen pull the surrounding particles toward the track or pattern; the degree of particle aggregation is therefore very low in the initial state, and the particles gradually gather around the preset track or pattern to form the display effect of that track or pattern.
  • the apparatus may further include: a death determination module 840, configured to determine whether the particle is dead according to a life cycle of the particle of the target particle system and a generation time, and stop displaying the particle if the particle dies.
  • The GPU records the generation time and life cycle of each particle when initializing the particle attributes of the particles.
  • Specifically, the generation time and life cycle of each particle are recorded in the alpha channels of PosRT and VelocityRT.
  • The death determination module 840 can obtain the elapsed time of each particle from its generation time and the current time and compare it with the particle's life cycle; if the elapsed time has reached or exceeded the life cycle, the particle can be determined to be dead, the dead particle is removed from the screen, and its display is stopped.
  • the apparatus may further include: a particle attribute update module 850, configured to update particle attributes of respective particles of the target particle system while the particles are still in a life cycle, and display the updated target Particles of the particle system.
  • Particle attributes can be divided into state-dependent particle attributes and state-independent particle attributes. A state-independent particle attribute is one that can be calculated by a closed-form function of only the particle's initial attributes and the current time, whereas a state-dependent particle attribute is one whose update calculation needs to read the particle attributes of the previous frame as input or is otherwise state dependent.
  • Updating state-dependent particle attributes requires a separate drawing step that saves the updated particle attributes to an RT, after which the updated particles are displayed through the shader.
  • The particle attribute update module 850 does not have to update the particles every frame; the update period of the particles can be set as needed. For example, particles used to simulate objects far from the viewpoint can be updated every 2 or 3 frames, and so on.
  • Optionally, the particle attribute update module 850 may update state-dependent particle attributes according to the force state of the target particle system. The force state of the target particle system may be processed by the CPU and then passed to the GPU; for example, the CPU may send the force state of the target particle system to the GPU together with the overall information of the target particle system that it transmits periodically.
  • the particle attribute update module 850 can also update the particle attributes of the individual particles of the target particle system based on the key frame data of the target particle system.
  • The key frame data of the target particle system may include the display object position, change speed, or display color at the time corresponding to at least one key frame.
  • At the time corresponding to a key frame, the particle attribute update module 850 may determine the position information of each particle of the target particle system according to the display object position, so that the particle positions are adjusted to be displayed at the display object position of the key frame; it may uniformly adjust the velocity information of each particle of the target particle system according to the change speed; and it may uniformly adjust the color of each particle of the target particle system according to the display object color, thereby achieving precise control of each particle of the target particle system.
  • the particle attribute update module 850 is specifically configured to update the position information saved by the particle in the corresponding position rendering texture and the speed information saved in the corresponding speed rendering texture.
  • the particle attribute update module 850 includes:
  • The attribute change amount holding unit 851 is configured to calculate, according to the force state of the target particle system, the change amounts of the state-dependent particle attributes of the particles in the target particle system and to save the attribute change amounts in the temporary render texture, where the attribute change amounts include the position change amount and the velocity change amount.
  • The attribute change amount superimposing unit 852 is configured to superimpose the position change amount in the temporary render texture onto the position information of the corresponding particle in the position render texture, and to superimpose the velocity change amount in the temporary render texture onto the velocity information of the corresponding particle in the velocity render texture.
  • Assume that the position information saved in the position render texture before the update is u1, the calculated position increment is u, the velocity information saved in the velocity render texture before the update is v1, and the calculated velocity increment is v; u and v are saved in the temporary render texture.
  • The update process can be as shown in FIG. 8. The gray area is the core of the RT-saving algorithm: because the temporary render texture (TempRT) saves only the increments, the saved result of the previous frame does not need to be read, and TempRT can be released after the update processing. Compared with the classic algorithm, two RTs are saved, with two Add-to Pass operations used to superimpose the increments.
  • the processing device of the particle system further includes:
  • The texture resource allocation module 860 is configured to allocate render texture resources to the target particle system according to the maximum particle emission rate and the maximum lifetime of the target particle system.
  • Specifically, idle render texture resources can be managed by establishing multi-level ordered linked lists, and render texture resources are then allocated to the target particle system from the idle render texture resources according to the buddy algorithm.
  • The linked list of level n manages RT resources of size 1*2^n, that is, the size of the RT blocks managed by each level's linked list is twice that of the previous level.
  • An RT resource in the level-n linked list can be further divided into sub-blocks; for example, a 1*2^n block can be divided into two 1*2^(n-1) RT resources.
  • Taking RT resource blocks of size 4 as an example of managing allocation with the ordered linked lists: when the target particle system requires 4 RTs, the linked list whose block size is 4 is checked first. If there is a free block in that list, an RT resource block of size 4 can be allocated directly to the requester; otherwise the linked list of the next level (block size 8) is searched. If a free block exists in the linked list that manages RT resource blocks of size 8, that free RT resource block is split into two RT resource blocks of size 4, one of which is allocated to the target particle system while the other is added to the upper-level linked list, and so on.
  • When the target particle system releases an RT resource, if there is currently a free RT resource block of the same size as the RT resource block released by the target particle system, the two RT resource blocks of the same size are merged and moved into the next-level linked list. For example, if the target particle system releases an RT resource block of size 4 and another free RT resource block of size 4 is found, the two blocks can be merged into one RT resource block of size 8 and placed in the linked list that manages RT resource blocks of size 8, and so on.
  • the CPU may also allocate a rendering texture resource to the target particle system and inform the GPU of the RT resource allocated to the target particle system.
  • the apparatus further includes:
  • the pattern information receiving module 880 is configured to receive pattern information of the target particle system sent by the CPU, where the pattern information includes pixel position information and a generation time of each pixel.
  • The CPU may write the pattern information (for example, a color picture) of the target particle system into a specified storage space, such as memory, a hard disk, or video memory, and the pattern information receiving module 880 loads the pattern information from the specified storage space.
  • the CPU may generate a color image according to a target black and white image, by traversing each pixel in the black and white image one by one, and passing through a pixel in the color image when the pixel color is greater than 0 (non-black)
  • the rgb channel records the position information of the pixel whose color is greater than 0, and records the generation time, the display time, and the like of the pixel whose color is greater than 0 through the alpha channel of the pixel, so that the color of each pixel is sequentially greater than 0.
  • the position and time information of the pixel points are saved in the respective pixels of the color image.
  • the CPU transmits the pattern information of the obtained color image to the GPU.
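As one possible illustration of this packing step, the hypothetical C++ sketch below walks a W*H black-and-white image and emits one RGBA texel per non-black pixel, with the pixel's normalized position in the rgb channels and a generation time in the alpha channel. The normalization, the way generation times are spread over an assumed emitDuration, and all names are illustrative choices rather than details fixed by the patent.

```cpp
#include <cstdint>
#include <vector>

struct Texel { float r, g, b, a; };            // one pixel of the packed "color image"

std::vector<Texel> PackPattern(const std::vector<std::uint8_t>& bw, int w, int h,
                               float emitDuration) {
    std::vector<Texel> pattern;
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            if (bw[y * w + x] == 0) continue;  // skip black pixels
            Texel t;
            t.r = static_cast<float>(x) / w;   // position.x in the rgb channels
            t.g = static_cast<float>(y) / h;   // position.y in the rgb channels
            t.b = 0.0f;                        // z unused for a flat (2D) pattern
            // alpha: generation time, spread over emitDuration in visiting order
            t.a = emitDuration * static_cast<float>(pattern.size()) / (w * h);
            pattern.push_back(t);
        }
    }
    return pattern;                            // uploaded to the GPU as an RGBA texture
}
```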
  • For the exemplary black-and-white picture shown in FIG. 12, the left side 1201 shows the rgb channels of the black-and-white picture and the right side 1202 shows its alpha channel. From the positions in the rgb channels and the timing information in the alpha channel of the black-and-white image, the CPU can generate the color image 1203 on the right: the color of each pixel of the color image is determined by the position of a non-zero pixel of the black-and-white image, and the alpha channel of each pixel records the generation time, display time, and similar information of that pixel whose color is greater than 0.
  • In some embodiments, the black-and-white image may be an image of a text pattern.
  • Similarly, the CPU may also generate the above color image from a 3D model image (3D mesh image); in that case the RGB channels of the pixels of the color image store the position coordinates of the vertex positions in the 3D model image.
  • Specifically, the black-and-white image on which the CPU-generated color image is based may be a text pattern. Because a picture generated from a text pattern has a very small resolution (32*32 by default), it can be generated in real time.
  • The particle attribute initialization module 820 can also be configured to:
  • Initialize the position information and generation time of each particle of the target particle system according to the pixel position information and the generation time of each pixel in the pattern information, in combination with the overall attribute information of the target particle system.
  • The GPU can subsequently reproduce on the screen the original image corresponding to the pattern information, such as the target black-and-white image or the 3D model image, and the pixel position information and per-pixel generation times extracted from the pattern information allow the display position and generation time of each particle of the particle system to be determined more precisely, enabling finer-grained control over particle display.
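The complementary read-back side can be sketched in the same illustrative style: each particle i takes its display position from the rgb channels of pattern texel i and its generation time from that texel's alpha channel, while its speed and life cycle are still drawn from the ranges in the overall attribute information. The parameter layout, the single-axis velocity, and the rand()-based range sampling are simplifying assumptions, not the patented shader itself.

```cpp
#include <cstddef>
#include <cstdlib>
#include <vector>

struct Float4 { float x, y, z, w; };
struct Texel  { float r, g, b, a; };

// Stand-in for sampling a value from one of the overall-attribute ranges.
static float RandomInRange(float lo, float hi) {
    return lo + (hi - lo) * (std::rand() / static_cast<float>(RAND_MAX));
}

void InitFromPattern(const std::vector<Texel>& pattern,
                     const float origin[3], const float patternScale[3],
                     float speedMin, float speedMax, float lifeMin, float lifeMax,
                     std::vector<Float4>& posRT, std::vector<Float4>& velRT) {
    posRT.resize(pattern.size());
    velRT.resize(pattern.size());
    for (std::size_t i = 0; i < pattern.size(); ++i) {
        posRT[i] = { origin[0] + pattern[i].r * patternScale[0],    // rgb -> display position
                     origin[1] + pattern[i].g * patternScale[1],
                     origin[2] + pattern[i].b * patternScale[2],
                     pattern[i].a };                                // alpha -> generation time
        velRT[i] = { RandomInRange(speedMin, speedMax), 0.0f, 0.0f, // simplified single-axis speed
                     RandomInRange(lifeMin, lifeMax) };             // alpha -> life cycle
    }
}
```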
  • The processing device of the particle system in the embodiments of the present application generates the particles after receiving the overall attribute information of the particle system sent by the CPU, and then displays the generated particles and manages their life cycles. The embodiments of the present application therefore substantially reduce the data passed between the GPU and the CPU and lower the number and frequency of times the GPU waits for CPU data transmission, thereby effectively improving the processing efficiency of the particle system.
  • In addition, when the CPU processes particle attribute updates it usually needs two or more pairs of RTs to save the position information and speed information of the particles for both the current frame and the previous frame, whereas when the processing device of the particle system in the embodiments of the present application processes particle attribute updates, it only needs to save the increments of the speed information and position information. At least one position texture and one velocity texture can therefore be saved, at the cost of a single temporary texture that can be released after the update processing; this saves a large amount of video memory, especially when processing particle systems with a very large number of particles.
  • As shown in FIG. 13, the processing device 1300 of the particle system in this embodiment may include at least one processor (CPU) 1301, a GPU 1303, a memory 1304, a display screen 1305, and at least one communication bus 1307.
  • The communication bus 1307 is used to implement connection and communication between the above components.
  • The memory 1304 includes at least one Shader, and when the at least one Shader is executed by the GPU 1303, the following operations are performed:
  • Receiving overall attribute information of the target particle system sent by the CPU, the overall attribute information of the target particle system including a particle display range, a particle life cycle range, a particle speed range, and a generation time;
  • Generating the particles of the target particle system according to the overall attribute information of the target particle system and initializing the particle attributes of each particle of the target particle system, the particle attributes of each particle including the position information, speed information, life cycle, and generation time of that particle, as sketched after this list;
  • Displaying each particle of the target particle system according to the particle attributes of the respective particles in the target particle system.
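A minimal sketch of the generation/initialization step listed above, again as CPU-side C++ standing in for the generation shader: positions are drawn at random inside the display range, life cycles and speeds inside their ranges, and per-particle generation times after the system's generation time. The OverallAttributes layout, the fixed seed, and the unnormalized random direction are illustrative assumptions.

```cpp
#include <cstddef>
#include <random>
#include <vector>

struct Float4 { float x, y, z, w; };

struct OverallAttributes {                    // illustrative layout only
    float displayMin[3], displayMax[3];       // particle display range
    float lifeMin, lifeMax;                   // particle life cycle range
    float speedMin, speedMax;                 // particle speed range
    float startTime;                          // generation time of the system
};

void GenerateParticles(const OverallAttributes& oa, std::size_t count,
                       std::vector<Float4>& posRT, std::vector<Float4>& velRT) {
    std::mt19937 rng(1234);
    auto uni = [&rng](float lo, float hi) {
        return std::uniform_real_distribution<float>(lo, hi)(rng);
    };
    posRT.resize(count);
    velRT.resize(count);
    for (std::size_t i = 0; i < count; ++i) {
        float life = uni(oa.lifeMin, oa.lifeMax);
        // PosRT texel: rgb = position inside the display range, alpha = generation time
        posRT[i] = { uni(oa.displayMin[0], oa.displayMax[0]),
                     uni(oa.displayMin[1], oa.displayMax[1]),
                     uni(oa.displayMin[2], oa.displayMax[2]),
                     oa.startTime + uni(0.0f, life) };
        // VelRT texel: rgb = random direction (not normalized) scaled by a random
        // speed from the range, alpha = life cycle
        float s = uni(oa.speedMin, oa.speedMax);
        velRT[i] = { uni(-1.0f, 1.0f) * s, uni(-1.0f, 1.0f) * s, uni(-1.0f, 1.0f) * s, life };
    }
}
```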
  • In some embodiments, after displaying the particles of the target particle system according to their particle attributes, the at least one Shader may also be configured to perform the following operation:
  • Determining whether a particle is dead according to the life cycle and generation time of the particles of the target particle system, and if a particle is dead, stopping display of that particle.
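The death test itself is tiny; a sketch (illustrative names, assuming the alpha-channel layout described earlier) is:

```cpp
struct Float4 { float x, y, z, w; };

// A particle is dead once the time elapsed since its generation time
// (PosRT alpha) reaches or exceeds its life cycle (VelRT alpha).
inline bool IsDead(const Float4& posTexel, const Float4& velTexel, float now) {
    float age = now - posTexel.w;   // posTexel.w = generation time
    return age >= velTexel.w;       // velTexel.w = life cycle
}
// A display pass would simply skip, or move off-screen, any particle for which
// IsDead(...) returns true.
```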
  • In some embodiments, the at least one Shader may further be configured to perform the following operation:
  • If a particle is still within its life cycle, updating the particle attributes of each particle of the target particle system and displaying the updated particles of the target particle system.
  • The at least one Shader being configured to initialize the particle attributes of the respective particles of the target particle system specifically includes:
  • Saving the position information and generation time of the particles in the position rendering texture, and saving the speed information and life cycle of the particles in the velocity rendering texture;
  • The at least one Shader being configured to display the particles of the target particle system according to the particle attributes of the respective particles in the target particle system includes:
  • Sampling the position information saved in the position rendering texture and the speed information saved in the velocity rendering texture for each particle, and thereby displaying the corresponding particles;
  • The at least one Shader being configured to update the particle attributes of the respective particles of the target particle system specifically includes:
  • Updating the position information saved in the position rendering texture and the speed information saved in the velocity rendering texture for the particles.
  • The at least one Shader being configured to update the position information saved in the position rendering texture and the speed information saved in the velocity rendering texture specifically includes:
  • Calculating, according to the force state of the target particle system, the attribute change amounts of those particles of the target particle system whose particle attributes are state-dependent, and saving the attribute change amounts in a temporary rendering texture, the attribute change amounts including a position change amount and a speed change amount;
  • Superimposing the position change amount in the temporary rendering texture onto the position information in the position rendering texture of the corresponding particle, and superimposing the speed change amount in the temporary rendering texture onto the speed information in the velocity rendering texture of the corresponding particle.
  • In some embodiments, the overall attribute information of the target particle system further includes a maximum particle emission rate and a maximum life cycle;
  • Before saving the position information and generation time of the particles in the position rendering texture and saving the speed information and life cycle of the particles in the velocity rendering texture, the at least one Shader is further configured to:
  • Allocate rendering texture resources to the target particle system according to the maximum particle emission rate and maximum life cycle of the target particle system.
  • The at least one Shader being configured to allocate rendering texture resources to the target particle system according to its maximum particle emission rate and maximum life cycle specifically includes:
  • Allocating rendering texture resources to the target particle system from the free rendering texture resources according to a multi-level order linked list that manages the free rendering texture resources and a buddy algorithm.
  • In some embodiments, the overall attribute information further includes key frame data of the target particle system, the key frame data of the target particle system including a display object position, a change speed, or a display color at the time corresponding to at least one key frame;
  • The at least one Shader is further configured to:
  • Initialize or update the particle attributes of the respective particles of the target particle system according to the key frame data of the target particle system.
  • In some embodiments, before initializing the particle attributes of the respective particles of the target particle system, the at least one Shader is further configured to:
  • Receive pattern information of the target particle system sent by the CPU, the pattern information carrying pixel position information and the generation time of each pixel;
  • The at least one Shader being configured to initialize the particle attributes of the respective particles of the target particle system then specifically includes:
  • Initializing the position information and generation time of each particle of the target particle system according to the pixel position information and the generation time of each pixel in the pattern information, in combination with the overall attribute information of the target particle system.
  • In some embodiments, the at least one Shader may include the overall attribute information receiving module 810, the particle attribute initialization module 820, and the particle display module 830 shown in FIG. 10.
  • A person of ordinary skill in the art may understand that all or part of the processes of the foregoing method embodiments can be implemented by a computer program instructing related hardware, and the program may be stored in a computer-readable storage medium; when executed, the program may include the processes of the foregoing method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Generation (AREA)

Abstract

A processing method and a processing apparatus for a particle system. The processing method includes: receiving overall attribute information of a target particle system sent by a CPU (S101), the overall attribute information of the target particle system including a particle display range, a particle life cycle range, a particle speed range, and a generation time; generating particles of the target particle system according to the overall attribute information of the target particle system and initializing particle attributes of each particle of the target particle system (S102), the particle attributes of each particle including position information, speed information, a life cycle, and a generation time of that particle; and displaying each particle of the target particle system according to the particle attributes of the respective particles in the target particle system (S103). The processing method and processing apparatus can improve the processing efficiency of a particle system.

Description

一种粒子系统的处理方法及装置
本申请要求于2016年5月16日提交中国专利局、申请号为201610324183.1,发明名称为“一种粒子系统的处理方法及设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机图形技术领域,尤其涉及一种粒子系统的处理方法及装置。
发明背景
对如烟、火、云、雾、瀑布等不规则现象特效现象进行可视化仿真是计算机图形学中极具挑战性的研究课题,传统的造型方法很难高真实度地描述具体的形状和特征效果。
如今一般采用粒子系统展示上述不规则现象特效,粒子系统可以对复杂的运动系统进行模拟。粒子系统中粒子的位置更新、死亡检测等都需通过CPU(Central Processing Unit,中央处理器)处理,GPU(Graphics Processing Unit,图形处理器)根据CPU的处理结果显示粒子系统中的粒子,这个过程会耗费大量的CPU时间,同时而CPU在处理过程中,GPU需要处于锁定等待状态,直到CPU完成粒子的生成与位置更新后,GPU才能根据CPU更新后的数据进行显示,从而导致处理效率低。
发明内容
本申请实施例提供了一种粒子系统的处理方法及装置,可提高粒子系统的处理效率。
本申请实施例提供了一种粒子系统的处理方法,包括:
接收CPU发送的目标粒子系统的整体属性信息,所述目标粒子系统的整体属性信息包括粒子显示范围,粒子生命周期范围,粒子速度范围以及生成时间;
根据目标粒子系统的整体属性信息生成所述目标粒子系统的粒子并初始化目标粒子系统的各个粒子的粒子属性,其中所述各个粒子的粒子属性包括各个粒子的位置信息、速度信息、生命周期以及生成时间;
根据所述目标粒子系统中各个粒子的粒子属性显示所述目标粒子系统的各个粒子。
相应地,本申请实施例还提供了一种粒子系统的处理的装置,包括:
图形处理器GPU;
与所述GPU相连接的存储器;所述存储器中存储有多个指令模块,包括整体属性信息接收模块、粒子属性初始化模块和粒子显示模块;当所述指令模块由所述GPU执行时,执行以下操作:
所述整体属性信息接收模块,用于接收CPU发送的目标粒子系统的整体属性信息;
所述粒子属性初始化模块,用于根据目标粒子系统的整体属性信息生成所述目标粒子系统的粒子并初始化目标粒子系统的各个粒子的粒子属性,其中所述各个粒子的粒子属性包括各个粒子的位置信息、速度信息、生命周期以及生成时间;
所述粒子显示模块,用于根据所述目标粒子系统中各个粒子的粒子属性显示所述目标粒子系统的各个粒子。
本申请实施例还提供了一种非易失性机器可读存储介质,所述存储介质中存储有机器可读指令,所述机器可读指令可以由图形处理器GPU执行以完成以下操作:
接收中央处理器CPU发送的目标粒子系统的整体属性信息,所述目 标粒子系统的整体属性信息包括粒子显示范围,粒子生命周期范围,粒子速度范围以及生成时间;
根据目标粒子系统的整体属性信息生成所述目标粒子系统的粒子并初始化目标粒子系统的各个粒子的粒子属性,其中所述各个粒子的粒子属性包括各个粒子的位置信息、速度信息、生命周期以及生成时间;
根据所述目标粒子系统中各个粒子的粒子属性显示所述目标粒子系统的各个粒子。
本申请实施例通过由GPU接收CPU发送的粒子系统的整体属性信息后生成粒子,并对生成的粒子进行显示及生命周期的管理,本申请实施例大幅减少了GPU与CPU之间的数据传递,降低了GPU等待CPU数据传输的次数与频率,从而有效提高了粒子系统的处理效率。
附图简要说明
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例中的粒子系统的处理方法的流程示意图;
图2是本申请实施例中的粒子系统显示的图案效果示意图;
图3是本申请实施例中的粒子系统显示的文字效果示意图;
图4是本申请实施例中结合具体游戏场景的粒子系统的显示效果示意图;
图5是本申请另一实施例中的粒子系统的处理方法的流程示意图;
图6是本申请另一实施例中的粒子系统的处理方法的流程示意图;
图7是本申请另一实施例中的粒子系统的处理方法的流程示意图;
图8是本申请实施例中一种伙伴算法结构示意图;
图9是本申请实施例中粒子系统分配界面示意图;
图10是本申请实施例中的粒子系统的处理装置的结构示意图;
图11是本申请实施例中的粒子属性更新模块结构示意图;
图12是本申请实施例中的CPU根据黑白图像生成用于发送给GPU的彩色图像的示意图;
图13是本申请另一实施例中的粒子系统的处理装置的结构示意图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面结合附图对本申请作进一步的详细阐述。
本申请实施例中的粒子系统的处理方法和装置,可以由计算机系统中的GPU实现,也可以计算机系统中实现与GPU类似功能的功能架构实现,示例性的,下文以GPU作为本申请实施例中的执行主体介绍本申请的实施方式,在其他实施例的计算机系统的功能架构中,也可以是其他功能结构模块实现本申请实施例中的相应步骤。
图1是本申请实施例中一种粒子系统的处理方法的流程示意图,如图所示所述方法至少包括:
S101,接收CPU发送的目标粒子系统的整体属性信息。
具体的,本申请实施例中的粒子系统是一种图形展现类,用于有效模拟不规则模糊物体或者形状,例如通过一个粒子系统在屏幕中模拟展现一个烟火,通过另一个粒子系统在屏幕中模拟展现一串不断变化状态的字符,等等。所述目标粒子系统是用于渲染目标纹理的粒子系统。在粒子系统中,不规则对象被定义为由大量不规则、随机分布的粒子所组 成,而每一个粒子均有一定的生命周期,它们不断改变位置、不断运动,充分地体现了不规则对象的性质。本申请实施例中,CPU向GPU传递粒子系统的整体属性数据,所述整体属性数据包含所述粒子系统中所有粒子的属性的取值范围,不需要包含单个粒子的属性,所传递的数据也不会随着粒子个数的增加而增多。其中,所述粒子系统的整体属性信息包括粒子显示范围(shader着色器发射位置和范围),粒子生命周期范围,粒子速度范围以及生成时间。在实施例中,CPU可以将目标粒子系统的整体属性信息传递到GPU常量寄存器中。
在一些实施例中,所述整体属性信息还可以包括目标粒子系统的关键帧数据,或包括目标粒子系统的图案信息,用于初始化粒子系统的各个粒子的粒子属性,或用于后续更新目标粒子系统的各个粒子的粒子属性。其中,所述目标粒子系统的关键帧数据包括至少一个关键帧对应时间的显示对象位置、变化速度或者显示颜色。所述目标粒子系统的图案信息携带初始的像素位置信息和各个像素的生成时间。
在一些实施例中,CPU可以周期性的向GPU发送目标粒子系统的整体属性信息,用于GPU后续更新目标粒子系统的各个粒子的粒子属性。
S102,根据目标粒子系统的整体属性信息生成所述目标粒子系统的粒子并初始化目标粒子系统的各个粒子的粒子属性。所述各个粒子的粒子属性可以包括各个粒子的位置信息、速度信息、生命周期以及生成时间
具体的,GPU可以根据目标粒子系统的整体属性信息中的粒子显示范围(确定shader着色器发射位置和范围),在该粒子显示范围内随机确定各个粒子的位置信息,即生成的各个粒子位置随机分布在该粒子显示范围中;以及GPU可以根据目标粒子系统的整体属性信息中的粒子 生命周期范围,在该粒子生命周期范围内随机确定各个粒子的生命周期,即生成的各个粒子生命周期在该粒子生命周期范围中随机分布;以及GPU可以根据目标粒子系统的粒子速度范围,在该粒子速度范围内随机确定各个粒子的速度,即生成的各个粒子的速度随机分布在该粒子速度范围中;以及GPU可以根据目标粒子系统的整体属性信息中的生成时间,在该生成时间确定的生命周期内随机确定各个粒子的生成时间,即生成的各个粒子的生成时间随机分布在该生成时间确定的生命周期内。
在一些实施例中,GPU可以将生成的粒子的位置信息和生成时间保存在位置渲染纹理(PosRT,position RT,其中RT为Render Target,渲染对象,表示离屏渲染的纹理),其中PosRT的rgb通道记录粒子的位置信息,PosRT的alpha通道记录粒子的生成时间;将生成的粒子的速度信息和生命周期保存在速度渲染纹理(velocityRT),其中,velocityRT的rgb通道记录粒子的速度信息,velocityRT的alpha通道记录粒子的生命周期。在一些实施例中,GPU可以通过用于生成粒子的Shader(着色器)向位置渲染纹理和速度渲染纹理中写入粒子的粒子属性。在一些实施例中,每张RT(PosRT或者velocityRT)都可以是RGBA32f格式,占用的显存为0.125-16M,对应可存储8192-100W个粒子的粒子属性。
在一些实施例中,若所述目标粒子系统的整体属性信息携带目标粒子系统的关键帧数据,则GPU可以GPU根据所述目标粒子系统的关键帧数据初始化所述目标粒子系统的各个粒子的粒子属性。所述目标粒子系统的关键帧数据可以包括初始显示位置、初始变化速度或者初始显示颜色等。例如目标粒子系统的关键帧数据包括初始关键帧的显示对象位置,则GPU可以根据初始关键帧的显示对象位置确定目标粒子系统的位置信息,相较于根据整体属性信息中的粒子显示范围确定目标粒子系统的位置信息,根据初始关键帧的显示对象位置可以进一步精确粒子系 统的各个粒子的显示位置,并可以不受shader着色器的显示范围形状的限制。同样GPU可以根据初始关键帧的变化速度进一步精确粒子系统的各个粒子的初始的速度信息以及显示颜色。
S103,根据所述目标粒子系统中各个粒子的粒子属性显示所述目标粒子系统的粒子。
具体的,GPU可以通过采样粒子在位置渲染纹理PosRT中保存的位置信息和在速度渲染纹理velocityRT中保存的速度信息,从而根据粒子的位置信息和速度信息在屏幕中绘制对应的粒子,其中粒子的位置信息决定其在屏幕上的绘制位置,速度信息可以确定粒子显示的姿态、方向以及用于后续的更新。
其中,GPU可以通过用于显示粒子的Shader(着色器)进行显示粒子,用于显示粒子的Shader具体用于从所述位置渲染纹理中读取粒子的位置信息,从所述速度渲染纹理中读取粒子的速度信息,并根据得到的粒子的位置信息和速度信息在屏幕中绘制对应的粒子。Shader是GPU上用来实现图像渲染的用来替代固定渲染管线的可编辑程序。Shader包括Vertex Shader顶点着色器和Pixel Shader像素着色器。其中Vertex Shader用于顶点的几何关系等的运算,Pixel Shader用于片源颜色等的计算。由于Shader的可编辑性,通过在Vertex Shader中采样纹理RT,在Pixel Shader中采样色彩,进而显示对应的粒子,可以实现各种各样的图像效果而不用受显卡的固定渲染管线限制。
示例性的,图2为通过shader显示的图案效果,图3为通过shader显示的文字效果。所述图案效果和文字效果可以为黑白或者彩色。
结合具体游戏场景,本申请的目标粒子系统的显示效果可以如图4所示,在一些实施例中,可以将目标粒子系统的粒子显示放在游戏场景的最上层,即先绘制游戏场景界面中的其他显示对象,最后再在屏幕上 显示目标粒子系统。
在一些实施例中,所述shader显示粒子可以是放射方式,也可以是聚合方式,其中放射方式为根据着色器的发射位置为中心随机向四周以随机速度放射粒子,那么在初始状态下粒子聚合度最高,随后渐渐发散;聚合方式也称引力方式,即着色器将粒子在一定范围内随机发射,然后屏幕预设轨迹或图案上设置引力,可以将周围粒子牵引到轨迹或图案的周围,那么在初始状态下粒子聚合度很低,随后渐渐聚合在预设轨迹或图案的周围,形成预设轨迹或图案的显示效果。
本申请实施例通过由GPU接收CPU发送的粒子系统的整体属性信息后生成并显示粒子,本申请实施例大幅减少了GPU与CPU之间的数据传递,降低了GPU等待CPU数据传输的次数与频率,从而有效提高了粒子系统的处理效率。
图5是本申请另一实施例的粒子系统的处理方法的流程示意图,如图所示所述方法至少包括:
S201,接收CPU发送的目标粒子系统的整体属性信息和图案信息。
所述粒子系统的整体属性信息包括粒子显示范围(shader着色器发射位置和范围),粒子生命周期范围,粒子速度范围以及生成时间。在一些实施例中,CPU可以将目标粒子系统的整体属性信息传递到GPU常量寄存器中。
所述目标粒子系统的图案信息,可以包括图像中各个像素的位置信息和各个像素的生成时间。在一些实施例中,CPU可以将目标粒子系统的图案信息(例如一张彩色图片)写入指定存储空间,如内存、硬盘或显存中,由GPU到该指定存储空间加载该图案信息。
具体的,可以由CPU根据一张目标黑白图像生成一幅彩色图像,通过逐个遍历黑白图像中的每个像素点,当像素点颜色大于0(非黑色) 时,通过彩色图像中的一个像素的rgb通道记录所述颜色大于0的像素点的位置信息,以及通过该像素的Alpha通道记录所述颜色大于0的像素点的生成时间、显示时间等信息,从而顺序的将各个像素点颜色大于0的像素点的位置和时间信息保存在彩色图像的各个像素中。CPU将所述得到的彩色图像的图案信息发送给GPU。
示例性的如图12所示的黑白图片,左侧展示的是黑白图片的rgb通道,右侧展示的是黑白图片的alpha通道,CPU可以根据黑白图像的rgb通道中的位置和alpha通道中的时间信息生成得到右侧彩色图像,彩色图像的各像素点的颜色根据黑白图像的非0像素点的位置决定,各像素的Alpha通道记录所述颜色大于0的像素点的生成时间、显示时间等信息。
在一些实施例中,所述黑白图像可以为文字图案的图像。
同理的,所述CPU还可以根据立体模型图像(3D网格图像)生成上述彩色图像,同样所述彩色图像的像素点的RGB通道保存立体模型图像中顶点位置的位置坐标。
具体的,所述由CPU生成的彩色图案基于的黑白图像可以为文字图案。由于文字图案生成的图片分辨率非常小(默认32*32),可以实时生成。
S202,根据所述图案信息中的像素位置信息和各个像素的生成时间结合所述目标粒子系统的整体属性信息,初始化所述目标粒子系统的各个粒子的位置信息和生成时间。
GPU从所述图案信息中提取对应的像素位置信息和像素点的生成时间,进而根据的各个像素的位置和生成时间结合所述目标粒子系统的整体属性信息初始化目标粒子系统的各个粒子的粒子属性。
S203,根据所述目标粒子系统中各个粒子的粒子属性显示所述目标 粒子系统的各个粒子
从而GPU可以在屏幕上还原所述图案信息对应的原始图像,例如上述的目标黑白图像或立体模型图像。
本实施例中的GPU可以根据CPU发送的目标粒子系统的整体属性信息和图案信息,进而根据目标粒子系统的整体属性信息和图案信息生成并显示粒子,从而可以根据由所述图案信息中提取得到的像素位置信息和像素点的生成时间进一步精确粒子系统的各个粒子的显示位置和生成时间,以实现更细化的粒子显示控制。
图6是本申请另一实施例的粒子系统的处理方法的流程示意图,如图所示所述方法至少包括:
S301,接收CPU发送的目标粒子系统的整体属性信息。
所述粒子系统的整体属性信息包括粒子显示范围(shader着色器发射位置和范围),粒子生命周期范围,粒子速度范围以及生成时间。在一些实施例中,CPU可以将目标粒子系统的整体属性信息传递到GPU常量寄存器中。
在一些实施例中,所述整体属性信息还可以包括目标粒子系统的关键帧数据,或包括目标粒子系统的图案信息以及目标粒子系统的受力状态,用于初始化粒子系统的各个粒子的粒子属性,或用于后续更新目标粒子系统的各个粒子的粒子属性。其中,所述目标粒子系统的关键帧数据包括至少一个关键帧对应时间的显示对象位置、变化速度或者显示颜色。所述目标粒子系统的图案信息携带初始的像素位置信息和各个像素的生成时间。
在一些实施例中,CPU可以是周期性的向GPU发送目标粒子系统的整体属性信息,用于GPU后续更新目标粒子系统的各个粒子的粒子属性。
S302,根据目标粒子系统的整体属性信息生成所述目标粒子系统的粒子并初始化目标粒子系统的各个粒子的粒子属性。所述各个粒子的粒子属性可以包括各个粒子的位置信息、速度信息、生命周期以及生成时间。
具体实现可以参考前文实施例中的S102,本实施例中不再赘述。
S303,根据所述目标粒子系统中各个粒子的粒子属性显示所述目标粒子系统的粒子。
具体实现可以参考前文实施例中的S103,本实施例中不再赘述。
S304,根据所述目标粒子系统的粒子的生命周期以及生成时间判断粒子是否死亡,若粒子死亡则停止显示该粒子。
具体的,本实施例中的GPU在初始化粒子的粒子属性时记录了每个粒子的生成时间和生命周期,例如通过PosRT和VelocityRT的Alpha通道中记录每个粒子的生成时间和生命周期,在通过Shader显示粒子后,可以根据各个粒子的生成时间和当前时间得到各个粒子的生成时长,进而将生成时长与该粒子的生命周期比对,若生成时长已到达或超过生命周期,则可以确定该粒子死亡,进而把死亡的粒子移出屏幕,停止显示该粒子。
S305,若粒子仍在生命周期中,更新所述目标粒子系统的各个粒子的粒子属性,并通过shader显示更新后的所述目标粒子系统的粒子。
具体的,粒子属性可以分为状态相关的粒子属性和状态无关的粒子属性,其中状态无关的粒子属性为仅通过一个由粒子的初始属性和根据当前时间定义的闭合函数,就能计算出的粒子属性,而状态相关的粒子属性则是更新计算需要读取上一帧的粒子属性作为输入就是状态相关。状态相关的粒子属性更新需要单独的绘制步骤,将更新后的粒子属性保存到RT上,通过shader显示更新后的粒子。在一些实施例中,GPU不 必在每一帧都对粒子进行更新,而是可以根据需要设置粒子的更新周期,例如设置用于模拟描述距离视角较远的对象的粒子的更新周期可以是每2帧更新一次或3帧更新一次,等等。
在一些实施例中,GPU可以根据所述目标粒子系统的受力状态更新状态相关的粒子属性,所述目标粒子系统的受力状态可以由CPU处理得到后传递给GPU,例如CPU在向GPU周期性发送目标粒子系统的整体信息的同时,将目标粒子系统的受力状态一起发送给GPU。
在一些实施例中,GPU也可以根据所述目标粒子系统的关键帧数据更新所述目标粒子系统的各个粒子的粒子属性。所述目标粒子系统的关键帧数据可以包括至少一个关键帧对应时间的显示对象位置、变化速度或者显示颜色,当到达所述关键帧对应时间时,GPU可以根据该关键帧对应时间的显示对象位置确定目标粒子系统的各个粒子的位置信息,可以将粒子位置调整至显示在关键帧的显示对象位置上,同理可以根据根据该关键帧对应时间的变化速度统一调整目标粒子系统的各个粒子的速度信息,以及可以根据根据该关键帧对应时间的显示对象的颜色统一调整目标粒子系统的各个粒子的颜色,从而实现对目标粒子系统的各个粒子的精确控制。
本申请实施例通过由GPU接收CPU发送的粒子系统的整体属性信息后生成粒子,并对生成的粒子进行显示及生命周期的管理,本申请实施例大幅减少了GPU与CPU之间的数据传递,降低了GPU等待CPU数据传输的次数与频率,从而有效提高了粒子系统的处理效率。
图7是本申请另一实施例提供的粒子系统的处理方法的流程示意图,如图所示所述方法至少包括:
S401,CPU为目标粒子系统分配渲染纹理资源。
在一些实施例中,所述渲染纹理资源(RT资源)是分配给目标粒子 系统的显存资源。具体的,CPU根据目标粒子系统的同时存在的最多粒子数为目标粒子系统分配渲染纹理资源,最多粒子数=最大粒子发射率*最大生命周期。进而根据每张RT资源可以存储的粒子属性的数量,确定目标粒子系统所需要的RT资源。例如在一些实施例中,每张RT(PosRT或者velocityRT)都是RGBA32f格式,占用的显存为0.125-16M,对应可存储8192-100W个粒子的粒子属性。
进而为了减少分配回收产生的碎片,可以通过建立多级order链表管理空闲渲染纹理资源,进而根据伙伴算法从空闲渲染纹理资源中为目标粒子系统分配渲染纹理资源。具体如图9所示,其中0~9标识order链表等级,等级n的链表中管理大小为1*2n的RT资源,即每一级链表管理的RT块的大小是上一级的2倍,而等级n的链表中的RT资源又可以分为多个子块,如1*2n可以分为2*2n-1个RT资源。采用order链表管理分配时,当目标粒子系统需求为4个RT时,从块大小为4的order链表查起,若该链表上有空闲的块,则可以直接分配一个大小为4的RT资源块给用户,否则向下一个级别(块大小为8)的链表中查找,若管理RT资源块大小为8链表中存在空闲资源块,则将一个空闲的RT资源块分裂为两个大小为4的RT资源块,其中一个大小为4的RT资源块分配给目标粒子系统,另外一块大小为4的RT资源块加入上一级链表中,以此类推。而当目标粒子系统释放RT资源时,如果当前存在与该目标粒子系统释放的RT资源块相同大小的空闲RT资源块,则将这两块相同大小的RT资源块合并放入下一级别的链表中,如目标粒子系统释放一个大小为4的RT资源块,若找到一个同样大小为4的RT资源块空闲,则此时两RT资源块可以合并为一个大小为8的RT资源块放入管理大小为8的RT资源块的链表中,以此类推。
S402,接收CPU发送的目标粒子系统的整体属性信息,和分配给目 标粒子系统的渲染纹理资源。
具体的,本申请实施例中的粒子系统是模拟不规则模糊物体或者形状非常有效的一种图形展现类,例如通过一个粒子系统在屏幕中模拟展现一个烟火、,通过另一个粒子系统在屏幕中模拟展现一串不断变化状态的字符,等等。在粒子系统中,不规则对象被定义为由大量不规则、随机分布的粒子所组成,而每一个粒子均有一定的生命周期,它们不断改变形状、不断运动,充分地体现了不规则对象的性质。其中,所述粒子系统的整体属性信息包括粒子显示范围(shader着色器发射范围),粒子生命周期范围,粒子速度范围以及生成时间。
CPU向GPU传递粒子系统的整体属性数据,不需要包含单个粒子的属性,所传递的数据也不会随着粒子个数的增加而增多。
在一些实施例中,也可以由CPU将目标粒子系统的最大粒子发射率和最大生命周期放入目标粒子系统的整体属性信息中发送给GPU,由GPU为目标粒子系统分配渲染纹理资源。
S403,根据目标粒子系统的整体属性信息生成所述目标粒子系统的粒子并将粒子的位置信息和生成时间保存在位置渲染纹理,将粒子的速度信息和生命周期保存在速度渲染纹理。
具体的,所述各个粒子的粒子属性可以包括各个粒子的位置信息、速度信息、生命周期以及生成时间。
GPU可以将生成的粒子的位置信息和生成时间保存在位置渲染纹理(PosRT),其中PosRT的rgb通道记录粒子的位置信息,PosRT的alpha通道记录粒子的生成时间;将生成的粒子的速度信息和生命周期保存在速度渲染纹理(velocityRT),其中,velocityRT的rgb通道记录粒子的速度信息,velocityRT的alpha通道记录粒子的生命周期。
S404,采样粒子在位置渲染纹理中保存的位置信息和在速度渲染纹 理中保存的速度信息,从而显示对应粒子。
具体的,GPU可以通过采样粒子在位置渲染纹理PosRT中保存的位置信息和在速度渲染纹理velocityRT中保存的速度信息,从而根据粒子的位置信息和速度信息在屏幕中绘制对应的粒子,其中粒子的位置信息决定其在屏幕上的绘制位置,速度信息可以确定粒子显示的姿态、方向以及用于后续的更新。
其中,GPU可以通过用于显示粒子的Shader(着色器)进行显示粒子,用于显示粒子的Shader具体用于从所述位置渲染纹理中读取粒子的位置信息,从所述速度渲染纹理中读取粒子的速度信息,并根据得到的粒子的位置信息和速度信息在屏幕中绘制对应的粒子。Shader是GPU上用来实现图像渲染的用来替代固定渲染管线的可编辑程序。Shader包括Vertex Shader顶点着色器和Pixel Shader像素着色器。其中Vertex Shader用于顶点的几何关系等的运算,Pixel Shader用于片源颜色等的计算。由于Shader的可编辑性,通过在Vertex Shader中采样纹理RT,在Pixel Shader中采样色彩,进而显示对应的粒子,可以实现各种各样的图像效果而不用受显卡的固定渲染管线限制。
例如,图2为通过shader显示的图案效果,图3为通过shader显示的文字效果。所述图案效果和文字效果可以为黑白或者彩色。
S405,根据所述目标粒子系统的粒子的生命周期以及生成时间判断粒子是否死亡,若粒子死亡则停止显示该粒子。
具体的,PosRT和VelocityRT的Alpha通道中记录每个粒子的生成时间和生命周期,根据各个粒子的生成时间和当前时间得到各个粒子的生成时长,进而将生成时长与该粒子的生命周期比对,若生成时长已到达或超过生命周期,则可以确定该粒子死亡,进而把死亡的粒子移出屏幕,停止显示该粒子。
S406,若粒子仍在生命周期中,根据目标粒子系统的受力状态,计算目标粒子系统中粒子属性与状态相关的粒子的属性变化量,并将所述属性变化量保存在临时渲染纹理。
具体的,粒子属性可以分为状态相关的粒子属性和状态无关的粒子属性,其中状态无关的粒子属性为仅通过一个由粒子的初始属性和根据当前时间定义的闭合函数,就能计算出的粒子属性,而状态相关的粒子属性则是更新计算需要读取上一帧的粒子属性作为输入就是状态相关。状态相关的粒子属性更新需要单独的绘制步骤,渲染到RT上。若粒子仍在生命周期中,GPU可以根据目标粒子系统的受力状态,计算目标粒子系统中粒子属性与状态相关的粒子的属性变化量,并将所述属性变化量保存在临时渲染纹理,所述属性变化量包括位置变化量和速度变化量。在一些实施例中,所述目标粒子系统的受力状态可以由CPU处理得到后传递给GPU,例如CPU在向GPU周期性发送目标粒子系统的整体信息的同时,将目标粒子系统的受力状态一起发送给GPU。
S407,将所述临时渲染纹理中的位置变化量叠加至对应粒子的位置渲染纹理中的位置信息,将所述临时渲染纹理中的速度变化量叠加至对应粒子的速度渲染纹理中的速度信息。
具体的,若更新前位置渲染纹理中保存的位置信息为u1,计算得到的位置增量为u,因此更新后位置渲染纹理中保存的位置信息为u2=u1+u,同样,若更新前速度渲染纹理中保存的速度信息为v1,计算得到的速度增量为v,因此更新后速度渲染纹理中保存的速度信息为v2=v1+v。其中,u、v通过临时渲染纹理保存。
更新处理流程可以如图8所示。灰色区域是节省RT算法的核心,由于TempRT保存的是增量,不需要读取保存的上一帧结果,并且TempRT在更新处理后即可释放,相比经典算法少了两张RT,多了两个 add to Pass的处理。
S408,采样粒子在位置渲染纹理PosRT中保存的叠加后的位置信息和在速度渲染纹理velocityRT中保存的叠加后的速度信息,从而显示更新后的粒子。
本申请实施例通过由GPU接收CPU发送的粒子系统的整体属性信息后生成粒子,并对生成的粒子进行显示及生命周期的管理,本申请实施例大幅减少了GPU与CPU之间的数据传递,降低了GPU等待CPU数据传输的次数与频率,从而有效提高了粒子系统的处理效率。另一方面,CPU在处理粒子属性更新时,通常需要两对或更多对的RT保存当前帧及上一帧粒子的位置信息与速度信息,而本申请实施例中GPU处理粒子属性更新时,只需保存速度信息和位置信息的增量,因此至少可以节省一张位置纹理和一张速度纹理,只需增加一个临时纹理即可实现,而该临时纹理在更新处理后即可释放,尤其在粒子数量庞大的粒子系统的处理过程中可以节省大量显存资源。
图10是本申请实施例提供的粒子系统的处理装置的流程示意图,如图10所示所述装置至少包括:整体属性信息接收模块810、粒子属性初始化模块820、粒子显示模块830。
其中,整体属性信息接收模块810,用于接收CPU发送的目标粒子系统的整体属性信息。
具体的,本申请实施例中的粒子系统是一种图形展现类,用于有效模拟不规则模糊物体或者形状,例如通过一个粒子系统在屏幕中模拟展现一个烟火、通过另一个粒子系统在屏幕中模拟展现一串不断变化状态的字符,等等。在粒子系统中,不规则对象被定义为由大量不规则、随机分布的粒子所组成,而每一个粒子均有一定的生命周期,它们不断改变位置、不断运动,充分地体现了不规则对象的性质。本申请实施例中, CPU向GPU传递粒子系统的整体属性数据,不需要包含单个粒子的属性,所传递的数据也不会随着粒子个数的增加而增多。其中,所述粒子系统的整体属性信息包括粒子显示范围(shader着色器发射位置和范围),粒子生命周期范围,粒子速度范围以及生成时间。在一些实施例中,CPU可以将目标粒子系统的整体属性信息传递到GPU常量寄存器中。
在一些实施例中,所述整体属性信息还可以包括目标粒子系统的关键帧数据,或包括目标粒子系统的图案信息,用于初始化粒子系统的各个粒子的粒子属性,或用于后续更新目标粒子系统的各个粒子的粒子属性。其中,所述目标粒子系统的关键帧数据包括至少一个关键帧对应时间的显示对象位置、变化速度或者显示颜色。所述目标粒子系统的图案信息携带初始的像素位置信息和各个像素的生成时间。
在一些实施例中,CPU可以周期性的向GPU发送目标粒子系统的整体属性信息,用于GPU后续更新目标粒子系统的各个粒子的粒子属性。
粒子属性初始化模块820,用于根据目标粒子系统的整体属性信息生成所述目标粒子系统的粒子并初始化目标粒子系统的各个粒子的粒子属性。
具体的,粒子属性初始化模块820可以根据目标粒子系统的整体属性信息中的粒子显示范围(确定shader着色器发射位置和范围),在该粒子显示范围内随机确定各个粒子的位置信息,即生成的各个粒子位置随机分布在该粒子显示范围中;以及GPU可以根据目标粒子系统的整体属性信息中的粒子生命周期范围,在该粒子生命周期范围内随机确定各个粒子的生命周期,即生成的各个粒子生命周期在该粒子生命周期范围中随机分布;以及GPU可以根据目标粒子系统的粒子速度范围,在该粒子速度范围内随机确定各个粒子的速度,即生成的各个粒子的速度 随机分布在该粒子速度范围中;以及GPU可以根据目标粒子系统的整体属性信息中的生成时间,在该生成时间确定的生命周期内随机确定各个粒子的生成时间,即生成的各个粒子的生成时间随机分布在该生成时间确定的生命周期内。
在一些实施例中,所述粒子属性初始化模块820具体用于:
将粒子的位置信息和生成时间保存在位置渲染纹理,将粒子的速度信息和生命周期保存在速度渲染纹理。所述粒子属性初始化模块820可以将生成的粒子的位置信息和生成时间保存在位置渲染纹理(PosRT,position RT,其中RT为Render Target,渲染对象,表示离屏渲染的纹理),其中PosRT的rgb通道记录粒子的位置信息,PosRT的alpha通道记录粒子的生成时间;将生成的粒子的速度信息和生命周期保存在速度渲染纹理(velocityRT),其中,velocityRT的rgb通道记录粒子的速度信息,velocityRT的alpha通道记录粒子的生命周期。在一些实施例中,所述粒子属性初始化模块820可以通过用于生成粒子的Shader(着色器)向位置渲染纹理和速度渲染纹理中写入粒子的粒子属性。
在一些实施例中,每张RT(PosRT或者velocityRT)都可以是RGBA32f格式,占用的显存为0.125-16M,对应可存储8192-100W个粒子的粒子属性。
在一些实施例中,若所述目标粒子系统的整体属性信息携带目标粒子系统的关键帧数据,则GPU可以GPU根据所述目标粒子系统的关键帧数据初始化所述目标粒子系统的各个粒子的粒子属性。所述目标粒子系统的关键帧数据可以包括初始显示位置、初始变化速度或者初始显示颜色等。例如目标粒子系统的关键帧数据包括初始关键帧的显示对象位置,则GPU可以根据初始关键帧的显示对象位置确定目标粒子系统的位置信息,相较于根据整体属性信息中的粒子显示范围确定目标粒子系 统的位置信息,根据初始关键帧的显示对象位置可以进一步精确粒子系统的各个粒子的显示位置,并可以不受shader着色器的显示范围形状的限制。同样GPU可以根据初始关键帧的变化速度进一步精确粒子系统的各个粒子的初始的速度信息以及显示颜色。
粒子显示模块830,用于根据所述目标粒子系统中各个粒子的粒子属性通过shader着色器显示所述目标粒子系统的粒子。
具体的,粒子显示模块830可以通过采样粒子在位置渲染纹理PosRT中保存的位置信息和在速度渲染纹理velocityRT中保存的速度信息,从而根据粒子的位置信息和速度信息在屏幕中绘制对应的粒子,其中粒子的位置信息决定其在屏幕上的绘制位置,速度信息可以确定粒子显示的姿态、方向以及用于后续的更新。
其中,粒子显示模块830可以通过用于显示粒子的Shader(着色器)进行显示粒子,用于显示粒子的Shader具体用于从所述位置渲染纹理中读取粒子的位置信息,从所述速度渲染纹理中读取粒子的速度信息,并根据得到的粒子的位置信息和速度信息在屏幕中绘制对应的粒子。Shader是GPU上用来实现图像渲染的用来替代固定渲染管线的可编辑程序。Shader包括Vertex Shader顶点着色器和Pixel Shader像素着色器。其中Vertex Shader用于顶点的几何关系等的运算,Pixel Shader用于片源颜色等的计算。由于Shader的可编辑性,通过在Vertex Shader中采样纹理RT,在Pixel Shader中采样色彩,进而显示对应的粒子,可以实现各种各样的图像效果而不用受显卡的固定渲染管线限制。
示例性的,图2为通过shader显示的图案效果,图3为通过shader显示的文字效果。所述图案效果和文字效果可以为黑白或者彩色。
结合具体游戏场景,本申请的目标粒子系统的显示效果可以如图4所示,在一些实施例中,可以将目标粒子系统的粒子显示放在游戏场景 的最上层,即先绘制游戏场景界面中的其他显示对象,最后再在屏幕上显示目标粒子系统。
在一些实施例中,所述shader显示粒子可以是放射方式,也可以是聚合方式,其中放射方式为根据着色器的发射位置为中心随机向四周以随机速度放射粒子,那么在初始状态下粒子聚合度最高,随后渐渐发散;聚合方式也称引力方式,即着色器将粒子在一定范围内随机发射,然后屏幕预设轨迹或图案上设置引力,可以将周围粒子牵引到轨迹或图案的周围,那么在初始状态下粒子聚合度很低,随后渐渐聚合在预设轨迹或图案的周围,形成预设轨迹或图案的显示效果。
在一些实施例中,所述装置可以进一步包括:死亡判断模块840,用于根据所述目标粒子系统的粒子的生命周期以及生成时间判断粒子是否死亡,若粒子死亡则停止显示该粒子。
具体的,GPU在初始化粒子的粒子属性时记录了每个粒子的生成时间和生命周期,例如通过PosRT和VelocityRT的Alpha通道中记录每个粒子的生成时间和生命周期,在通过Shader显示粒子后,死亡判断模块840可以根据各个粒子的生成时间和当前时间得到各个粒子的生成时长,进而将生成时长与该粒子的生命周期比对,若生成时长已到达或超过生命周期,则可以确定该粒子死亡,进而把死亡的粒子移出屏幕,停止显示该粒子。
在一些实施例中,所述装置可以进一步包括:粒子属性更新模块850,用于在粒子仍在生命周期时,更新所述目标粒子系统的各个粒子的粒子属性,并显示更新后的所述目标粒子系统的粒子。
具体的,粒子属性可以分为状态相关的粒子属性和状态无关的粒子属性,其中状态无关的粒子属性为仅通过一个由粒子的初始属性和根据当前时间定义的闭合函数,就能计算出的粒子属性,而状态相关的粒子 属性则是更新计算需要读取上一帧的粒子属性作为输入就是状态相关。状态相关的粒子属性更新需要单独的绘制步骤,将更新后的粒子属性保存到RT上,通过shader显示更新后的粒子。在一些实施例中,粒子属性更新模块850不必在每一帧都对粒子进行更新,而是可以根据需要设置粒子的更新周期,例如设置用于模拟描述距离视角较远的对象的粒子的更新周期可以是每2帧更新一次或3帧更新一次,等等。
在一些实施例中,粒子属性更新模块850可以根据所述目标粒子系统的受力状态更新状态相关的粒子属性,所述目标粒子系统的受力状态可以由CPU处理得到后传递给GPU,例如CPU在向GPU周期性发送目标粒子系统的整体信息的同时,将目标粒子系统的受力状态一起发送给GPU。
在一些实施例中,粒子属性更新模块850也可以根据所述目标粒子系统的关键帧数据更新所述目标粒子系统的各个粒子的粒子属性。所述目标粒子系统的关键帧数据可以包括至少一个关键帧对应时间的显示对象位置、变化速度或者显示颜色,当到达所述关键帧对应时间时,粒子属性更新模块850可以根据该关键帧对应时间的显示对象位置确定目标粒子系统的各个粒子的位置信息,可以将粒子位置调整至显示在关键帧的显示对象位置上,同理可以根据根据该关键帧对应时间的变化速度统一调整目标粒子系统的各个粒子的速度信息,以及可以根据根据该关键帧对应时间的显示对象的颜色统一调整目标粒子系统的各个粒子的颜色,从而实现对目标粒子系统的各个粒子的精确控制。
在一些实施例中,所述粒子属性更新模块850具体用于更新粒子在对应位置渲染纹理中保存的位置信息和在对应速度渲染纹理中保存的速度信息。
具体的,如图11所示,所述粒子属性更新模块850包括:
属性变化量保存单元851,用于根据目标粒子系统的受力状态,计算目标粒子系统中粒子属性与状态相关的粒子的属性变化量,并将所述属性变化量保存在临时渲染纹理,所述属性变化量包括位置变化量和速度变化量。
属性变化量叠加单元852,用于将所述临时渲染纹理中的位置变化量叠加至对应粒子的位置渲染纹理中的位置信息,将所述临时渲染纹理中的速度变化量叠加至对应粒子的速度渲染纹理中的速度信息。
具体的,若更新前位置渲染纹理中保存的位置信息为u1,计算得到的位置增量为u,因此更新后位置渲染纹理中保存的位置信息为u2=u1+u,同样,若更新前速度渲染纹理中保存的速度信息为v1,计算得到的速度增量为v,因此更新后速度渲染纹理中保存的速度信息为v2=v1+v。其中,u、v通过临时渲染纹理保存。
更新处理流程可以如图8所示。灰色区域是节省RT算法的核心,由于TempRT保存的是增量,不需要读取保存的上一帧结果,并且TempRT在更新处理后即可释放,相比经典算法少了两张RT,多了两个add to Pass的处理。
在一些实施例中,所述粒子系统的处理装置还包括:
纹理资源分配模块860,用于根据目标粒子系统的最大粒子发射率和最大生命周期,为目标粒子系统分配渲染纹理资源。
具体的,GPU可以根据目标粒子系统的同时存在的最多粒子数为目标粒子系统分配渲染纹理资源,最多粒子数=最大粒子发射率*最大生命周期。进而根据每张RT资源可以存储的粒子属性的数量,确定目标粒子系统所需要的RT资源。例如在一些实施例中,每张RT(PosRT或者velocityRT)都是RGBA32f格式,占用的显存为0.125-16M,对应可存储8192-100W个粒子的粒子属性。可以由CPU将目标粒子系统的 最大粒子发射率和最大生命周期放入目标粒子系统的整体属性信息中发送给GPU,由GPU为目标粒子系统分配渲染纹理资源。
进而为了减少分配回收产生的碎片,可以通过建立多级order链表管理空闲渲染纹理资源,进而根据伙伴算法从空闲渲染纹理资源中为目标粒子系统分配渲染纹理资源。具体如图9所示,其中0~9标识order链表等级,等级n的链表中管理大小为1*2n的RT资源,即每一级链表管理的RT块的大小是上一级的2倍,而等级n的链表中的RT资源又可以分为多个子块,如1*2n可以分为2*2n-1个RT资源。采用order链表管理分配时,当目标粒子系统需求为4个RT时,从块大小为4的order链表查起,若该链表上有空闲的块,则可以直接分配一个大小为4的RT资源块给用户,否则向下一个级别(块大小为8)的链表中查找,若管理RT资源块大小为8链表中存在空闲资源块,则将一个空闲的RT资源块分裂为两个大小为4的RT资源块,其中一个大小为4的RT资源块分配给目标粒子系统,另外一块大小为4的RT资源块加入上一级链表中,以此类推。而当目标粒子系统释放RT资源时,如果当前存在与该目标粒子系统释放的RT资源块相同大小的空闲RT资源块,则将这两块相同大小的RT资源块合并放入下一级别的链表中,如目标粒子系统释放一个大小为4的RT资源块,若找到一个同样大小为4的RT资源块空闲,则此时两RT资源块可以合并为一个大小为8的RT资源块放入管理大小为8的RT资源块的链表中,以此类推。
在一些实施例中,也可以由CPU为目标粒子系统分配渲染纹理资源,并将分配给目标粒子系统的RT资源告知GPU。
在一些实施例中,所述装置还包括:
图案信息接收模块880,用于接收CPU发送的目标粒子系统的图案信息,所述图案信息包括像素位置信息和各个像素的生成时间。
在一些实施例中,CPU可以将目标粒子系统的图案信息(例如一张彩色图片)写入指定存储空间,如内存、硬盘或显存中,由图案信息接收模块880到该指定存储空间加载该图案信息。
具体的,可以由CPU根据一张目标黑白图像生成一幅彩色图像,通过逐个遍历黑白图像中的每个像素点,当像素点颜色大于0(非黑色)时,通过彩色图像中的一个像素的rgb通道记录所述颜色大于0的像素点的位置信息,以及通过该像素的Alpha通道记录所述颜色大于0的像素点的生成时间、显示时间等信息,从而顺序的将各个像素点颜色大于0的像素点的位置和时间信息保存在彩色图像的各个像素中。CPU将所述得到的彩色图像的图案信息发送给GPU。
示例性的如图12所示的黑白图片,左侧1201展示的是黑白图片的rgb通道,右侧1202展示的是黑白图片的alpha通道,CPU可以根据黑白图像的rgb通道中的位置和alpha通道中的时间信息生成得到右侧彩色图像1203,彩色图像的各像素点的颜色根据黑白图像的非0像素点的位置决定,各像素的Alpha通道记录所述颜色大于0的像素点的生成时间、显示时间等信息。
在一些实施例中,所述黑白图像可以为文字图案的图像。
同理的,所述CPU还可以根据立体模型图像(3D网格图像)生成上述彩色图像,同样所述彩色图像的像素点的RGB通道保存立体模型图像中顶点位置的位置坐标。
具体的,所述由CPU生成的彩色图案基于的黑白图像可以为文字图案。由于文字图案生成的图片分辨率非常小(默认32*32),可以实时生成。
所述粒子属性初始化模块820,还可以用于:
根据所述图案信息中的像素位置信息和各个像素的生成时间结合所 述目标粒子系统的整体属性信息,初始化所述目标粒子系统的各个粒子的位置信息和生成时间。后续GPU可以在屏幕上还原所述图案信息对应的原始图像,例如上述的目标黑白图像或立体模型图像,从而可以根据由所述图案信息中提取得到的像素位置信息和像素点的生成时间进一步精确粒子系统的各个粒子的显示位置和生成时间,以实现更细化的粒子显示控制。
本申请实施例的粒子系统的处理装置接收CPU发送的粒子系统的整体属性信息后生成粒子,并对生成的粒子进行显示及生命周期的管理,本申请实施例大幅减少了与CPU之间的数据传递,降低了等待CPU数据传输的次数与频率,从而有效提高了粒子系统的处理效率。另一方面,CPU在处理粒子属性更新时,通常需要两对或更多对的RT保存当前帧及上一帧粒子的位置信息与速度信息,而本申请实施例中粒子系统的处理装置处理粒子属性更新时,只需保存速度信息和位置信息的增量,因此至少可以节省一张位置纹理和一张速度纹理,只需增加一个临时纹理即可实现,而该临时纹理在更新处理后即可释放,尤其在粒子数量庞大的粒子系统的处理过程中可以节省大量显存资源。
图13是本申请另一实施例中的一种粒子系统的处理装置的结构示意图,如图所示本实施例中的粒子系统的处理装置1300可以包括:至少一个处理器CPU 1301,GPU 1303、存储器1304和显示屏1305,至少一个通信总线1307。其中,通信总线1307用于实现上述组件之间的连接通信。其中存储器1304中包括至少一个着色器Shader,当上述至少一个Shader被所述GPU1303执行时完成以下操作:
接收CPU发送的目标粒子系统的整体属性信息,所述目标粒子系统的整体属性信息包括粒子显示范围,粒子生命周期范围,粒子速度范围以及生成时间;
根据目标粒子系统的整体属性信息生成所述目标粒子系统的粒子并初始化目标粒子系统的各个粒子的粒子属性,其中所述各个粒子的粒子属性包括各个粒子的位置信息、速度信息、生命周期以及生成时间;
根据所述目标粒子系统中各个粒子的粒子属性显示所述目标粒子系统的各个粒子。
在一些实施例中,在执行所述根据所述目标粒子系统中各个粒子的粒子属性显示所述目标粒子系统的各个粒子之后,上述至少一个Shader还可以被设置用于执行以下操作:
根据所述目标粒子系统的粒子的生命周期以及生成时间判断粒子是否死亡,若粒子死亡则停止显示该粒子。
在一些实施例中,上述至少一个Shader还可以被设置用于执行以下操作:
若粒子仍在生命周期中,更新所述目标粒子系统的各个粒子的粒子属性,并显示更新后的所述目标粒子系统的粒子。
在一些实施例中,上述至少一个Shader被设置用于执行初始化目标粒子系统的各个粒子的粒子属性具体包括:
将粒子的位置信息和生成时间保存在位置渲染纹理,将粒子的速度信息和生命周期保存在速度渲染纹理;
上述至少一个Shader被设置用于执行根据所述目标粒子系统中各个粒子的粒子属性显示所述目标粒子系统的粒子包括:
采样粒子在位置渲染纹理中保存的位置信息和在速度渲染纹理中保存的速度信息,从而显示对应粒子;
上述至少一个Shader被设置用于执行更新所述目标粒子系统的各个粒子的粒子属性具体包括:
更新粒子在位置渲染纹理中保存的位置信息和在速度渲染纹理中保 存的速度信息。
在一些实施例中,上述至少一个Shader被设置用于执行更新粒子在位置渲染纹理中保存的位置信息和在速度渲染纹理中保存的速度信息具体包括:
根据目标粒子系统的受力状态,计算目标粒子系统中粒子属性与状态相关的粒子的属性变化量,并将所述属性变化量保存在临时渲染纹理,所述属性变化量包括位置变化量和速度变化量;
将所述临时渲染纹理中的位置变化量叠加至对应粒子的位置渲染纹理中的位置信息,将所述临时渲染纹理中的速度变化量叠加至对应粒子的速度渲染纹理中的速度信息。
在一些实施例中,所述目标粒子系统的整体属性信息还包括最大粒子发射率和最大生命周期;
在执行将粒子的位置信息和生成时间保存在位置渲染纹理,将粒子的速度信息和生命周期保存在速度渲染纹理之前,上述至少一个Shader还被设置用于执行:
根据目标粒子系统的最大粒子发射率和最大生命周期,为目标粒子系统分配渲染纹理资源。
在一些实施例中,上述至少一个Shader被设置用于执行根据目标粒子系统的最大粒子发射率和最大生命周期,为目标粒子系统分配渲染纹理资源具体包括:
根据管理空闲渲染纹理资源的多级order链表和伙伴算法从空闲渲染纹理资源中为目标粒子系统分配渲染纹理资源。
在一些实施例中,所述整体属性信息还包括目标粒子系统的关键帧数据,所述目标粒子系统的关键帧数据包括至少一个关键帧对应时间的显示对象位置、变化速度或者显示颜色;
上述至少一个Shader还被设置用于执行:
根据所述目标粒子系统的关键帧数据初始化或更新所述目标粒子系统的各个粒子的粒子属性。
在一些实施例中,在执行初始化目标粒子系统的各个粒子的粒子属性之前,上述至少一个Shader还被设置用于执行:
接收CPU发送的目标粒子系统的图案信息,所述图案信息携带像素位置信息和各个像素的生成时间;
上述至少一个Shader还被设置用于执行初始化目标粒子系统的各个粒子的粒子属性具体包括:
根据所述图案信息中的像素位置信息和各个像素的生成时间结合所述目标粒子系统的整体属性信息,初始化所述目标粒子系统的各个粒子的位置信息和生成时间。
在一些实施例中,所述至少一个shader可以包括图10所示的整体属性信息接收模块810、粒子属性初始化模块820、粒子显示模块830。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的程序可存储于一计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,所述的存储介质可为磁碟、光盘、只读存储记忆体(Read-Only Memory,ROM)或随机存储记忆体(Random Access Memory,RAM)等。
以上所述仅为本申请的较佳实施例而已,并不用以限制本申请,凡在本申请的精神和原则之内所作的任何修改、等同替换和改进等,均应包含在本申请的保护范围之内。

Claims (19)

  1. 一种粒子系统的处理方法,其特征在于,所述方法包括:
    接收中央处理器CPU发送的目标粒子系统的整体属性信息,所述目标粒子系统的整体属性信息包括粒子显示范围,粒子生命周期范围,粒子速度范围以及生成时间;
    根据目标粒子系统的整体属性信息生成所述目标粒子系统的粒子并初始化目标粒子系统的各个粒子的粒子属性,其中所述各个粒子的粒子属性包括各个粒子的位置信息、速度信息、生命周期以及生成时间;
    根据所述目标粒子系统中各个粒子的粒子属性显示所述目标粒子系统的各个粒子。
  2. 如权利要求1所述的粒子系统的处理方法,其特征在于,所述根据所述目标粒子系统中各个粒子的粒子属性显示所述目标粒子系统的各个粒子之后还包括:
    根据所述目标粒子系统的粒子的生命周期以及生成时间判断粒子是否死亡,若粒子死亡则停止显示该粒子。
  3. 如权利要求2所述的粒子系统的处理方法,其特征在于,所述方法还包括:
    若粒子仍在生命周期中,更新所述目标粒子系统的各个粒子的粒子属性,并显示更新后的所述目标粒子系统的粒子。
  4. 如权利要求3所述的粒子系统的处理方法,其特征在于,所述初始化目标粒子系统的各个粒子的粒子属性包括:
    将粒子的位置信息和生成时间保存在位置渲染纹理,将粒子的速度 信息和生命周期保存在速度渲染纹理;
    所述根据所述目标粒子系统中各个粒子的粒子属性显示所述目标粒子系统的粒子包括:
    采样粒子在位置渲染纹理中保存的位置信息和在速度渲染纹理中保存的速度信息,从而显示对应粒子;
    所述更新所述目标粒子系统的各个粒子的粒子属性包括:
    更新粒子在位置渲染纹理中保存的位置信息和在速度渲染纹理中保存的速度信息。
  5. 如权利要求4所述的粒子系统的处理方法,其特征在于,所述更新粒子在位置渲染纹理中保存的位置信息和在速度渲染纹理中保存的速度信息包括:
    根据目标粒子系统的受力状态,计算目标粒子系统中粒子属性与状态相关的粒子的属性变化量,并将所述属性变化量保存在临时渲染纹理,所述属性变化量包括位置变化量和速度变化量;
    将所述临时渲染纹理中的位置变化量叠加至对应粒子的位置渲染纹理中的位置信息,将所述临时渲染纹理中的速度变化量叠加至对应粒子的速度渲染纹理中的速度信息。
  6. 如权利要求4所述的粒子系统的处理方法,其特征在于,所述目标粒子系统的整体属性信息还包括最大粒子发射率和最大生命周期;
    所述将粒子的位置信息和生成时间保存在位置渲染纹理,将粒子的速度信息和生命周期保存在速度渲染纹理之前,所述方法还包括:
    根据目标粒子系统的最大粒子发射率和最大生命周期,为目标粒子系统分配渲染纹理资源。
  7. 如权利要求6所述的粒子系统的处理方法,其特征在于,所述根据目标粒子系统的最大粒子发射率和最大生命周期,为目标粒子系统分配渲染纹理资源包括:
    根据管理空闲渲染纹理资源的多级order链表和伙伴算法从空闲渲染纹理资源中为目标粒子系统分配渲染纹理资源。
  8. 如权利要求1所述的粒子系统的处理方法,其特征在于,所述整体属性信息还包括目标粒子系统的关键帧数据,所述目标粒子系统的关键帧数据包括至少一个关键帧对应时间的显示对象位置、变化速度或者显示颜色;
    所述方法还包括:
    根据所述目标粒子系统的关键帧数据初始化或更新所述目标粒子系统的各个粒子的粒子属性。
  9. 如权利要求1所述的粒子系统的处理方法,其特征在于,所述初始化目标粒子系统的各个粒子的粒子属性之前还包括:
    接收CPU发送的目标粒子系统的图案信息,所述图案信息携带像素位置信息和各个像素的生成时间;
    所述初始化目标粒子系统的各个粒子的粒子属性包括:
    根据所述图案信息中的像素位置信息和各个像素的生成时间结合所述目标粒子系统的整体属性信息,初始化所述目标粒子系统的各个粒子的位置信息和生成时间。
  10. 一种粒子系统的处理装置,其特征在于,所述装置包括:
    图形处理器GPU;
    与所述GPU相连接的存储器;所述存储器中存储有多个指令模块,包括整体属性信息接收模块、粒子属性初始化模块和粒子显示模块;当所述指令模块由所述GPU执行时,执行以下操作:
    所述整体属性信息接收模块,用于接收中央处理器CPU发送的目标粒子系统的整体属性信息;
    所述粒子属性初始化模块,用于根据目标粒子系统的整体属性信息生成所述目标粒子系统的粒子并初始化目标粒子系统的各个粒子的粒子属性,其中所述各个粒子的粒子属性包括各个粒子的位置信息、速度信息、生命周期以及生成时间;
    所述粒子显示模块,用于根据所述目标粒子系统中各个粒子的粒子属性显示所述目标粒子系统的各个粒子。
  11. 如权利要求10所述的粒子系统的处理装置,其特征在于,还包括:
    死亡判断模块,用于根据所述目标粒子系统的粒子的生命周期以及生成时间判断粒子是否死亡,若粒子死亡则停止显示该粒子。
  12. 如权利要求11所述的粒子系统的处理装置,其特征在于,还包括:
    粒子属性更新模块,用于在粒子仍在生命周期时,更新所述目标粒子系统的各个粒子的粒子属性,并显示更新后的所述目标粒子系统的粒子。
  13. 如权利要求12所述的粒子系统的处理装置,其特征在于,所述 粒子属性初始化模块具体用于:
    将粒子的位置信息和生成时间保存在位置渲染纹理,将粒子的速度信息和生命周期保存在速度渲染纹理;
    所述粒子显示模块具体用于:
    采样粒子在位置渲染纹理中保存的位置信息和在速度渲染纹理中保存的速度信息,从而显示对应粒子;
    所述粒子属性更新模块具体用于:
    更新粒子在位置渲染纹理中保存的位置信息和在速度渲染纹理中保存的速度信息。
  14. 如权利要求13所述的粒子系统的处理装置,其特征在于,所述粒子属性更新模块包括:
    属性变化量保存单元,用于根据目标粒子系统的受力状态,计算目标粒子系统中粒子属性与状态相关的粒子的属性变化量,并将所述属性变化量保存在临时渲染纹理,所述属性变化量包括位置变化量和速度变化量;
    属性变化量叠加单元,用于将所述临时渲染纹理中的位置变化量叠加至对应粒子的位置渲染纹理中的位置信息,将所述临时渲染纹理中的速度变化量叠加至对应粒子的速度渲染纹理中的速度信息。
  15. 如权利要求13所述的粒子系统的处理装置,其特征在于,所述目标粒子系统的整体属性信息还包括最大粒子发射率和最大生命周期;
    所述装置还包括:
    纹理资源分配模块,用于根据目标粒子系统的最大粒子发射率和最大生命周期,为目标粒子系统分配渲染纹理资源。
  16. 如权利要求15所述的粒子系统的处理装置,其特征在于,所述纹理资源分配模块具体用于:
    根据管理空闲渲染纹理资源的多级order链表和伙伴算法从空闲渲染纹理资源中为目标粒子系统分配渲染纹理资源。
  17. 如权利要求10所述的粒子系统的处理装置,其特征在于,所述整体属性信息还包括目标粒子系统的关键帧数据,所述目标粒子系统的关键帧数据包括至少一个关键帧对应时间的显示对象位置、变化速度或者显示颜色;
    所述粒子属性初始化模块,还用于根据所述目标粒子系统的关键帧数据初始化所述目标粒子系统的各个粒子的粒子属性;
    所述装置还包括:
    所述粒子属性更新模块,用于根据所述目标粒子系统的关键帧数据更新所述目标粒子系统的各个粒子的粒子属性。
  18. 如权利要求10所述的粒子系统的处理装置,其特征在于,所述装置还包括:
    图案信息接收模块,用于接收CPU发送的目标粒子系统的图案信息,所述图案信息包括像素位置信息和各个像素的生成时间;
    所述粒子属性初始化模块具体用于:
    根据所述图案信息中的像素位置信息和各个像素的生成时间结合所述目标粒子系统的整体属性信息,初始化所述目标粒子系统的各个粒子的位置信息和生成时间。
  19. 一种非易失性机器可读存储介质,其特征在于,所述存储介质中存储有机器可读指令,所述机器可读指令可以由图形处理器GPU执行以完成以下操作:
    接收中央处理器CPU发送的目标粒子系统的整体属性信息,所述目标粒子系统的整体属性信息包括粒子显示范围,粒子生命周期范围,粒子速度范围以及生成时间;
    根据目标粒子系统的整体属性信息生成所述目标粒子系统的粒子并初始化目标粒子系统的各个粒子的粒子属性,其中所述各个粒子的粒子属性包括各个粒子的位置信息、速度信息、生命周期以及生成时间;
    根据所述目标粒子系统中各个粒子的粒子属性显示所述目标粒子系统的各个粒子。
PCT/CN2017/083917 2016-05-16 2017-05-11 一种粒子系统的处理方法及装置 WO2017198104A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020187018114A KR102047615B1 (ko) 2016-05-16 2017-05-11 입자 시스템을 위한 처리 방법 및 장치
US16/052,265 US10699365B2 (en) 2016-05-16 2018-08-01 Method, apparatus, and storage medium for processing particle system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610324183.1 2016-05-16
CN201610324183.1A CN107392835B (zh) 2016-05-16 2016-05-16 一种粒子系统的处理方法及装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/052,265 Continuation US10699365B2 (en) 2016-05-16 2018-08-01 Method, apparatus, and storage medium for processing particle system

Publications (1)

Publication Number Publication Date
WO2017198104A1 true WO2017198104A1 (zh) 2017-11-23

Family

ID=60325664

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/083917 WO2017198104A1 (zh) 2016-05-16 2017-05-11 一种粒子系统的处理方法及装置

Country Status (4)

Country Link
US (1) US10699365B2 (zh)
KR (1) KR102047615B1 (zh)
CN (1) CN107392835B (zh)
WO (1) WO2017198104A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109191550A (zh) * 2018-07-13 2019-01-11 乐蜜有限公司 一种粒子渲染方法、装置、电子设备及存储介质
CN109903359A (zh) * 2019-03-15 2019-06-18 广州市百果园网络科技有限公司 一种粒子的显示方法、装置、移动终端和存储介质
CN112700518A (zh) * 2020-12-28 2021-04-23 北京字跳网络技术有限公司 拖尾视觉效果的生成方法、视频的生成方法、电子设备

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108665531A (zh) * 2018-05-08 2018-10-16 阿里巴巴集团控股有限公司 3d粒子模型的变换方法及装置
CN108933895A (zh) * 2018-07-27 2018-12-04 北京微播视界科技有限公司 三维粒子特效生成方法、装置和电子设备
CN110213638B (zh) * 2019-06-05 2021-10-08 北京达佳互联信息技术有限公司 动画显示方法、装置、终端及存储介质
CN110415326A (zh) * 2019-07-18 2019-11-05 成都品果科技有限公司 一种粒子效果的实现方法及装置
CN111815749A (zh) * 2019-09-03 2020-10-23 厦门雅基软件有限公司 粒子计算方法、装置、电子设备及计算机可读存储介质
CN112215932B (zh) * 2020-10-23 2024-04-30 网易(杭州)网络有限公司 粒子动画处理方法、装置、存储介质及计算机设备
CN112270732A (zh) * 2020-11-17 2021-01-26 Oppo广东移动通信有限公司 粒子动画的生成方法、处理装置、电子设备和存储介质
CN113763701B (zh) * 2021-05-26 2024-02-23 腾讯科技(深圳)有限公司 路况信息的显示方法、装置、设备及存储介质
CN117194055B (zh) * 2023-11-06 2024-03-08 西安芯云半导体技术有限公司 Gpu显存申请及释放的方法、装置及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1753031A (zh) * 2005-11-10 2006-03-29 北京航空航天大学 基于gpu的粒子系统
CN101452582A (zh) * 2008-12-18 2009-06-10 北京中星微电子有限公司 一种实现三维视频特效的方法和装置
CN102722859A (zh) * 2012-05-31 2012-10-10 北京像素软件科技股份有限公司 一种计算机仿真场景渲染方法
US8289327B1 (en) * 2009-01-21 2012-10-16 Lucasfilm Entertainment Company Ltd. Multi-stage fire simulation
CN103714568A (zh) * 2013-12-31 2014-04-09 北京像素软件科技股份有限公司 一种大规模粒子系统的实现方法
CN104571993A (zh) * 2014-12-30 2015-04-29 北京像素软件科技股份有限公司 粒子系统的处理方法、显卡和移动应用平台

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0104931D0 (en) * 2001-02-28 2001-04-18 Univ Leeds object interaction simulation
US20060155576A1 (en) * 2004-06-14 2006-07-13 Ryan Marshall Deluz Configurable particle system representation for biofeedback applications
CN101529468B (zh) * 2006-10-27 2013-06-05 汤姆森许可贸易公司 用于从二维图像中恢复三维粒子系统的系统和方法
KR100898989B1 (ko) * 2006-12-02 2009-05-25 한국전자통신연구원 물 표면의 포말 생성 및 표현 장치와 그 방법
KR100889601B1 (ko) * 2006-12-04 2009-03-20 한국전자통신연구원 물 파티클 데이터를 이용한 물결과 거품 표현 장치 및 방법
CN102426692A (zh) * 2011-08-18 2012-04-25 北京像素软件科技股份有限公司 粒子绘制方法
CN102982506A (zh) * 2012-11-13 2013-03-20 沈阳信达信息科技有限公司 基于gpu的粒子系统优化
CN104143208A (zh) * 2013-05-12 2014-11-12 哈尔滨点石仿真科技有限公司 一种大规模真实感雪景实时渲染方法
CN104022756B (zh) * 2014-06-03 2016-09-07 西安电子科技大学 一种基于gpu架构的改进的粒子滤波方法
CN104778737B (zh) * 2015-03-23 2017-10-13 浙江大学 基于gpu的大规模落叶实时渲染方法
CN104700446B (zh) * 2015-03-31 2017-10-03 境界游戏股份有限公司 一种粒子系统中粒子顶点数据的更新方法
US9905038B2 (en) * 2016-02-15 2018-02-27 Nvidia Corporation Customizable state machine for visual effect insertion

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1753031A (zh) * 2005-11-10 2006-03-29 北京航空航天大学 基于gpu的粒子系统
CN101452582A (zh) * 2008-12-18 2009-06-10 北京中星微电子有限公司 一种实现三维视频特效的方法和装置
US8289327B1 (en) * 2009-01-21 2012-10-16 Lucasfilm Entertainment Company Ltd. Multi-stage fire simulation
CN102722859A (zh) * 2012-05-31 2012-10-10 北京像素软件科技股份有限公司 一种计算机仿真场景渲染方法
CN103714568A (zh) * 2013-12-31 2014-04-09 北京像素软件科技股份有限公司 一种大规模粒子系统的实现方法
CN104571993A (zh) * 2014-12-30 2015-04-29 北京像素软件科技股份有限公司 粒子系统的处理方法、显卡和移动应用平台

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109191550A (zh) * 2018-07-13 2019-01-11 乐蜜有限公司 一种粒子渲染方法、装置、电子设备及存储介质
CN109191550B (zh) * 2018-07-13 2023-03-14 卓米私人有限公司 一种粒子渲染方法、装置、电子设备及存储介质
CN109903359A (zh) * 2019-03-15 2019-06-18 广州市百果园网络科技有限公司 一种粒子的显示方法、装置、移动终端和存储介质
CN109903359B (zh) * 2019-03-15 2023-05-05 广州市百果园网络科技有限公司 一种粒子的显示方法、装置、移动终端和存储介质
CN112700518A (zh) * 2020-12-28 2021-04-23 北京字跳网络技术有限公司 拖尾视觉效果的生成方法、视频的生成方法、电子设备
CN112700518B (zh) * 2020-12-28 2023-04-07 北京字跳网络技术有限公司 拖尾视觉效果的生成方法、视频的生成方法、电子设备

Also Published As

Publication number Publication date
US20180342041A1 (en) 2018-11-29
CN107392835B (zh) 2019-09-13
KR102047615B1 (ko) 2019-11-21
CN107392835A (zh) 2017-11-24
KR20180087356A (ko) 2018-08-01
US10699365B2 (en) 2020-06-30

Similar Documents

Publication Publication Date Title
WO2017198104A1 (zh) 一种粒子系统的处理方法及装置
KR102327144B1 (ko) 그래픽 프로세싱 장치 및 그래픽 프로세싱 장치에서 타일 기반 그래픽스 파이프라인을 수행하는 방법
US10055893B2 (en) Method and device for rendering an image of a scene comprising a real object and a virtual replica of the real object
KR101563098B1 (ko) 커맨드 프로세서를 갖는 그래픽 프로세싱 유닛
US8115767B2 (en) Computer graphics shadow volumes using hierarchical occlusion culling
KR102381945B1 (ko) 그래픽 프로세싱 장치 및 그래픽 프로세싱 장치에서 그래픽스 파이프라인을 수행하는 방법
US20080246760A1 (en) Method and apparatus for mapping texture onto 3-dimensional object model
JP2008077627A (ja) 3次元画像のレンダリングにおける早期zテスト方法およびシステム
JP2016509718A (ja) ビジビリティ情報を用いたグラフィックスデータのレンダリング
CN107392836B (zh) 使用图形处理管线实现的立体多投影
WO2021253640A1 (zh) 阴影数据确定方法、装置、设备和可读介质
US20230230311A1 (en) Rendering Method and Apparatus, and Device
KR101670958B1 (ko) 이기종 멀티코어 환경에서의 데이터 처리 방법 및 장치
CN115701305A (zh) 阴影筛选
US9704290B2 (en) Deep image identifiers
US20160042558A1 (en) Method and apparatus for processing image
US9406165B2 (en) Method for estimation of occlusion in a virtual environment
US20230316626A1 (en) Image rendering method and apparatus, computer device, and computer-readable storage medium
JP6235926B2 (ja) 情報処理装置、生成方法、プログラム及び記録媒体
CN116670719A (zh) 一种图形处理方法、装置及电子设备
KR101227183B1 (ko) 3d 그래픽 모델 입체 렌더링 장치 및 입체 렌더링 방법
KR20150052585A (ko) 커맨드들을 관리하는 장치 및 방법
JP2007141078A (ja) プログラム、情報記憶媒体及び画像生成システム
WO2022135050A1 (zh) 渲染方法、设备以及系统
Zhang et al. Real-time hair simulation on mobile device

Legal Events

Date Code Title Description
ENP Entry into the national phase (Ref document number: 20187018114; Country of ref document: KR; Kind code of ref document: A)
WWE Wipo information: entry into national phase (Ref document number: 1020187018114; Country of ref document: KR)
NENP Non-entry into the national phase (Ref country code: DE)
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17798667; Country of ref document: EP; Kind code of ref document: A1)
122 Ep: pct application non-entry in european phase (Ref document number: 17798667; Country of ref document: EP; Kind code of ref document: A1)