WO2017198104A1 - Particle system processing method and apparatus - Google Patents
Particle system processing method and apparatus
- Publication number
- WO2017198104A1 (PCT/CN2017/083917)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- particle
- particle system
- target
- information
- target particle
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/485—Task life-cycle, e.g. stopping, restarting, resuming execution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/56—Particle system, point based geometry or rendering
Definitions
- the present application relates to the field of computer graphics technology, and in particular, to a method and an apparatus for processing a particle system.
- Particle systems are now commonly used to represent such irregular objects, and can simulate complex motion systems.
- in conventional implementations, particle position updates and death detection are handled by a CPU (Central Processing Unit), and the GPU (Graphics Processing Unit) displays the particles according to the CPU's processing results. This consumes significant CPU time, and while the CPU is processing, the GPU must wait in a locked state; only after the CPU completes particle generation and position updates can the GPU display the updated data, resulting in low processing efficiency.
- CPU Central Processing Unit
- GPU Graphics Processing Unit
- the embodiment of the present application provides a method and a device for processing a particle system, which can improve the processing efficiency of the particle system.
- the embodiment of the present application provides a method for processing a particle system, including:
- Receiving overall attribute information of the target particle system sent by the CPU, the overall attribute information of the target particle system including a particle display range, a particle life cycle range, a particle velocity range, and a generation time;
- Generating particles of the target particle system according to the overall attribute information and initializing particle attributes of each particle of the target particle system;
- Displaying each particle of the target particle system according to the particle attributes of each particle in the target particle system.
- the embodiment of the present application further provides an apparatus for processing a particle system, including:
- the memory stores a plurality of instruction modules, including an overall attribute information receiving module, a particle attribute initialization module, and a particle display module; when these instruction modules are executed by the GPU, the following operations are performed:
- the overall attribute information receiving module is configured to receive overall attribute information of a target particle system sent by the CPU;
- the particle attribute initialization module is configured to generate particles of the target particle system according to the overall attribute information of the target particle system and to initialize the particle attributes of each particle of the target particle system, where the particle attributes of each particle include its position information, velocity information, life cycle, and generation time;
- the particle display module is configured to display each particle of the target particle system according to particle properties of each particle in the target particle system.
- Embodiments of the present application also provide a non-transitory machine-readable storage medium in which machine readable instructions are stored, the machine readable instructions being executable by a graphics processor GPU to perform the following operations:
- the overall attribute information of the target particle system includes the particle display range, the particle life cycle range, the particle velocity range, and the generation time;
- Individual particles of the target particle system are displayed based on particle properties of individual particles in the target particle system.
- the GPU receives the overall attribute information of the particle system sent by the CPU, generates the particles, and performs display and life cycle management on the generated particles.
- The embodiments of the present application thus substantially reduce data transmission between the GPU and the CPU and reduce the number and frequency of times the GPU waits on CPU data transfers, thereby effectively improving the processing efficiency of the particle system.
- FIG. 1 is a schematic flow chart of a method for processing a particle system in an embodiment of the present application
- FIG. 2 is a schematic view showing a pattern effect displayed by the particle system in the embodiment of the present application.
- FIG. 3 is a schematic diagram showing the text effect displayed by the particle system in the embodiment of the present application.
- FIG. 4 is a schematic diagram showing a display effect of a particle system combined with a specific game scene in the embodiment of the present application;
- FIG. 5 is a schematic flow chart of a processing method of a particle system in another embodiment of the present application.
- FIG. 6 is a schematic flow chart of a method for processing a particle system in another embodiment of the present application.
- FIG. 7 is a schematic flow chart of a method for processing a particle system in another embodiment of the present application.
- FIG. 8 is a schematic structural diagram of the buddy algorithm in an embodiment of the present application.
- FIG. 9 is a schematic diagram of a particle system distribution interface in an embodiment of the present application.
- FIG. 10 is a schematic structural diagram of a processing device of a particle system in an embodiment of the present application.
- FIG. 11 is a schematic structural diagram of a particle attribute update module in an embodiment of the present application.
- FIG. 12 is a schematic diagram of a color image generated by the CPU from a black-and-white image for transmission to the GPU, in an embodiment of the present application;
- FIG. 13 is a schematic structural diagram of a processing apparatus of a particle system in another embodiment of the present application.
- the particle system processing method and apparatus in the embodiments of the present application may be implemented by a GPU in a computer system, or by a functional module in the computer system that performs a GPU-like function.
- The embodiments below are described using a GPU as an example; in computer systems with other functional architectures, other functional modules may be used to implement the corresponding steps of the embodiments.
- FIG. 1 is a schematic flow chart of a method for processing a particle system in an embodiment of the present application. As shown in the figure, the method includes at least:
- the particle system in the embodiments of the present application is a graphics technique for effectively simulating irregular, fuzzy objects or shapes, for example, simulating fireworks on the screen with one particle system and a string of characters changing state with another, and so on.
- the target particle system is a particle system for rendering a target texture.
- an irregular object is defined as composed of a group of many irregular, randomly distributed particles, each with a certain life cycle; they constantly change position and keep moving, fully embodying the nature of irregular objects.
- the CPU transfers the overall attribute data of the particle system to the GPU, where the overall attribute data includes the value ranges of the attributes of all particles in the particle system and need not include the attributes of any single particle, so the amount of data transmitted does not increase as the number of particles increases.
- the overall attribute information of the particle system includes the particle display range (the shader's emission position and range), the particle life cycle range, the particle velocity range, and the generation time.
- the CPU may pass the overall attribute information of the target particle system to the GPU constant register.
- the overall attribute information may further include key frame data of the target particle system, or pattern information of the target particle system, used to initialize the particle attributes of each particle of the particle system or to subsequently update the particle attributes of each particle of the target particle system.
- the key frame data of the target particle system includes a display object position, a change speed, or a display color corresponding to at least one key frame.
- the pattern information of the target particle system carries initial pixel position information and generation time of each pixel.
- the CPU may periodically send the overall attribute information of the target particle system to the GPU for the GPU to subsequently update the particle attributes of the respective particles of the target particle system.
- S102 Generate particles of the target particle system according to overall attribute information of the target particle system and initialize particle attributes of respective particles of the target particle system.
- the particle properties of the individual particles may include position information, velocity information, life cycle, and generation time of each particle.
- the GPU may randomly determine the position of each particle within the particle display range given in the overall attribute information of the target particle system (which determines the shader's emission position and range), so that the generated particle positions are randomly distributed within the particle display range. Likewise, the GPU may randomly determine each particle's life cycle within the particle life cycle range, each particle's velocity within the particle velocity range, and each particle's generation time within the period determined by the generation time, so that these attributes are also randomly distributed within their respective ranges.
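As a sketch of the initialization described above (in Python rather than shader code; the function and field names are illustrative, not from the source), each attribute is sampled uniformly within the range given by the overall attribute information:

```python
import random

def init_particles(n, display_range, life_range, speed_range, emit_start):
    """Randomly initialize n particles from the system-wide attribute ranges.

    display_range: ((xmin, ymin), (xmax, ymax)) particle display range
    life_range:    (min_life, max_life) particle life cycle range
    speed_range:   (min_speed, max_speed) particle velocity range
    emit_start:    system generation time (start of emission)
    """
    (xmin, ymin), (xmax, ymax) = display_range
    particles = []
    for _ in range(n):
        life = random.uniform(*life_range)
        particles.append({
            # position randomly distributed within the display range
            "pos": (random.uniform(xmin, xmax), random.uniform(ymin, ymax)),
            # speed randomly distributed within the velocity range
            "speed": random.uniform(*speed_range),
            # life cycle randomly distributed within the life cycle range
            "life": life,
            # generation time randomly distributed within the period
            # determined by the system generation time
            "birth": emit_start + random.uniform(0.0, life),
        })
    return particles
```

Because only the ranges cross the CPU-GPU boundary, the per-particle data never has to be transmitted, regardless of the particle count.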
- the GPU may save the position information and generation time of each generated particle in a position render texture (PosRT, where RT stands for Render Target, an off-screen rendered texture): the rgb channels of PosRT record the particle's position information, and the alpha channel of PosRT records its generation time. The velocity information and life cycle of each generated particle are saved in a velocity render texture (VelocityRT): the rgb channels of VelocityRT record the particle's velocity information, and the alpha channel of VelocityRT records its life cycle.
- the GPU can write particle properties of the particles into the position rendering texture and the velocity rendering texture through a Shader for generating particles.
- each RT may be in RGBA32F format, occupying 0.125-16 MB of memory and correspondingly storing the attributes of 8,192 to 1,000,000 particles.
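The two-texture layout can be modeled on the CPU side as follows (a Python sketch of the RGBA32F texels; the helper names are hypothetical). At 16 bytes per RGBA32F texel, 8,192 texels occupy 0.125 MB and 1,000,000 texels occupy 16 MB.

```python
def pack_particle(pos, birth, velocity, life):
    """Pack one particle's attributes into PosRT and VelocityRT texels.

    Each texel is an RGBA32F 4-tuple:
      PosRT:      rgb = position (x, y, z), a = generation time
      VelocityRT: rgb = velocity (vx, vy, vz), a = life cycle
    """
    pos_texel = (pos[0], pos[1], pos[2], birth)
    vel_texel = (velocity[0], velocity[1], velocity[2], life)
    return pos_texel, vel_texel

def unpack_particle(pos_texel, vel_texel):
    """Recover the particle attributes from the two texels, mirroring
    what the display shader does when it samples PosRT and VelocityRT."""
    pos, birth = pos_texel[:3], pos_texel[3]
    velocity, life = vel_texel[:3], vel_texel[3]
    return pos, birth, velocity, life
```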
- the GPU may initialize the particle attributes of each particle of the target particle system according to the key frame data of the target particle system.
- the key frame data of the target particle system may include an initial display position, an initial change speed, or an initial display color, and the like.
- the GPU may determine the position of each particle of the target particle system according to the display object position of the initial key frame. Compared with determining positions from the particle display range in the overall attribute information, this allows the display position of each particle to be set more precisely, without being limited by the shape of the shader's display range.
- the GPU can further accurately determine the initial velocity information and display color of each particle of the particle system according to the rate of change of the initial key frame.
- the GPU can draw each particle on the screen by sampling the position information saved in the position render texture PosRT and the velocity information saved in the velocity render texture VelocityRT: the particle's position information determines its location on the screen, and its velocity information determines the pose, direction, and subsequent updates of the particle display.
- the GPU can display the particles through a shader dedicated to particle display, which reads each particle's position information from the position render texture and its velocity information from the velocity render texture, and draws the corresponding particle on the screen according to the position and velocity information obtained.
- Shader is an editable program on the GPU that is used to implement image rendering instead of a fixed rendering pipeline.
- Shaders include Vertex Shader vertex shaders and Pixel Shader pixel shaders.
- the Vertex Shader is used to compute the geometric relationships of vertices
- the Pixel Shader is used to compute fragment (pixel) colors. Because shaders are editable, sampling the RT textures in the Vertex Shader, sampling colors in the Pixel Shader, and displaying the corresponding particles makes a wide variety of image effects possible, without being limited by the graphics card's fixed rendering pipeline.
- FIG. 2 is a pattern effect displayed by a shader
- FIG. 3 is a text effect displayed by a shader.
- the pattern effect and the text effect may be black and white or color.
- the display effect of the target particle system of the present application can be as shown in FIG. 4.
- the particles of the target particle system can be displayed on top of the game scene; that is, the other display objects of the game scene interface are drawn first, and the target particle system is displayed on the screen last.
- the shader may display particles in a radiation mode or an aggregation mode. In radiation mode, particles are emitted at random velocities around the shader's emitter position, so particle aggregation is highest in the initial state and the particles then gradually diverge. The aggregation mode is also called gravity mode: the shader randomly emits particles within a certain range, and attractors placed on a preset track or pattern on the screen pull the surrounding particles toward the track or pattern, so particle aggregation is very low in the initial state and the particles gradually cluster around the preset track or pattern to form its display effect.
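The aggregation ("gravity") behavior can be sketched as a per-frame pull of each particle toward the nearest point of the preset track or pattern (a simplified Euler step in Python; the attraction constant and names are illustrative, not from the source):

```python
def gravity_step(pos, targets, strength, dt):
    """Move a particle one step toward the nearest attractor point.

    pos:      (x, y) current particle position
    targets:  attractor points placed on the preset track or pattern
    strength: attraction factor per unit time (illustrative constant)
    dt:       frame time step
    """
    # the nearest point of the track/pattern pulls the particle
    tx, ty = min(targets,
                 key=lambda t: (t[0] - pos[0]) ** 2 + (t[1] - pos[1]) ** 2)
    x = pos[0] + (tx - pos[0]) * strength * dt
    y = pos[1] + (ty - pos[1]) * strength * dt
    return (x, y)
```

Iterating this step each frame makes initially scattered particles converge onto the track or pattern, producing the display effect described above.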
- in this embodiment, the GPU receives the overall attribute information of the particle system sent by the CPU and itself generates and displays the particle data.
- The embodiments of the present application thus substantially reduce data transmission between the GPU and the CPU and reduce the number and frequency of times the GPU waits on CPU data transfers, thereby effectively improving the processing efficiency of the particle system.
- FIG. 5 is a schematic flow chart of a method for processing a particle system according to another embodiment of the present application. The method as shown in the figure includes at least:
- the overall attribute information of the particle system includes the particle display range (the shader's emission position and range), the particle life cycle range, the particle velocity range, and the generation time.
- the CPU can pass the overall attribute information of the target particle system to the GPU constant register.
- the pattern information of the target particle system may include position information of each pixel in the image and a generation time of each pixel.
- the CPU may write pattern information (eg, a color picture) of the target particle system to a specified storage space, such as a memory, a hard disk, or a video memory, and load the pattern information by the GPU to the specified storage space.
- the CPU may generate a color image from a target black-and-white image by traversing each pixel of the black-and-white image one by one. When a pixel's color value is greater than 0 (non-black), the position of that pixel is recorded in the rgb channels of one pixel of the color image, and its generation time, display time, and similar information are recorded in that pixel's alpha channel. In this way, the position and time information of every pixel whose color value is greater than 0 is stored sequentially in the pixels of the color image.
- the CPU transmits the pattern information of the obtained color image to the GPU.
- FIG. 12 shows an exemplary black-and-white picture: the left side shows its rgb channels and the right side shows its alpha channel. From the positions in the rgb channels of the black-and-white image and the time information in the alpha channel, the CPU obtains the color image on the right, in which the color of each pixel is determined by the position of a non-zero pixel of the black-and-white image, and the alpha channel of each pixel records the generation time, display time, and similar information of that pixel.
- the black and white image may be an image of a text pattern.
- the CPU may further generate the color image from a 3D model image (3D mesh), in which case the rgb channels of the color image's pixels store the coordinates of the vertex positions in the 3D model image.
- the black-and-white image from which the CPU generates the color image may be a text pattern; since an image generated from a text pattern has a very small resolution (32×32 by default), it can be generated in real time.
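The pattern encoding described above can be sketched as follows (a Python model in which images are plain pixel lists; the names and the way the time fields are grouped are illustrative, not from the source):

```python
def encode_pattern(bw_image, gen_time, display_time):
    """Build the 'color image' pattern info from a black-and-white image.

    bw_image: 2D list of grayscale values (0 = black).
    Each non-black pixel (x, y) is stored sequentially as one output
    entry: rgb = (x, y, 0) position, a = (generation, display) time info.
    """
    out = []
    for y, row in enumerate(bw_image):
        for x, value in enumerate(row):
            if value > 0:  # non-black pixel: record its position and times
                out.append(((x, y, 0), (gen_time, display_time)))
    return out
```

The GPU side then walks these entries to seed each particle's position and generation time, so the particles reproduce the original pattern on screen.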
- the GPU extracts each pixel's position information and generation time from the pattern information, and initializes the particle attributes of each particle of the target particle system according to these pixel positions and generation times together with the overall attribute information of the target particle system.
- the GPU can restore the original image corresponding to the pattern information on the screen, such as the target black and white image or the stereo model image described above.
- the GPU in this embodiment generates and displays particles according to the overall attribute information and pattern information of the target particle system sent by the CPU, so that the display position and generation time of each particle can be set precisely from the pixel positions and generation times extracted from the pattern information, achieving finer particle display control.
- FIG. 6 is a schematic flow chart of a processing method of a particle system according to another embodiment of the present application. The method shown in the figure includes at least:
- the overall attribute information of the particle system includes the particle display range (the shader's emission position and range), the particle life cycle range, the particle velocity range, and the generation time.
- the CPU can pass the overall attribute information of the target particle system to the GPU constant register.
- the overall attribute information may further include key frame data of the target particle system, or pattern information of the target particle system and the force state of the target particle system, used to initialize the particle attributes of each particle of the particle system or to subsequently update the particle attributes of each particle of the target particle system.
- the key frame data of the target particle system includes a display object position, a change speed, or a display color corresponding to at least one key frame.
- the pattern information of the target particle system carries initial pixel position information and generation time of each pixel.
- the CPU may periodically send overall attribute information of the target particle system to the GPU for the GPU to subsequently update the particle attributes of the individual particles of the target particle system.
- S302. Generate particles of the target particle system according to the overall attribute information of the target particle system and initialize particle attributes of the respective particles of the target particle system.
- the particle properties of the individual particles may include position information, velocity information, life cycle, and generation time of each particle.
- S304 Determine whether the particle is dead according to the life cycle and the generation time of the particle of the target particle system, and stop displaying the particle if the particle dies.
- the GPU in this embodiment records the generation time and life cycle of each particle when initializing its particle attributes, for example in the alpha channels of PosRT and VelocityRT. After the shader displays the particles, the age of each particle can be obtained from its generation time and the current time, and this age is compared with the particle's life cycle. If the age has reached or exceeded the life cycle, the particle is determined to be dead, removed from the screen, and no longer displayed.
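The death test reduces to comparing a particle's age against the life cycle stored in the alpha channels, e.g. (Python sketch; field names are illustrative):

```python
def is_dead(birth_time, life_cycle, current_time):
    """A particle is dead once its age (time since its generation time,
    as stored in PosRT's alpha channel) reaches or exceeds its life
    cycle (as stored in VelocityRT's alpha channel)."""
    age = current_time - birth_time
    return age >= life_cycle

def cull(particles, current_time):
    """Keep only the live particles for display; dead particles are
    removed from the screen and no longer drawn."""
    return [p for p in particles
            if not is_dead(p["birth"], p["life"], current_time)]
```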
- particle attributes can be divided into state-dependent and state-independent attributes, where a state-independent attribute is one that can be computed by a closed-form function of the particle's initial attributes and the current time alone.
- State-related particle property updates require separate drawing steps, saving the updated particle properties to RT, and displaying the updated particles through the shader.
- the GPU need not update the particles every frame; the particle update period can be set as needed. For example, particles used to simulate objects distant from the viewpoint may be updated every 2 or 3 frames, and so on.
- the GPU may update state-dependent particle attributes according to the force state of the target particle system, which may be processed by the CPU and then passed to the GPU; for example, when the CPU periodically sends the overall attribute information of the target particle system to the GPU, it may send the force state of the target particle system along with it.
- the GPU may also update particle properties of individual particles of the target particle system based on key frame data of the target particle system.
- the key frame data of the target particle system may include a display object position, a change speed, or a display color at the time corresponding to at least one key frame.
- the GPU may determine the position of each particle of the target particle system according to the display object position at the time corresponding to a key frame, adjusting the particles to be displayed at that display object position; it may uniformly adjust the velocity information of each particle according to the change speed at the time corresponding to the key frame, and uniformly adjust the color of each particle according to the display color at that time, thereby achieving precise control of each particle of the target particle system.
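The source does not specify how values between key frames are obtained; one plausible sketch, assuming linear interpolation between adjacent key frames, is:

```python
def keyframe_value(keyframes, t):
    """Interpolate a per-system value (e.g. a display object position
    coordinate, change speed, or display color component) between key
    frames.

    keyframes: list of (time, value) pairs sorted by time.
    Before the first / after the last key frame, the boundary value holds.
    """
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)
            return v0 + (v1 - v0) * w  # linear blend between key frames
    return keyframes[-1][1]
```

Applying the interpolated value uniformly to every particle gives the coordinated adjustment of position, speed, or color described above.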
- the GPU receives the overall attribute information of the particle system sent by the CPU, generates the particles, and performs display and life cycle management on the generated particles.
- The embodiments of the present application thus substantially reduce data transmission between the GPU and the CPU and reduce the number and frequency of times the GPU waits on CPU data transfers, thereby effectively improving the processing efficiency of the particle system.
- FIG. 7 is a schematic flow chart of a method for processing a particle system according to another embodiment of the present application, where the method includes at least:
- the CPU allocates a rendering texture resource to the target particle system.
- idle render texture resources can be managed by establishing multi-level free lists, and render texture resources are then allocated to the target particle system from the idle resources according to the buddy algorithm.
- the level-n list manages RT resources of size 1×2^n; that is, the size of the RT blocks managed by each level is twice that of the previous level.
- An RT resource in the level-n list can be further split; for example, a 1×2^n block can be divided into two 1×2^(n-1) RT resources.
- for example, when the free lists manage allocation and the target particle system requires an RT block of size 4, the free list with block size 4 is checked first. If that list has a free block, a size-4 RT resource block can be allocated directly; otherwise the next-level list (block size 8) is searched. If a free size-8 RT resource block is found in that list, it is split into two size-4 RT resource blocks: one is allocated to the target particle system and the other is added to the size-4 list, and so on.
- when the target particle system releases an RT resource, if there is currently an idle RT resource block of the same size as the released block, the two blocks of the same size are merged into the next-level list. For example, if the target particle system releases a size-4 RT resource block and an idle size-4 block is found, the two blocks can be merged into one size-8 RT resource block and placed in the list managing size-8 blocks, and so on.
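A minimal sketch of this buddy-style allocation over multi-level free lists (Python; blocks are tracked by offset, and level n holds blocks of size 2^n, a simplification of the 1×2^n RT blocks described above):

```python
class BuddyRTAllocator:
    """Multi-level free lists for RT blocks; level n holds blocks of
    size 2**n. Allocation splits a larger block when the requested
    level is empty; release merges a freed block with an idle buddy."""

    def __init__(self, max_level):
        self.free = {n: [] for n in range(max_level + 1)}
        self.free[max_level].append(0)  # one block spanning the whole RT
        self.max_level = max_level

    def alloc(self, level):
        """Return a block offset of size 2**level, or None if exhausted."""
        if self.free[level]:
            return self.free[level].pop()
        n = level + 1
        while n <= self.max_level and not self.free[n]:
            n += 1
        if n > self.max_level:
            return None
        # split larger blocks down to the requested level
        while n > level:
            off = self.free[n].pop()
            n -= 1
            self.free[n].append(off + 2 ** n)  # second half stays free
        return off

    def release(self, off, level):
        """Free a block, merging with its buddy while the buddy is idle."""
        while level < self.max_level:
            buddy = off ^ (2 ** level)
            if buddy not in self.free[level]:
                break
            self.free[level].remove(buddy)
            off = min(off, buddy)
            level += 1
        self.free[level].append(off)
```

Splitting on allocation and merging on release keeps the free lists compact while serving variable-size RT requests.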
- S402. Receive the overall attribute information of the target particle system sent by the CPU, together with the render texture resources allocated to the target particle system.
- the particle system in this embodiment is likewise a graphics technique for effectively simulating irregular, fuzzy objects or shapes, for example, simulating fireworks on the screen with one particle system and a string of characters changing state with another, and so on.
- Irregular objects are defined as consisting of a large number of irregular, randomly distributed particles, each with a certain life cycle; they constantly change shape and keep moving, fully embodying the nature of irregular objects.
- the overall attribute information of the particle system includes the particle display range (the shader's emission range), the particle life cycle range, the particle velocity range, and the generation time.
- the CPU passes the overall attribute data of the particle system to the GPU; it need not include the attributes of any single particle, and the amount of data transmitted does not increase as the number of particles increases.
- the CPU may also send the maximum particle emission rate and maximum lifetime of the target particle system to the GPU in the overall attribute information of the target particle system, and the GPU allocates render texture resources to the target particle system accordingly.
- the particle attributes of the respective particles may include position information, speed information, life cycle, and generation time of each particle.
- the GPU can save the position information and generation time of the generated particles in the position render texture (PosRT), where the rgb channels of PosRT record the particles' position information and the alpha channel of PosRT records their generation time; the velocity information and life cycle of the generated particles are saved in the velocity render texture (VelocityRT), where the rgb channels of VelocityRT record the particles' velocity information and the alpha channel of VelocityRT records their life cycle.
- the GPU can draw each particle on the screen by sampling the position information saved in the position render texture PosRT and the velocity information saved in the velocity render texture VelocityRT: the particle's position information determines its location on the screen, and its velocity information determines the pose, direction, and subsequent updates of the particle display.
- the GPU can display the particles through a shader dedicated to particle display, which reads each particle's position information from the position rendering texture and its velocity information from the velocity rendering texture, and draws the corresponding particle on the screen according to the obtained position and velocity information.
- a Shader is an editable program on the GPU, used to implement image rendering in place of the fixed rendering pipeline.
- Shaders include Vertex Shaders and Pixel Shaders.
- the Vertex Shader is used to calculate the geometric relationships of vertices
- the Pixel Shader is used to calculate the color of fragments. Due to the editability of Shaders, by sampling the texture RT in the Vertex Shader, sampling colors in the Pixel Shader, and displaying the corresponding particles, a variety of image effects can be achieved without being limited by the graphics card's fixed rendering pipeline.
- FIG. 2 is a pattern effect displayed by a shader
- FIG. 3 is a text effect displayed by a shader.
- the pattern effect and the text effect may be black and white or color.
- S405: Determine whether a particle is dead according to the life cycle and generation time of the particles of the target particle system, and stop displaying the particle if it dies.
- the generation time and life cycle of each particle are recorded in the alpha channels of PosRT and VelocityRT. The elapsed duration of each particle is obtained from its generation time and the current time, and this duration is compared with the particle's life cycle; if it has reached or exceeded the life cycle, the particle can be determined to be dead, and the dead particle is removed from the screen so that it is no longer displayed.
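The death test described above reduces to comparing the elapsed time since generation with the particle's life cycle; a minimal sketch (the function name is an assumption):

```python
def is_dead(gen_time, life_cycle, current_time):
    """A particle is dead once the duration since its generation time has
    reached or exceeded its life cycle (both read from the RT alpha channels)."""
    return (current_time - gen_time) >= life_cycle
```

On the GPU this check would run per texel; dead particles are then moved off screen rather than deallocated individually.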
- S406: If a particle is still within its life cycle, calculate the attribute change amounts of the state-related particle attributes in the target particle system according to the force state of the target particle system, and save the attribute change amounts in a temporary rendering texture.
- particle attributes can be divided into state-related particle attributes and state-independent particle attributes, where a state-independent particle attribute is one that can be calculated purely by a closed-form function of the particle's initial properties and the current time.
- the GPU can calculate the attribute change amounts of the state-related particle attributes in the target particle system according to the force state of the target particle system, and save the attribute change amounts in the temporary rendering texture.
- the attribute change amount includes the position change amount and the speed change amount.
- the force state of the target particle system may be processed by the CPU and then passed to the GPU; for example, the CPU can send the force state of the target particle system to the GPU together with the periodic transmission of the target particle system's overall attribute information.
- suppose the position information saved in the position rendering texture before the update is u1 and the calculated position increment is u, and the velocity information saved in the velocity rendering texture before the update is v1 and the calculated velocity increment is v
- the increments u and v are saved in the temporary rendering texture and then superimposed, so that the position rendering texture holds u1+u and the velocity rendering texture holds v1+v
- the update process can be as shown in Figure 8.
- the gray area is the core of the RT-saving algorithm. Since TempRT holds only the increments, the saved previous-frame results do not need to be read, and TempRT can be released after the update processing; compared with the classic algorithm, two fewer RTs are used, at the cost of two additional Add passes.
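A CPU-side sketch of this increment-based update, with NumPy arrays standing in for the RTs. Unit particle mass, simple Euler integration, and all names are assumptions of this sketch, not the patent's exact scheme:

```python
import numpy as np

def update_state_related(pos_rt, vel_rt, force, dt):
    """One state-related attribute update: compute the deltas u (position)
    and v (velocity) into a temporary texture, then add them onto
    PosRT / VelocityRT in place, so no previous-frame copy of either RT
    is needed."""
    temp_rt = np.zeros((pos_rt.shape[0], 2, 3), dtype=np.float32)  # the TempRT
    temp_rt[:, 0] = vel_rt[:, 0:3] * dt   # position increment u
    temp_rt[:, 1] = force * dt            # velocity increment v (a = F/m, m = 1)
    pos_rt[:, 0:3] += temp_rt[:, 0]       # u1 + u  (Add pass onto PosRT)
    vel_rt[:, 0:3] += temp_rt[:, 1]       # v1 + v  (Add pass onto VelocityRT)
    # TempRT held only this frame's increments and can be released here.
    return pos_rt, vel_rt
```

Because only increments are stored, the classic double-buffered pair of RTs per attribute is unnecessary.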
- the GPU receives the overall attribute information of the particle system sent by the CPU, generates particles, and performs display and life cycle management on the generated particles.
- the embodiment of the present application substantially reduces data transmission between the GPU and the CPU, and reduces the number of times and the frequency with which the GPU waits for CPU data transmission, thereby effectively improving the processing efficiency of the particle system.
- when processing particle attribute updates, the CPU usually needs two or more pairs of RTs to save the position information and velocity information of the particles' current frame and previous frame; in the embodiment of the present application, when the GPU processes a particle attribute update, it only needs to save the velocity and position increments, so at least one position texture and one velocity texture can be saved. Only a temporary texture needs to be added, and it can be released after the update processing; this saves considerable memory resources, especially when processing a large number of particle systems.
- the device includes at least an overall attribute information receiving module 810, a particle attribute initializing module 820, and a particle display module 830.
- the overall attribute information receiving module 810 is configured to receive overall attribute information of the target particle system sent by the CPU.
- the particle system in the embodiment of the present application is a class of graphic display for effectively simulating irregular fuzzy objects or shapes, for example, simulating fireworks on the screen through one particle system, simulating a string of characters whose state changes through another particle system, and so on.
- an irregular object is defined as consisting of a large number of irregular, randomly distributed particles, each of which has a certain life cycle; the particles constantly change position and keep moving, fully embodying the nature of irregular objects.
- the CPU passes the overall attribute data of the particle system to the GPU, and does not need to include the attributes of a single particle, and the transmitted data does not increase as the number of particles increases.
- the overall attribute information of the particle system includes a particle display range (the emission position and range of the shader emitter), a particle life cycle range, a particle velocity range, and a generation time.
- the CPU can pass the overall attribute information of the target particle system to the GPU constant register.
- the overall attribute information may further include key frame data of the target particle system, or pattern information of the target particle system, used to initialize the particle attributes of individual particles of the particle system or to subsequently update the particle attributes of the target particle system.
- the key frame data of the target particle system includes a display object position, a change speed, or a display color corresponding to at least one key frame.
- the pattern information of the target particle system carries initial pixel position information and generation time of each pixel.
- the CPU may periodically send the overall attribute information of the target particle system to the GPU for the GPU to subsequently update the particle attributes of the respective particles of the target particle system.
- the particle attribute initialization module 820 is configured to generate particles of the target particle system according to the overall attribute information of the target particle system and initialize particle attributes of the respective particles of the target particle system.
- the particle attribute initialization module 820 can randomly determine the position information of each particle within the particle display range according to the particle display range (which determines the emission position and range of the shader emitter) in the overall attribute information of the target particle system, that is, the positions of the generated particles are randomly distributed within the particle display range; it can randomly determine the life cycle of each particle according to the particle life cycle range in the overall attribute information, that is, the life cycles of the generated particles are randomly distributed within the particle life cycle range; and it can randomly determine the velocity of each particle according to the particle velocity range of the target particle system, that is, the velocities of the generated particles are randomly distributed within the particle velocity range.
- the GPU can randomly determine the generation time of each particle within the period determined by the generation time in the overall attribute information of the target particle system, that is, the generation times of the generated particles are randomly distributed within that period.
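The random initialization within the system-wide ranges can be sketched as follows; the range representations (low, high) and all names are illustrative assumptions:

```python
import numpy as np

def init_particles(rng, n, display_range, life_range, vel_range, gen_window):
    """Randomly initialize per-particle attributes within the system-wide
    ranges carried by the overall attribute information."""
    pos = rng.uniform(display_range[0], display_range[1], size=(n, 3))   # display range
    vel = rng.uniform(vel_range[0], vel_range[1], size=(n, 3))           # velocity range
    life = rng.uniform(life_range[0], life_range[1], size=n)             # life cycle range
    gen = rng.uniform(0.0, gen_window, size=n)                           # generation times
    return pos, vel, life, gen
```

Only the four ranges cross the CPU-GPU boundary; the per-particle values are drawn on the GPU side.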
- the particle attribute initialization module 820 is specifically configured to:
- the particle attribute initialization module 820 may save the position information and generation time of the generated particles in a position rendering texture (PosRT, position RT, where RT stands for Render Target, an off-screen rendered texture), where the rgb channels of PosRT record the particles' position information and the alpha channel of PosRT records their generation time; the velocity information and life cycle of the generated particles are saved in a velocity rendering texture (VelocityRT), where the rgb channels of VelocityRT record the particles' velocity information and the alpha channel of VelocityRT records their life cycle.
- the particle property initialization module 820 can write particle properties of the particles into the position rendering texture and the velocity rendering texture through a Shader for generating particles.
- each RT may be in RGBA32f format, occupying 0.125 to 16 MB of memory and correspondingly storing the particle attributes of 8192 to 1,000,000 particles.
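The quoted memory figures can be checked with a little arithmetic: an RGBA32f texel is four 32-bit floats, i.e. 16 bytes per particle per RT:

```python
# RGBA32f = four 32-bit float channels -> 16 bytes per texel (per particle).
BYTES_PER_TEXEL = 4 * 4

def rt_size_mib(num_particles):
    """MiB needed for one render texture holding one texel per particle."""
    return num_particles * BYTES_PER_TEXEL / (1024 * 1024)

# 8192 particles -> 0.125 MiB; about one million (2**20) -> 16 MiB,
# matching the 0.125-16M range stated in the text.
```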
- the GPU may initialize the particle attributes of each particle of the target particle system according to the key frame data of the target particle system.
- the key frame data of the target particle system may include an initial display position, an initial change speed, or an initial display color, and the like.
- the GPU may determine the position information of the target particle system according to the display object position of the initial key frame; compared with determining it according to the particle display range in the overall attribute information, this can determine the display position of each particle of the particle system more accurately, rather than being restricted only by the shape of the shader emitter's display range.
- the GPU can further accurately determine the initial velocity information and display color of each particle of the particle system according to the initial change speed and initial display color of the initial key frame.
- the particle display module 830 is configured to display the particles of the target particle system by a shader shader according to particle attributes of respective particles in the target particle system.
- the particle display module 830 can draw the corresponding particles on the screen according to the position information and the speed information of the particle by sampling the position information of the particle saved in the position rendering texture PosRT and the speed information saved in the speed rendering texture velocityRT.
- the position information of the particle determines its drawing position on the screen, and the speed information can determine the posture, direction and subsequent update of the particle display.
- the particle display module 830 can display the particles through a shader dedicated to particle display, which reads each particle's position information from the position rendering texture and its velocity information from the velocity rendering texture, and draws the corresponding particle on the screen according to the obtained position and velocity information.
- a Shader is an editable program on the GPU, used to implement image rendering in place of the fixed rendering pipeline.
- Shaders include Vertex Shaders and Pixel Shaders.
- the Vertex Shader is used to calculate the geometric relationships of vertices
- the Pixel Shader is used to calculate the color of fragments. Due to the editability of Shaders, by sampling the texture RT in the Vertex Shader, sampling colors in the Pixel Shader, and displaying the corresponding particles, a variety of image effects can be achieved without being limited by the graphics card's fixed rendering pipeline.
- FIG. 2 is a pattern effect displayed by a shader
- FIG. 3 is a text effect displayed by a shader.
- the pattern effect and the text effect may be black and white or color.
- the display effect of the target particle system of the present application may be as shown in FIG. 4 in combination with a specific game scenario.
- the particle display of the target particle system may be placed at the top layer of a game scene: the other display objects in the game scene interface are drawn first, and the target particle system is displayed on the screen last.
- the shader may display particles in a radiation mode or an aggregation mode. In the radiation mode, particles are radiated randomly at random velocities around the center of the shader's emitter position, so that in the initial state the particles are most highly aggregated and then gradually diverge. The aggregation mode, also called the gravity mode, has the shader randomly emit particles within a certain range and then set gravity points on a preset track or pattern of the screen, which pull the surrounding particles toward the track or pattern; in the initial state the degree of particle aggregation is very low, and the particles then gradually aggregate around the preset track or pattern to form the display effect of that track or pattern.
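A minimal sketch of the aggregation (gravity) mode, assuming each particle is pulled toward its nearest point on the preset track or pattern; the nearest-point rule and the strength constant are assumptions, not the patent's exact formulation:

```python
import numpy as np

def gravity_step(pos, vel, attractors, strength, dt):
    """One aggregation-mode update: accelerate each particle toward the
    nearest point of the preset track/pattern (the attractor set)."""
    diff = attractors[None, :, :] - pos[:, None, :]   # (particles, attractors, 3)
    nearest = (diff ** 2).sum(axis=2).argmin(axis=1)  # index of nearest attractor
    pull = attractors[nearest] - pos                  # direction toward the pattern
    vel = vel + strength * pull * dt                  # gravity pulls particles in
    return pos + vel * dt, vel
```

Iterating this step makes an initially scattered cloud condense onto the attractor set, producing the track/pattern display effect described above.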
- the apparatus may further include: a death determination module 840, configured to determine whether the particle is dead according to a life cycle of the particle of the target particle system and a generation time, and stop displaying the particle if the particle dies.
- the GPU records the generation time and life cycle of each particle when initializing the particle properties of the particle.
- the generation time and life cycle of each particle are recorded in the Alpha channel of PosRT and VelocityRT.
- the death judging module 840 can obtain the elapsed duration of each particle from its generation time and the current time, and compare this duration with the particle's life cycle; if the duration has reached or exceeded the life cycle, the particle can be determined to be dead, and the dead particle is then moved off the screen and no longer displayed.
- the apparatus may further include: a particle attribute update module 850, configured to update particle attributes of respective particles of the target particle system while the particles are still in a life cycle, and display the updated target Particles of the particle system.
- particle attributes can be divided into state-related particle attributes and state-independent particle attributes: a state-independent particle attribute is one that can be calculated purely by a closed-form function of the particle's initial properties and the current time, while a state-related particle attribute is one whose update calculation needs to read the previous frame's particle attributes as input or is otherwise state-dependent.
- State-related particle property updates require separate drawing steps, saving the updated particle properties to RT, and displaying the updated particles through the shader.
- the particle attribute update module 850 does not have to update the particles every frame; the update period can be set as needed, for example, particles simulating objects farther from the viewpoint can be given an update period of every 2 or 3 frames, and so on.
- the particle attribute update module 850 may update the state-related particle attribute according to the force state of the target particle system, and the force state of the target particle system may be processed by the CPU and then passed to the GPU, such as a CPU. While periodically transmitting the overall information of the target particle system to the GPU, the force state of the target particle system is sent to the GPU together.
- the particle attribute update module 850 can also update the particle attributes of the individual particles of the target particle system based on the key frame data of the target particle system.
- the key frame data of the target particle system may include a display object position, a change speed, or a display color of the at least one key frame corresponding time.
- the particle attribute update module 850 may determine the position information of each particle of the target particle system according to the display object position at the time corresponding to a key frame, adjusting the particles so that they are displayed at the key frame's display object position; it may uniformly adjust the velocity information of each particle of the target particle system according to the change speed at the time corresponding to the key frame; and it may uniformly adjust the color of each particle according to the display color at the time corresponding to the key frame, thereby achieving precise control of each particle of the target particle system.
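Uniform adjustment toward key-frame values can be sketched as interpolation between (time, value) key frames. Linear interpolation is an assumption here, since the source does not specify the blending rule:

```python
def keyframe_value(keyframes, t):
    """Interpolate a key-framed quantity (e.g. display position, change speed,
    or a color component) at time t. `keyframes` is a list of (time, value)
    pairs sorted by time; values outside the range clamp to the end frames."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)   # fraction of the way to the next key frame
            return v0 + a * (v1 - v0)
```

The module would apply the same interpolated value uniformly to every particle of the target particle system, as described above.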
- the particle attribute update module 850 is specifically configured to update the position information saved by the particle in the corresponding position rendering texture and the speed information saved in the corresponding speed rendering texture.
- the particle attribute update module 850 includes:
- the attribute change amount holding unit 851 is configured to calculate, according to the force state of the target particle system, the attribute change amounts of the state-related particle attributes in the target particle system, and to save the attribute change amounts in the temporary rendering texture,
- the amount of change in the attribute includes the amount of change in position and the amount of change in speed.
- the attribute change amount superimposing unit 852 is configured to superimpose the position change amount in the temporary rendering texture onto the position information in the corresponding particle's position rendering texture, and to superimpose the velocity change amount in the temporary rendering texture onto the velocity information in the corresponding particle's velocity rendering texture.
- suppose the position information saved in the position rendering texture before the update is u1 and the calculated position increment is u, and the velocity information saved in the velocity rendering texture before the update is v1 and the calculated velocity increment is v
- the increments u and v are saved in the temporary rendering texture and then superimposed, so that the position rendering texture holds u1+u and the velocity rendering texture holds v1+v
- the update process can be as shown in Figure 8.
- the gray area is the core of the RT-saving algorithm. Since TempRT holds only the increments, the saved previous-frame results do not need to be read, and TempRT can be released after the update processing; compared with the classic algorithm, two fewer RTs are used, at the cost of two additional Add passes.
- the processing device of the particle system further includes:
- the texture resource allocation module 860 is configured to allocate rendering texture resources to the target particle system according to the maximum particle emission rate and maximum life cycle of the target particle system.
- the idle rendering texture resource can be managed by establishing a multi-level order linked list, and then the rendering texture resource is allocated to the target particle system from the idle rendering texture resource according to the partner algorithm.
- the linked list at level n manages RT resources of size 1*2^n, that is, the RT block size managed by each level's linked list is twice that of the previous level; an RT resource in the level-n linked list can be further divided into multiple sub-blocks, for example, one 1*2^n resource can be divided into two 1*2^(n-1) RT resources.
- taking RT resource blocks of size 4 as an example of managing allocation with the order linked lists: when the target particle system needs 4 RTs, the linked list whose block size is 4 is checked first; if that list has a free block, an RT resource block of size 4 can be allocated directly, otherwise the next-level linked list (block size 8) is searched. If the linked list managing RT resource blocks of size 8 has a free block, one idle RT resource block is split into two RT resource blocks of size 4, one of which is allocated to the target particle system while the other is added to the linked list managing size-4 blocks, and so on.
- when the target particle system releases an RT resource block, if there is currently an idle RT resource block of the same size as the released block, the two blocks of the same size are merged into the next-level linked list; for example, when the target particle system releases an RT resource block of size 4 and an idle block of the same size 4 is found, the two can be merged into one RT resource block of size 8 and placed into the linked list managing size-8 RT resource blocks, and so on.
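The multi-level free lists plus split/merge behavior described above amount to a buddy allocator. The following is a simplified sketch; block offsets, the aligned-buddy merge rule, and all names are assumptions rather than the patent's exact implementation:

```python
class RTBuddyAllocator:
    """Multi-level free lists ("order" linked lists) with buddy split/merge.
    Level n holds free blocks of size 2**n; allocation splits a larger block,
    and releasing merges a pair of equal-size free buddies into the next level."""
    def __init__(self, max_level):
        self.max_level = max_level
        self.free = {n: [] for n in range(max_level + 1)}
        self.free[max_level].append(0)   # one block covering the whole pool

    def alloc(self, size):
        level = (size - 1).bit_length()  # smallest n with 2**n >= size
        n = level
        while n <= self.max_level and not self.free[n]:
            n += 1                       # look in larger-block lists
        if n > self.max_level:
            return None                  # no idle RT resources left
        offset = self.free[n].pop()
        while n > level:                 # split down to the requested size
            n -= 1
            self.free[n].append(offset + (1 << n))  # one half stays free
        return offset

    def release(self, offset, size):
        n = (size - 1).bit_length()
        buddy = offset ^ (1 << n)        # aligned buddy of this block
        if buddy in self.free[n]:        # merge equal-size free pair upward
            self.free[n].remove(buddy)
            self.release(min(offset, buddy), 1 << (n + 1))
        else:
            self.free[n].append(offset)
```

For example, allocating 4 RTs from an empty pool splits the top-level block down to size 4, and releasing both size-4 blocks coalesces everything back into one block.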
- the CPU may also allocate a rendering texture resource to the target particle system and inform the GPU of the RT resource allocated to the target particle system.
- the apparatus further includes:
- the pattern information receiving module 880 is configured to receive pattern information of the target particle system sent by the CPU, where the pattern information includes pixel position information and a generation time of each pixel.
- the CPU may write the pattern information of the target particle system (for example, a color picture) into a specified storage space, such as memory, a hard disk, or video memory, and the pattern information receiving module 880 loads the pattern information from the specified storage space.
- the CPU may generate a color image from a target black-and-white image by traversing each pixel of the black-and-white image one by one: when a pixel's color is greater than 0 (non-black), the rgb channels of a pixel in the color image record the position of that pixel, and its alpha channel records information such as the generation time and display duration of that pixel; in this way the position and time information of every pixel whose color is greater than 0 is saved, in order, in the pixels of the color image.
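A sketch of this encoding step, packing each non-black pixel's position into rgb and a sequential generation time into alpha; the normalized coordinates and the fixed time step are assumptions of the sketch:

```python
import numpy as np

def encode_pattern(bw_image, gen_interval=1.0):
    """Encode every non-black pixel of a black-and-white image as one texel:
    rgb carries the (normalized) pixel position, alpha carries a sequential
    generation time."""
    h, w = bw_image.shape
    ys, xs = np.nonzero(bw_image)                      # pixels whose color is > 0
    encoded = np.zeros((len(xs), 4), dtype=np.float32)
    encoded[:, 0] = xs / w                             # r: x position
    encoded[:, 1] = ys / h                             # g: y position
    # b stays 0 for a flat pattern; a 3D mesh image would store z here
    encoded[:, 3] = np.arange(len(xs)) * gen_interval  # alpha: generation time
    return encoded
```

The GPU can later restore the original pattern by spawning particles at the encoded positions in generation-time order.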
- the CPU transmits the pattern information of the obtained color image to the GPU.
- for an exemplary black-and-white picture as shown in FIG. 12, the left side 1201 shows the rgb channel of the black-and-white picture and the right side 1202 shows its alpha channel; according to the position information in the rgb channel and the time information in the alpha channel of the black-and-white image, the CPU can generate the color image 1203 on the right. The color of each pixel of the color image is determined according to the positions of the non-zero pixels of the black-and-white image, and the alpha channel of each pixel records information such as the generation time and display duration of the pixels whose color is greater than 0.
- the black and white image may be an image of a text pattern.
- the CPU may also generate the color image from a stereo model image (3D mesh image); in that case the rgb channels of the color image's pixels store the position coordinates of the vertices of the stereo model image.
- the black-and-white image from which the CPU generates the color pattern may be a text pattern; since an image generated from a text pattern has a very small resolution (32*32 by default), it can be generated in real time.
- the particle attribute initialization module 820 can also be used to:
- initialize the position information and generation time of each particle of the target particle system according to the pixel position information and the generation time of each pixel in the pattern information, in combination with the overall attribute information of the target particle system.
- in this way, the GPU can subsequently restore on the screen the original image corresponding to the pattern information, such as the target black-and-white image or the stereo model image; the pixel position information and pixel generation times extracted from the pattern information allow the display position and generation time of each particle of the particle system to be determined more accurately, achieving finer-grained particle display control.
- the processing device of the particle system of the embodiment of the present invention receives the overall attribute information of the particle system sent by the CPU, generates particles, and displays the generated particles and manages the life cycle.
- the embodiment of the present application substantially reduces data transmission between the CPU and the GPU, and reduces the number of times and the frequency of waiting for CPU data transmission, thus effectively improving the processing efficiency of the particle system.
- when processing particle attribute updates, the CPU usually needs two or more pairs of RTs to save the position information and velocity information of the particles' current frame and previous frame, whereas when the processing device of the particle system in the embodiment of the present application processes a particle attribute update, it only needs to save the velocity and position increments, saving at least one position texture and one velocity texture.
- the processing device 1300 of the particle system in this embodiment may include at least one processor (CPU) 1301, a GPU 1303, a memory 1304, a display screen 1305, and at least one communication bus 1307.
- the communication bus 1307 is used to implement connection communication between the above components.
- the memory 1304 includes at least one shader shader, and when the at least one shader is executed by the GPU 1303, the following operations are performed:
- Receiving overall attribute information of the target particle system sent by the CPU, and the overall attribute information of the target particle system includes a particle display range, a particle life cycle range, a particle speed range, and a generation time;
- Individual particles of the target particle system are displayed based on particle properties of individual particles in the target particle system.
- the at least one Shader may also be configured to perform the following operations:
- Determine whether a particle is dead according to the life cycle and generation time of the particles of the target particle system; if the particle dies, stop displaying the particle.
- the at least one shader described above can also be configured to perform the following operations:
- the at least one shader being configured to initialize the particle attributes of the respective particles of the target particle system specifically includes:
- the at least one shader is configured to perform displaying the particles of the target particle system according to particle properties of respective particles in the target particle system, including:
- the at least one shader configured to perform the particle attribute updating of each particle of the target particle system specifically includes:
- the at least one shader is configured to perform position information saved by the update particle in the position rendering texture and speed information saved in the speed rendering texture, including:
- the position change amount in the temporary rendering texture is superimposed to the position information in the position rendering texture of the corresponding particle, and the speed change amount in the temporary rendering texture is superimposed to the velocity information in the velocity rendering texture of the corresponding particle.
- the overall attribute information of the target particle system further includes a maximum particle emission rate and a maximum life cycle
- before saving the position information and generation time of the particles in the position rendering texture and saving the velocity information and life cycle of the particles in the velocity rendering texture, the at least one shader is further configured to execute:
- allocating rendering texture resources to the target particle system according to the maximum particle emission rate and maximum life cycle of the target particle system.
- the at least one shader being configured to allocate rendering texture resources to the target particle system according to the maximum particle emission rate and maximum life cycle of the target particle system specifically includes:
- allocating rendering texture resources to the target particle system from idle rendering texture resources according to a multi-level order linked list that manages the idle rendering texture resources and a buddy algorithm.
- the overall attribute information further includes key frame data of the target particle system, and the key frame data of the target particle system includes a display object position, a change speed, or a display color of the at least one key frame corresponding time;
- the at least one Shader described above is also configured to execute:
- before initializing the particle attributes of the respective particles of the target particle system, the at least one shader is further configured to execute:
- the at least one Shader being configured to initialize the particle attributes of the respective particles of the target particle system further includes:
- the position information and the generation time of each particle of the target particle system are initialized according to the pixel position information in the pattern information and the generation time of each pixel in combination with the overall attribute information of the target particle system.
- the at least one shader may include the overall attribute information receiving module 810, the particle attribute initialization module 820, and the particle display module 830 shown in FIG.
- the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Image Generation (AREA)
Claims (19)
- A processing method for a particle system, the method comprising: receiving overall attribute information of a target particle system sent by a central processing unit (CPU), the overall attribute information of the target particle system including a particle display range, a particle lifetime range, a particle speed range, and a generation time; generating particles of the target particle system according to the overall attribute information of the target particle system and initializing particle attributes of individual particles of the target particle system, wherein the particle attributes of the individual particles include position information, speed information, a lifetime, and a generation time of each particle; and displaying the individual particles of the target particle system according to the particle attributes of the individual particles in the target particle system.
- The processing method for a particle system according to claim 1, wherein after displaying the individual particles of the target particle system according to the particle attributes of the individual particles in the target particle system, the method further comprises: determining, according to the lifetime and generation time of a particle of the target particle system, whether the particle is dead, and if the particle is dead, stopping displaying the particle.
- The processing method for a particle system according to claim 2, wherein the method further comprises: if a particle is still within its lifetime, updating the particle attributes of the individual particles of the target particle system, and displaying the updated particles of the target particle system.
- The processing method for a particle system according to claim 3, wherein initializing the particle attributes of the individual particles of the target particle system comprises: saving the position information and generation time of each particle in a position render texture, and saving the speed information and lifetime of each particle in a velocity render texture; displaying the particles of the target particle system according to the particle attributes of the individual particles in the target particle system comprises: sampling the position information saved in the position render texture and the speed information saved in the velocity render texture for each particle, thereby displaying the corresponding particle; and updating the particle attributes of the individual particles of the target particle system comprises: updating the position information saved in the position render texture and the speed information saved in the velocity render texture for each particle.
- The processing method for a particle system according to claim 4, wherein updating the position information saved in the position render texture and the speed information saved in the velocity render texture comprises: calculating, according to the force state of the target particle system, attribute change amounts of particles in the target particle system whose particle attributes are state-dependent, and saving the attribute change amounts in a temporary render texture, the attribute change amounts including a position change amount and a speed change amount; and superimposing the position change amount in the temporary render texture onto the position information in the position render texture of the corresponding particle, and superimposing the speed change amount in the temporary render texture onto the speed information in the velocity render texture of the corresponding particle.
- The processing method for a particle system according to claim 4, wherein the overall attribute information of the target particle system further includes a maximum particle emission rate and a maximum lifetime; and before saving the position information and generation time of each particle in the position render texture and saving the speed information and lifetime of each particle in the velocity render texture, the method further comprises: allocating a render texture resource to the target particle system according to the maximum particle emission rate and the maximum lifetime of the target particle system.
- The processing method for a particle system according to claim 6, wherein allocating a render texture resource to the target particle system according to the maximum particle emission rate and the maximum lifetime of the target particle system comprises: allocating the render texture resource to the target particle system from free render texture resources according to a multi-level order linked list that manages the free render texture resources and a buddy algorithm.
- The processing method for a particle system according to claim 1, wherein the overall attribute information further includes key frame data of the target particle system, the key frame data of the target particle system including a display object position, a change speed, or a display color at a time corresponding to at least one key frame; and the method further comprises: initializing or updating the particle attributes of the individual particles of the target particle system according to the key frame data of the target particle system.
- The processing method for a particle system according to claim 1, wherein before initializing the particle attributes of the individual particles of the target particle system, the method further comprises: receiving pattern information of the target particle system sent by the CPU, the pattern information carrying pixel position information and a generation time of each pixel; and initializing the particle attributes of the individual particles of the target particle system comprises: initializing the position information and generation time of the individual particles of the target particle system according to the pixel position information and the generation time of each pixel in the pattern information, in combination with the overall attribute information of the target particle system.
- A processing apparatus for a particle system, the apparatus comprising: a graphics processing unit (GPU); and a memory connected to the GPU, the memory storing a plurality of instruction modules including an overall attribute information receiving module, a particle attribute initialization module, and a particle display module; when the instruction modules are executed by the GPU, the following operations are performed: the overall attribute information receiving module is configured to receive overall attribute information of a target particle system sent by a central processing unit (CPU); the particle attribute initialization module is configured to generate particles of the target particle system according to the overall attribute information of the target particle system and initialize particle attributes of individual particles of the target particle system, wherein the particle attributes of the individual particles include position information, speed information, a lifetime, and a generation time of each particle; and the particle display module is configured to display the individual particles of the target particle system according to the particle attributes of the individual particles in the target particle system.
- The processing apparatus for a particle system according to claim 10, further comprising: a death determination module configured to determine, according to the lifetime and generation time of a particle of the target particle system, whether the particle is dead, and if the particle is dead, stop displaying the particle.
- The processing apparatus for a particle system according to claim 11, further comprising: a particle attribute update module configured to, while a particle is still within its lifetime, update the particle attributes of the individual particles of the target particle system and display the updated particles of the target particle system.
- The processing apparatus for a particle system according to claim 12, wherein the particle attribute initialization module is specifically configured to: save the position information and generation time of each particle in a position render texture, and save the speed information and lifetime of each particle in a velocity render texture; the particle display module is specifically configured to: sample the position information saved in the position render texture and the speed information saved in the velocity render texture for each particle, thereby displaying the corresponding particle; and the particle attribute update module is specifically configured to: update the position information saved in the position render texture and the speed information saved in the velocity render texture for each particle.
- The processing apparatus for a particle system according to claim 13, wherein the particle attribute update module comprises: an attribute change saving unit configured to calculate, according to the force state of the target particle system, attribute change amounts of particles in the target particle system whose particle attributes are state-dependent, and save the attribute change amounts in a temporary render texture, the attribute change amounts including a position change amount and a speed change amount; and an attribute change superimposing unit configured to superimpose the position change amount in the temporary render texture onto the position information in the position render texture of the corresponding particle, and superimpose the speed change amount in the temporary render texture onto the speed information in the velocity render texture of the corresponding particle.
- The processing apparatus for a particle system according to claim 13, wherein the overall attribute information of the target particle system further includes a maximum particle emission rate and a maximum lifetime; and the apparatus further comprises: a texture resource allocation module configured to allocate a render texture resource to the target particle system according to the maximum particle emission rate and the maximum lifetime of the target particle system.
- The processing apparatus for a particle system according to claim 15, wherein the texture resource allocation module is specifically configured to: allocate the render texture resource to the target particle system from free render texture resources according to a multi-level order linked list that manages the free render texture resources and a buddy algorithm.
- The processing apparatus for a particle system according to claim 10, wherein the overall attribute information further includes key frame data of the target particle system, the key frame data of the target particle system including a display object position, a change speed, or a display color at a time corresponding to at least one key frame; the particle attribute initialization module is further configured to initialize the particle attributes of the individual particles of the target particle system according to the key frame data of the target particle system; and the apparatus further comprises: a particle attribute update module configured to update the particle attributes of the individual particles of the target particle system according to the key frame data of the target particle system.
- The processing apparatus for a particle system according to claim 10, wherein the apparatus further comprises: a pattern information receiving module configured to receive pattern information of the target particle system sent by the CPU, the pattern information including pixel position information and a generation time of each pixel; and the particle attribute initialization module is specifically configured to: initialize the position information and generation time of the individual particles of the target particle system according to the pixel position information and the generation time of each pixel in the pattern information, in combination with the overall attribute information of the target particle system.
- A non-volatile machine-readable storage medium storing machine-readable instructions executable by a graphics processing unit (GPU) to perform the following operations: receiving overall attribute information of a target particle system sent by a central processing unit (CPU), the overall attribute information of the target particle system including a particle display range, a particle lifetime range, a particle speed range, and a generation time; generating particles of the target particle system according to the overall attribute information of the target particle system and initializing particle attributes of individual particles of the target particle system, wherein the particle attributes of the individual particles include position information, speed information, a lifetime, and a generation time of each particle; and displaying the individual particles of the target particle system according to the particle attributes of the individual particles in the target particle system.
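As a rough illustration of the texture-based particle state the claims describe — position and generation time in one render texture, speed and lifetime in another, change amounts accumulated through a temporary texture — here is a minimal CPU-side sketch in NumPy. It is a stand-in for the GPU shaders, not code from the disclosure: the array names, the gravity-only force model, and the parameter ranges are all illustrative assumptions.

```python
import numpy as np

# Illustrative CPU-side stand-in for the claimed GPU layout: one texel per
# particle, the "position render texture" holding (x, y, z, generation_time)
# and the "velocity render texture" holding (vx, vy, vz, lifetime).
rng = np.random.default_rng(0)
n = 1024
pos_tex = np.empty((n, 4), dtype=np.float32)   # xyz + generation time
vel_tex = np.empty((n, 4), dtype=np.float32)   # xyz speed + lifetime

def init_particles(display_range, speed_range, life_range, t0):
    # initialize within the overall attribute ranges sent by the CPU
    pos_tex[:, :3] = rng.uniform(*display_range, size=(n, 3))
    pos_tex[:, 3] = t0
    vel_tex[:, :3] = rng.uniform(*speed_range, size=(n, 3))
    vel_tex[:, 3] = rng.uniform(*life_range, size=n)

def update(dt, gravity=(0.0, -9.8, 0.0)):
    # per-particle change amounts go into a temporary texture first, then
    # are superimposed onto the position and velocity textures
    tmp_tex = np.zeros((n, 8), dtype=np.float32)
    tmp_tex[:, 0:3] = vel_tex[:, :3] * dt                         # position delta
    tmp_tex[:, 4:7] = np.asarray(gravity, dtype=np.float32) * dt  # speed delta
    pos_tex[:, :3] += tmp_tex[:, 0:3]
    vel_tex[:, :3] += tmp_tex[:, 4:7]

def alive(now):
    # a particle is dead once its age exceeds its lifetime
    return (now - pos_tex[:, 3]) < vel_tex[:, 3]
```

On a GPU the same layout would live in floating-point render textures sampled by the display shader; keeping generation time alongside position and lifetime alongside speed lets the death test read both values for a particle from a single texel of each texture.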
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020187018114A KR102047615B1 (ko) | 2016-05-16 | 2017-05-11 | 입자 시스템을 위한 처리 방법 및 장치 |
US16/052,265 US10699365B2 (en) | 2016-05-16 | 2018-08-01 | Method, apparatus, and storage medium for processing particle system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610324183.1 | 2016-05-16 | ||
CN201610324183.1A CN107392835B (zh) | 2016-05-16 | 2016-05-16 | 一种粒子系统的处理方法及装置 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/052,265 Continuation US10699365B2 (en) | 2016-05-16 | 2018-08-01 | Method, apparatus, and storage medium for processing particle system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017198104A1 true WO2017198104A1 (zh) | 2017-11-23 |
Family
ID=60325664
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/083917 WO2017198104A1 (zh) | 2016-05-16 | 2017-05-11 | 一种粒子系统的处理方法及装置 |
Country Status (4)
Country | Link |
---|---|
US (1) | US10699365B2 (zh) |
KR (1) | KR102047615B1 (zh) |
CN (1) | CN107392835B (zh) |
WO (1) | WO2017198104A1 (zh) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109191550A (zh) * | 2018-07-13 | 2019-01-11 | 乐蜜有限公司 | Particle rendering method and apparatus, electronic device, and storage medium |
CN109903359A (zh) * | 2019-03-15 | 2019-06-18 | 广州市百果园网络科技有限公司 | Particle display method and apparatus, mobile terminal, and storage medium |
CN112700518A (zh) * | 2020-12-28 | 2021-04-23 | 北京字跳网络技术有限公司 | Method for generating a trailing visual effect, method for generating a video, and electronic device |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108665531A (zh) * | 2018-05-08 | 2018-10-16 | 阿里巴巴集团控股有限公司 | Transformation method and apparatus for a 3D particle model |
CN108933895A (zh) * | 2018-07-27 | 2018-12-04 | 北京微播视界科技有限公司 | Three-dimensional particle effect generation method, apparatus, and electronic device |
CN110213638B (zh) * | 2019-06-05 | 2021-10-08 | 北京达佳互联信息技术有限公司 | Animation display method, apparatus, terminal, and storage medium |
CN110415326A (zh) * | 2019-07-18 | 2019-11-05 | 成都品果科技有限公司 | Method and apparatus for implementing a particle effect |
CN111815749A (zh) * | 2019-09-03 | 2020-10-23 | 厦门雅基软件有限公司 | Particle computation method, apparatus, electronic device, and computer-readable storage medium |
CN112215932B (zh) * | 2020-10-23 | 2024-04-30 | 网易(杭州)网络有限公司 | Particle animation processing method, apparatus, storage medium, and computer device |
CN112270732B (zh) * | 2020-11-17 | 2024-06-25 | Oppo广东移动通信有限公司 | Particle animation generation method, processing apparatus, electronic device, and storage medium |
CN113763701B (zh) * | 2021-05-26 | 2024-02-23 | 腾讯科技(深圳)有限公司 | Road condition information display method, apparatus, device, and storage medium |
CN117194055B (zh) * | 2023-11-06 | 2024-03-08 | 西安芯云半导体技术有限公司 | Method, apparatus, and storage medium for GPU video memory allocation and release |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1753031A (zh) * | 2005-11-10 | 2006-03-29 | 北京航空航天大学 | GPU-based particle system |
CN101452582A (zh) * | 2008-12-18 | 2009-06-10 | 北京中星微电子有限公司 | Method and apparatus for implementing three-dimensional video special effects |
CN102722859A (zh) * | 2012-05-31 | 2012-10-10 | 北京像素软件科技股份有限公司 | Computer simulation scene rendering method |
US8289327B1 (en) * | 2009-01-21 | 2012-10-16 | Lucasfilm Entertainment Company Ltd. | Multi-stage fire simulation |
CN103714568A (zh) * | 2013-12-31 | 2014-04-09 | 北京像素软件科技股份有限公司 | Implementation method for a large-scale particle system |
CN104571993A (zh) * | 2014-12-30 | 2015-04-29 | 北京像素软件科技股份有限公司 | Particle system processing method, graphics card, and mobile application platform |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB0104931D0 (en) * | 2001-02-28 | 2001-04-18 | Univ Leeds | Object interaction simulation |
US20060155576A1 (en) * | 2004-06-14 | 2006-07-13 | Ryan Marshall Deluz | Configurable particle system representation for biofeedback applications |
CA2667538C (en) * | 2006-10-27 | 2015-02-10 | Thomson Licensing | System and method for recovering three-dimensional particle systems from two-dimensional images |
KR100898989B1 (ko) * | 2006-12-02 | 2009-05-25 | 한국전자통신연구원 | Apparatus and method for generating and representing foam on a water surface |
KR100889601B1 (ko) * | 2006-12-04 | 2009-03-20 | 한국전자통신연구원 | Apparatus and method for representing waves and foam using water particle data |
CN102426692A (zh) * | 2011-08-18 | 2012-04-25 | 北京像素软件科技股份有限公司 | Particle drawing method |
CN102982506A (zh) * | 2012-11-13 | 2013-03-20 | 沈阳信达信息科技有限公司 | GPU-based particle system optimization |
CN104143208A (zh) * | 2013-05-12 | 2014-11-12 | 哈尔滨点石仿真科技有限公司 | Real-time rendering method for large-scale realistic snow scenes |
CN104022756B (zh) * | 2014-06-03 | 2016-09-07 | 西安电子科技大学 | Improved particle filtering method based on GPU architecture |
CN104778737B (zh) * | 2015-03-23 | 2017-10-13 | 浙江大学 | GPU-based real-time rendering method for large-scale falling leaves |
CN104700446B (zh) * | 2015-03-31 | 2017-10-03 | 境界游戏股份有限公司 | Method for updating particle vertex data in a particle system |
US9905038B2 (en) * | 2016-02-15 | 2018-02-27 | Nvidia Corporation | Customizable state machine for visual effect insertion |
- 2016-05-16 CN CN201610324183.1A patent/CN107392835B/zh active Active
- 2017-05-11 KR KR1020187018114A patent/KR102047615B1/ko active IP Right Grant
- 2017-05-11 WO PCT/CN2017/083917 patent/WO2017198104A1/zh active Application Filing
- 2018-08-01 US US16/052,265 patent/US10699365B2/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1753031A (zh) * | 2005-11-10 | 2006-03-29 | 北京航空航天大学 | GPU-based particle system |
CN101452582A (zh) * | 2008-12-18 | 2009-06-10 | 北京中星微电子有限公司 | Method and apparatus for implementing three-dimensional video special effects |
US8289327B1 (en) * | 2009-01-21 | 2012-10-16 | Lucasfilm Entertainment Company Ltd. | Multi-stage fire simulation |
CN102722859A (zh) * | 2012-05-31 | 2012-10-10 | 北京像素软件科技股份有限公司 | Computer simulation scene rendering method |
CN103714568A (zh) * | 2013-12-31 | 2014-04-09 | 北京像素软件科技股份有限公司 | Implementation method for a large-scale particle system |
CN104571993A (zh) * | 2014-12-30 | 2015-04-29 | 北京像素软件科技股份有限公司 | Particle system processing method, graphics card, and mobile application platform |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109191550A (zh) * | 2018-07-13 | 2019-01-11 | 乐蜜有限公司 | Particle rendering method and apparatus, electronic device, and storage medium |
CN109191550B (zh) * | 2018-07-13 | 2023-03-14 | 卓米私人有限公司 | Particle rendering method and apparatus, electronic device, and storage medium |
CN109903359A (zh) * | 2019-03-15 | 2019-06-18 | 广州市百果园网络科技有限公司 | Particle display method and apparatus, mobile terminal, and storage medium |
CN109903359B (zh) * | 2019-03-15 | 2023-05-05 | 广州市百果园网络科技有限公司 | Particle display method and apparatus, mobile terminal, and storage medium |
CN112700518A (zh) * | 2020-12-28 | 2021-04-23 | 北京字跳网络技术有限公司 | Method for generating a trailing visual effect, method for generating a video, and electronic device |
CN112700518B (zh) * | 2020-12-28 | 2023-04-07 | 北京字跳网络技术有限公司 | Method for generating a trailing visual effect, method for generating a video, and electronic device |
Also Published As
Publication number | Publication date |
---|---|
US10699365B2 (en) | 2020-06-30 |
CN107392835A (zh) | 2017-11-24 |
KR20180087356A (ko) | 2018-08-01 |
CN107392835B (zh) | 2019-09-13 |
US20180342041A1 (en) | 2018-11-29 |
KR102047615B1 (ko) | 2019-11-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017198104A1 (zh) | Processing method and apparatus for a particle system | |
KR102327144B1 (ko) | Graphics processing apparatus and method of performing a tile-based graphics pipeline in the graphics processing apparatus | |
US10055893B2 (en) | Method and device for rendering an image of a scene comprising a real object and a virtual replica of the real object | |
KR101563098B1 (ko) | Graphics processing unit with command processor | |
JP5960368B2 (ja) | Rendering of graphics data using visibility information | |
US20080180440A1 (en) | Computer Graphics Shadow Volumes Using Hierarchical Occlusion Culling | |
KR102381945B1 (ko) | Graphics processing apparatus and method of performing a graphics pipeline in the graphics processing apparatus | |
JP2008077627A (ja) | Early Z-test method and system for rendering of three-dimensional images | |
KR20080090671A (ko) | Method and apparatus for mapping a texture onto a 3D object model | |
WO2021253640A1 (zh) | Shadow data determination method, apparatus, device, and readable medium | |
KR101670958B1 (ko) | Data processing method and apparatus in a heterogeneous multi-core environment | |
US20160042558A1 (en) | Method and apparatus for processing image | |
CN115701305A (zh) | Shadow filtering | |
CN110415326A (zh) | Method and apparatus for implementing a particle effect | |
US9704290B2 (en) | Deep image identifiers | |
CN115082609A (zh) | Image rendering method, apparatus, storage medium, and electronic device | |
US9406165B2 (en) | Method for estimation of occlusion in a virtual environment | |
US10262391B2 (en) | Graphics processing devices and graphics processing methods | |
US20230316626A1 (en) | Image rendering method and apparatus, computer device, and computer-readable storage medium | |
KR102147357B1 (ko) | Apparatus and method for managing commands | |
JP6235926B2 (ja) | Information processing apparatus, generation method, program, and recording medium | |
CN116670719A (zh) | Graphics processing method, apparatus, and electronic device | |
KR101227183B1 (ko) | Apparatus and method for stereoscopic rendering of a 3D graphics model | |
JP2007141078A (ja) | Program, information storage medium, and image generation system | |
WO2022135050A1 (zh) | Rendering method, device, and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 20187018114 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020187018114 Country of ref document: KR |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17798667 Country of ref document: EP Kind code of ref document: A1 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17798667 Country of ref document: EP Kind code of ref document: A1 |