WO2017174006A1 - Method and apparatus for picture processing (图片处理的方法和装置) - Google Patents

Method and apparatus for picture processing (图片处理的方法和装置)

Info

Publication number
WO2017174006A1
WO2017174006A1 (PCT/CN2017/079587)
Authority
WO
WIPO (PCT)
Prior art keywords
interaction
map
image frame
height
normal map
Prior art date
Application number
PCT/CN2017/079587
Other languages
English (en)
French (fr)
Inventor
解卫博
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201610210562.8A external-priority patent/CN105912234B/zh
Priority claimed from CN201610209641.7A external-priority patent/CN105913471B/zh
Application filed by 腾讯科技(深圳)有限公司
Priority to KR1020187018499A priority Critical patent/KR102108244B1/ko
Publication of WO2017174006A1 publication Critical patent/WO2017174006A1/zh
Priority to US16/152,618 priority patent/US10839587B2/en

Classifications

    • G06T13/60: 3D [Three Dimensional] animation of natural phenomena, e.g. rain, snow, water or plants
    • A63F13/25: Output arrangements for video game devices
    • A63F13/52: Controlling the output signals based on the game progress, involving aspects of the displayed game scene
    • G06T1/60: Memory management
    • G06T13/20: 3D [Three Dimensional] animation
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures
    • G06T15/02: Non-photorealistic rendering
    • G06T15/20: Perspective computation
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T7/50: Depth or shape recovery
    • A63F2300/663: Rendering three dimensional images for simulating liquid objects, e.g. water, gas, fog, snow, clouds
    • G06T2210/08: Bandwidth reduction
    • G06T2210/21: Collision detection, intersection
    • G06T2210/24: Fluid dynamics
    • G06T2219/2016: Rotation, translation, scaling

Definitions

  • the present application relates to the field of picture processing, and in particular, to a method and apparatus for picture processing.
  • In online games, the main method for realizing the interaction effect of grass is to use the CPU to perform intersection detection between the player and each grass entity, record which grass is affected by the interaction according to the detection result, then lock the vertex buffer (Vertex Buffer) and render the interactive grasses one by one.
  • This is not efficient enough when the number of game players and grass entities is large (for example, 200 game players); in that case, the above solution cannot smoothly and realistically realize the animation effect of grass interacting with game players.
  • If the number of game players is N and the number of grasses is M, N*M intersection detections are required first; this operation consumes a great deal of performance when there are many game players. In addition, each grass that interacts needs its corresponding vertex buffer locked and rendered individually.
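The N*M cost criticized above can be made concrete with a small sketch of the CPU-side brute-force pass (function and parameter names are illustrative, not from the patent):

```python
import math

def naive_interaction_pass(players, grasses, radius):
    """CPU-side brute force: test every (player, grass) pair for intersection.

    players, grasses: lists of (x, y) positions; radius: interaction radius.
    Returns indices of grass blades affected by at least one player.
    Cost is N*M distance checks per frame, which is the bottleneck the
    GPU-based approach in this application aims to remove.
    """
    affected = set()
    for px, py in players:
        for gi, (gx, gy) in enumerate(grasses):
            if math.hypot(gx - px, gy - py) <= radius:
                affected.add(gi)
    return sorted(affected)
```

With N players and M grasses this loop runs N*M times per frame, before any vertex-buffer locking even begins.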
  • The image of the existing virtual scene is realized by an interaction map whose size is very large, such as 2048*2048: a single interaction map covers the entire interaction area in the virtual scene, so the precision per unit area is uniform across the whole area. However, because the interaction map is so large, the number of pixels to be calculated, which is proportional to its size, is also very large, which severely harms the performance of the virtual scene; for example, the frame rate drops dramatically after the virtual scene is opened.
  • the embodiment of the present application provides a method and an apparatus for processing a picture, so as to at least solve the technical problem that the display effect of the second object is relatively poor when the first object and the second object interact in the prior art.
  • A method for image processing includes: acquiring an interaction area where the interaction space of a first object in a current image frame intersects the first plane where a second object is located, where a target object among the second objects is located in the interaction area; generating a normal map of a water wave animation corresponding to the interaction area, where the normal map shows a plurality of corrugations; and using a first target corrugation among the plurality of corrugations to move the target object to a position in the current image frame corresponding to the corrugation position of the first target corrugation in the normal map, where the first target corrugation corresponds to the target object.
  • An apparatus for image processing includes: a first acquiring unit, configured to acquire an interaction area where the interaction space of a first object in a current image frame intersects the first plane where a second object is located, where a target object among the second objects is located in the interaction area; a generating unit, configured to generate a normal map of a water wave animation corresponding to the interaction area, where the normal map shows a plurality of corrugations; and a moving unit, configured to use a first target corrugation among the plurality of corrugations to move the target object to a position in the current image frame corresponding to the corrugation position of the first target corrugation in the normal map, where the first target corrugation corresponds to the target object.
  • In the embodiments, a normal map of the water wave animation corresponding to the interaction area is generated, and the target object is moved according to the first target corrugation corresponding to it. This achieves the purpose of adjusting the position of the target object through the normal map, so that the position of the second object changes with the water wave animation in the normal map, thereby solving the technical problem in the prior art that the display effect of the second object is relatively poor when the first object and the second object interact.
  • The distant view and the close view of the image in the virtual scene are formed by interaction maps of different levels: a high-precision interaction map forms the close view, and a low-precision interaction map forms the distant view. This avoids the unnecessary waste caused by using a high-precision interaction map for the distant interaction area, greatly reduces memory consumption and the number of pixels involved in the full-screen operations of the real-time update, and thereby significantly reduces memory overhead and improves performance. It also avoids uneven precision between interaction regions and sudden changes in precision at the edges where interaction regions meet, improving the accuracy of the interaction.
  • FIG. 1 is an architectural diagram of a hardware environment in accordance with an embodiment of the present application.
  • FIG. 2 is a flowchart of a method for image processing according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an optional game player interaction space in accordance with an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an optional water wave height map of a first image frame according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of optional pixel diffusion according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of an optional normal map of a first image frame water wave animation according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of an apparatus for image processing according to an embodiment of the present application.
  • FIG. 8 is a hardware structural diagram of a terminal according to an embodiment of the present application.
  • GPU: Graphics Processing Unit, i.e., the processor on the graphics card.
  • Players can change the shape and look-and-feel of some elements in the game world by interacting with them, making the player feel that the game world can be changed or influenced by him.
  • Vertex buffer: stores the model's stream of vertex information, including position, texture coordinates, tangent-space vectors, and other data.
  • Draw call: the process in which the engine prepares the data and notifies the GPU to draw is called a draw call.
  • Instancing rendering: the GPU draws, in a single call, multiple instances of a model in the scene that share the same model data.
  • Water wave animation: a model interaction affects the height map; the height map is then attenuated and diffused outward to generate a realistic water wave animation.
  • Vertex perturbation: offsets the model's vertices via a vertex texture or randomly, so that the top of the model appears to change dynamically.
  • a method of picture processing is provided.
  • The method for image processing described above may be applied to a hardware environment formed by the server 104 and the terminal 102, as shown in FIG. 1.
  • the server 104 is connected to the terminal 102 through a network.
  • the network includes but is not limited to a wide area network, a metropolitan area network, or a local area network.
  • The terminal 102 includes, but is not limited to, a computer, a mobile phone, a tablet, or the like.
  • FIG. 2 is a flow chart of a method of picture processing in accordance with an embodiment of the present application. As shown in FIG. 2, the method for image processing includes the following steps:
  • Step S202: Acquire an interaction area where the interaction space of the first object in the current image frame intersects with the first plane where the second object is located, where the target object in the second object is located in the interaction area.
  • The first object may be a game player, a game skill, a game special effect, or the like; the second object may be an object susceptible to the first object, such as grass, flowers, and fog. The first plane is the plane where the second object is located, for example, the ground where the grass is located.
  • The actor (i.e., the first object mentioned above) carries an interaction node. The interaction node can generate interaction spaces of different sizes and ranges according to different configurations, as shown in FIG. 3; in FIG. 3, the white sphere is the interaction space.
  • The interaction node can trigger an interaction in the virtual scene image, and the effect of the interaction is then reflected through the real-time update of the image in the virtual scene. The impact of the interaction is the influence on surrounding elements of the motion triggered by the interaction node.
  • the interaction point generated when the interaction node intersects with the elements in the virtual scene image is obtained according to the configuration information corresponding to the interaction node.
  • Each interaction node has corresponding configuration information, and the configuration information is used to control the generated interaction, for example, control the scope of the interaction.
  • For example, an interaction node may be a sphere bound at a certain point; the interaction is determined by the sphere's intersection with other elements in the virtual scene, thereby obtaining its influence on those elements.
  • Non-player characters (NPC for short), game skills, and game special effects are configured with interaction nodes according to requirements.
  • An interaction area is formed when the interaction space of the first object intersects with the ground where the grass is located. When the interaction space is a sphere, its intersection with the first plane is a circular section.
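For a spherical interaction space, the circular section follows from elementary geometry: if the sphere's center sits at distance d from the plane, the section radius is sqrt(R^2 - d^2). A minimal sketch (the function and its parameter names are illustrative, not from the patent):

```python
import math

def interaction_circle(center, radius, plane_z=0.0):
    """Intersect a spherical interaction space with the horizontal plane z = plane_z.

    center: (x, y, z) of the sphere; radius: sphere radius.
    Returns (cx, cy, r) describing the circular section, or None if the
    sphere does not reach the plane.
    """
    d = abs(center[2] - plane_z)            # distance from sphere center to plane
    if d > radius:
        return None                         # no intersection, no interaction area
    r = math.sqrt(radius * radius - d * d)  # radius of the circular cut
    return (center[0], center[1], r)
```

For example, a sphere of radius 5 whose center is 3 units above the ground yields a circular interaction area of radius 4.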
  • Step S204: Generate a normal map of the water wave animation corresponding to the interaction area, where the normal map shows a plurality of corrugations.
  • In this embodiment, a normal map of the water wave animation corresponding to the circular section may be generated from the interaction region (e.g., the circular section); the normal map includes a plurality of corrugations (i.e., water ripples). The generated normal map of the water wave animation can be applied to the circular section, and the number of water ripples in the normal map is the number of ripples that the circular section needs to display in the current image frame.
  • Step S206: Move the target object to a position in the current image frame corresponding to the corrugation position of the first target corrugation in the normal map by using the first target corrugation among the plurality of corrugations, where the first target corrugation corresponds to the target object.
  • The target object is an object that moves under the influence of the normal map; for example, it may be part or all of the grass in the interaction area.
  • For example, the first circle of grass is disturbed by the first target corrugation and moves to the corresponding position in the current image frame, where the corresponding position can be determined from the corrugation position of the first target corrugation in the normal map; the second circle of grass is not disturbed by the first target corrugation and moves to the position left after the first circle of grass moved. Here the first circle of grass is the target object, and the first target corrugation is the corrugation, among the plurality of corrugations in the normal map, that disturbs the first circle of grass.
  • In this embodiment, a normal map of the water wave animation corresponding to the interaction area is generated, and the target object is moved according to the first target corrugation corresponding to it, achieving the purpose of adjusting the position of the target object through the normal map so that the position of the second object follows the water wave animation. An animation effect can thus be displayed: for example, in a game, when a game character walks through the grass, the grass sways under the disturbance of the character, and a trailing effect similar to a water ripple spreading in one direction forms as the character moves.
  • the method of image processing described above can be applied to online games such as "Tianya Mingyue Knife", and the rendering of the target object in the interactive area is completed in the game engine of "Tianya Mingyue Knife".
  • The game character walks in any virtual scene in "Tianya Mingyue Knife". Suppose the virtual scene is grass; when the game character walks in the grass, the interaction space of the game character (such as the sphere shown in FIG. 3) intersects with the ground on which the grass is located (i.e., the first plane described above), resulting in an interaction region.
  • The online game client of "Tianya Mingyue Knife" generates a normal map of the water wave animation corresponding to the interaction area, and then uses the first target corrugation among the plurality of corrugations to move the grass corresponding to it in the interaction area to a corresponding position in the current image frame, where the corresponding position corresponds to the corrugation position of the first target corrugation in the normal map.
  • In this embodiment, the disturbance of the target object by the first object is implemented on the graphics card (GPU) in a vertex shader using the normal map of the water wave animation, instead of on the central processing unit (CPU), which improves the efficiency of processing the display data.
  • the description will be made by taking the first object as the game character and the target object as the grass.
  • Optionally, moving the target object to a position in the current image frame corresponding to the corrugation position of the first target corrugation by using the first target corrugation among the plurality of corrugations includes: adjusting the target object from a first form to a second form according to the normal map of the water wave animation, where the vertices of the target object in the second form are located at positions in the current image frame corresponding to the corrugation position of the first target corrugation in the normal map.
  • The interaction between the first object and the target object means that the action of the first object influences the shape of the target object. For example, a game character walking through the grass interacts with it, and the interaction leads to morphological changes in the grass, specifically swinging forward, backward, left, and right, or changes in height. As the game character walks through the grass, the grass on both sides of the character changes from upright (the first form) to bent (the second form); that is, the first form is adjusted to the second form.
  • The interaction between the first object and the target object does not necessarily refer only to objects that the first object has touched: when the game character walks in the grass, even grass nearby that it does not touch is affected in shape.
  • In this embodiment, the target object interacting with the first object may be disturbed by the normal map of the water wave animation, so that the target object is adjusted from the first form to the second form and exhibits a dynamically changing effect similar to a water ripple. The ways of doing so include, but are not limited to, the following.
  • Optionally, adjusting the target object from the first form to the second form according to the normal map of the water wave animation includes: obtaining the value ranges of the first component and the second component of the plurality of corrugations in the normal map, where the first component is the component in the x-axis direction of the normal map and the second component is the component in the y-axis direction of the normal map; obtaining the third component of the target object in the z-axis direction of the normal map; determining, according to the first component and the second component, a first offset distance and a first offset direction of the vertex coordinates of the target object in the plane of the x-axis and the y-axis, and determining, according to the third component, a second offset distance and a second offset direction of the target object on the z-axis, the first offset distance being within the value range; and controlling the vertex coordinates of the target object to be offset by the first offset distance in the first offset direction and by the second offset distance in the second offset direction.
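The offset computation above can be sketched for a single vertex. This is an illustrative CPU version of what the vertex shader would do; the linear scaling of the in-plane offset by the third (height) component, and all names, are assumptions for illustration:

```python
def perturb_vertex(vertex, sample, strength=1.0):
    """Offset one grass vertex by a sampled normal-map texel.

    vertex: (x, y, z) vertex coordinates; sample: (nx, ny, nz), where nx and
    ny are the first and second components in [-1, 1] and nz stands in for
    the third (height) component. A higher nz produces a larger disturbance,
    matching the text; the simple linear scaling is an assumption.
    """
    x, y, z = vertex
    nx, ny, nz = sample
    x += nx * nz * strength   # first offset: distance and direction in x
    y += ny * nz * strength   # first offset: distance and direction in y
    z += nz * strength        # second offset: along the z axis
    return (x, y, z)
```

A vertex sampled where nz is small barely moves, while one under a tall ripple swings noticeably, which is the behavior the paragraph describes.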
  • In this embodiment, the vertices of the target object may be disturbed using the texture information associated with the normal map of the water wave animation generated by the GPU, thereby changing the target object from the first form to the second form. Specifically, the value ranges of the x-axis component (i.e., the first component) and the y-axis component (i.e., the second component) of the plurality of corrugations included in the normal map may be acquired first.
  • The first offset distance and the first offset direction determine whether the target object is offset to the left or right, or forward or backward, and by what distance. The projection height of each target object on the z-axis determines the second offset distance and the second offset direction on the z-axis: if the projection of the target object on the z-axis is lower, the normal map disturbs the target object less, that is, the first and second offset distances are smaller and the offset of the target object is smaller; if the projection height on the z-axis is higher, the disturbance, and hence the offsets, are correspondingly larger.
  • The target object is thus adjusted from the first form to the second form, which more realistically displays the dynamic effect of the target object shifting with the movement of the first object. After the disturbance magnitude on the target object is determined (i.e., the first offset distance, the second offset distance, the first offset direction, and the second offset direction), the vertex coordinates of the target object are controlled to shift by the first offset distance in the first offset direction and by the second offset distance in the second offset direction.
  • Because the target object is perturbed by the normal map of the water wave animation, it also exhibits an animation effect similar to the water wave animation: the swinging of the target object under the disturbance of the first object can be displayed realistically, and the trailing effect formed when the first object passes through the target object is displayed in the game interface.
  • Optionally, the value range of the first component may be [-1, 1], and the value range of the second component may also be [-1, 1]. Setting the value ranges of the first and second components to [-1, 1] ensures that the target object can swing to the left, right, front, and back.
  • Optionally, obtaining the value ranges of the first component and the second component of the plurality of corrugations in the normal map specifically includes: obtaining the maximum and minimum values in the x-axis direction of the outermost corrugation among the plurality of corrugations, and the maximum and minimum values of the outermost corrugation in the y-axis direction; adjusting the coordinates of each of the plurality of corrugations on the x-axis and the y-axis so that the maximum and minimum values in the x-axis direction, and the maximum and minimum values in the y-axis direction, are at preset values; and taking the range indicated by the preset values as the value range.
  • the range of the first component may be [-1, 1], and the range of the second component may also be selected as [-1, 1].
  • The value ranges of the first and second components can be set to [-1, 1] as follows: first obtain the maximum and minimum values of the outermost corrugation in the normal map of the water wave animation in the x-axis and y-axis directions, then adjust the coordinates so that on each axis the maximum value of each ripple is not greater than a preset value (for example, the value 1) and the minimum value of each ripple is not less than a preset value (for example, the value -1).
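The remapping described above can be sketched as a linear rescale of ripple coordinates onto the preset [-1, 1] interval. As a simplification, the overall minima and maxima stand in for the outermost corrugation's extremes (which coincide when the outermost ripple encloses the rest):

```python
def normalize_components(coords):
    """Rescale ripple x/y coordinates so their extremes span [-1, 1].

    coords: list of (x, y) coordinates of the ripples. The minimum and
    maximum on each axis are mapped linearly onto the preset values -1 and
    1, as the text describes; degenerate ranges collapse to 0.
    """
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]

    def remap(v, lo, hi):
        # Map [lo, hi] linearly onto [-1, 1].
        return -1.0 + 2.0 * (v - lo) / (hi - lo) if hi > lo else 0.0

    return [(remap(x, min(xs), max(xs)), remap(y, min(ys), max(ys)))
            for x, y in coords]
```

After this step, every sampled first/second component lies in [-1, 1], so a vertex can be pushed in any of the four in-plane directions.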
  • Since the generated water wave animation has animation characteristics, driving the vertex coordinates of the target object with its normal map also produces an animation effect similar to the water wave animation, making the game simulation more realistic: the swinging of the target object when disturbed by the first object, and the trailing effect of the target object formed as the first object moves, can both be displayed realistically.
  • Optionally, the method provided by the present application may use instancing to render, in one pass, the grass disturbed by multiple game characters or special effects, adjusting the first form of all such grass to the second form.
  • Optionally, the method provided by the present application can support multiple players (for example, 200) interacting with the grass at the same time, simultaneously completing the processing of the form of all the grass interacting with them.
  • Optionally, all the target objects in the virtual scene use the same vertex buffer, in which the vertex coordinates of each target object, the position information of each target object, and other data are recorded.
  • Optionally, generating the normal map of the water wave animation corresponding to the interaction area specifically includes: determining the water surface height in the water wave animation height map of a first image frame based on the water wave animation height map of the previous image frame, where the first image frame is the image frame in the water wave animation corresponding to the current image frame, and the water surface height is the highest water surface height among the plurality of corrugations in the height map of the first image frame; generating the water wave animation height map of the current image frame according to the water surface height; and generating the normal map of the water wave animation according to the water wave animation height map of the current image frame.
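The last step, deriving a normal map from a height map, is not spelled out in this excerpt. A conventional approach, offered here as an illustrative sketch rather than the patent's exact method, is to take finite differences of the height field and normalize:

```python
def height_to_normal(height, scale=1.0):
    """Derive a per-pixel normal map from a water-wave height map.

    height: 2D list of floats. Each normal is (-dH/dx, -dH/dy, 1),
    normalized; gradients use central differences with clamped borders.
    The encoding and scale factor are assumptions, not from the patent.
    """
    rows, cols = len(height), len(height[0])
    normals = []
    for i in range(rows):
        row = []
        for j in range(cols):
            dhdx = height[min(i + 1, rows - 1)][j] - height[max(i - 1, 0)][j]
            dhdy = height[i][min(j + 1, cols - 1)] - height[i][max(j - 1, 0)]
            nx, ny, nz = -dhdx * scale, -dhdy * scale, 1.0
            length = (nx * nx + ny * ny + nz * nz) ** 0.5
            row.append((nx / length, ny / length, nz / length))
        normals.append(row)
    return normals
```

A flat height map yields the straight-up normal (0, 0, 1) everywhere; ripples tilt the normals away from vertical, which is what perturbs the grass vertices.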
  • The height map is used to carry the height data; the corresponding texture format may be RGBA16f.
  • In this embodiment, a water wave interaction algorithm is used to generate the normal map of the water wave animation, and the generated normal map then adjusts the target object from the first form to the second form. The water wave interaction algorithm is implemented on the GPU because this improves the efficiency of the water wave interaction and is more direct, avoiding the low interaction efficiency caused by a large amount of CPU calculation.
  • In this embodiment, the water wave interaction algorithm is divided into four steps: Step 1, copy the water wave animation height map of the previous image frame and use it as the initial height of the water wave height map of the first image frame; Step 2, generate the height data of the water wave height map of the first image frame according to the interaction node; Step 3, diffusion and attenuation; Step 4, calculate the normals.
  • Step 1: Copy the water wave animation height map of the previous image frame full-screen into the height map of the first image frame's water wave animation, as the starting height of the first image frame's height map. The number of corrugations of the water wave animation in the first image frame corresponds to the number of corrugations in the normal map of the water wave animation in the current image frame.
  • Step 2: The first object (such as a game character, skill, or special effect) serves as an interaction node, and the water surface height of the first image frame is affected according to the size of the interaction node's configuration parameters, completing the generation of the water surface height data of the first image frame. A height map of the water wave animation of the first image frame can then be obtained from the water surface height, as shown in FIG. 4. The configuration parameters can be set to default values or set by a technician.
  • Step 3: Centering on the second target ripple where the water surface height is located, the second target ripple is diffused outward to obtain a plurality of diffused ripples, where the heights of the plurality of diffused ripples gradually decrease outward from the center.
  • The water surface height is the highest water surface height among the plurality of ripples in the water wave animation height map of the first image frame. When the water ripples in the water wave animation diffuse outward, diffusion starts from the ripple with the highest water surface height (i.e., the second target ripple), and the heights of the plurality of ripples gradually decrease outward from the ripple where the highest water level is located.
  • The outward diffusion of the second target ripple may specifically mean that each pixel in the second target ripple diffuses to the pixels in its surrounding neighborhood; the specific diffusion distribution is shown in FIG. 5, where (i, j) indicates the pixel currently to be processed, from which diffusion proceeds to adjacent pixels such as (i-1, j) and (i+1, j).
  • The height of the pixel (i, j) in the first image frame may be calculated by the following formula, where the pixel (i, j) is any pixel in the first image frame and the sum runs over its eight neighboring pixels (see FIG. 5):
  • H(i,j) = (H(i-1,j-1) + H(i-1,j) + H(i-1,j+1) + H(i,j-1) + H(i,j+1) + H(i+1,j-1) + H(i+1,j) + H(i+1,j+1)) / 8 - H'(i,j)
  • where H'(i,j) is the height of the pixel (i, j) in the previous image frame of the first image frame, H(i,j) is the height of the pixel (i, j) in the first image frame, and each term such as H(i-1,j-1) is the height of the corresponding neighboring pixel in the first image frame.
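The diffusion-and-attenuation update above can be sketched as follows; this is a minimal Python illustration assuming the full 8-pixel neighborhood of FIG. 5 and fixed boundary pixels, with all names chosen for illustration rather than taken from the patent:

```python
def diffuse_attenuate(H_curr, H_prev):
    """Step 3 sketch: each interior pixel becomes the average of its eight
    neighbours in the current frame minus its own height in the previous
    frame, which both spreads the ripple outward and attenuates it."""
    h, w = len(H_curr), len(H_curr[0])
    H_next = [row[:] for row in H_curr]  # boundary pixels kept as-is
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            s = (H_curr[i-1][j-1] + H_curr[i-1][j] + H_curr[i-1][j+1]
               + H_curr[i][j-1]                    + H_curr[i][j+1]
               + H_curr[i+1][j-1] + H_curr[i+1][j] + H_curr[i+1][j+1])
            H_next[i][j] = s / 8.0 - H_prev[i][j]
    return H_next

# A single impulse at the centre spreads to its neighbours on the next step.
H0 = [[0.0] * 5 for _ in range(5)]
H1 = [[0.0] * 5 for _ in range(5)]
H1[2][2] = 8.0
H2 = diffuse_attenuate(H1, H0)
```

Running one step on the impulse shows the expected behavior: the center falls back toward zero while a ring of lower ripples appears around it.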
  • the heights of the plurality of diffused corrugations may gradually decrease from the center outward, that is, the height of each corrugation gradually decreases from the center outward.
  • Step 4: Generate the normal map of the attenuated water wave animation height map of the first image frame; if this normal map is blended with the normal of the water surface and then applied to the water surface, a realistic water wave animation can be generated.
  • From the normal map of the water wave animation of the first image frame in FIG. 6, it can be seen that there are obvious water ripples; if multiple image frames are played continuously, an animated water wave effect can be achieved.
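Step 4 can be sketched as deriving a per-pixel normal from the height field's gradient. The central-difference scheme below is a common technique offered as a hedged illustration, not the patent's exact computation:

```python
import math

def height_to_normals(H):
    """Derive a unit normal per interior pixel from the height field:
    the x/y components come from the height gradient, z points up."""
    h, w = len(H), len(H[0])
    normals = [[(0.0, 0.0, 1.0)] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            dx = H[i][j+1] - H[i][j-1]   # slope along x
            dy = H[i+1][j] - H[i-1][j]   # slope along y
            inv = 1.0 / math.sqrt(dx * dx + dy * dy + 1.0)
            normals[i][j] = (-dx * inv, -dy * inv, inv)
    return normals

flat = height_to_normals([[0.0] * 3 for _ in range(3)])   # flat water
ramp = height_to_normals([[0.0, 1.0, 2.0]] * 3)           # slope along x
```

A flat height field yields straight-up normals, while a slope tilts the normal against the gradient, which is what gives the rendered ripples their light-and-shadow detail.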
  • For example, the game designer sets the range in which a game character, skill, or special effect interacts with the grass to 30 m, with a resolution of 256 x 256 pixels.
  • The game designer tested the processing speed on an NVIDIA GTX 560 Ti graphics card with 200 game characters interacting with the grass. The tests showed that with 200 characters interacting with the grass, the animation effect on the grass consumed less than 0.2 ms in total, so interactive rendering of game characters, skills, or special effects with the grass can be achieved quickly, and the method can handle large-scale interaction.
  • The image processing method proposed in the present application is implemented entirely on the GPU. For each image frame, the interaction information is rendered as a water wave height map into a texture; the water wave animation height map is attenuated and diffused to the surrounding neighborhood, and the corresponding normal map is then generated.
  • This normal map is accessed in the vertex shader of the target object, and the x and y components of the normal map perturb the vertex positions of the target object. Since the generated water wave is animated, it drives the vertices of the target object to produce the corresponding animation.
  • The resulting animation looks natural and is highly versatile, performing outstandingly in simulating the swaying animation of the target object and the trailing caused by the player's movement.
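The vertex perturbation described above can be mimicked on the CPU as follows. This is a sketch under stated assumptions: the amplitude, the sampled texel, and the usual [0,1]-to-[-1,1] normal decoding are illustrative choices, not values from the patent:

```python
def perturb_vertex(pos, normal_rgb, amplitude=0.1):
    """Offset a vertex in the x/y plane by the normal map's x and y
    components (decoded from [0, 1] to [-1, 1]), leaving z unchanged."""
    nx = normal_rgb[0] * 2.0 - 1.0   # decode red channel   -> x component
    ny = normal_rgb[1] * 2.0 - 1.0   # decode green channel -> y component
    x, y, z = pos
    return (x + nx * amplitude, y + ny * amplitude, z)

moved = perturb_vertex((1.0, 2.0, 0.0), (1.0, 0.0, 0.5))
```

In the real pipeline this logic would live in the target object's vertex shader, with the normal map sampled per vertex each frame, so the vertices follow the animated ripples.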
  • The method further includes: dividing the interaction area into interaction areas of different levels according to the distance between the interaction area and the viewpoint, so that the interaction maps corresponding to the interaction areas of different levels have different precision.
  • For example, the interaction area is divided into a near-view interaction area and a distant-view interaction area, and the interaction maps corresponding to them have different precision because of their different distances from the viewpoint.
  • The near-view interaction area uses a high-precision interaction map, while the distant-view interaction area uses a lower-precision interaction map.
  • A plurality of interaction areas are distributed in the image of the virtual scene, and each interaction area has a corresponding interaction map (for example, a height map and/or a normal map).
  • Relative to the viewpoint, that is, the camera that shoots the image content, a closer interaction area constitutes a near view in the image, and a distant interaction area constitutes a distant view in the image.
  • A distance threshold is preset; an interaction area whose distance from the viewpoint is less than the threshold is regarded as a near view in the image, and an interaction area whose distance is greater than or equal to the threshold is regarded as a distant view.
  • The near view requires a high-precision interaction map, while, because of perspective, the distant view does not.
  • Using a high-precision interaction map for the distant view causes unnecessary waste and high video memory consumption; dividing by level greatly reduces the number of pixels in the full-screen operations involved in real-time updating, significantly lowers the video memory overhead, and improves performance.
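The near/far division can be sketched as below. The 30 m threshold echoes the interaction-range example earlier in this document, while the per-level map resolutions are assumptions for illustration only:

```python
NEAR_THRESHOLD_M = 30.0
MAP_TEXELS = {"near": 512, "far": 256}   # assumed per-level resolutions

def classify_area(distance_m):
    """Pick the interaction-map level for an area by viewpoint distance:
    closer than the threshold -> near view, otherwise distant view."""
    level = "near" if distance_m < NEAR_THRESHOLD_M else "far"
    return level, MAP_TEXELS[level]
```

Because the far level uses fewer texels, the full-screen update passes touch far fewer pixels for distant areas, which is the source of the memory and performance savings described above.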
  • The method further includes: loading the height data carried in the updated water wave animation height map into a first interaction buffer, where the height data corresponding to interaction areas of different levels is stored in different channels of the first interaction buffer.
  • For example, the height data corresponding to the near-view interaction area is stored in the red and green channels of the first interaction buffer, and the height data corresponding to the distant-view interaction area is stored in the blue and alpha channels.
  • The height data stored in different channels of the first interaction buffer can be computed separately, for example the data stored in the red and green channels and the data stored in the blue and alpha channels.
  • Generating the height data separately greatly reduces system overhead while making full use of system performance.
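The channel layout above can be sketched as packing the two levels' height samples into one RGBA texel. The 16-bit split encoding (coarse byte plus fine byte per height) is an assumption made purely for illustration; the patent only specifies which channels hold which level:

```python
def pack_heights(near_h, far_h):
    """Store the near-view height in (R, G) and the far-view height in
    (B, A); each height in [0, 1] is split into a coarse and a fine byte."""
    def split(h):
        v = min(int(h * 65535.0), 65535)
        return v >> 8, v & 0xFF
    r, g = split(near_h)
    b, a = split(far_h)
    return (r, g, b, a)

def unpack_heights(texel):
    r, g, b, a = texel
    return ((r << 8 | g) / 65535.0, (b << 8 | a) / 65535.0)

texel = pack_heights(0.5, 0.25)
near_h, far_h = unpack_heights(texel)
```

Packing both levels into one buffer lets a single full-screen pass update them together while each level's data can still be read back (unpacked) independently.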
  • The method may further include: loading the normal information into a second interaction buffer different from the first interaction buffer, where the normal information corresponding to interaction areas of different levels is stored in different channels of the second interaction buffer.
  • For example, the red and green channels of the second interaction buffer store the normal information corresponding to the near-view interaction area, and the blue and alpha channels store the normal information corresponding to the distant-view interaction area.
  • Rendering the data in the virtual scene includes rendering the height data and updating the normals, so as to obtain virtual scene images that match the pre-triggered interactions and have realistic light and shadow effects.
  • The method further includes: outputting a height map matching each level of interaction area according to the height data, and rendering the current image frame with these height maps, that is, rendering each height map into the virtual scene.
  • The height data is output in a preset format to obtain a height map matching each level of interaction area.
  • Real-time updating of each pixel height is achieved by rendering each height map into the virtual scene.
  • For example, the height data is output in RGBA16F format.
  • A normal map is output according to the normal information, and the normal map is blended with the normals in the virtual scene and applied to the virtual scene to obtain an image updated in real time in the virtual scene.
  • The normal map is also output in a preset format; for example, the normal map carries the normal information in RGBA format.
  • For example, the normal map is blended with the normal of the water surface and then applied to the water surface to obtain a realistic water wave effect.
  • The method further includes: performing a smooth transition of precision between adjacent interaction areas according to the precision of the interaction maps corresponding to the interaction areas of different levels.
  • The image in the virtual scene is formed by multiple interaction maps; therefore, the image includes multiple interaction areas, and each interaction area corresponds to an interaction map.
  • The precision of the interaction maps differs, so the precision across interaction areas is not uniform, and the edge regions where interaction areas meet suffer from abrupt changes in precision; a gradual, smooth transition between adjacent interaction areas is therefore needed to improve interaction precision.
  • a smooth transition between adjacent interaction regions can be achieved by linear interpolation between adjacent interaction regions.
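One way to realize the linear interpolation mentioned above is to blend the two levels' values across a narrow transition band around the level boundary. The band edges below are illustrative assumptions:

```python
def blend_precision(near_value, far_value, distance_m,
                    band_start=28.0, band_end=32.0):
    """Return the near map's value before the band starts, the far map's
    value beyond the band end, and a linear mix in between, so the
    precision change across the level boundary is gradual."""
    if distance_m <= band_start:
        return near_value
    if distance_m >= band_end:
        return far_value
    t = (distance_m - band_start) / (band_end - band_start)
    return near_value + (far_value - near_value) * t
```

Sampling both maps only inside the band keeps the extra cost confined to the edge regions where the precision jump would otherwise be visible.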
  • The interaction scale of existing water body interaction is very small, supporting at most the interaction of a few players, and its efficiency is very low; the main reason is that the interaction map used is very large, which has a fatal impact on performance.
  • With the present method, both the performance and the quality of water body interaction are greatly improved: the small-sized interaction maps greatly improve performance, and although the size of the interaction maps is reduced, the quality is improved. This is all due to dividing the interaction maps into levels according to distance.
  • Three-dimensional rendering generally uses perspective projection, so the near view keeps high precision while the precision of the distant view can be greatly reduced, which in turn enlarges the interaction range; for example, the interaction range covered by two levels is much larger than the range covered by a single interaction map.
  • For example, the size of the interaction map in an existing virtual scene is 1024*1024; after two levels of interaction maps are used, this becomes two maps of 512*512. In this way, high precision is kept for the near view while the interaction range is extended.
  • Before the change, the two interaction buffers of size 1024*1024 occupy 12 MB of video memory; after the two-level interaction maps, the two interaction buffers become 512*512 and occupy 5.2 MB, roughly halving the video memory overhead. With the memory overhead reduced, performance is greatly improved.
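As a rough check of the figures above, the arithmetic can be sketched as follows, assuming 12 bytes per pixel in total for the buffer pair (e.g. an RGBA16F height buffer plus an RGBA normal buffer, an assumption on our part); the patent's reported 5.2 MB presumably includes additional overhead such as mip levels:

```python
BYTES_PER_PIXEL = 12  # assumed: 8 B RGBA16F height + 4 B RGBA normal

def buffers_bytes(side_texels, levels=1):
    """Total bytes for the height+normal buffer pair at one resolution,
    multiplied by the number of interaction-map levels."""
    return levels * side_texels * side_texels * BYTES_PER_PIXEL

single = buffers_bytes(1024)               # one 1024x1024 level
two_level = buffers_bytes(512, levels=2)   # two 512x512 levels
```

Under this assumption the single-level scheme costs 12 MiB and the two-level scheme exactly half of it, consistent with the "reduces the memory overhead by half" claim above.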
  • The method according to the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
  • The part of the technical solution of the present application that is essential, or that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc), including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods of the various embodiments of the present application.
  • The embodiments of the present application also provide an apparatus for performing the above picture processing method. The apparatus is mainly used to perform the picture processing method provided by the foregoing content of the embodiments of the present application, and is specifically introduced below:
  • FIG. 7 is a schematic diagram of an apparatus for image processing according to an embodiment of the present application.
  • the apparatus for image processing mainly includes a first obtaining unit 71, a generating unit 73, and a moving unit 75.
  • the first acquiring unit 71 is configured to acquire an interaction area where the interaction space of the first object in the current image frame intersects with the first plane where the second object is located, where the target object in the second object is located in the interaction area.
  • The generating unit 73 is configured to generate a normal map of the water wave animation corresponding to the interaction area, where a plurality of ripples are displayed in the normal map.
  • A normal map of the water wave animation corresponding to the interaction area (e.g., a circular cut surface) may be generated, and the normal map includes a plurality of ripples (i.e., water ripples).
  • The generated normal map of the water wave animation can be applied to the circular cut surface, and the number of water ripples in the generated normal map is the number of water ripples that need to be displayed on the circular cut surface in the current image frame.
  • The moving unit 75 is configured to move the target object, using a first target ripple among the plurality of ripples, to a position in the current image frame corresponding to the ripple position of the first target ripple in the normal map, where the first target ripple corresponds to the target object.
  • In this way, a normal map of the water wave animation corresponding to the interaction area is generated, and the target object is moved according to the first target ripple corresponding to the target object.
  • The purpose of adjusting the position of the target object through the normal map is thereby achieved, enabling the position of the second object to follow the water wave.
  • The moving unit includes an adjusting subunit, configured to adjust the target object from a first form to a second form according to the normal map of the water wave animation, where in the second form the vertex of the target object is located at a position in the current image frame corresponding to the ripple position of the first target ripple in the normal map.
  • The adjusting subunit includes: a first acquiring module, configured to obtain the value range of the first component and the second component of the plurality of ripples in the normal map, where the first component is the component in the x-axis direction of the normal map and the second component is the component in the y-axis direction of the normal map; a second acquiring module, configured to acquire a third component of the target object in the z-axis direction of the normal map; a first determining module, configured to determine, according to the first component and the second component, a first offset distance and a first offset direction of the vertex coordinates of the target object on the plane of the x-axis and the y-axis, and to determine, according to the third component, a second offset distance and a second offset direction of the target object on the z-axis, where the first offset distance is within the value range; and a control module, configured to control the vertex coordinates of the target object to be offset by the first offset distance in the first offset direction and by the second offset distance in the second offset direction.
  • The first acquiring module includes: an acquiring submodule, configured to obtain the maximum and minimum values of the outermost ripple of the plurality of ripples in the normal map in the x-axis direction, and the maximum and minimum values of the outermost ripple in the y-axis direction; an adjusting submodule, configured to adjust the coordinates of each of the plurality of ripples on the x-axis and the y-axis so that the maximum and minimum values in the x-axis direction are at preset values and the maximum and minimum values in the y-axis direction are at preset values; and a determining submodule, configured to use the range indicated by the preset values as the value range.
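The min/max adjustment performed by these submodules amounts to a per-axis rescale of the ripple coordinates onto a preset range. A sketch with [-1, 1] as the assumed preset range (the patent does not fix the concrete values):

```python
def normalize_ripple_coords(points, preset=(-1.0, 1.0)):
    """Rescale ripple (x, y) coordinates so that, on each axis, the
    outermost minimum and maximum land exactly on the preset values."""
    lo, hi = preset
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    def remap(v, vmin, vmax):
        return lo + (v - vmin) * (hi - lo) / (vmax - vmin)
    return [(remap(x, min(xs), max(xs)), remap(y, min(ys), max(ys)))
            for x, y in points]

scaled = normalize_ripple_coords([(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)])
```

After the rescale, the preset range doubles as the value range used to bound the first offset distance, so offsets stay within a known envelope regardless of the ripples' original extent.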
  • The generating unit includes: a second determining module, configured to determine the water surface height of the water wave animation height map of a first image frame based on the water wave animation height map of the previous image frame of the first image frame corresponding to the current image frame, where the first image frame is an image frame in the water wave animation and the water surface height is the highest water surface height among the plurality of ripples in the water wave animation height map of the first image frame; a first generating module, configured to generate the water wave animation height map of the current image frame according to the water surface height; and a second generating module, configured to generate the normal map of the water wave animation according to the water wave animation height map of the current image frame.
  • The first generating module includes a diffusion-attenuation submodule, configured to diffuse the second target ripple outward, centering on the second target ripple where the water surface height is located, to obtain a plurality of diffused ripples, where the heights of the diffused ripples gradually decrease outward from the center.
  • The apparatus for image processing shown in FIG. 7 may further include an interaction area dividing unit, which divides the interaction area into interaction areas of different levels according to the distance between the interaction area and the viewpoint, so that the interaction maps corresponding to the interaction areas have different precision.
  • For example, the interaction area is divided into a near-view interaction area and a distant-view interaction area, whose corresponding interaction maps have different precision because of their different distances from the viewpoint: the near-view interaction area uses a high-precision interaction map, while the distant-view interaction area uses a low-precision interaction map.
  • The image processing apparatus further includes a first interaction buffer and a second interaction buffer.
  • The updated height data is loaded into the first interaction buffer, and the height data corresponding to interaction areas of different levels is stored in different channels of the first interaction buffer; for example, the height data corresponding to the near-view interaction area is stored in the red and green channels, and the height data corresponding to the distant-view interaction area is stored in the blue and alpha channels.
  • The height data stored in different channels of the first interaction buffer can be computed separately, for example the height data stored in the red and green channels and the height data stored in the blue and alpha channels.
  • The normal information may be loaded into the second interaction buffer, where the normal information corresponding to interaction areas of different levels is stored in different channels of the second interaction buffer; for example, the red and green channels of the second interaction buffer store the normal information corresponding to the near-view interaction area, and the blue and alpha channels store the normal information corresponding to the distant-view interaction area.
  • The apparatus may further include a height map rendering unit, which outputs a height map matching each level of interaction area according to the height data and renders the current image frame with these height maps.
  • The apparatus further includes a precision smoothing unit, which performs a smooth transition of precision between adjacent interaction areas according to the precision of the interaction maps corresponding to the interaction areas of different levels.
  • The image in the virtual scene is formed by multiple interaction maps; therefore, the image includes multiple interaction areas, and each interaction area corresponds to an interaction map.
  • The precision of the interaction maps differs, so the precision across interaction areas is not uniform, and the edge regions where interaction areas meet suffer from abrupt changes in precision; a gradual, smooth transition between adjacent interaction areas is therefore needed to improve interaction precision.
  • A smooth transition between adjacent interaction areas can be achieved by linear interpolation between them.
  • The mobile terminal mainly includes a processor 401, a display 402, a data interface 403, a memory 404, and a network interface 405, where:
  • the display 402 is mainly used to display images, such as a game interface, where the game interface includes game characters or special effects and the grass that interacts with them;
  • the data interface 403 mainly transmits control commands input by the user to the processor 401 by means of data transmission;
  • the memory 404 is mainly used to store the instructions for executing the method described in Embodiment 1 and related data, for example, objects with a water wave animation effect in the game (for example, grass) and information such as the progress of the user's game; and
  • the network interface 405 is mainly used for network communication with the processor 401 to provide data support for interactive rendering of pictures.
  • the processor 401 is mainly used to perform the following operations:
  • The processor 401 is further configured to adjust the target object from the first form to the second form according to the normal map of the water wave animation, where in the second form the vertex of the target object is located at a position in the current image frame corresponding to the ripple position of the first target ripple in the normal map.
  • The processor 401 is further configured to: obtain the value range of the first component and the second component of the plurality of ripples in the normal map, where the first component is the component in the x-axis direction of the normal map and the second component is the component in the y-axis direction of the normal map; acquire a third component of the target object in the z-axis direction of the normal map; determine, based on the first component and the second component, a first offset distance and a first offset direction of the vertex coordinates of the target object on the plane of the x-axis and the y-axis, and determine, according to the third component, a second offset distance and a second offset direction of the target object on the z-axis, the first offset distance being within the value range; and control the vertex coordinates of the target object to be offset by the first offset distance in the first offset direction and by the second offset distance in the second offset direction.
  • The processor 401 is further configured to: obtain the maximum and minimum values of the outermost ripple of the plurality of ripples in the normal map in the x-axis direction, and the maximum and minimum values of the outermost ripple in the y-axis direction; adjust the coordinates of each of the plurality of ripples on the x-axis and the y-axis so that the maximum and minimum values in the x-axis direction are at preset values and the maximum and minimum values in the y-axis direction are at preset values; and use the range indicated by the preset values as the value range.
  • The processor 401 is further configured to: determine the water surface height of the water wave animation height map of the first image frame based on the water wave animation height map of the previous image frame of the first image frame corresponding to the current image frame, where the first image frame is an image frame in the water wave animation and the water surface height is the highest water surface height among the plurality of ripples in the water wave animation height map of the first image frame; generate the water wave animation height map of the current image frame according to the water surface height; and generate the normal map of the water wave animation according to the water wave animation height map of the current image frame.
  • The processor 401 is further configured to diffuse the second target ripple outward, centering on the second target ripple where the water surface height is located, to obtain a plurality of diffused ripples, where the heights of the diffused ripples gradually decrease outward from the center.
  • the processor 401 is further configured to divide the interaction area into different levels of interaction areas according to the distance between the interaction area and the viewpoint, so that the interaction map corresponding to the interaction area has different precision.
  • the memory 404 also includes a first interaction buffer and a second interaction buffer.
  • the processor 401 can also load the updated height data into the first interaction buffer, and the height data corresponding to the interaction areas of different levels is stored in different channels of the first interaction buffer.
  • When the processor 401 performs the height-attenuated diffusion operation, the height data stored in different channels of the first interaction buffer may be computed separately.
  • The processor 401 may further load the normal information into the second interaction buffer, where the normal information corresponding to interaction areas of different levels is stored in different channels of the second interaction buffer.
  • the processor 401 can also output a height map matching the hierarchical interaction regions according to the height data, and render the current image frame by using the height map.
  • the processor 401 can also perform a smooth transition of precision between adjacent interaction regions according to the precision of the corresponding interaction map in the interaction regions of different levels.
  • Embodiments of the present application also provide a storage medium.
  • the foregoing storage medium may be used to store program code of the method for image processing in Embodiment 1 of the present application.
  • The foregoing storage medium may be located in at least one of a plurality of network devices in a network such as a mobile communication network, a wide area network, a metropolitan area network, or a local area network.
  • the storage medium is arranged to store program code for performing the following steps:
  • The foregoing storage medium may include, but is not limited to, various media that can store program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a mobile hard disk, and a magnetic memory (e.g., a hard disk).
  • The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in the above computer-readable storage medium.
  • Based on this understanding, the essence of the technical solution of the present application, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including a number of instructions for causing one or more computer devices (which may be personal computers, servers, network devices, or the like) to perform all or part of the steps of the methods of the various embodiments of the present application.
  • the disclosed apparatus can be implemented in other ways.
  • The device embodiments described above are merely illustrative; for example, the division into units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, unit or module, and may be electrical or otherwise.
  • the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.


Abstract

The present application discloses a method and apparatus for picture processing. The method includes: acquiring an interaction area where the interaction space of a first object in a current image frame intersects a first plane where a second object is located, wherein a target object of the second object is located in the interaction area; generating a normal map of a water wave animation corresponding to the interaction area, wherein a plurality of ripples are displayed in the normal map; and moving the target object, by means of a first target ripple among the plurality of ripples, to a position in the current image frame corresponding to the ripple position of the first target ripple in the normal map. The present application solves the technical problem in the prior art that the display effect of the second object is poor when the first object and the second object interact. In addition, by dividing the interaction area into different levels so that the interaction maps corresponding to the interaction areas have different precision, the number of pixels in the full-screen operations involved in real-time updating is reduced, the video memory overhead is significantly lowered, and performance is improved.

Description

Method and apparatus for picture processing

This application claims priority to the Chinese patent applications No. 201610209641.7, entitled "Method and apparatus for picture processing", and No. 201610210562.8, entitled "Interaction method and apparatus for a virtual scene", both filed with the Chinese Patent Office on April 6, 2016, the entire contents of which are incorporated herein by reference.
Technical Field

The present application relates to the field of picture processing, and in particular to a method and apparatus for picture processing.

Background

In online games, players often pay attention to the interaction effects between the player and the game scene displayed in the main game interface. For example, when a player walks through grass, swaying and trailing animation effects in the grass can add to the player's enjoyment of the game. The interactive rendering of grass in the virtual scene of an online game is a very important part of game rendering; if the grass cannot be interacted with, it feels lifeless. When a player walks through the grass or casts a skill, realistic interaction effects (i.e., the above swaying and trailing) give the player strong feedback, making the player feel that the game world is influenced by them, which increases the player's sense of immersion.

For example, in online games such as FarCry and 九阴真经 (Age of Wushu), interactive grass animation effects can be achieved when the player moves; however, the scale of the grass interaction is very small, supporting at most a few players, and the effect is very stiff, unable to show the trailing produced by a player moving through the grass.

At present, the main method in online games for achieving grass interaction effects is to use the CPU to perform intersection detection between the player and the grass entities, record the grass affected by the interaction according to the detection results, lock the vertex buffer (Vertex Buffer), and then render the affected grass one by one.

The drawbacks of the prior art are mainly twofold: the efficiency is not high enough, so it cannot handle large-scale interaction; and, in terms of appearance, the swaying animation of the grass is not natural enough, so it cannot simulate the trailing animation effect formed when a player moves.

The insufficient efficiency shows when the numbers of players and grass blades are large (for example, 200 players); in that case, the above scheme cannot smoothly and realistically achieve the animation effect of grass interacting with the players. For example, suppose the number of players is N and the number of grass blades is M. First, N*M intersection tests are needed, an operation that consumes a lot of performance when there are many players. Each interactive grass blade needs a corresponding vertex stream, which undoubtedly brings considerable video memory overhead. Every frame, the vertex streams of the affected grass must be locked and updated, which also brings considerable CPU overhead. The vertex information of each affected grass blade is not exactly the same, so it cannot be drawn in one pass by Instancing rendering and must be drawn one by one, which increases the number of draw calls (DrawCall) and thus reduces performance.

In addition, the image of an existing virtual scene is implemented with a single interaction map of very large size, such as 2048*2048. This single interaction map covers the entire interaction area of the virtual scene, and the precision per unit area is uniform over the entire interaction area. However, because the size of the interaction map is too large, and the number of pixels to be computed is proportional to the size of the interaction map and is therefore also very large, it has a fatal impact on the performance of the virtual scene; for example, the frame rate drops sharply after the virtual scene is opened.

No effective solution to the above problems has yet been proposed.
发明内容
本申请实施例提供了一种图片处理的方法和装置,以至少解决现有技术中第一对象和第二对象发生交互时,第二对象的显示效果比较差的技术问题。
根据本申请实施例的一个方面,提供了一种图片处理的方法,包括:获取当前图像帧中第一对象的交互空间与第二对象所在的第一平面相交的交互区域,其中,所述第二对象中的目标对象位于所述交互区域内;生成与所述交互区域对应的水波动画的法线贴图,其中,所述法线贴图中显示有多条波纹;利用所述多条波纹中的第一目标波纹将目标对象移动到所述当前图像帧中与所述第一目标波纹在所述法线贴图中的波纹位置相对应的位置,其中,所述第一目标波纹与所述目标对象相对应。
根据本申请实施例的另一方面,还提供了一种图片处理的装置,包括:第一获取单元,用于获取当前图像帧中第一对象的交互空间与第二对象所在的第一平面相交的交互区域,其中,所述第二对象中的目标对象位于所述交互区域内;生成单元,用于生成与所述交互区域对应的水波动画的法线贴图,其中,所述法线贴图中显示有多条波纹;移动单元,用于利用所述多条波纹中的第一目标波纹将目标对象移动到所述当前图像帧中与所述第一目标波纹在所述法线贴图中的波纹位置相对应的位置,其中,所述第一目标波纹与所述目标对象相对应。
在本申请实施例中,通过在第一对象和第二对象发生交互时,生成与交互区域相对应的水波动画的法线贴图,并根据该与目标对象对应的第一目标波纹将目标对象移动到当前图像帧中与第一目标波纹在法线贴图中的波纹位置相对应的位置,达到了通过法线贴图调整目标对象的位置的目的,从而实现了第二对象的位置可以随水波动画的法线贴图发生改变的技术效果,进而解决了现有技术中第一对象和第二对象发生交互时,第二对象的显示效果比较差的技术问题。
根据本申请,通过不同层级的交互贴图构成虚拟场景中图像的远景和近景,能够使用高精度的交互贴图构成图像中的近景,使用较低精度的交互贴图构成图像中的远景,进而避免了远景交互区域所对应的交互贴图中由于精度高而造成不必要的浪费,显存的消耗,因此极大地降低了完成实时更新所涉及全屏操作中的像素数目,明显降低显存开销,并提高了性能。
同时,通过根据相应的交互贴图的精度在相邻交互区域之间进行精度的平滑过渡,克服了各交互区域之间的精度是不均匀,交互区域相交的边缘区域存在着精度突变的缺陷,进而以此来提高交互精度。
附图说明
此处所说明的附图用来提供对本申请的进一步理解,构成本申请的一部分,本申请的示意性实施例及其说明用于解释本申请,并不构成对本申请的不当限定。在附图中:
图1是根据本申请实施例的硬件环境的架构图;
图2是根据本申请实施例的一种图片处理的方法的流程图;
图3是根据本申请实施例的一种可选的游戏玩家的交互空间的示意图;
图4是根据本申请实施例的一种可选的第一图像帧水波高度图的示意图;
图5是根据本申请实施例的一种可选的像素扩散的示意图;
图6是根据本申请实施例的一种可选的第一图像帧水波动画的法线贴图的示意图;
图7是根据本申请实施例的一种图片处理的装置的示意图;以及
图8是根据本申请实施例的终端的硬件结构图。
具体实施方式
为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整的描述。显然,所描述的实施例仅仅是本申请一部分的实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都应当属于本申请保护的范围。
需要说明的是,本申请的说明书和权利要求书及附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本申请的实施例能够以除了在这里图示或描述的那些以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
专业术语解释
GPU:图形处理器(显卡),英文全称Graphics Processing Unit。
可交互渲染:玩家可以通过对游戏世界中的一些元素进行交互来改变这些元素的外形和观感,让玩家感觉游戏世界是可以被他改变或者影响的。
顶点缓冲区:存储模型的顶点信息流,包括位置,纹理坐标,切线空间向量等数据。
绘制调用:引擎准备数据并通知GPU的过程称为一次绘制调用。
Instancing渲染:GPU将场景中模型数据相同的多个模型实例一次绘制完成。
水波动画模型:交互对高度图产生影响,然后将高度图衰减后扩散到周围,生成真实的水波动画。
顶点扰动:通过顶点纹理或者随机的方式对模型顶点进行偏移,从而使模型顶点看起来是可以动态变化的。
实施例1
根据本申请实施例,提供了一种图片处理的方法。
可选地,在本实施例中,上述的图片处理的方法可以应用于如图1所示的服务器104和终端102所构成的硬件环境中。图1是根据本申请实施例的硬件环境的示意图,如图1所示,服务器104通过网络与终端102进行连接,上述网络包括但不限于:广域网、城域网或局域网,终端102并不限定于计算机、手机、平板电脑等。
图2是根据本申请实施例的图片处理的方法的流程图。如图2所示,该图片处理的方法包括以下步骤:
步骤S202,获取当前图像帧中第一对象的交互空间与第二对象所在的第一平面相交的交互区域,其中,第二对象中的目标对象位于交互区域内。
例如,第一对象可以为游戏中的游戏玩家,游戏技能和游戏特效等,第二对象可以为草、花和雾等形态容易受到第一对象影响的对象;第一平面为第二对象所在的平面,例如,草所在的地面。
在网络游戏中需要交互的动作者(即,上述第一对象)需要绑定至少一个交互节点,交互节点根据不同的配置可以产生大小和范围不同的交互空间,如图3所示,图3中白色球体即为交互空间。
通过交互节点便能够触发虚拟场景图像中的交互,进而通过虚拟场景中图像的实时更新来更新交互所带来的影响,所指的交互所带来的影响即为交互节点触发的运动对周围元素的影响。根据交互节点对应的配置信息得到交互节点与虚拟场景图像中元素相交时产生的交互点。每一交互节点均有对应的配置信息,该配置信息用于对产生的交互进行控制,例如,控制交互的范围等。作为示例,交互节点可以为绑定在某个点上的球体,通过该球体来判定其与虚拟场景中的其它元素的相交,以此为依据来获得其对其它元素的交互产生的影响。在监听到图像中交互节点触发的运动时,获取运动的交互节点与元素相交时产生的交互点,以通过交互点来精准更新当前触发的交互对其它元素的影响。
玩家和非玩家角色(Non-Player Character,简称NPC)都会默认添加一个交互节点,游戏技能和游戏特效根据需求配置交互节点。当第一对象的交互空间和草所在的地面相交时会形成一个交互区域。其中,在交互空间为一个球体时,该交互空间与第一平面相交得到一个圆形切面。
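球形交互空间与第一平面相交得到圆形切面的计算,可以用下面的示意性 Python 片段说明(本申请未给出具体代码,函数名与"平面为 z=0"的设定均为本示例的假设):

```python
import math

def sphere_plane_intersection(center, radius, plane_z=0.0):
    """计算球形交互空间与水平面 z=plane_z 相交得到的圆形切面。

    返回 (切面圆心的xy坐标, 切面半径);若球体与平面不相交则返回 None。
    """
    cx, cy, cz = center
    d = abs(cz - plane_z)  # 球心到平面的距离
    if d > radius:         # 距离大于半径:不相交,无交互区域
        return None
    # 由勾股定理得到圆形切面的半径
    r = math.sqrt(radius * radius - d * d)
    return (cx, cy), r
```

例如,半径为2、球心离地面高度为1的交互空间,与地面相交得到半径约为1.732的圆形交互区域。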
步骤S204,生成与交互区域对应的水波动画的法线贴图,其中,法线贴图中显示有多条波纹。
具体地,可以根据该交互区域(例如,圆形切面)生成与该圆形切面对应的水波动画的法线贴图,并且该法线贴图中包含多条波纹(即,水波纹)。生成的水波动画的法线贴图可以应用到该圆形切面上,生成的法线贴图中的水波纹的条数是该圆形切面在当前图像帧中需要展示的条数。
步骤S206,利用多条波纹中的第一目标波纹将目标对象移动到当前图像帧中与第一目标波纹在法线贴图中的波纹位置相对应的位置,其中,第一目标波纹与目标对象相对应。
目标对象是在法线贴图的影响下移动的对象,例如,可以为处于交互区域中的部分或者全部草。例如,在交互区域中有两圈草,第一圈草受到第一目标波纹的扰动而移动到当前图像帧中相应位置上,其中,可以通过第一目标波纹在法线贴图中的波纹位置来确定上述相应位置,第二圈草没有受到法线贴图中第一目标波纹的扰动而移动到第一圈草移动之后的位置,第一圈草为目标对象,第一目标波纹为法线贴图的多条波纹中用于对第一圈草进行扰动的波纹。
在本申请实施例中,通过在第一对象和第二对象发生交互时,生成与交互区域相对应的水波动画的法线贴图,并根据该与目标对象对应的第一目标波纹将目标对象移动到当前图像帧中与第一目标波纹在所述法线贴图中的波纹位置相对应的位置,达到了通过法线贴图调整目标对象的位置的目的,从而实现了第二对象的位置可以随水波动画的法线贴图发生改变的技术效果,进而解决了现有技术中第一对象和第二对象发生交互时,第二对象的显示效果比较差的技术问题。
当每个图像帧都采用与该图像帧对应的法线贴图进行调整后,多个调整后的图像帧连续显示时,可以显示出动画效果。例如,在游戏中,游戏人物从草丛中走过时,显示草在游戏人物的扰动下摇摆的效果,并且在游戏人物移动时可以形成类似水波纹向一个方向扩散的拖尾效果。
在一个具体的示例中,上述图片处理的方法可以应用到《天涯明月刀》等网络游戏中,在《天涯明月刀》的游戏引擎中完成交互区域中目标对象的渲染。例如,游戏人物在《天涯明月刀》中的任意虚拟场景中行走,其中,该虚拟场景中为草地,当游戏人物在草地中行走时,游戏人物的交互空间(如图3所示的球体)与草地所在的地面(即,上述第一平面)相交,得到交互区域。《天涯明月刀》的网络游戏客户端生成与该交互区域对应的水波动画的法线贴图,从而利用该多条波纹中的第一目标波纹,将交互区域中与第一目标波纹相对应的草移动到当前图像帧中的相应位置,其中,该相应位置为与第一目标波纹在法线贴图中的波纹位置相对应的位置。当多个图像帧形成连续的动画时,就可以在网络游戏的显示界面中显示出人走过草地使得草摇摆的效果,或者草被人走过所形成的拖尾效果,提高了游戏玩家与虚拟场景的交互感,使得模拟的虚拟场景更加真实,从而提高了游戏的乐趣。
需要说明的是,在本申请实施例中,第一对象对目标对象的扰动是通过显卡(GPU)在顶点着色引擎(Vertex Shader)中采用水波动画的法线贴图来实现的,而不是通过中央处理器(Central Processing Unit,简称CPU)来实现第一对象对目标对象的扰动,从而提高了处理显示数据的效率。
在本申请下述实施例中,为了描述简便,将以第一对象为游戏人物、目标对象为草为例进行说明。
可选地,利用多条波纹中的第一目标波纹将目标对象移动到当前图像帧中与第一目标波纹在法线贴图中的波纹位置相对应的位置包括:根据水波动画的法线贴图将目标对象从第一形态调整到第二形态,其中,在第二形态中目标对象的顶点位于当前图像帧中与第一目标波纹在法线贴图中的波纹位置相对应的位置。
在本申请实施例中,第一对象与目标对象交互是指第一对象的动作对目标对象的形态产生了影响,例如,游戏人物在草地中行走的过程中与草产生了一个交互,这个交互会导致草出现形态上的变化,具体为前、后、左、右摇摆,或者高低变化。例如,游戏人物在草地中行走,该游戏人物两侧的草由直立(第一形态)变成弯曲(第二形态),也就是由第一形态被调整为第二形态。需要说明的是,第一对象与目标对象的交互并不一定要求目标对象与第一对象发生过接触,因为当游戏人物在草地中行走时,即使未接触到附近的草,也会影响附近草的形态。
当游戏人物在水中行走时,或者与水接触时,会在水面产生连续变化的波纹,游戏设计人员联想到当游戏人物在草地中行走或者穿行时,草的形态变化与水面中波纹的变化具有相关性。因此,在本申请实施例中,可以通过水波动画的法线贴图来扰动与第一对象交互的目标对象,以使目标对象从第一形态调整至第二形态,从而使得目标对象呈现类似水波纹的动态变化的效果。
可选地,根据水波动画的法线贴图将目标对象从第一形态调整到第二形态具体为:获取法线贴图中多条波纹的第一分量和第二分量的取值范围,其中,第一分量为法线贴图x轴方向上的分量,第二分量为法线贴图y轴方向上的分量;获取目标对象在法线贴图的z轴方向上的第三分量;根据第一分量和第二分量确定目标对象的顶点坐标在x轴和y轴所在平面上的第一偏移距离和第一偏移方向,并且根据第三分量确定目标对象在z轴上的第二偏移距离和第二偏移方向,第一偏移距离处于取值范围内;控制目标对象的顶点坐标在第一偏移方向上偏移第一偏移距离,在第二偏移方向上偏移第二偏移距离。
在本申请实施例中,可以采用显卡(GPU)生成的水波动画的法线贴图的相关贴图信息对目标对象的顶点进行扰动,进而,使得目标对象由第一形态变化至第二形态。具体地,可以先获取法线贴图中包含的多条波纹在x轴方向上的分量(即,上述第一分量)的取值范围和y轴方向上的分量(即,上述第二分量)的取值范围,以及获取目标对象在z轴方向上的投影高度(即,第三分量);进而根据上述x轴方向上的分量、y轴方向上的分量确定x轴和y轴所在平面上的第一偏移距离和第一偏移方向,即确定目标对象此时是向左、右偏移,还是向前、后偏移,以及向左、右偏移的距离和向前、后偏移的距离,并确定每个目标对象在z轴上的投影高度,以确定目标对象在z轴上的第二偏移距离和第二偏移方向,其中,如果目标对象在z轴上的投影高度较低,则表明法线贴图对目标对象的扰动较小,也即目标对象的第一偏移距离和第二偏移距离较小,也就是说目标对象的偏移幅度较小;如果目标对象在z轴上的投影高度较高,则表明法线贴图对目标对象的扰动较大,也即目标对象的第一偏移距离和第二偏移距离较大,也就是说目标对象的偏移幅度较大。具体可以理解为,当目标对象的长度较长时,容易受环境的影响发生较大幅度的摆动,当目标对象的长度较低时,反而并不容易受环境的影响发生较大的摆动。本申请采用上述方法将目标对象由第一形态调整到第二形态,可以更加真实地显示出目标对象随第一对象移动而发生偏移的动态效果。
在确定对目标对象的扰动大小(即,上述第一偏移距离、第二偏移距离、第一偏移方向和第二偏移方向)之后,控制目标对象的顶点坐标在第一偏移方向上偏移第一偏移距离,在第二偏移方向上偏移第二偏移距离。当多个图像帧连续在网络游戏界面中显示时,由于水波动画具有动画属性,因此,通过水波动画的法线贴图对目标对象进行扰动时也可以使目标对象也具有与水波动画相类似的动画效果,可以逼真显示目标对象在第一对象的扰动下摇摆的效果,并且在游戏界面中显示出目标对象被第一对象经过时所形成的拖尾效果。
需要说明的是,上述第一分量的取值范围可以为[-1,1],第二分量的取值范围同样可以为[-1,1],将上述第一分量和第二分量的取值范围设定为[-1,1]可以保证目标对象能够向左、右、前、后四个方向摆动。
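上述顶点扰动的思路可以用一个简化的 Python 片段示意(仅为说明原理的假设性示例,并非引擎中实际的 Vertex Shader 代码;这里只演示水平方向的偏移,strength 为假设的幅度参数):

```python
def perturb_vertex(vertex, normal_xy, height_z, strength=1.0):
    """按法线贴图的x、y分量扰动目标对象的顶点。

    normal_xy 取值范围为 [-1, 1],决定偏移的方向;
    height_z 为目标对象在z轴上的投影高度,越高则偏移幅度越大。
    """
    x, y, z = vertex
    nx, ny = normal_xy
    scale = strength * height_z  # 高度作为偏移幅度的缩放因子
    return (x + nx * scale, y + ny * scale, z)
```

例如,较高的草(height_z 较大)在同一波纹扰动下的偏移幅度大于较矮的草,与上文"长度较长时容易发生较大幅度的摆动"的描述一致。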
可选地,获取法线贴图中多条波纹的第一分量和第二分量的取值范围具体为:获取法线贴图中多条波纹中最外层的波纹在x轴方向上的最大值和最小值,以及最外层的波纹在y轴方向上的最大值和最小值;调整多条波纹中每条波纹在x轴和y轴上的坐标,以使x轴方向上的最大值和最小值为预设数值,并且y轴方向上的最大值和最小值处于预设数值;将预设数值所指示的范围作为取值范围。
通过上述描述可知,第一分量的取值范围可以为[-1,1],第二分量的取值范围同样可以选取为[-1,1]。具体地,将上述第一分量和第二分量的取值范围设定为[-1,1]的方式可以为先获取水波动画的法线贴图中最外层波纹在x轴方向和y轴方向上的最大值和最小值;然后调整每条波纹在x轴和在y轴上的坐标的取值,以使每条波纹在x轴上最大值的取值不大于预设数值(例如,数值1),每条波纹在x轴上最小值的取值不小于预设数值(例如,数值-1),以及每条波纹在y轴上最大值的取值不大于预设数值(例如,数值1),每条波纹在y轴上最小值的取值不小于预设数值(例如,数值-1)。
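将各波纹的x、y分量整体缩放到[-1,1]区间的做法,可以示意如下(以所有分量的最大绝对值做归一化,属本示例的假设性实现):

```python
def normalize_components(waves):
    """将各波纹的 (x, y) 分量整体缩放到 [-1, 1] 区间。

    waves 为各波纹分量的列表,按所有分量的最大绝对值归一化,
    保证缩放后每条波纹在x轴、y轴上的取值不大于1且不小于-1。
    """
    max_abs = max(max(abs(x), abs(y)) for x, y in waves)
    if max_abs == 0:
        return waves
    return [(x / max_abs, y / max_abs) for x, y in waves]
```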
当多个图像帧连续在游戏界面上显示时,由于生成的水波动画具有动画特性,因此,用生成的水波动画的法线贴图来驱动目标对象的顶点坐标也会生成与水波动画相类似的动画效果,使得游戏模拟效果更加真实,能够逼真的显示出目标对象在受到第一对象的扰动时的摇摆效果以及第一对象移动时形成的目标对象的拖尾效果。
需要说明的是,当该网络游戏中有多个游戏人物或者特技(相应于第一对象)等与草(目标对象)进行交互时,通过本申请提供的方法可以通过Instancing渲染的方式将多个游戏人物或者特技所扰动的草进行处理,同时将多个游戏人物或者特技所扰动的草的第一形态调整为第二形态。通过本申请提供的方法可以支持多个(例如200个)玩家与草同时交互,并同时完成与该多个玩家交互的草的形态的处理。并且在本申请实施例中,虚拟场景中所有的目标对象都使用同一个顶点缓冲区,在顶点缓冲区中,记录了每个目标对象的顶点坐标和位置信息等信息。
可选地,生成与交互区域对应的水波动画的法线贴图具体包括:基于与当前图像帧对应的第一图像帧的前一个图像帧的水波动画高度图确定第一图像帧的水波动画高度图的水面高度,其中,第一图像帧为水波动画中的一个图像帧,水面高度为第一图像帧的水波动画高度图中多条波纹的最高水面高度;根据水面高度生成当前图像帧的水波动画高度图;根据当前图像帧的水波动画高度图生成水波动画的法线贴图。作为示例,用高度图来承载高度数据,所对应的格式可以是RGBA16F。
在本申请实施例中,采用水波交互算法生成水波动画的法线贴图,进而通过生成的水波动画的法线贴图将目标对象从第一形态调整到第二形态,其中,上述水波交互算法是通过GPU来实现的,因为通过GPU来实现水波交互算法可以提高水波交互效率,实现起来更加直接,避免了通过CPU的大量计算导致的交互效率低下的问题。
具体地,水波交互算法一共分为四个步骤,分别是:步骤一、复制前一个图像帧的水波动画高度图,并将该高度图作为第一图像帧的水波高度图的初始高度;步骤二、根据交互节点生成第一图像帧的水波高度图的高度数据;步骤三、扩散和衰减;步骤四、计算法线。下面对上述步骤一至步骤四进行详细描述。
步骤一:将第一图像帧的前一个图像帧的水波动画高度图进行全屏复制,复制到第一图像帧水波动画的高度图中,作为第一图像帧的水波动画高度图中的起始高度,其中,第一图像帧中水波动画的波纹条数与当前图像帧中水波动画的法线贴图的波纹条数相对应。
步骤二:将第一对象(例如游戏人物、技能或者特效)作为交互节点,根据交互节点的配置参数的大小来影响第一图像帧的水面高度,这样就完成了第一图像帧的水面高度数据的生成。进而就可以根据该水面高度得到第一图像帧的水波动画的高度图,其中,如图4所示的为第一图像帧的水波动画的高度图。其中,该配置参数可以设置为默认值,还可以通过技术人员来设置。
步骤三:以水面高度所在的第二目标波纹为中心,向外扩散第二目标波纹,得到扩散后的多条波纹,其中,扩散后的多条波纹的高度由中心向外逐渐降低。上述水面高度为第一图像帧的水波动画高度图中多条波纹的最高水面高度,当水波动画中的水波纹向外扩散时,均以最高水面高度所在的水波纹(即,第二目标波纹)开始向外扩散第二目标波纹,并且多条波纹的高度以最高水面高度所在的水波纹(即,第二目标波纹)开始向外逐渐降低。
需要说明的是,向外扩散第二目标波纹可以具体为第二目标波纹中的每一个像素会扩散到周围邻域的像素,具体扩散的分布如图5所示,(i,j)表示当前要处理的像素,以该像素点开始,并扩散到邻近的像素,例如,(i-1,j)和(i+1,j)等像素。
在本申请实施例中,计算第一图像帧中像素(i,j)的高度可以通过下述公式进行计算,其中,像素(i,j)为第一图像帧中的任意一个像素点:
H(i,j)=(H(i-1,j-1)+H(i,j-1)+H(i+1,j-1)+H(i-1,j)+H(i+1,j)+H(i-1,j+1)+H(i,j+1)+H(i+1,j+1))/8-H'(i,j)
其中,H'(i,j)为第一图像帧的前一个图像帧中的像素(i,j)的高度,H(i,j)为第一图像帧中像素(i,j)的高度,等号右侧的H(i-1,j-1)、H(i,j-1)、H(i+1,j-1)、H(i-1,j)、H(i+1,j)、H(i-1,j+1)、H(i,j+1)和H(i+1,j+1)分别为第一图像帧中像素(i,j)的八个邻域像素的高度,即新高度为八邻域高度的平均值减去前一帧该像素的高度。
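该扩散计算可以用下面的 Python 片段示意(采用八邻域高度平均值减去前一帧高度的经典水波扩散公式;边界外的像素按高度0处理,属本示例的假设):

```python
def diffuse_height(prev, curr):
    """根据前一帧高度图 prev 和当前高度图 curr 计算扩散后的高度图。

    每个像素的新高度 = 其八邻域像素高度的平均值 - 前一帧该像素的高度。
    """
    h, w = len(curr), len(curr[0])

    def at(grid, i, j):
        # 越界的邻域像素按高度 0 处理(本示例的边界假设)
        return grid[i][j] if 0 <= i < h and 0 <= j < w else 0.0

    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            s = sum(at(curr, i + di, j + dj)
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if not (di == 0 and dj == 0))
            out[i][j] = s / 8.0 - at(prev, i, j)
    return out
```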
通过上述方法得到扩散后的多条波纹之后,扩散后的多条波纹的高度可以由中心向外逐渐降低,即每条波纹的高度由中心向外逐渐降低。
步骤四:对衰减后的第一图像帧的水波动画高度图生成法线贴图,其中,如果将该法线贴图和水面的法线进行混合后再作用到水面上,就可以产生真实的水波动画。如图6所示的即为第一图像帧的水波动画的法线贴图,在图6中,可以看出具有明显的水波波纹,如果当多个图像帧进行连续播放时,就可以实现水波的动画特征。
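步骤四由高度图计算法线,常见做法是用相邻像素的高度差(有限差分)作为法线的x、y分量,再做归一化。下面是一个示意性实现(z分量取固定值1后归一化、边界做钳制,均为本示例的假设,并非本申请限定的具体算法):

```python
import math

def normal_from_height(height, i, j, scale=1.0):
    """用中心差分由高度图计算像素 (i, j) 处的归一化法线。"""
    h, w = len(height), len(height[0])

    def at(ii, jj):
        # 边界钳制:越界时取最近的界内像素
        ii = min(max(ii, 0), h - 1)
        jj = min(max(jj, 0), w - 1)
        return height[ii][jj]

    dx = (at(i + 1, j) - at(i - 1, j)) * scale
    dy = (at(i, j + 1) - at(i, j - 1)) * scale
    length = math.sqrt(dx * dx + dy * dy + 1.0)
    return (-dx / length, -dy / length, 1.0 / length)
```

水面完全平静时,各像素的法线均竖直向上,即 (0, 0, 1)。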
在一个具体的示例中,游戏设计人员设置游戏人物或者技能、特技等与草的交互范围为30m,且像素为256×256。游戏设计人员在型号为NV GTX 560Ti的显卡上进行了测试,测试了200个游戏人物与草进行交互时的处理速率。经过测试可知,200人与草交互时,得到草的动画效果总共消耗不到0.2ms,能够快速实现游戏人物、技能或者特技与草的交互渲染,并且可以胜任大规模的交互。
本申请中提出的图片处理的方法是完全基于GPU的,每一个图像帧将交互信息作为水波高度图渲染到一张贴图中,并将水波动画高度图衰减后扩散到周围邻域,然后生成其对应的法线贴图;在目标对象的顶点着色引擎中访问这张法线贴图,用法线贴图的x分量和y分量对目标对象的顶点位置进行扰动。由于生成的水波具有动画特性,用其来驱动目标对象的顶点也会生成相应的动画,该动画视觉观感较好,在模拟目标对象的摇摆动画和玩家移动时形成的拖尾方面表现得非常出色。
根据本申请的一方面,所述方法还包括:根据交互区域与视点之间的距离将交互区域划分为不同层级的交互区域,从而使得与不同层级的交互区域对应的交互贴图具有不同的精度。例如,将交互区域划分为近景交互区域和远景交互区域,近景交互区域和远景交互区域所对应的交互贴图因为距离视点的远近不同而具有不同的精度。其中,近景交互区域使用高精度的交互贴图,而远景交互区域使用较低精度的交互贴图。
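按距离划分层级并选择相应精度的交互贴图,可以示意如下(距离阈值30与贴图尺寸512/256均为本示例的假设值,并非本申请限定的参数):

```python
def select_tier(distance, near_threshold=30.0):
    """根据交互区域到视点的距离选择交互贴图层级。

    距离小于阈值的交互区域视为近景,使用高精度交互贴图;
    否则视为远景,使用较低精度的交互贴图。
    """
    if distance < near_threshold:
        return {"tier": "near", "map_size": 512}
    return {"tier": "far", "map_size": 256}
```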
这里,虚拟场景的图像中分布了多个交互区域,每一交互区域具有相应的交互贴图(例如高度图和/或法线贴图)。相对视点,即进行图像内容拍摄的摄像机,距离较近的交互区域将构成图像中的近景,距离较远的交互区域将构成图像中的远景。其中,预设了一距离阈值,与视点之间的距离小于此距离阈值的交互区域将视为图像中的近景,与视点之间的距离大于此距离阈值的交互区域将视为图像中的远景。这里,近景需要交互贴图具有很高的精度,而远景则因为透视的原因而并不需要交互贴图具有很高的精度。因此,通过不同层级的交互贴图构成虚拟场景中图像的远景和近景,能够使用高精度的交互贴图构成图像中的近景,使用较低精度的交互贴图构成图像中的远景,进而避免了远景交互区域所对应的交互贴图由于精度过高而造成不必要的显存浪费,因此极大地降低了完成实时更新所涉及全屏操作中的像素数目,明显降低显存开销,并提高了性能。
根据本申请的另一方面,所述方法还包括:将更新的水波动画高度图中承载的高度数据载入第一交互缓冲区,不同层级的交互区域对应的高度数据存储于第一交互缓冲区的不同通道中。例如,近景交互区域对应的高度数据存储于第一交互缓冲区中的红色和绿色通道,远景交互区域对应的高度数据存储于第一交互缓冲区中的蓝色和Alpha通道。这样,就实现了高度数据的分类存储,进而便于提高后续运算速度,降低性能开销。根据本申请的一方面,在进行高度衰减扩散运算时,可以对第一交互缓冲区中不同通道中存储的高度数据分别进行计算,例如,可以对红色和绿色通道、蓝色和Alpha通道中存储的高度数据分别进行计算,进而在充分利用系统性能的同时大幅降低系统开销。
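不同层级的高度数据按通道分别存储的思路可以示意如下(以 Python 字典模拟单个 RGBA 缓冲区的四个通道;"每个层级的高度数据由两个分量构成"属本示例的假设,本申请并未限定各通道内的具体数据布局):

```python
def pack_height_buffer(near_heights, far_heights):
    """将近景高度数据存入 R/G 通道,远景高度数据存入 B/A 通道。

    near_heights、far_heights 为 (分量1, 分量2) 形式的高度数据列表,
    模拟单个 RGBA 交互缓冲区按层级分通道存储。
    """
    return {
        "R": [h[0] for h in near_heights],  # 近景高度,分量1
        "G": [h[1] for h in near_heights],  # 近景高度,分量2
        "B": [h[0] for h in far_heights],   # 远景高度,分量1
        "A": [h[1] for h in far_heights],   # 远景高度,分量2
    }
```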
根据本申请的另一方面,在由高度数据生成法线信息之后,所述方法还可包括:将法线信息载入区别于第一交互缓冲区的第二交互缓冲区,其中,不同层级的交互区域对应的法线信息存储于第二交互缓冲区的不同通道中,例如,第二交互缓冲区中红色和绿色通道存储近景交互区域对应的法线信息,蓝色和Alpha通道存储远景交互区域对应的法线信息。
数据在虚拟场景中的渲染包括高度数据的渲染和法线的更新,以得到与当前触发的交互相符且光影效果逼真的虚拟场景图像。
根据本申请的一方面,所述方法还包括根据高度数据输出与各层级交互区域匹配的高度图,并利用高度图对当前图像帧进行渲染,也即将各高度图渲染至虚拟场景。
对于高度数据,将以预置的格式输出,以得到各层级交互区域匹配的高度图。将各高度图渲染至虚拟场景就实现了各像素高度的实时更新。作为示例,高度数据是以RGBA16F的格式输出高度图的。
根据本申请的另一方面,根据法线信息输出法线贴图,并将法线贴图和虚拟场景中的法线混合后作用到虚拟场景中,以获得虚拟场景中实时更新呈现的图像。对于法线信息,也将以预置的格式相应输出法线贴图。在优选的实施例中,法线贴图是以RGBA的格式承载法线信息的。
例如,在水体交互中,将法线贴图和水面的法线进行混合后再作用到水面上,以获得真实的水波效果。
根据本申请的另一方面,所述方法还包括:在不同层级的交互区域中,根据相应的交互贴图的精度在相邻交互区域之间进行精度的平滑过渡。
这里,虚拟场景中图像是由多个交互贴图形成的,因此,图像中包括了多个交互区域,每一交互区域均对应于一交互贴图。在虚拟场景所呈现的图像中,由于交互贴图的精度并不相同,导致各交互区域之间的精度是不均匀的,交互区域相交的边缘区域存在着精度突变的缺陷,需要进行相邻交互区域之间的平滑过渡,进而以此来提高交互精度。在一个实施例中,相邻交互区域之间的平滑过渡可以通过在相邻交互区域之间进行线性插值来实现。
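相邻交互区域之间通过线性插值实现精度的平滑过渡,可以示意为(插值系数 t 的取法属本示例的假设):

```python
def blend_precision(value_near, value_far, t):
    """在近景与远景交互贴图的采样值之间做线性插值。

    t 从 0(完全使用近景高精度采样值)到 1(完全使用远景采样值)
    平滑过渡,避免交互区域相交的边缘区域出现精度突变。
    """
    t = min(max(t, 0.0), 1.0)  # 钳制到 [0, 1]
    return value_near * (1.0 - t) + value_far * t
```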
通过如上所述的过程,采用多个交互贴图来使得虚拟场景中图像存在多个精度,进而在提高近景交互区域的交互贴图精度的前提下成功降低交互贴图尺寸,使得交互贴图的精度得到充分利用,从而大幅降低性能开销,增加交互范围。
以通过如上所述的过程实现的水体交互为例,现有的水体交互中交互规模非常小,最多只支持几个玩家的交互,效率非常低下,其主要原因在于采用的交互贴图尺寸非常大,对于性能存在着致命影响。而通过如上所述的过程,将大幅提升水体交互的性能和质量,采用多个小尺寸交互贴图,性能大幅提升;交互贴图的尺寸虽然降低,但质量反而有所提升。这都得益于根据距离将交互贴图进行层级划分,三维渲染一般都采用透视投影,因此,近景保持高的精度,而远景的精度则可以大幅降低,由此也相应提高了交互范围,例如,两个层级所覆盖的交互范围将远远大于只有一个交互贴图时对应的交互范围。
另外,现有的虚拟场景中交互贴图的尺寸为1024*1024,在采用两个层级的交互贴图之后,其尺寸则变为两个512*512,通过此方式,得以显著提高近景的精度。而对于显存而言,现有的虚拟场景的交互中,所需要的两个交互缓冲区大小为1024*1024,共占显存12mb;采用两个层级的交互贴图之后,两个交互缓冲区的大小变为512*512,共占显存5.2mb,显存开销降低了一半以上。在显存开销降低的情况下,还极大地提升了性能。
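像素数目的变化可以直接验算:单张1024*1024贴图含1048576个像素,改为两张512*512后总像素数为524288,恰好减半(文中的12mb、5.2mb为实测显存数字,下面的片段仅验算需要计算的像素数目):

```python
def total_pixels(sizes):
    """计算一组方形交互贴图的总像素数。"""
    return sum(s * s for s in sizes)

single = total_pixels([1024])      # 单张大尺寸交互贴图
tiered = total_pixels([512, 512])  # 两个层级的小尺寸交互贴图
```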
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,因为依据本申请,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本申请所必须的。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到根据上述实施例的方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机、计算机、服务器或者网络设备等)执行本申请各个实施例的方法。
实施例2
根据本申请实施例,还提供了一种用于实施上述图片处理的方法的图片处理的装置,该图片处理的装置主要用于执行本申请实施例上述内容所提供的图片处理的方法,以下对本申请实施例所提供的图片处理的装置做具体介绍:
图7是根据本申请实施例的图片处理的装置的示意图,如图7所示,该图片处理的装置主要包括第一获取单元71、生成单元73和移动单元75。
所述第一获取单元71用于获取当前图像帧中第一对象的交互空间与第二对象所在的第一平面相交的交互区域,其中,第二对象中的目标对象位于交互区域内。
所述生成单元73用于生成与交互区域对应的水波动画的法线贴图,其中,法线贴图中显示有多条波纹。
具体地,可以根据该交互区域(例如,圆形切面)生成与该圆形切面对应的水波动画的法线贴图,并且该法线贴图中包含多条波纹(即,水波纹)。生成的水波动画的法线贴图可以应用到该圆形切面上,生成的法线贴图中的水波纹的条数是该圆形切面在当前图像帧中需要展示的条数。
所述移动单元75利用多条波纹中的第一目标波纹将目标对象移动到当前图像帧中与第一目标波纹在法线贴图中的波纹位置相对应的位置,其中,第一目标波纹与目标对象相对应。
在本申请实施例中,通过在第一对象和第二对象发生交互时,生成与交互区域相对应的水波动画的法线贴图,并根据该与目标对象对应的第一目标波纹将目标对象移动到当前图像帧中与第一目标波纹在所述法线贴图中的波纹位置相对应的位置,达到了通过法线贴图调整目标对象的位置的目的,从而实现了第二对象的位置可以随水波动画的法线贴图发生改变的技术效果,进而解决了现有技术中第一对象和第二对象发生交互时,第二对象的显示效果比较差的技术问题。
可选地,移动单元包括:调整子单元,用于根据水波动画的法线贴图将目标对象从第一形态调整到第二形态,其中,在第二形态中目标对象的顶点位于当前图像帧中与第一目标波纹在法线贴图中的波纹位置相对应的位置。
可选地,调整子单元包括:第一获取模块,用于获取法线贴图中多条波纹的第一分量和第二分量的取值范围,其中,第一分量为法线贴图x轴方向上的分量,第二分量为法线贴图y轴方向上的分量;第二获取模块,用于获取目标对象在法线贴图的z轴方向上的第三分量;第一确定模块,用于根据第一分量和第二分量确定目标对象的顶点坐标的在x轴和y轴所在平面上的第一偏移距离和第一偏移方向,并且根据第三分量确定目标对象在z轴上的第二偏移距离和第二偏移方向,第一偏移距离处于取值范围内;控制模块,用于控制目标对象的顶点坐标在第一偏移方向上偏移第一偏移距离,在第二偏移方向上偏移第二偏移距离。
可选地,第一获取模块包括:获取子模块,用于获取法线贴图中多条波纹中最外层的波纹在x轴方向上的最大值和最小值,以及最外层的波纹在y轴方向上的最大值和最小值;调整子模块,用于调整多条波纹中每条波纹在x轴和y轴上的坐标,以使x轴方向上的最大值和最小值为预设数值,并且y轴方向上的最大值和最小值处于预设数值;确定子模块,用于将预设数值所指示的范围作为取值范围。
可选地,生成单元包括:第二确定模块,用于基于与当前图像帧对应的第一图像帧的前一个图像帧的水波动画高度图确定第一图像帧的水波动画高度图的水面高度,其中,第一图像帧为水波动画中的一个图像帧,水面高度为第一图像帧的水波动画高度图中多条波纹的最高水面高度;第一生成模块,用于根据水面高度生成当前图像帧的水波动画高度图;第二生成模块,用于根据当前图像帧的水波动画高度图生成水波动画的法线贴图。
可选地,第一生成模块包括:扩散衰减子模块,用于以水面高度所在的第二目标波纹为中心,向外扩散第二目标波纹,得到扩散后的多条波纹,其中,扩散后的多条波纹的高度由中心向外逐渐降低。
如图7所示的图片处理的装置还可以包括交互区域划分单元,所述交互区域划分单元根据交互区域与视点之间的距离将交互区域划分为不同层级的交互区域,从而使得与交互区域对应的交互贴图具有不同的精度。例如,将交互区域划分为近景交互区域和远景交互区域,近景交互区域和远景交互区域所对应的交互贴图因为距离视点的远近不同而具有不同的精度。近景交互区域使用高精度的交互贴图,而远景交互区域使用低精度的交互贴图。
所述图片处理装置还包括第一交互缓冲区和第二交互缓冲区。根据本申请的一方面,将更新的高度数据载入第一交互缓冲区,不同层级的交互区域对应的高度数据存储于第一交互缓冲区的不同通道中。例如,近景交互区域对应的高度数据存储于第一交互缓冲区中的红色和绿色通道,远景交互区域对应的高度数据存储于第一交互缓冲区中的蓝色和Alpha通道。
在扩散衰减子模块进行高度衰减扩散运算时,可以对第一交互缓冲区中不同通道中存储的高度数据分别计算,例如,可以对红色和绿色通道存储的高度数据以及蓝色和Alpha通道中存储的高度数据分别进行计算。
在由高度数据生成法线信息之后,可以将法线信息载入第二交互缓冲区,其中,不同层级的交互区域对应的法线信息存储于第二交互缓冲区的不同通道中,例如,第二交互缓冲区中红色和绿色通道存储近景交互区域对应的法线信息,蓝色和Alpha通道存储远景交互区域对应的法线信息。
根据本申请的一方面,所述装置还可包括高度图渲染单元,所述高度图渲染单元根据高度数据输出与各层级交互区域匹配的高度图,并利用高度图对当前图像帧进行渲染。
根据本申请的一方面,所述装置还包括精度平滑单元,所述精度平滑单元在不同层级的交互区域中,根据相应的交互贴图的精度在相邻交互区域之间进行精度的平滑过渡。
这里,虚拟场景中图像是由多个交互贴图形成的,因此,图像中包括了多个交互区域,每一交互区域均对应于一交互贴图。在虚拟场景所呈现的图像中,由于交互贴图的精度并不相同,导致各交互区域之间的精度是不均匀的,交互区域相交的边缘区域存在着精度突变的缺陷,需要进行相邻交互区域之间的平滑过渡,进而以此来提高交互精度。在一个实施例中,相邻交互区域之间的平滑过渡可以通过在相邻交互区域之间进行线性插值来实现。
实施例3
根据本申请实施例,还提供了一种用于实施上述图片处理的方法的移动终端,如图8所示,该移动终端主要包括处理器401、显示器402、数据接口403、存储器404和网络接口405,其中:
显示器402主要用于显示图像,例如游戏界面,其中,该游戏界面包括游戏人物或者特技以及与游戏人物或者特技交互的草地。
数据接口403则主要通过数据传输的方式将用户输入的对游戏的控制指令传输给处理器401。
存储器404主要用于存储执行如实施例1所述方法的指令以及相关数据,例如,游戏中的具有水波动画效果的对象(例如草地),以及用户的游戏进度等信息。
网络接口405主要用于与处理器401进行网络通信,为图片的交互渲染提供数据支持。
处理器401主要用于执行如下操作:
获取当前图像帧中第一对象的交互空间与第二对象所在的第一平面相交的交互区域,其中,第二对象中的目标对象位于交互区域内;生成与交互区域对应的水波动画的法线贴图,其中,法线贴图中显示有多条波纹;利用多条波纹中的第一目标波纹将目标对象移动到当前图像帧中与第一目标波纹在法线贴图中的波纹位置相对应的位置,其中,第一目标波纹与目标对象相对应。
处理器401还用于根据水波动画的法线贴图将目标对象从第一形态调整到第二形态,其中,在第二形态中目标对象的顶点位于当前图像帧中与第一目标波纹在法线贴图中的波纹位置相对应的位置。
处理器401还用于获取法线贴图中多条波纹的第一分量和第二分量的取值范围,其中,第一分量为法线贴图x轴方向上的分量,第二分量为法线贴图y轴方向上的分量;获取目标对象在法线贴图的z轴方向上的第三分量;根据第一分量和第二分量确定目标对象的顶点坐标的在x轴和y轴所在平面上的第一偏移距离和第一偏移方向,并且根据第三分量确定目标对象在z轴上的第二偏移距离和第二偏移方向,第一偏移距离处于取值范围内;控制目标对象的顶点坐标在第一偏移方向上偏移第一偏移距离,在第二偏移方向上偏移第二偏移距离。
处理器401还用于获取法线贴图中多条波纹中最外层的波纹在x轴方向上的最大值和最小值,以及最外层的波纹在y轴方向上的最大值和最小值;调整多条波纹中每条波纹在x轴和y轴上的坐标,以使x轴方向上的最大值和最小值为预设数值,并且y轴方向上的最大值和最小值处于预设数值;将预设数值所指示的范围作为取值范围。
处理器401还用于基于与当前图像帧对应的第一图像帧的前一个图像帧的水波动画高度图确定第一图像帧的水波动画高度图的水面高度,其中,第一图像帧为水波动画中的一个图像帧,水面高度为第一图像帧的水波动画高度图中多条波纹的最高水面高度;根据水面高度生成当前图像帧的水波动画高度图;根据当前图像帧的水波动画高度图生成水波动画的法线贴图。
处理器401还用于以水面高度所在的第二目标波纹为中心,向外扩散第二目标波纹,得到扩散后的多条波纹,其中,扩散后的多条波纹的高度由中心向外逐渐降低。
处理器401还用于根据交互区域与视点之间的距离将交互区域划分为不同层级的交互区域,从而使得与交互区域对应的交互贴图具有不同的精度。
所述存储器404还包括第一交互缓冲区和第二交互缓冲区。处理器401还可以将更新的高度数据载入第一交互缓冲区,不同层级的交互区域对应的高度数据存储于第一交互缓冲区的不同通道中。在处理器401进行高度衰减扩散运算时,可以对第一交互缓冲区中不同通道中存储的高度数据分别计算。
在处理器401根据高度数据生成法线信息之后,处理器401还可以将法线信息载入第二交互缓冲区,其中,不同层级的交互区域对应的法线信息存储于第二交互缓冲区的不同通道中。
处理器401还可以根据高度数据输出与各层级交互区域匹配的高度图,并利用高度图对当前图像帧进行渲染。
处理器401还可以在不同层级的交互区域中,根据相应的交互贴图的精度在相邻交互区域之间进行精度的平滑过渡。
实施例4
本申请的实施例还提供了一种存储介质。可选地,在本实施例中,上述存储介质可以用于存储本申请实施例1的图片处理的方法的程序代码。
可选地,在本实施例中,上述存储介质可以位于移动通信网络、广域网、城域网或局域网的网络中的多个网络设备中的至少一个网络设备。
可选地,在本实施例中,存储介质被设置为存储用于执行以下步骤的程序代码:
S1,获取当前图像帧中第一对象的交互空间与第二对象所在的第一平面相交的交互区域,其中,第二对象中的目标对象位于交互区域内;
S2,生成与交互区域对应的水波动画的法线贴图,其中,法线贴图中显示有多条波纹;
S3,利用多条波纹中的第一目标波纹将目标对象移动到当前图像帧中与第一目标波纹在法线贴图中的波纹位置相对应的位置,其中,第一目标波纹与目标对象相对应。
可选地,在本实施例中,上述存储介质可以包括但不限于:U盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介质。
可选地,本实施例中的具体示例可以参考上述实施例1和实施例2中所描述的示例,本实施例在此不再赘述。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
上述实施例中的集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在上述计算机可读取的存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在存储介质中,包括若干指令用以使得一台或多台计算机设备(可为个人计算机、 服务器或者网络设备等)执行本申请各个实施例方法的全部或部分步骤。
在本申请的上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置可通过其它的方式实现。其中,以上所描述的装置实施例仅仅是示意性的,例如单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,单元或模块的间接耦合或通信连接,可以是电性或其它的形式。
作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
以上仅是本申请的优选实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本申请原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本申请的保护范围。

Claims (25)

  1. 一种图片处理的方法,其特征在于,包括:
    获取当前图像帧中第一对象的交互空间与第二对象所在的第一平面相交的交互区域,其中,所述第二对象中的目标对象位于所述交互区域内;
    生成与所述交互区域对应的水波动画的法线贴图,其中,所述法线贴图中显示有多条波纹;
    利用所述多条波纹中的第一目标波纹将目标对象移动到所述当前图像帧中与所述第一目标波纹在所述法线贴图中的波纹位置相对应的位置,其中,所述第一目标波纹与所述目标对象相对应。
  2. 根据权利要求1所述的方法,其特征在于,利用所述多条波纹中的第一目标波纹将目标对象移动到所述当前图像帧中与所述第一目标波纹在所述法线贴图中的波纹位置相对应的位置包括:
    根据所述水波动画的法线贴图将所述目标对象从第一形态调整到第二形态,其中,在所述第二形态中所述目标对象的顶点位于所述当前图像帧中与所述第一目标波纹在所述法线贴图中的波纹位置相对应的位置。
  3. 根据权利要求2所述的方法,其特征在于,根据所述水波动画的法线贴图将所述目标对象从第一形态调整到第二形态包括:
    获取所述法线贴图中多条波纹的第一分量和第二分量的取值范围,其中,所述第一分量为所述法线贴图x轴方向上的分量,所述第二分量为所述法线贴图y轴方向上的分量;
    获取所述目标对象在所述法线贴图的z轴方向上的第三分量;
    根据所述第一分量和所述第二分量确定所述目标对象的顶点坐标的在所述x轴和所述y轴所在平面上的第一偏移距离和第一偏移方向,并且根据所述第三分量确定所述目标对象在所述z轴上的第二偏移距离和第二偏移方向,所述第一偏移距离处于所述取值范围内;
    控制所述目标对象的顶点坐标在所述第一偏移方向上偏移所述第一偏移距离,在所述第二偏移方向上偏移所述第二偏移距离。
  4. 根据权利要求3所述的方法,其特征在于,获取所述法线贴图中多条波纹的第一分量和所述第二分量的取值范围包括:
    获取所述法线贴图中多条波纹中最外层的波纹在所述x轴方向上的最大值和最小值,以及所述最外层的波纹在所述y轴方向上的最大值和最小值;
    调整所述多条波纹中每条波纹在所述x轴和所述y轴上的坐标,以使所述x轴方向上的最大值和最小值为预设数值,并且所述y轴方向上的最大值和最小值处于所述预设数值;
    将所述预设数值所指示的范围作为所述取值范围。
  5. 根据权利要求1所述的方法,其特征在于,生成与所述交互区域对应的水波动画的法线贴图包括:
    基于与所述当前图像帧对应的第一图像帧的前一个图像帧的水波动画高度图确定所述第一图像帧的水波动画高度图的水面高度,其中,所述第一图像帧为所述水波动画中的一个图像帧,所述水面高度为所述第一图像帧的水波动画高度图中多条波纹的最高水面高度;
    根据所述水面高度生成所述当前图像帧的水波动画高度图;
    根据所述当前图像帧的水波动画高度图生成所述水波动画的法线贴图。
  6. 根据权利要求5所述的方法,其特征在于,根据所述水面高度生成所述当前图像帧的水波动画高度图包括:
    以所述水面高度所在的第二目标波纹为中心,向外扩散所述第二目标波纹,得到扩散后的所述多条波纹,其中,扩散后的所述多条波纹的高度由所述中心向外逐渐降低。
  7. 根据权利要求6所述的方法,其特征在于,所述方法还包括:
    根据交互区域与视点之间的距离将交互区域划分为不同层级的交互区域,从而使得与不同层级的交互区域对应的交互贴图具有不同的精度。
  8. 根据权利要求7所述的方法,其特征在于,所述方法还包括:将水波动画高度图中承载的高度数据载入第一缓冲区,不同层级的交互区域对应的高度数据存储于第一缓冲区的不同通道中。
  9. 根据权利要求8所述的方法,其特征在于,在向外扩散所述第二目标波纹得到扩散后的所述多条波纹过程中,对第一缓冲区中不同通道中存储的高度数据分别进行计算。
  10. 根据权利要求7所述的方法,其特征在于,将用于形成法线贴图的法线信息载入第二缓冲区,其中,不同层级的交互区域对应的法线信息存储于第二缓冲区的不同通道中。
  11. 根据权利要求7所述的方法,其特征在于,所述方法还包括:
    根据高度数据输出与各层级交互区域匹配的高度图,并利用高度图对当前图像帧进行渲染。
  12. 根据权利要求7所述的方法,在不同层级的交互区域中,根据相应的交互贴图的精度在相邻交互区域之间进行精度的平滑过渡。
  13. 一种图片处理的装置,其特征在于,包括:
    第一获取单元,用于获取当前图像帧中第一对象的交互空间与第二对象所在的第一平面相交的交互区域,其中,所述第二对象中的目标对象位于所述交互区域内;
    生成单元,用于生成与所述交互区域对应的水波动画的法线贴图,其中,所述法线贴图中显示有多条波纹;
    移动单元,用于利用所述多条波纹中的第一目标波纹将目标对象移动到所述当前图像帧中与所述第一目标波纹在所述法线贴图中的波纹位置相对应的位置,其中,所述第一目标波纹与所述目标对象相对应。
  14. 根据权利要求13所述的装置,其特征在于,所述移动单元包括:
    调整子单元,用于根据所述水波动画的法线贴图将所述目标对象从第一形态调整到第二形态,其中,在所述第二形态中所述目标对象的顶点位于所述当前图像帧中与所述第一目标波纹在所述法线贴图中的波纹位置相对应的位置。
  15. 根据权利要求14所述的装置,其特征在于,所述调整子单元包括:
    第一获取模块,用于获取所述法线贴图中多条波纹的第一分量和第二分量的取值范围,其中,所述第一分量为所述法线贴图x轴方向上的分量,所述第二分量为所述法线贴图y轴方向上的分量;
    第二获取模块,用于获取所述目标对象在所述法线贴图的z轴方向上的第三分量;
    第一确定模块,用于根据所述第一分量和所述第二分量确定所述目标对象的顶点坐标的在所述x轴和所述y轴所在平面上的第一偏移距离和第一偏移方向,并且根据所述第三分量确定所述目标对象在所述z轴上的第二偏移距离和第二偏移方向,所述第一偏移距离处于所述取值范围内;
    控制模块,用于控制所述目标对象的顶点坐标在所述第一偏移方向上偏移所述第一偏移距离,在所述第二偏移方向上偏移所述第二偏移距离。
  16. 根据权利要求15所述的装置,其特征在于,所述第一获取模块包括:
    获取子模块,用于获取所述法线贴图中多条波纹中最外层的波纹在所述x轴方向上的最大值和最小值,以及所述最外层的波纹在所述y轴方向上的最大值和最小值;
    调整子模块,用于调整所述多条波纹中每条波纹在所述x轴和所述y轴上的坐标,以使所述x轴方向上的最大值和最小值为预设数值,并且所述y轴方向上的最大值和最小值处于所述预设数值;
    确定子模块,用于将所述预设数值所指示的范围作为所述取值范围。
  17. 根据权利要求13所述的装置,其特征在于,所述生成单元包括:
    第二确定模块,用于基于与所述当前图像帧对应的第一图像帧的前一个图像帧的水波动画高度图确定所述第一图像帧的水波动画高度图的水面高度,其中,所述第一图像帧为所述水波动画中的一个图像帧,所述水面高度为所述第一图像帧的水波动画高度图中多条波纹的最高水面高度;
    第一生成模块,用于根据所述水面高度生成所述当前图像帧的水波动画高度图;
    第二生成模块,用于根据所述当前图像帧的水波动画高度图生成所述水波动画的法线贴图。
  18. 根据权利要求17所述的装置,其特征在于,所述第一生成模块包括:
    扩散衰减子模块,用于以所述水面高度所在的第二目标波纹为中心,向外扩散所述第二目标波纹,得到扩散后的所述多条波纹,其中,扩散后的所述多条波纹的高度由所述中心向外逐渐降低。
  19. 根据权利要求18所述的装置,其特征在于,所述装置还包括:
    交互区域划分单元,用于根据交互区域与视点之间的距离将交互区域划分为不同层级的交互区域,从而使得与不同层级的交互区域对应的交互贴图具有不同的精度。
  20. 根据权利要求19所述的装置,其特征在于,所述装置还包括:
    第一缓冲区,用于存储水波动画高度图中承载的高度数据,其中,不同层级的交互区域对应的高度数据存储于第一缓冲区的不同通道中。
  21. 根据权利要求20所述的装置,其特征在于,在扩散衰减子模块向外扩散所述第二目标波纹得到扩散后的所述多条波纹过程中,对第一缓冲区中不同通道中存储的高度数据分别进行计算。
  22. 根据权利要求19所述的装置,其特征在于,所述装置还包括:
    第二缓冲区,用于存储用于形成法线贴图的法线信息,其中,不同层级的交互区域对应的法线信息存储于第二缓冲区的不同通道中。
  23. 根据权利要求19所述的装置,其特征在于,所述装置还包括:
    高度图渲染单元,用于根据高度数据输出与各层级交互区域匹配的高度图,并利用高度图对当前图像帧进行渲染。
  24. 根据权利要求19所述的装置,其特征在于,所述装置还包括:
    精度平滑单元,用于在不同层级的交互区域中,根据相应的交互贴图的精度在相邻交互区域之间进行精度的平滑过渡。
  25. 一种计算机可读存储介质,在所述计算机可读存储介质上存储有用于执行如权利要求1-12中的任一项所述的方法的程序指令。
PCT/CN2017/079587 2016-04-06 2017-04-06 图片处理的方法和装置 WO2017174006A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020187018499A KR102108244B1 (ko) 2016-04-06 2017-04-06 이미지 처리 방법 및 장치
US16/152,618 US10839587B2 (en) 2016-04-06 2018-10-05 Image processing methods and devices for moving a target object by using a target ripple

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201610210562.8A CN105912234B (zh) 2016-04-06 2016-04-06 虚拟场景的交互方法和装置
CN201610209641.7 2016-04-06
CN201610210562.8 2016-04-06
CN201610209641.7A CN105913471B (zh) 2016-04-06 2016-04-06 图片处理的方法和装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/152,618 Continuation US10839587B2 (en) 2016-04-06 2018-10-05 Image processing methods and devices for moving a target object by using a target ripple

Publications (1)

Publication Number Publication Date
WO2017174006A1 true WO2017174006A1 (zh) 2017-10-12

Family

ID=60000257

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/079587 WO2017174006A1 (zh) 2016-04-06 2017-04-06 图片处理的方法和装置

Country Status (3)

Country Link
US (1) US10839587B2 (zh)
KR (1) KR102108244B1 (zh)
WO (1) WO2017174006A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109598777A (zh) * 2018-12-07 2019-04-09 腾讯科技(深圳)有限公司 图像渲染方法、装置、设备及存储介质
WO2022111003A1 (zh) * 2020-11-30 2022-06-02 成都完美时空网络技术有限公司 游戏图像处理方法、装置、程序和可读介质

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112221150B (zh) * 2020-10-19 2023-01-10 珠海金山数字网络科技有限公司 一种虚拟场景中的涟漪仿真方法及装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1949273A (zh) * 2006-11-27 2007-04-18 北京金山软件有限公司 一种在3d游戏中绘制草坪的方法和系统
CN102663245A (zh) * 2012-03-30 2012-09-12 福建天趣网络科技有限公司 3d游戏世界编辑器
CN102930590A (zh) * 2012-10-17 2013-02-13 沈阳创达技术交易市场有限公司 可交互的地表修饰物渲染方法
US20130116046A1 (en) * 2011-11-08 2013-05-09 Zynga Inc. Method and system for rendering virtual in-game environments
CN103679820A (zh) * 2013-12-16 2014-03-26 北京像素软件科技股份有限公司 一种3d虚拟场景中模拟草体扰动效果的方法
CN105913471A (zh) * 2016-04-06 2016-08-31 腾讯科技(深圳)有限公司 图片处理的方法和装置

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6061067A (en) * 1996-08-02 2000-05-09 Autodesk, Inc. Applying modifiers to objects based on the types of the objects
US5877777A (en) * 1997-04-07 1999-03-02 Colwell; Tyler G. Fluid dynamics animation system and method
JP4183104B2 (ja) * 1998-12-04 2008-11-19 株式会社バンダイナムコゲームス ゲーム装置及び情報記憶媒体
KR20060064142A (ko) * 2004-12-08 2006-06-13 김종현 범프 매핑과 실시간 비디오 텍스쳐를 이용한 인터랙티브 물 효과
KR20060133671A (ko) * 2005-06-21 2006-12-27 이인권 컴퓨터그래픽 캐릭터와의 상호 작용을 위한 실사 배경이미지 내의 오브젝트의 변형 기법
US8888596B2 (en) * 2009-11-16 2014-11-18 Bally Gaming, Inc. Superstitious gesture influenced gameplay
KR101894567B1 (ko) * 2012-02-24 2018-09-03 삼성전자 주식회사 락스크린 운용 방법 및 이를 지원하는 단말기

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1949273A (zh) * 2006-11-27 2007-04-18 北京金山软件有限公司 一种在3d游戏中绘制草坪的方法和系统
US20130116046A1 (en) * 2011-11-08 2013-05-09 Zynga Inc. Method and system for rendering virtual in-game environments
CN102663245A (zh) * 2012-03-30 2012-09-12 福建天趣网络科技有限公司 3d游戏世界编辑器
CN102930590A (zh) * 2012-10-17 2013-02-13 沈阳创达技术交易市场有限公司 可交互的地表修饰物渲染方法
CN103679820A (zh) * 2013-12-16 2014-03-26 北京像素软件科技股份有限公司 一种3d虚拟场景中模拟草体扰动效果的方法
CN105913471A (zh) * 2016-04-06 2016-08-31 腾讯科技(深圳)有限公司 图片处理的方法和装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109598777A (zh) * 2018-12-07 2019-04-09 腾讯科技(深圳)有限公司 图像渲染方法、装置、设备及存储介质
WO2022111003A1 (zh) * 2020-11-30 2022-06-02 成都完美时空网络技术有限公司 游戏图像处理方法、装置、程序和可读介质

Also Published As

Publication number Publication date
US10839587B2 (en) 2020-11-17
KR102108244B1 (ko) 2020-05-07
US20190035134A1 (en) 2019-01-31
KR20180088876A (ko) 2018-08-07

Similar Documents

Publication Publication Date Title
US9342918B2 (en) System and method for using indirect texturing to efficiently simulate and image surface coatings and other effects
US9671942B2 (en) Dynamic user interface for inheritance based avatar generation
US20120212491A1 (en) Indirect lighting process for virtual environments
CN105913471B (zh) 图片处理的方法和装置
KR101158255B1 (ko) 화상처리장치, 화상처리방법 및 정보기록매체
US7019742B2 (en) Dynamic 2D imposters of 3D graphic objects
US20100020080A1 (en) Image generation system, image generation method, and information storage medium
US20090244064A1 (en) Program, information storage medium, and image generation system
US20100156918A1 (en) Program, information storage medium, image generation system, and image generation method for generating an image for overdriving the display device
CN104200506A (zh) 三维gis海量矢量数据渲染方法及装置
CN112200902A (zh) 图像渲染方法、装置、电子设备及存储介质
CN112184873B (zh) 分形图形创建方法、装置、电子设备和存储介质
WO2017174006A1 (zh) 图片处理的方法和装置
CN115082607A (zh) 虚拟角色头发渲染方法、装置、电子设备和存储介质
JP6852224B2 (ja) 全視角方向の球体ライトフィールドレンダリング方法
US8400445B2 (en) Image processing program and image processing apparatus
Thorn Learn unity for 2d game development
CN115830210A (zh) 虚拟对象的渲染方法、装置、电子设备及存储介质
JP2005275797A (ja) プログラム、情報記憶媒体、及び画像生成システム
JP2004187731A (ja) ゲーム装置、ゲーム制御方法、及びプログラム
JP2010134671A (ja) 画像生成システム、プログラム及び情報記憶媒体
JP2007164651A (ja) プログラム、情報記憶媒体及び画像生成システム
Liu Complex Scene Loading Optimization Based On Virtual Reality Algorithm
CN116310013A (zh) 动画渲染方法、装置、计算机设备及计算机可读存储介质
CN117274451A (zh) 二维粒子动画的渲染方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 20187018499

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17778670

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17778670

Country of ref document: EP

Kind code of ref document: A1