CN112933597B - Image processing method, image processing device, computer equipment and storage medium - Google Patents


Info

Publication number
CN112933597B
Authority
CN
China
Prior art keywords
target
vertex
skeleton
image frame
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110284798.7A
Other languages
Chinese (zh)
Other versions
CN112933597A (en)
Inventor
肖渊源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202110284798.7A
Publication of CN112933597A
Application granted
Publication of CN112933597B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 - Controlling the output signals based on the game progress
    • A63F13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 - Controlling game characters or game objects based on the game progress
    • A63F13/58 - Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 - General purpose image data processing
    • G06T1/20 - Processor architectures; Processor configuration, e.g. pipelining
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 - Animation
    • G06T13/20 - 3D [Three Dimensional] animation
    • G06T13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/02 - Non-photorealistic rendering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/04 - Texture mapping
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 - Methods for processing data by generating or executing the game program
    • A63F2300/66 - Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6607 - Methods for processing data by generating or executing the game program for rendering three dimensional images for animating game characters, e.g. skeleton kinematics

Abstract

The application discloses an image processing method, an image processing device, computer equipment and a storage medium, wherein the method comprises the following steps: calling a graphics processor to obtain a skin resource of the skeletal animation of a target object from a video memory, wherein the skin resource comprises vertices of a plurality of meshes and vertex information of each vertex; one mesh corresponds to one pixel block, and the vertex information of any vertex comprises initial coordinates, associated bone information and rendering information; acquiring, from the video memory according to the associated bone information of each vertex, target space information of the associated bone of each vertex in a target image frame; performing coordinate transformation on the initial coordinates of each vertex by using the target space information of the associated bone of each vertex to obtain target coordinates of each vertex; and performing image rendering on the target pixel block corresponding to the mesh to which each vertex belongs based on the target coordinates and the rendering information of each vertex to obtain the target image frame. The method and the device can render skeletal animation better and effectively improve the rendering efficiency of skeletal animation.

Description

Image processing method, image processing device, computer equipment and storage medium
Technical Field
The present application relates to the field of internet technologies, and in particular, to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a computer device, and a computer storage medium.
Background
With the development of science and technology, skeletal animation technology has emerged. Skeletal animation is a type of model animation: the model has a skeletal structure of interconnected "bones", and animation is generated for the model by changing the orientation and position of those bones. How to render skeletal animation better has therefore become a popular research topic.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, computer equipment and a storage medium, which can better render skeleton animation and effectively improve rendering efficiency of the skeleton animation.
In one aspect, an embodiment of the present application provides an image processing method, where the method includes:
calling a graphics processor to obtain skin resources of skeleton animation of a target object from a video memory, wherein the target object comprises a plurality of skeletons, and the skin resources comprise vertexes of a plurality of grids and vertex information of each vertex; the vertex information of any vertex comprises initial coordinates, associated skeleton information and rendering information; the coordinates of any vertex are used for determining a pixel block corresponding to the mesh to which the vertex belongs;
acquiring target space information of the associated skeleton of each vertex in a target image frame from the video memory according to the associated skeleton information of each vertex, wherein the target image frame is any image frame in the skeleton animation;
performing coordinate transformation on the initial coordinate of each vertex by adopting the target space information of the associated skeleton of each vertex to obtain a target coordinate of each vertex;
and performing image rendering on a target pixel block corresponding to the mesh to which each vertex belongs based on the target coordinate of each vertex and the rendering information of each vertex to obtain the target image frame.
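The four steps of the method can be sketched as follows. This is a minimal CPU-side illustration in Python; all names are invented, the bone transform is represented as a 3x4 affine matrix, and the patent itself executes this logic on the graphics processor rather than in host code:

```python
def apply_transform(m, p):
    """Apply a 3x4 affine matrix (list of 3 rows) to a 3D point."""
    x, y, z = p
    return tuple(m[i][0] * x + m[i][1] * y + m[i][2] * z + m[i][3] for i in range(3))

def transform_vertices(skin_resource, frame_bone_transforms):
    """For each vertex: look up the target space information (a transform) of each
    associated bone for this frame, transform the initial coordinates by it,
    and blend the results by bone weight."""
    targets = []
    for v in skin_resource:
        blended = [0.0, 0.0, 0.0]
        for bone, weight in zip(v["bone_indices"], v["bone_weights"]):
            moved = apply_transform(frame_bone_transforms[bone], v["initial"])
            for i in range(3):
                blended[i] += weight * moved[i]
        targets.append(tuple(blended))
    return targets

# One vertex bound to two bones with equal weight; bone 0 translates +2 in x,
# bone 1 is the identity, so the blended vertex moves +1 in x.
IDENTITY = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
SHIFT_X2 = [[1, 0, 0, 2], [0, 1, 0, 0], [0, 0, 1, 0]]
skin = [{"initial": (1.0, 0.0, 0.0), "bone_indices": [0, 1], "bone_weights": [0.5, 0.5]}]
print(transform_vertices(skin, [SHIFT_X2, IDENTITY]))  # → [(2.0, 0.0, 0.0)]
```

The final rendering of each target pixel block is omitted here; on the GPU the blend above corresponds to the per-vertex work of a vertex shader.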
In another aspect, an embodiment of the present application provides an image processing apparatus, including:
the system comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is used for calling a graphics processor to acquire skin resources of skeleton animation of a target object from a video memory, the target object comprises a plurality of skeletons, and the skin resources comprise vertexes of a plurality of grids and vertex information of each vertex; the vertex information of any vertex comprises initial coordinates, associated skeleton information and rendering information; the coordinates of any vertex are used for determining a pixel block corresponding to the mesh to which the vertex belongs;
the obtaining unit is further configured to obtain, from the video memory, target space information of the relevant skeleton of each vertex in a target image frame according to the relevant skeleton information of each vertex, where the target image frame is any image frame in the skeleton animation;
the processing unit is used for performing coordinate transformation on the initial coordinate of each vertex by adopting the target space information of the associated skeleton of each vertex to obtain a target coordinate of each vertex;
and the rendering unit is used for performing image rendering on the target pixel block corresponding to the mesh to which each vertex belongs based on the target coordinate of each vertex and the rendering information of each vertex to obtain the target image frame.
In another aspect, an embodiment of the present application provides a computer device, where the computer device includes an input interface and an output interface, and the computer device further includes:
a processor adapted to implement one or more instructions; and
a computer storage medium storing one or more instructions adapted to be loaded by the processor and to perform the steps of:
calling a graphics processor to obtain skin resources of skeleton animation of a target object from a video memory, wherein the target object comprises a plurality of skeletons, and the skin resources comprise vertexes of a plurality of grids and vertex information of each vertex; the vertex information of any vertex comprises initial coordinates, associated skeleton information and rendering information; the coordinates of any vertex are used for determining a pixel block corresponding to the mesh to which the vertex belongs;
acquiring target space information of the associated skeleton of each vertex in a target image frame from the video memory according to the associated skeleton information of each vertex, wherein the target image frame is any image frame in the skeleton animation;
performing coordinate transformation on the initial coordinate of each vertex by adopting the target space information of the associated skeleton of each vertex to obtain a target coordinate of each vertex;
and performing image rendering on a target pixel block corresponding to the mesh to which each vertex belongs based on the target coordinate of each vertex and the rendering information of each vertex to obtain the target image frame.
In yet another aspect, an embodiment of the present application provides a computer storage medium storing one or more instructions, where the one or more instructions are adapted to be loaded by a processor and perform the following steps:
calling a graphics processor to obtain skin resources of skeleton animation of a target object from a video memory, wherein the target object comprises a plurality of skeletons, and the skin resources comprise vertexes of a plurality of grids and vertex information of each vertex; the vertex information of any vertex comprises initial coordinates, associated skeleton information and rendering information; the coordinates of any vertex are used for determining a pixel block corresponding to the mesh to which the vertex belongs;
acquiring target space information of the associated skeleton of each vertex in a target image frame from the video memory according to the associated skeleton information of each vertex, wherein the target image frame is any image frame in the skeleton animation;
performing coordinate transformation on the initial coordinate of each vertex by adopting the target space information of the associated skeleton of each vertex to obtain a target coordinate of each vertex;
and performing image rendering on a target pixel block corresponding to the mesh to which each vertex belongs based on the target coordinate of each vertex and the rendering information of each vertex to obtain the target image frame.
After a plurality of bones are set for a target object in advance, the skin resource of the skeletal animation of the target object and the space information of each bone in each image frame of the animation can be configured offline. When the skeletal animation needs to be rendered, the graphics processor can therefore be called to obtain the skin resource directly from the video memory, and to obtain, according to the skin resource, the target space information of the associated bone of each vertex in the target image frame directly from the video memory; this effectively reduces the amount of calculation in the image rendering process, saving processing resources and speeding up rendering. The graphics processor is then called to perform coordinate transformation on the initial coordinates of each vertex using the target space information of the associated bone of that vertex, obtaining the target coordinates of each vertex; and image rendering is performed on the target pixel block corresponding to the mesh to which each vertex belongs based on the target coordinates and the rendering information of each vertex, obtaining the target image frame. Because the whole image rendering process is implemented in the graphics processor, no central processing unit is needed for the animation calculation, which effectively saves central processing unit resources; and by means of the high parallel processing capability of the graphics processor, rendering efficiency can be effectively improved and the occupied bandwidth reduced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the description below are some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
FIG. 1a is a schematic diagram of a skeleton of a target object provided in an embodiment of the present application;
FIG. 1b is a schematic diagram illustrating an embodiment of the present application for binding a vertex to a bone;
FIG. 2a is a schematic view of a setup interface provided by an embodiment of the present application;
FIG. 2b is a schematic diagram of an image adding interface provided in an embodiment of the present application;
FIG. 2c is a schematic diagram of a resource addition interface provided by an embodiment of the present application;
FIG. 2d is a schematic diagram of a resource generation triggering component provided in an embodiment of the present application;
FIG. 2e is a schematic diagram of a progress-prompting animation according to an embodiment of the present application;
FIG. 3a is a schematic block diagram of an image processing scheme provided by an embodiment of the present application;
FIG. 3b is a schematic diagram of any image frame of a bone animation provided by an embodiment of the present application;
FIG. 3c is a schematic diagram of a plurality of skeletal animations displayed on a screen according to an embodiment of the present disclosure;
FIG. 3d is a performance test chart involved in rendering a skeletal animation by a computer device prior to using image processing according to an embodiment of the present application;
FIG. 3e is a performance test chart involved in rendering a skeletal animation by a computer device after using image processing according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a skin resource provided in an embodiment of the present application;
FIG. 6 is a flowchart illustrating an image processing method according to another embodiment of the present application;
fig. 7a is a schematic structural diagram of event information of a resource mount event according to an embodiment of the present application;
FIG. 7b is a schematic diagram of a target assistant image moving along with a target bone according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
In the embodiments of the application, skeletal animation may also be called skeletal skinning animation; it can be understood as an animation in which picture changes are driven by skeletal motion. For a target object (any object such as a person or an animal), a corresponding skeletal animation may be generated based on the skeleton of the target object, the model of the target object, and a series of key image frames. Wherein: (1) The skeleton of the target object is constructed based on the skeletal structure characteristics of the target object; the skeleton can comprise a plurality of bones, and any two bones can be connected through a joint point. For example, taking the object shown on the left side of fig. 1a as an example, the constructed skeleton can be seen in the right-side diagram of fig. 1a. Every bone in the skeleton can move the same distance in a certain direction; such movement can be referred to as translational motion of the skeleton, and the distance moved can be referred to as the translation parameter of the skeleton. In addition, any bone in the skeleton can make a circular motion around a joint point to which it is directly or indirectly connected; such motion can be referred to as rotational motion of that bone, and the angle produced by the circular motion can be referred to as its rotation parameter. (2) The model of the target object may include a plurality of meshes drawn based on shape information of the target object (such as clothes, height, skin color, etc.), and the shape of a mesh may be a triangle, a quadrangle, etc. The model of the target object needs to be bound to the skeleton of the target object through skinning; skinning is a production technique in three-dimensional animation, specifically the technique of binding each vertex in the model to the skeleton.
For example, see FIG. 1b for an illustration: for vertex a in the model, it can be bound to bone 1, bone 2, and bone 3; for a vertex b of a certain mesh in the model, the vertex b may be bound to the bone 4, and so on; it should be understood that fig. 1b merely exemplarily represents a part of the meshes in the model, and does not limit the number of meshes in the model and the shape of each mesh. (3) One key image frame corresponds to one posture state of the skeleton, and the posture state corresponding to each image frame between any two key image frames can be obtained by interpolating the posture states corresponding to any two key image frames.
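The binding relationship of fig. 1b can be represented as a simple table. The weight values below are hypothetical (the figure only shows which bones each vertex is bound to, not how strongly); a binding is well formed when it carries one weight per bone and the weights sum to 1:

```python
# Hypothetical binding table mirroring fig. 1b: vertex a is bound to bones 1-3,
# vertex b to bone 4. The weight values are invented for illustration.
bindings = {
    "a": {"bones": [1, 2, 3], "weights": [0.5, 0.3, 0.2]},
    "b": {"bones": [4], "weights": [1.0]},
}

def is_valid_binding(entry, tol=1e-9):
    """One weight per bound bone, and the weights sum to 1."""
    return (len(entry["bones"]) == len(entry["weights"])
            and abs(sum(entry["weights"]) - 1.0) <= tol)

assert all(is_valid_binding(e) for e in bindings.values())
```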
In order to render skeletal animation better and improve its rendering efficiency, an embodiment of the present application provides an image processing scheme based on GPU (Graphics Processing Unit) acceleration. The GPU, which may also be referred to as a display core, visual processor, or display chip, is a microprocessor dedicated to image- and graphics-related computation on computer devices (such as servers, personal computers, workstations, game consoles, and some mobile devices such as tablet computers and smartphones). GPU acceleration refers to speeding up scientific, analytical, engineering, and consumer applications by using a GPU together with a central processing unit (CPU); the CPU referred to here is the operation and control core of the computer device, and the final execution unit of information processing and program operation. In a specific implementation, the image processing scheme may include the following two parts:
generating resource data for GPU acceleration:
specifically, an animation editor can be provided for an art producer, so that the art producer can import an animation resource file of skeletal animation of a target object in the animation editor; the animation resource file can comprise: skeleton information of the target object, animation information of the skeletal animation, and the like. In addition, the art producer can select the static grid option 21 from the setting interface shown in fig. 2a and perform a trigger operation on the confirmation importing component 22 in the setting interface to import the static grid into the animation editor. The artwork creator may then edit the skeletal animation by using an animation blueprint (a visual script that may be used to create and control complex animation behavior) of the animation editor, such as adding key image frames to the skeletal animation, one or more auxiliary resources involved in the skeletal animation, and so forth.
The key image frames are added as follows: the animation editor may provide the art producer with an image adding interface as shown in fig. 2b, which may include an image frame position setting area 23, an image adding area 24, and a confirm-add component 25. The art producer may drag the anchor point 232 along the slide rail in the image frame position setting area 23 to the position at which the key image frame is to be added; for example, if the art producer wants to add a new key image frame at 5.87 seconds of the skeletal animation, the anchor point 232 may be slid to the 5.87-second position of the rail. The art producer can then add a key image frame in the image adding area 24 and perform a trigger operation on the confirm-add component 25 to complete the addition of the key image frame.
The auxiliary resources are added as follows: the animation editor may provide the art producer with a resource adding interface as shown in fig. 2c, which may include a resource adding area 26 and a resource binding area 27. The art producer can add any auxiliary resource, such as an auxiliary image or audio data, in the resource adding area 26; taking the case where the added auxiliary resource is an auxiliary image 28 as an example, when the animation editor detects a selection operation of the art producer on the auxiliary image 28, the auxiliary image 28 may be displayed in the resource adding interface. The art producer can also add an event trigger for the resource image in the resource binding area 27, where the event trigger is used for detecting a resource mount event that can trigger the output of the resource image; a mount point may also be bound for the auxiliary image in the resource binding area 27 and set to mount to one or more bones. The art producer may then input a resource adding operation to complete the addition of the auxiliary resource.
After the art producer performs the above operations, a trigger operation may be performed on the resource generation triggering component 29 in fig. 2d to trigger the animation editor to generate resource data for GPU acceleration according to the series of operations performed by the art producer. Specifically, the resource data may include at least one of: a skinning resource of the skeletal animation that may be used for GPU acceleration; a map texture that contains animation information (e.g., the space information of each bone in each image frame); an event dataset that contains event information for one or more resource mount events (e.g., resource mount conditions, the auxiliary resource, etc.); and a mount point dataset that contains mount point information for one or more mount points. The skinning resource may include the vertices of the plurality of meshes and the vertex information of each vertex (e.g., initial coordinates, associated bone information, rendering information, etc.), each vertex in the skinning resource having been bound to one or more bones of the target object.
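The map texture that carries the animation information can be thought of as a 2D float array: one row per image frame, with each bone's per-frame transform flattened into consecutive texels. A minimal sketch of that packing follows; the layout, the 3x4 transform size, and all names are assumptions, since the patent does not fix a texel format:

```python
FLOATS_PER_BONE = 12  # one 3x4 affine transform per bone (an assumed layout)

def bake_animation_texture(frames):
    """frames[f][b] is the 3x4 transform of bone b in image frame f.
    Returns one flat float row per frame, in the style of a GPU texture."""
    texture = []
    for bone_mats in frames:
        row = []
        for m in bone_mats:
            for r in m:
                row.extend(float(x) for x in r)
        texture.append(row)
    return texture

def fetch_bone_transform(texture, frame, bone):
    """The lookup a shader would perform: read one bone's transform out of a row."""
    flat = texture[frame][bone * FLOATS_PER_BONE:(bone + 1) * FLOATS_PER_BONE]
    return [flat[0:4], flat[4:8], flat[8:12]]
```

Round-tripping a transform through the texture returns it unchanged, which is all the renderer needs: a random-access, per-frame, per-bone lookup that lives entirely in video memory.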
Optionally, the animation editor may further output a progress-prompting animation during the process of generating the resource data for GPU acceleration, as shown in fig. 2 e. After the animation editor generates the resource data, the animation editor can store the resource data in a system memory of the computer equipment where the animation editor is located; and the resource data can be distributed to other computer equipment through transmission media such as a server, a network, a near field communication assembly and the like, so that the other computer equipment can render and display the bone animation of the target object based on the resource data.
(II) rendering and displaying the bone animation based on the resource data:
after any computer device acquires the resource data, the resource data can be stored in a system memory. Referring to FIG. 3 a: when the skeleton animation needs to be rendered and displayed, the computer equipment can call the CPU to transmit the skin resource and the chartlet texture from the system memory to the video memory, and the GPU carries out animation calculation according to the progress of the skin resource and the chartlet texture. Specifically, for any image frame in the bone animation, the GPU may calculate, according to the skin resource and the texture of the map, coordinates of vertices on each mesh in the skin resource after being affected by the animation (i.e., bone motion); therefore, image rendering can be carried out according to the coordinates of the affected vertexes and the rendering information of each vertex to obtain any image frame, and any image frame in the skeleton animation can be displayed in a terminal screen; for example, the portion of the human population marked with a dashed box in fig. 3b is any image frame in the skeletal animation.
Optionally, if the resource data further includes an event dataset comprising event information for one or more resource mount events and a mount point dataset comprising mount point information for one or more mount points, the computer device may also call the CPU to execute event logic based on the event dataset and the mount point dataset. Specifically, the event logic is as follows: whether any image frame meets the resource mount condition in any piece of event information in the event dataset can be detected; if an image frame meets some resource mount condition, it can be determined that the corresponding resource mount event exists in that image frame. At this time, the mount point information of the mount point in that image frame can be acquired from the mount point dataset, so that the visual part of the event (namely, the auxiliary resource in the event information) is output according to the acquired mount point information and the event information of the resource mount event existing in that image frame.
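The event logic amounts to a per-frame check of each event's mount condition. The record shapes below are invented for illustration (the patent leaves the form of the resource mount condition open; here it is assumed to be a trigger frame index):

```python
def fire_mount_events(frame_index, event_dataset, mount_point_dataset):
    """Return (auxiliary_resource, mount_point_info) pairs for every resource
    mount event whose mount condition is met by this image frame."""
    fired = []
    for event in event_dataset:
        if event["trigger_frame"] == frame_index:  # hypothetical mount condition
            point = mount_point_dataset[event["mount_point"]]
            fired.append((event["resource"], point))
    return fired

# Hypothetical data: show a spark image at the right-hand mount point on frame 30.
events = [{"trigger_frame": 30, "resource": "spark.png", "mount_point": "right_hand"}]
points = {"right_hand": {"bone": 7, "offset": (0.0, 0.1, 0.0)}}
```

Note the division of labor described above: this bookkeeping runs on the CPU, while the vertex animation itself stays on the GPU.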
Practice proves that the image processing scheme provided by the embodiment of the application can have the following beneficial effects: (1) Because the GPU has a high parallel processing capability, image rendering is performed by the GPU according to offline-configured resource data such as the skin resource and the map texture, and the CPU is not required to execute the animation calculation process; this makes effective use of GPU acceleration, reduces memory occupation, and improves running efficiency and rendering efficiency. (2) In the whole image processing process, the number and precision of the skeletal animations on the same screen need not be limited, so large-scale rendering of skeletal animations on one screen can be achieved, freeing the creativity of art producers. As shown in fig. 3c, thousands of skeletal animations can be rendered using this image processing scheme. (3) Before the image processing scheme is adopted, a performance test chart involved when the computer device renders the skeletal animation can be seen in fig. 3d; after the image processing scheme is used, the corresponding performance test chart can be seen in fig. 3e. Comparing fig. 3d and fig. 3e shows that before the image processing scheme is used, the frame rate of the computer device is not smooth enough, and noticeable stuttering and heating phenomena often occur; after the image processing scheme is used, the frame rate of the computer device can reach 60 frames per second and remain smooth, without stuttering or heating. Thus, GPU acceleration can reduce bandwidth occupation, reducing the heating of the computer device caused by high bandwidth occupation, and the frame rate can be effectively improved.
(4) The animation editor can be decoupled from the image rendering project, and the tool chain of the animation editor is complete, simple to use and high in reusability.
Based on the above description, an embodiment of the present application provides an image processing method. The image processing method can be executed by a computer device, where the computer device can be a terminal or a server; alternatively, the image processing method may be executed by the terminal and the server in cooperation. For convenience of description, the image processing method is described below taking the computer device as a terminal as an example. Referring to fig. 4, the image processing method may include the following steps S401 to S404:
s401, calling a graphics processor to acquire skin resources of the skeleton animation of the target object from a video memory.
The target object may be any object comprising a plurality of bones. For example, the target object may be an avatar of any user in an Internet application, such as a virtual character object of a user in a game application, a virtual personal image of a user in an instant messaging application, or a virtual personal image of a user in a Social Networking Services (SNS) application. As another example, the target object may be any person, animal, or other creature in a movie or television show, and so on. It should be noted that, in the embodiments of the present application, "a plurality" means at least two.
The skinning resource of the skeletal animation of the target object can be transmitted from the system memory to the video memory by a Central Processing Unit (CPU), and the skinning resource can be understood as a static mesh containing bone influence information (associated bone information); a static mesh here refers to a mesh that can utilize GPU Instancing, a GPU feature in computer graphics that renders multiple copies of the same mesh in a scene in a single draw. Referring to FIG. 5, the skinning resource may include the vertices of a plurality of meshes and the vertex information of each vertex; the vertex information of any vertex includes initial coordinates, associated bone information, and rendering information. The initial coordinates of any vertex refer to the coordinates of the vertex's position in the model coordinate system when no mesh in the model is deformed; the coordinates of any vertex are used to determine the pixel block corresponding to the mesh to which the vertex belongs, where a pixel block refers to a block of pixels in the terminal screen.
The associated bone information of any vertex may include the bone index and bone weight of each associated bone to which the vertex is bound. The bone index is defined as follows: in a skeletal animation, each vertex of each mesh is influenced by zero or more bones, and a bone changes the positions of the vertices it influences when it moves; therefore, to record which bones influence each vertex, each bone can be numbered, and the number of each bone recorded on the vertices it influences, so that a vertex can index its associated bones based on the numbers; the number of each bone can then be regarded as that bone's bone index. The bone weight is defined as follows: since each vertex of a mesh may be influenced by multiple associated bones, the degree of influence of each bone on the vertex can be defined as a floating-point number, with the floating-point numbers of one vertex summing to 1; the degree of influence of a bone on a mesh vertex can thus be used as that bone's bone weight.
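The per-vertex data described above (initial coordinates plus the indices and weights of the associated bones) can be sketched as follows; this is an illustrative data layout, not code from the patent, and all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SkinnedVertex:
    position: tuple      # initial coordinates in the model coordinate system
    bone_indices: tuple  # bone indices of the associated bones bound to this vertex
    bone_weights: tuple  # one floating-point weight per associated bone; sums to 1

    def validate(self):
        # each associated bone needs a weight, and the weights must sum to 1
        assert len(self.bone_indices) == len(self.bone_weights)
        assert abs(sum(self.bone_weights) - 1.0) < 1e-6

v = SkinnedVertex(position=(0.0, 1.0, 0.0),
                  bone_indices=(2, 5),
                  bone_weights=(0.7, 0.3))
v.validate()
```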
S402, acquiring the target spatial information of the associated bones of each vertex in the target image frame from the video memory according to the associated bone information of each vertex.
The target image frame is any image frame in the skeletal animation; the target spatial information of each vertex's associated bones in the target image frame may be used to indicate the translation and rotation parameters of those associated bones in the target image frame. As can be seen from the foregoing, the map texture may be used to store the spatial information of each bone in each image frame, and the map texture may be transferred from the system memory to the video memory by the CPU; therefore, the computer device may first invoke the graphics processor to obtain the map texture from the video memory, and then obtain the target spatial information from the map texture.
In one embodiment, after obtaining the map texture, the computer device may traverse the vertices of the multiple meshes and, according to the bone identifier of the associated bone of the currently traversed vertex, determine that vertex's associated bone; then, the target spatial information of the associated bone of the current vertex in the target image frame is read from the map texture. In another embodiment, considering that multiple vertices may share one associated bone, obtaining the target spatial information by traversing the vertices and performing a read operation on the map texture for each one may cause repeated read operations for the target spatial information of the same associated bone, wasting the processing resources required for reading. Therefore, after obtaining the map texture, the computer device can instead first read, from the map texture, the target spatial information of every bone in the target image frame of the skeletal animation; the computer device can then traverse the vertices of the multiple meshes and, according to the bone identifier of the associated bone of the currently traversed vertex, obtain the associated bone's target spatial information from the already-read target spatial information. Obtaining the target spatial information in this manner can effectively save the processing resources required by read operations.
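The second reading strategy above (read each bone's spatial information from the map texture once, then resolve per vertex from the cached results) can be sketched as follows; `read_bone` stands in for the map-texture read, and all names are hypothetical.

```python
def gather_bone_transforms(vertices, read_bone):
    # read the target spatial information of each distinct bone exactly once...
    bone_ids = {b for v in vertices for b in v["bones"]}
    cache = {b: read_bone(b) for b in bone_ids}
    # ...then look up each vertex's associated bones from the cache,
    # avoiding repeated read operations on the map texture
    return [[cache[b] for b in v["bones"]] for v in vertices]

reads = []
def read_bone(bone_id):
    reads.append(bone_id)            # record every texture read for inspection
    return ("transform-of", bone_id)

verts = [{"bones": [0, 1]}, {"bones": [1]}, {"bones": [0, 1]}]
transforms = gather_bone_transforms(verts, read_bone)
# bones 0 and 1 are each read once, although they influence several vertices
```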
And S403, performing coordinate transformation on the initial coordinate of each vertex by adopting the target spatial information of the associated bone of each vertex to obtain the target coordinate of each vertex.
In particular implementations, the target spatial information of each vertex's associated bone may be used to indicate the rotation parameter and translation parameter of that bone in the target image frame; therefore, for any vertex, the rotation and translation parameters in the target spatial information of the vertex's associated bone can be used to perform rotation and translation on the vertex's initial coordinate to obtain its target coordinate. It should be noted that if any vertex has at least two associated bones, since the vertex is affected by the rotation and translation of each of them, performing coordinate transformation on the vertex's initial coordinate according to each associated bone's spatial information separately may make the finally obtained target coordinate inaccurate, so that the rendered image frame exhibits a bone dislocation problem. Therefore, when any vertex has at least two associated bones, the spatial information of each associated bone can first be linearly fused, and the vertex's initial coordinate then coordinate-transformed according to the fused spatial information to obtain its target coordinate, improving the accuracy of the target coordinate. Linear fusion here is a processing algorithm that uses a linear polynomial to compute the combined influence when multiple bones affect the same vertex.
S404, based on the target coordinates of each vertex and the rendering information of each vertex, performing image rendering on the target pixel block corresponding to the mesh to which each vertex belongs, to obtain the target image frame.
In a specific implementation process, each mesh in the skinning resource can be traversed; for the currently traversed mesh, the target coordinates of each of its vertices can be mapped from the model coordinate system to the screen coordinate system to obtain the position coordinates of each vertex of the current mesh in the screen coordinate system; then, according to those position coordinates, the region formed by the vertices of the current mesh is determined, and the pixel block contained in the determined region is taken as the target pixel block corresponding to the current mesh. Then, the rendering information of each vertex of the current mesh can be adopted to perform image rendering on the target pixel block corresponding to the current mesh; when every mesh has been traversed, the target image frame is obtained. It should be noted that steps S402-S404 can be performed by the computer device by invoking the graphics processor.
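The per-mesh step above can be sketched as follows: the vertices' screen-space position coordinates determine a region, and the pixels inside it form the target pixel block. This is a simplified illustration (a bounding box rather than exact rasterization coverage), with hypothetical names.

```python
def target_pixel_block(screen_coords):
    # screen_coords: the mesh vertices' position coordinates in the screen
    # coordinate system; the block here is the vertices' bounding box
    xs = [int(x) for x, _ in screen_coords]
    ys = [int(y) for _, y in screen_coords]
    return [(x, y)
            for y in range(min(ys), max(ys) + 1)
            for x in range(min(xs), max(xs) + 1)]

# a small triangle covering x in 0..2 and y in 0..1: six candidate pixels
block = target_pixel_block([(0.0, 0.0), (2.0, 0.0), (0.0, 1.0)])
```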
After a plurality of bones are set for the target object in advance, the skinning resource of the target object's skeletal animation and the spatial information of each bone in each image frame of the skeletal animation can be configured offline, so that when the skeletal animation needs to be rendered, the skinning resource can be obtained directly from the video memory by calling the graphics processor, and the target spatial information of each vertex's associated bone in the target image frame can be obtained directly from the video memory according to the skinning resource; in this way, the amount of calculation in the image rendering process can be effectively reduced, so that processing resources are effectively saved and the rendering process is accelerated. Then, the graphics processor is called to perform coordinate transformation on the initial coordinate of each vertex using the target spatial information of that vertex's associated bone, to obtain the target coordinate of each vertex; and image rendering is performed on the target pixel block corresponding to the mesh to which each vertex belongs, based on the target coordinates and rendering information of each vertex, to obtain the target image frame. Because the whole image rendering process is realized in the graphics processor, the central processing unit is not needed for animation computation, and the resources of the central processing unit can be effectively saved; moreover, by means of the graphics processor's high parallel processing capability, rendering efficiency is effectively improved and the occupied bandwidth is reduced.
Fig. 6 is a schematic flow chart of another image processing method according to an embodiment of the present application. The image processing method can be executed by a computer device, where the computer device can be a terminal or a server; alternatively, the image processing method may be executed by the terminal and the server in cooperation. For convenience of explanation, the image processing method is described below by taking the case where the computer device is a terminal as an example; as shown in fig. 6, the image processing method may include the following steps S601-S609:
S601, acquiring the resource data of the skeletal animation, and caching the resource data in the system memory, where the resource data comprises skinning resources and map textures.
And S602, if a rendering trigger event for the skeletal animation is detected, calling the central processing unit to transmit the resource data from the system memory to the video memory.
S603, calling a graphics processor to acquire skin resources of the skeleton animation of the target object from the video memory.
And S604, acquiring target space information of the associated skeleton of each vertex in the target image frame from the video memory according to the associated skeleton information of each vertex.
The target image frame is any image frame in the skeletal animation; the target spatial information of each vertex's associated bones in the target image frame is obtained from the map texture in the video memory. Specifically, the specific implementation of step S604 may include the following steps s11-s13:
s11, acquiring the map texture from the video memory.
The computer device can call the graphics processor to obtain the map texture from the video memory; specifically, the map texture includes the spatial information of each bone in the image frames of the skeletal animation. Because any piece of spatial information can comprise a rotation parameter and a translation parameter, and each vertex can be influenced by the rotation and translation of a plurality of associated bones, the spatial information of each associated bone of such a vertex needs to be linearly fused, so that the vertex can be coordinate-transformed according to the fused spatial information to prevent the rendered image frame from exhibiting a bone dislocation problem. Then, in order to save storage space and facilitate the subsequent linear fusion of the spatial information of the associated bones of the vertices, the rotation and translation parameters in any piece of spatial information may be stored as a dual quaternion; that is, the spatial information of any bone in any image frame of the skeletal animation comprises a dual quaternion.
The dual quaternion is the organic combination of a dual number and a quaternion in a multidimensional space; it can be understood as a dual number whose elements are quaternions, or as a quaternion whose elements are dual numbers. Compared with the quaternion, which can only represent 3D rotation, and the dual number, which can only represent translation, the advantage of the dual quaternion is that it inherits the characteristics of both, so that rotation and translation can be represented uniformly. That is, a dual quaternion is a value used to represent translation and rotation parameters; specifically, the general form of the dual quaternion can be seen in the following formula 1.1:

q̂ = r + ε·s (formula 1.1)

wherein r and s are both quaternions, r = r0 + i·r1 + j·r2 + k·r3, s = s0 + i·s1 + j·s2 + k·s3; ε is the dual operator, and ε satisfies the following conditions: ε ≠ 0 and ε^2 = 0. The relationship between the dual quaternion and the translation parameter t and the rotation parameter R can be seen in the following formulas 1.2 and 1.3:

s = (1/2)·t·r (formula 1.2)

R = (r0^2 - r123^T·r123)·I + 2·r123·r123^T + 2·r0·[r123]x (formula 1.3)

In the above formulas 1.2 and 1.3, the translation parameter t is embedded as the pure quaternion t = 0 + i·t1 + j·t2 + k·t3, r0 is the scalar part of r, r123 = [r1 r2 r3]^T is the vector part of r, I is the 3×3 identity matrix, and [r123]x denotes the skew-symmetric (cross-product) matrix of r123. Rearranging formula 1.2 gives:

t = 2·s·r*

where r* denotes the quaternion conjugate of r; that is, given the dual quaternion q̂ = r + ε·s of a bone, its translation parameter can be recovered from the two quaternion parts.
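The dual-quaternion relations above can be checked numerically. The sketch below is illustrative, not the patent's code: it packs a unit rotation quaternion r and a translation t into the pair (r, s) with s = (1/2)·t·r, and recovers the translation via t = 2·s·r*.

```python
import math

def q_mul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def q_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def to_dual_quaternion(r, t):
    # s = (1/2)·t·r, with t embedded as the pure quaternion (0, tx, ty, tz)
    s = tuple(0.5 * c for c in q_mul((0.0, *t), r))
    return r, s

def translation_of(r, s):
    # t = 2·s·r*; the scalar part is zero, so only the vector part is returned
    return tuple(2.0 * c for c in q_mul(s, q_conj(r))[1:])

half = math.radians(90.0) / 2.0
r = (math.cos(half), 0.0, 0.0, math.sin(half))  # 90° rotation about the z axis
r_, s = to_dual_quaternion(r, (1.0, 2.0, 3.0))
recovered = translation_of(r_, s)
# recovered equals (1.0, 2.0, 3.0) up to floating-point error
```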
Based on the above description, when the spatial information of any bone in any image frame of the skeletal animation comprises a dual quaternion, the map texture may include a plurality of pixel points, each of which may include one or more color channels; the dual quaternion of any bone in any image frame can be stored in the color channels of that bone's associated pixel points in the map texture. For example, suppose one pixel point includes the four color channels RGBA, where the R color channel is the red channel, the G color channel is the green channel, B is the blue channel, and A is the color transparency (alpha) channel; if each color channel consists of a 32-bit single-precision floating-point number, then the color channels of every two pixel points (eight floating-point numbers in total) can store one dual quaternion, i.e., any bone has two associated pixel points in the map texture. The associated pixel points of any bone in the map texture can be randomly allocated to that bone; alternatively, they may be determined by coordinate sampling on the map texture according to the bone index of the bone. It should be noted that, for different image frames, the associated pixel points of any bone may be different; for example, for the first image frame in the skeletal animation, the associated pixel points storing the dual quaternion of any bone in that frame may include pixel point a and pixel point b, while for the second image frame, the associated pixel points storing the dual quaternion of the same bone may include pixel point f and pixel point h.
For a current image frame (i.e., any image frame), the manner of determining the associated pixel points of any bone by performing coordinate sampling on the map texture according to the bone's bone index is as follows:
Firstly, for the current image frame, the frame identifier of the current image frame can be obtained; the frame identifier of the current image frame is used to indicate the arrangement position of the current image frame in the skeletal animation, i.e., which frame of the skeletal animation the current image frame is. Secondly, the one-dimensional reference coordinate of any bone is calculated from the frame identifier of the current image frame and the bone index of the bone by a coordinate sampling formula; the coordinate sampling formula is: frame identifier of the current image frame × number of bones of the target object + bone index = one-dimensional reference coordinate x. Then, letting the width of the map texture be w and its height be h, the texture coordinates (u, v) of any bone with respect to the current image frame may be: u = x % w, v = x / w, where "%" represents the remainder operation and "/" represents the round-down division operation. For example, if x equals 17 and w equals 5, then u equals 2 and v equals 3, since the quotient of 17 divided by 5 is 3 and the remainder is 2. Finally, the pixel point at texture coordinates (u, v) can be used as an associated pixel point of the bone.
It should be understood that, if the dual quaternion of any bone in the current image frame needs to be stored using the color channel values of two associated pixel points, then after the texture coordinates (u, v) of the bone are obtained in the above manner, one is added to the one-dimensional reference coordinate x to update its value, and the remainder operation and round-down operation on w are performed again with the updated x to obtain new texture coordinates (u1, v1). In this case, the two associated pixel points required by the dual quaternion of the bone in the current image frame are: the pixel point at texture coordinates (u, v), and the pixel point at the new texture coordinates (u1, v1).
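The sampling rule above can be sketched as follows; the layout (frame identifier × number of bones + bone index, with the second texel at x + 1) follows the description above, and all names are hypothetical.

```python
def texel_of(x, tex_width):
    # fold the one-dimensional reference coordinate into texture coordinates:
    # u = x % w (remainder), v = x / w (rounded down)
    return (x % tex_width, x // tex_width)

def bone_texels(frame_id, bone_index, num_bones, tex_width):
    x = frame_id * num_bones + bone_index  # one-dimensional reference coordinate
    # two associated pixel points store one dual quaternion: (u, v) and (u1, v1)
    return texel_of(x, tex_width), texel_of(x + 1, tex_width)

# x = 17 with a texture width of 5 gives (u, v) = (2, 3): 17 ÷ 5 is 3 remainder 2
first, second = bone_texels(3, 2, 5, 5)
```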
And s12, reading the target spatial information of each bone in the target image frame of the skeletal animation from the map texture.
In a particular implementation, the computer device may traverse the plurality of bones of the target object and determine the associated pixel points of the current bone from the map texture. As can be seen from the foregoing, each pixel point in the map texture has a two-dimensional texture coordinate; therefore, in a specific implementation, the bone identifier of the current bone can first be obtained; the one-dimensional reference coordinate corresponding to the current bone is then calculated from the frame identifier of the target image frame, the number of bones of the target object, and the bone identifier of the current bone; next, the one-dimensional reference coordinate can be mapped into the map texture to obtain the target texture coordinates corresponding to the current bone; finally, the pixel point located at the target texture coordinates is taken as an associated pixel point of the current bone. It should be noted that, for the specific implementation of each step involved in determining the associated pixel points of the current bone, reference may be made to the determination of the associated pixel points of any bone described in step s11, which is not repeated here. After the associated pixel points of the current bone are determined, the dual quaternion of the current bone in the target image frame can be read from the color channels of those associated pixel points and used as the target spatial information of the current bone in the target image frame.
And s13, traversing the vertices of the meshes, and acquiring the target spatial information of the associated bone of the currently traversed vertex from the read target spatial information according to the bone identifier of that associated bone.
And S605, performing coordinate transformation on the initial coordinate of each vertex by adopting the target space information of the related skeleton of each vertex to obtain the target coordinate of each vertex.
In particular implementations, the vertices of the multiple meshes may be traversed, and the number of associated bones of the currently traversed vertex may be determined. If the number is 1, the target spatial information of the associated bone of the current vertex can be directly adopted to perform coordinate transformation on the initial coordinate of the current vertex to obtain the target coordinate of the current vertex. Specifically, let q̂ denote the target spatial information (i.e., the target dual quaternion) of the associated bone of the current vertex, let p denote the initial coordinate of the current vertex, embedded as the dual quaternion p̂ = 1 + ε·p, and let p1 denote the target coordinate of the current vertex; the transformation formula for performing coordinate transformation on the initial coordinate of the current vertex to obtain its target coordinate can be seen in the following formula 1.4:

p̂1 = q̂·p̂·q̂* (formula 1.4)

wherein q̂* = r* - ε·s* denotes the conjugate dual quaternion of the target dual quaternion q̂ = r + ε·s, and the target coordinate p1 is read from the dual part of p̂1.
If the number is greater than or equal to 2, the target spatial information of each associated bone of the current vertex can be fused to obtain fused spatial information. Specifically, as can be seen from the foregoing, the associated bone information of any vertex includes the bone weight of each of its associated bones, and the target spatial information of any associated bone includes that bone's target dual quaternion in the target image frame. Therefore, the bone weight of each associated bone of the current vertex can be obtained from the current vertex's associated bone information, and linear fusion (such as weighted summation) performed on the target dual quaternions of the current vertex's associated bones using those bone weights to obtain a fused dual quaternion; the fused spatial information thus comprises the fused dual quaternion. After the fused spatial information is obtained, it may be used to perform coordinate transformation on the initial coordinate of the current vertex to obtain the target coordinate of the current vertex; the specific implementation is similar to formula 1.4 and is not repeated here.
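The fusion and transformation steps above can be sketched as follows. This is an illustrative implementation, not the patent's code: the blend is the weighted sum of the associated bones' target dual quaternions described above, followed by the renormalization that dual-quaternion blending conventionally applies so that the result remains a unit rigid transform (the renormalization is an assumption of this sketch).

```python
def q_mul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def q_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def blend(dqs, weights):
    # weighted sum of dual quaternions, each a pair of quaternions (r, s)
    r = [sum(w * dq[0][i] for dq, w in zip(dqs, weights)) for i in range(4)]
    s = [sum(w * dq[1][i] for dq, w in zip(dqs, weights)) for i in range(4)]
    n = sum(c * c for c in r) ** 0.5  # renormalize by the real part's magnitude
    return tuple(c / n for c in r), tuple(c / n for c in s)

def transform_point(dq, p):
    # p1 = r·p·r* + t with t = 2·s·r* (equivalent to formula 1.4)
    r, s = dq
    rotated = q_mul(q_mul(r, (0.0, *p)), q_conj(r))[1:]
    t = tuple(2.0 * c for c in q_mul(s, q_conj(r))[1:])
    return tuple(a + b for a, b in zip(rotated, t))

# two associated bones: the identity, and a pure translation by (2, 0, 0)
identity = ((1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 0.0, 0.0))
shifted  = ((1.0, 0.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0))  # s = (1/2)·t·r
fused = blend([identity, shifted], [0.5, 0.5])
moved = transform_point(fused, (0.0, 0.0, 0.0))
# equal weights move the vertex halfway, to (1, 0, 0)
```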
S606, based on the target coordinates of each vertex and the rendering information of each vertex, performing image rendering on the target pixel block corresponding to the mesh to which each vertex belongs, to obtain the target image frame.
S607, if the target image frame is detected to meet the resource mounting condition, the mounting point information of the target mounting point in the target image frame and the target auxiliary resource mounted by the target mounting point are obtained.
In particular implementations, the resource data may also include an event data set and a hanging point data set; correspondingly, the computer device may also invoke the central processing unit to read the event data set and the hanging point data set from the system memory. As can be seen from the foregoing, the event data set includes the event information of one or more resource mount events; the resource mount events mentioned here are of various types and can be freely extended according to the specific service; for example, a resource mount event may be a mount-special-effect event, a mount-audio event, a mount-image event, and so on. Referring to fig. 7a, the event information of any resource mount event may include, but is not limited to: a resource mount condition, a resource cancellation condition, the associated auxiliary resource, the mount point identifier of the mount point used for resource mounting, and the like; the resource mount condition mentioned here may be set according to business requirements or empirical values, and is not limited. For example, the resource mount condition of any resource mount event may include at least one of the following: the frame identifier of a specified image frame in the skeletal animation, and a trigger operation related to the resource mount event being detected within a preset time period. For a game's mount-special-effect event, the user is usually required to execute a specified game operation before the special-effect display is triggered, so that specified game operation can serve as the trigger operation of the mount-special-effect event.
It should be understood that the event information of different kinds of resource mount events is partly the same and partly different; for example, the event information of a mount-special-effect event may include the resource path of a special effect, while the event information of a mount-audio event may include the resource path of audio data, but the event information of both kinds of resource mount events may include the mount point identifier of the mount point used for resource mounting.
After the event data set is obtained, the target image frame can be used to perform hit processing on the resource mount conditions in each piece of event information in the event data set; if the target image frame successfully hits the resource mount condition in a target piece of event information in the event data set, it can be determined that the target image frame meets the resource mount condition. In this case, the computer device may determine the mount point indicated by the mount point identifier in the target event information as the target mount point, and obtain the target auxiliary resource mounted at the target mount point from the target event information. When the target object is any virtual character object in any game, the target auxiliary resource can be a game resource of that game; the game resource includes at least one of the following: a game special effect, a game item image, three-dimensional game audio data, and so on. When the target object is the virtual avatar of any user in an instant messaging application, the target auxiliary resource can be an avatar decoration element; when the target object is any person, animal, or other creature in a movie or television show, the target auxiliary resource may be a special-effect resource involved in the show, and so on.
In addition, the hanging point information of the target hanging point in the target image frame can be obtained from the hanging point data set. The hanging point information of the target hanging point includes: the hanging point identifier of the target hanging point, the bone identifier of the target bone corresponding to the target hanging point, and the relative posture information between the target hanging point and the target bone; the relative posture information referred to here may include one or more of the displacement information, rotation information, and scaling information of the target hanging point relative to the target bone.
And S608, determining a target bone from the plurality of bones according to the bone identifier in the hanging point information of the target hanging point, where the target bone refers to the bone to which the target hanging point is attached.
And S609, outputting the target auxiliary resource based on the position of the target skeleton in the target image frame and the relative posture information between the target hanging point and the target skeleton when the target image frame is displayed.
In one embodiment, if the target auxiliary resource includes a target auxiliary image (an auxiliary image is an image outside the skeletal animation that can be used to decorate a target image frame of the skeletal animation, such as a special-effect image or a prop image), a specific embodiment of step S609 may include: when the target image frame is displayed, the hanging point position of the target hanging point in the target image frame is determined based on the position of the target bone in the target image frame and the relative posture information between the target hanging point and the target bone. Then, the target position of the target auxiliary image in the target image frame can be determined according to the hanging point position and the relative position information between the target hanging point and the target auxiliary image; the relative position information between the target hanging point and the target auxiliary image can be set in advance according to empirical values. The target auxiliary image can then be rendered and displayed at the target position in the target image frame.
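The position chain described above can be sketched as follows: the hanging point position comes from the target bone's position plus the hanging point's displacement relative to the bone, and the auxiliary image's target position adds the image's offset relative to the hanging point. Only displacement is shown (the relative posture information may also include rotation and scaling), and all names are hypothetical.

```python
def hanging_point_position(bone_position, displacement_rel_bone):
    # hanging point position = bone position + displacement relative to the bone
    return tuple(b + d for b, d in zip(bone_position, displacement_rel_bone))

def auxiliary_image_position(bone_position, displacement_rel_bone, offset_rel_point):
    hp = hanging_point_position(bone_position, displacement_rel_bone)
    # target position = hanging point position + offset relative to the hanging point
    return tuple(h + o for h, o in zip(hp, offset_rel_point))

pos = auxiliary_image_position((10.0, 20.0), (1.0, -2.0), (0.5, 0.5))
# pos is (11.5, 18.5): both offsets applied in sequence to the bone position
```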
In another embodiment, if the target auxiliary resource further includes three-dimensional audio data associated with the target auxiliary image, where three-dimensional audio data means audio data that uses speakers to simulate a sound that seems real but is fictional, such as the audio of a sound made by a virtual character object in a game picture, or the whistle of a vehicle in a movie or television picture, then a specific embodiment of step S609 may further include the following. First, a target auditory point in the target image frame is determined, where the target auditory point refers to the position of the listener listening to the three-dimensional audio data. It should be understood that the listener here may be the real user viewing the target image frame, in which case the target auditory point in the target image frame may be the mapped position determined by mapping the real position of the real user into the target image frame; alternatively, the listener may be an object in the target image frame other than the target object, in which case the target auditory point may be the position of that other object in the target image frame. Then, the target distance between the target auditory point and the target position can be determined, and the target volume corresponding to the target distance looked up from a correspondence table between distance and volume; the target distance and the target volume may be inversely related, i.e., the larger the target distance, the smaller the target volume. Finally, the three-dimensional audio data is played at the target volume.
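The distance-to-volume lookup described above can be sketched as follows; the table values are illustrative only, since the text specifies just that distance and volume are inversely related.

```python
def volume_for_distance(distance, table):
    # table: (max_distance, volume) pairs sorted by max_distance ascending;
    # larger distances map to smaller volumes (inverse relation)
    for max_d, vol in table:
        if distance <= max_d:
            return vol
    return 0.0  # beyond the last range, the sound is inaudible

TABLE = [(5.0, 1.0), (15.0, 0.6), (30.0, 0.3)]
near = volume_for_distance(3.0, TABLE)
far = volume_for_distance(20.0, TABLE)
# the nearby auditory point hears the sound louder than the distant one
```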
It should be understood that a target hanging point may correspond to an image frame sequence in the skeletal animation, where the image frame sequence includes the target image frame and one or more associated image frames, an associated image frame being an image frame in the skeletal animation, other than the target image frame, in which the target hanging point exists. In each image frame of the image frame sequence, when the target hanging point moves along with the target bone, the target auxiliary resource moves along with the target hanging point; that is, when the position of the target hanging point changes across the image frames of the sequence, the position of the target auxiliary resource in each image frame changes with the hanging point's position, as shown in fig. 7b.
After a plurality of bones are set for the target object in advance, the skinning resource of the target object's skeletal animation and the spatial information of each bone in each image frame of the skeletal animation can be configured offline, so that when the skeletal animation needs to be rendered, the skinning resource can be obtained directly from the video memory by calling the graphics processor, and the target spatial information of each vertex's associated bone in the target image frame can be obtained directly from the video memory according to the skinning resource; in this way, the amount of calculation in the image rendering process can be effectively reduced, so that processing resources are effectively saved and the rendering process is accelerated. Then, the graphics processor is called to perform coordinate transformation on the initial coordinate of each vertex using the target spatial information of that vertex's associated bone, to obtain the target coordinate of each vertex; and image rendering is performed on the target pixel block corresponding to the mesh to which each vertex belongs, based on the target coordinates and rendering information of each vertex, to obtain the target image frame. Because the whole image rendering process is realized in the graphics processor, the central processing unit is not needed for animation computation, and the resources of the central processing unit can be effectively saved; moreover, by means of the graphics processor's high parallel processing capability, rendering efficiency is effectively improved and the occupied bandwidth is reduced.
In practical applications, the image processing methods shown in fig. 4 and fig. 6 can be applied to various application scenes according to actual requirements, such as the rendering of game pictures or of film and television pictures. The following explains a specific application process of the image processing method, taking the rendering of a game picture as an example:
First, an art creator selects any virtual character object in a target game as the target object and creates an animation resource file for the skeletal animation of that object (containing, for example, the skeletal information of the target object and the animation information of the skeletal animation). The art creator may then import the animation resource file into an animation editor (e.g., the Unreal 4 Engine), edit the skeletal animation of the target object, and finally trigger the animation editor to generate the resource data (skinning resources and map textures) used for GPU acceleration. The target game mentioned here can be a game of any type, such as a PC game, a mobile game, a web game, or a cloud game. A PC game runs as a game application on a personal computer (PC), a mobile game runs as a game application on a mobile device such as a mobile phone, and a web game runs in a web page; a cloud game runs in a container in a cloud game server, which transmits a video stream to a game client running in the user terminal for display, and drives changes to the game picture according to user operation events uploaded by the game client.
Any computer device can obtain the resource data of the skeletal animation of the target object from the animation editor and cache it in system memory. When it is detected that the target user has started the target game, a rendering trigger event for the skeletal animation is considered detected, and the CPU in the computer device transmits the resource data from system memory to the video memory. The GPU in the computer device can then obtain the skinning resources of the skeletal animation of the target object and the map texture from the video memory, and read from the map texture the target dual quaternion of each vertex's associated bones in the target image frame. The GPU may then perform a coordinate transformation on the initial coordinates of each vertex using those target dual quaternions to obtain the target coordinates of each vertex, and finally render the target pixel block corresponding to the mesh to which each vertex belongs, based on the target coordinates and rendering information of each vertex, to obtain the target image frame.
Optionally, the CPU in the computer device may further detect whether the target image frame meets the resource mounting condition; if so, the CPU can also obtain the hanging point information of a target hanging point in the target image frame and the target auxiliary resource mounted at that hanging point. It then determines the target skeleton from the plurality of skeletons according to the skeleton identification in the hanging point information of the target hanging point, and, when the target image frame is displayed, outputs the target auxiliary resource based on the position of the target skeleton in the target image frame and the relative posture information between the target hanging point and the target skeleton.
Therefore, by establishing a complete workflow in the target game, art creators can use the Unreal 4 (UE4) Engine to GPU-accelerate skeletal animation and thereby render skeletal animations at large scale on the same screen; this helps art creators enrich the types and number of virtual character objects in the game picture and improves its richness. Moreover, because image rendering is performed by the GPU from offline-configured resource data such as skinning resources and map textures, the CPU need not execute the animation computation process; this effectively accelerates the GPU path, reduces memory usage, and improves both computation and rendering efficiency, thereby improving the display timeliness of the game picture and the fluency of the target game. In addition, animation editing and custom event triggers can be supported, so that the animation is not merely an animation but can also affect the game environment. Device heating caused by high bandwidth usage can be effectively reduced, and a high frame rate display of the game picture can be achieved by increasing the frame rate, increasing the comfort of the target user during play and effectively increasing user stickiness.
Based on the description of the above embodiments of the image processing method, the embodiments of the present application also disclose an image processing apparatus, which may be a computer program (including program code) running in a computer device. The image processing apparatus may perform the method shown in fig. 4 or fig. 6. Referring to fig. 8, the image processing apparatus may include the following units:
an obtaining unit 801, configured to invoke a graphics processor to obtain a skinning resource of a skeletal animation of a target object from a video memory, where the target object includes multiple skeletons, and the skinning resource includes vertices of multiple meshes and vertex information of each vertex; the vertex information of any vertex comprises initial coordinates, associated skeleton information and rendering information; the coordinates of any vertex are used for determining a pixel block corresponding to the mesh to which the vertex belongs;
the obtaining unit 801 is further configured to obtain, from the video memory, target space information of the associated skeleton of each vertex in a target image frame according to the associated skeleton information of each vertex, where the target image frame is any image frame in the skeleton animation;
a processing unit 802, configured to perform coordinate transformation on the initial coordinate of each vertex by using the target space information of the associated bone of each vertex, to obtain a target coordinate of each vertex;
and a rendering unit 803, configured to perform image rendering on a target pixel block corresponding to the mesh to which each vertex belongs based on the target coordinate of each vertex and the rendering information of each vertex, so as to obtain the target image frame.
In one embodiment, the associated bone information of any vertex includes a bone identification of the associated bone of said any vertex; correspondingly, when the obtaining unit 801 is configured to obtain, from the video memory, the target space information of the associated skeleton of each vertex in the target image frame according to the associated skeleton information of each vertex, the obtaining unit may be specifically configured to:

obtaining a map texture from the video memory, wherein the map texture comprises: spatial information of each bone in each image frame of the bone animation;

reading target space information of each bone in a target image frame of the bone animation from the map texture;

and traversing the vertexes of the plurality of grids, and acquiring the target space information of the associated skeleton of the currently traversed current vertex from the read target space information according to the bone identification of the associated skeleton of the current vertex.
In another embodiment, the spatial information of any bone in any image frame of the bone animation comprises a dual quaternion, where a dual quaternion is a set of numerical values representing a translation parameter and a rotation parameter;

the map texture comprises a plurality of pixel points, each pixel point comprises one or more color channels, and the dual quaternion of any skeleton in any image frame is stored in the color channels of the pixel point associated with that skeleton in the map texture;

accordingly, the obtaining unit 801, when configured to read the target space information of each bone in the target image frame of the bone animation from the map texture, may specifically be configured to:

traversing the plurality of skeletons of the target object, and determining the associated pixel point of the currently traversed current skeleton from the map texture;

reading the dual quaternion of the current skeleton in the target image frame from the color channels of the associated pixel point of the current skeleton, and taking the dual quaternion as the target space information of the current skeleton in the target image frame.
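By way of illustration, the color-channel storage described above can be sketched as follows. A dual quaternion has eight components, so the sketch assumes it occupies two horizontally adjacent RGBA texels (four channels each); the embodiment only specifies that the color channels of the associated pixel point store the values, not this exact packing.

```python
def read_dual_quaternion(texture, u, v):
    """Read a dual quaternion (real and dual parts, 4 components each)
    from two horizontally adjacent RGBA texels of a map texture.

    `texture` is indexed as texture[row][column] -> (r, g, b, a).
    The two-texel packing is an assumption made for illustration."""
    real = texture[v][u]      # rotation part: (x, y, z, w)
    dual = texture[v][u + 1]  # translation-encoding part: (x, y, z, w)
    return real, dual
```

On the GPU the same fetch would be done with unfiltered texel reads in the vertex shader.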
In another embodiment, each pixel point in the map texture has a two-dimensional texture coordinate; correspondingly, when the obtaining unit 801 is configured to determine the pixel point associated with the current bone from the map texture, the obtaining unit may be specifically configured to:
acquiring a bone identifier of the current bone;
calculating a one-dimensional reference coordinate corresponding to the current skeleton according to the frame identification of the target image frame, the number of the skeletons of the target object and the skeleton identification of the current skeleton;
mapping the one-dimensional reference coordinate into the map texture to obtain a target texture coordinate corresponding to the current skeleton;
and taking the pixel point positioned at the target texture coordinate as the associated pixel point of the current skeleton.
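The coordinate calculation above can be sketched as follows, under the assumption of a row-major layout in which each image frame stores the spatial information of its bones consecutively; the exact packing order is an illustrative choice, not mandated by the embodiment.

```python
def bone_texel_coord(frame_id, num_bones, bone_id, tex_width):
    """Map a (frame, bone) pair to a two-dimensional texture coordinate.

    The one-dimensional reference coordinate is computed from the frame
    identification, the number of bones of the target object, and the
    bone identification, then folded into the map texture row by row."""
    ref = frame_id * num_bones + bone_id   # one-dimensional reference coordinate
    u = ref % tex_width                    # column within the texture
    v = ref // tex_width                   # row within the texture
    return u, v
```

For example, frame 2 of a 10-bone object, bone 3, in a texture 8 texels wide, gives reference coordinate 23 and texel (7, 2).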
In another embodiment, when the processing unit 802 is configured to perform coordinate transformation on the initial coordinate of each vertex according to the target space information of the associated bone of each vertex, to obtain the target coordinate of each vertex, the processing unit may be specifically configured to:
traversing vertices of the plurality of meshes and determining a number of associated bones of a currently traversed current vertex;
if the number is 1, performing coordinate transformation on the initial coordinate of the current vertex by adopting the target space information of the associated skeleton of the current vertex to obtain a target coordinate of the current vertex;

if the number is greater than or equal to 2, fusing the target space information of each associated bone of the current vertex to obtain fused space information; and performing coordinate transformation on the initial coordinate of the current vertex by adopting the fused space information to obtain the target coordinate of the current vertex.
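For the case where the number of associated bones is 1, the coordinate transformation with a unit dual quaternion (real part, dual part) rotates the vertex by the real part and translates it by t = 2 · dual · conj(real). A minimal CPU-side sketch of this standard operation follows; in practice the graphics processor performs the same arithmetic in a vertex shader:

```python
def quat_mul(a, b):
    """Hamilton product of quaternions given as (x, y, z, w) tuples."""
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return (aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw,
            aw * bw - ax * bx - ay * by - az * bz)

def transform_point(dq, p):
    """Apply a unit dual quaternion (real, dual) to a 3D point:
    rotate by the real part, then translate by t = 2 * dual * conj(real)."""
    real, dual = dq
    conj = (-real[0], -real[1], -real[2], real[3])
    # Rotation: treat the point as a pure quaternion and conjugate it.
    rx, ry, rz, _ = quat_mul(quat_mul(real, (p[0], p[1], p[2], 0.0)), conj)
    # Translation encoded in the dual part.
    tx, ty, tz, _ = quat_mul(dual, conj)
    return (rx + 2.0 * tx, ry + 2.0 * ty, rz + 2.0 * tz)
```

With an identity rotation and a dual part encoding translation (1, 2, 3), the origin maps to (1, 2, 3).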
In yet another embodiment, the associated bone information of any vertex includes the bone weights of the respective associated bones of said any vertex; the target space information of any associated bone comprises a target dual quaternion of that bone in the target image frame; correspondingly, when the processing unit 802 is configured to fuse the target space information of each associated bone of the current vertex to obtain the fused space information, it may be specifically configured to:
obtaining the bone weight of each associated bone of the current vertex from the associated bone information of the current vertex;
performing linear fusion on the target dual quaternions of all the associated bones of the current vertex by adopting the bone weights of all the associated bones of the current vertex to obtain a fused dual quaternion; wherein the fused space information comprises the fused dual quaternion.
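The linear fusion step can be sketched as dual quaternion linear blending: each part is summed with its bone weight, and both parts are then normalized by the magnitude of the real (rotation) part. The sketch below omits the antipodal sign correction that production skinning code typically applies:

```python
def blend_dual_quaternions(dqs, weights):
    """Dual quaternion linear blending of per-bone transforms.

    dqs: list of (real, dual) pairs, each part a 4-tuple (x, y, z, w).
    weights: matching list of bone weights (assumed to sum to 1)."""
    real = [0.0, 0.0, 0.0, 0.0]
    dual = [0.0, 0.0, 0.0, 0.0]
    for (r, d), w in zip(dqs, weights):
        for i in range(4):
            real[i] += w * r[i]
            dual[i] += w * d[i]
    # Normalize by the magnitude of the real (rotation) part.
    norm = sum(c * c for c in real) ** 0.5
    return [c / norm for c in real], [c / norm for c in dual]
```

The fused dual quaternion is then used for the vertex coordinate transformation exactly as in the single-bone case.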
In still another embodiment, the obtaining unit 801 may further be configured to:
acquiring resource data of the skeleton animation, and caching the resource data into a system memory; wherein the resource data comprises the skinning resources and a map texture comprising the spatial information of each bone in each image frame of the bone animation;
and if the rendering trigger event aiming at the skeletal animation is detected, calling a central processing unit to transmit the resource data from the system memory to the video memory, and executing a step of calling a graphic processor to acquire skin resources of the skeletal animation of the target object from the video memory.
In yet another embodiment, the processing unit 802 is further configured to:
if the target image frame is detected to meet the resource mounting condition, acquiring mounting point information of a target mounting point in the target image frame and a target auxiliary resource mounted by the target mounting point; the hanging point information of the target hanging point comprises a skeleton identification of a target skeleton corresponding to the target hanging point and relative posture information between the target hanging point and the target skeleton;
determining the target skeleton from the plurality of skeletons according to the skeleton identification in the hanging point information of the target hanging point;
when the target image frame is displayed, outputting the target auxiliary resource based on the position of the target skeleton in the target image frame and the relative posture information between the target hanging point and the target skeleton.
In yet another embodiment, the target auxiliary resource includes a target auxiliary image; accordingly, the processing unit 802, when configured to output the target auxiliary resource based on the position of the target bone in the target image frame and the relative posture information between the target hanging point and the target bone when displaying the target image frame, may be specifically configured to:
determining a hanging point position of the target hanging point in the target image frame based on the position of the target skeleton in the target image frame and the relative posture information between the target hanging point and the target skeleton when the target image frame is displayed;
determining the target position of the target auxiliary image in the target image frame according to the hanging point position and the relative position information between the target hanging point and the target auxiliary image;
rendering and displaying the target auxiliary image at the target position in the target image frame.
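The placement of the target auxiliary image can be sketched as follows. For brevity, the relative posture information is reduced to a pure translation offset (rotation is ignored), and the function name and parameters are illustrative:

```python
def auxiliary_image_position(bone_pos, hang_rel_offset, image_rel_offset):
    """Compute where to render the target auxiliary image for one frame.

    The target hanging point follows the target bone by a relative offset
    (a simplified stand-in for the relative posture information), and the
    auxiliary image follows the hanging point by its own relative position."""
    # Hanging point position in the target image frame.
    hang_point = tuple(b + o for b, o in zip(bone_pos, hang_rel_offset))
    # Target position of the auxiliary image.
    return tuple(h + o for h, o in zip(hang_point, image_rel_offset))
```

Because both offsets are fixed, the auxiliary image automatically tracks the bone as it moves from frame to frame.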
In yet another embodiment, the target auxiliary resource further includes three-dimensional audio data associated with the target auxiliary image; accordingly, the processing unit 802 is further operable to:
determining a target auditory point in the target image frame and determining a target distance between the target auditory point and the target position;
searching for a target volume corresponding to the target distance in a correspondence table between distance and volume;
and playing the three-dimensional audio data according to the target volume.
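The volume lookup can be sketched as a search over a correspondence table of (maximum distance, volume) rows sorted by ascending distance; the table contents shown in the usage example are illustrative assumptions:

```python
def target_volume(target_distance, volume_table):
    """Look up the playback volume for a listener-to-source distance.

    volume_table: list of (maximum distance, volume) rows sorted by
    ascending distance; the first row whose range covers the distance wins."""
    for max_distance, volume in volume_table:
        if target_distance <= max_distance:
            return volume
    return 0.0  # beyond the furthest entry: effectively inaudible
```

For example, with a table `[(5.0, 1.0), (15.0, 0.6), (30.0, 0.2)]`, a target distance of 10 yields a target volume of 0.6.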
In another embodiment, if a target hanging point corresponds to an image frame sequence in the skeleton animation, the image frame sequence includes the target image frame and one or more associated image frames, where the associated image frame is an image frame of the skeleton animation, except the target image frame, where the target hanging point exists;
the target auxiliary image moves with the target hanging point when the target hanging point moves with the target bone in each image frame in the image frame sequence.
In another embodiment, the target object is any virtual character object in any game, and the target auxiliary resource is a game resource of any game;
wherein the game resources include at least one of: game special effects, game prop images and three-dimensional game audio data.
According to an embodiment of the present application, each step involved in the method shown in fig. 4 or fig. 6 may be performed by each unit in the image processing apparatus shown in fig. 8. For example, steps S401 to S402 shown in fig. 4 may each be performed by the acquisition unit 801 shown in fig. 8, and steps S403 and S404 may be performed by the processing unit 802 and the rendering unit 803 shown in fig. 8, respectively; as another example, steps S601 to S604 shown in fig. 6 may be all performed by the acquisition unit 801 shown in fig. 8, steps S605 and steps S607 to S609 may be all performed by the processing unit 802 shown in fig. 8, step S606 may be performed by the rendering unit 803 shown in fig. 8, and so on.
According to another embodiment of the present application, the units in the image processing apparatus shown in fig. 8 may be individually or jointly combined into one or several other units, or some unit(s) may be further split into multiple functionally smaller units, to form the image processing apparatus; this achieves the same operation without affecting the technical effects of the embodiments of the present application. The above units are divided based on logical functions; in practical applications, the function of one unit may be realized by a plurality of units, or the functions of a plurality of units may be realized by one unit. In other embodiments of the present application, the image processing apparatus may also include other units; in practical applications, these functions may also be implemented with the assistance of other units, and may be implemented by the cooperation of multiple units.
According to another embodiment of the present application, the image processing apparatus shown in fig. 8 may be constructed, and the image processing method of the embodiments of the present application implemented, by running a computer program (including program code) capable of executing the steps of the methods shown in fig. 4 or fig. 6 on a general-purpose computing device, such as a computer comprising processing and storage elements including a central processing unit (CPU), a random access memory (RAM), and a read-only memory (ROM). The computer program may be recorded on, for example, a computer-readable recording medium, and loaded into and run on the above computing device via the computer-readable recording medium.
After a plurality of bones are set for a target object in advance, the skinning resources of the skeletal animation of the target object and the spatial information of each bone in each image frame of the animation can be configured offline. When the skeletal animation needs to be rendered, the graphics processor can be called to obtain the skinning resources directly from the video memory, and to obtain the target spatial information of the associated bones of each vertex in the target image frame directly from the video memory according to the skinning resources. This effectively reduces the amount of computation in the image rendering process, saving processing resources and accelerating rendering. The graphics processor is then called to perform a coordinate transformation on the initial coordinates of each vertex using the target spatial information of that vertex's associated bones, yielding the target coordinates of each vertex; image rendering is then performed on the target pixel block corresponding to the mesh to which each vertex belongs, based on the target coordinates and rendering information of each vertex, to obtain the target image frame. Because the whole image rendering process is implemented in the graphics processor, no central processing unit is needed for animation computation, which effectively saves CPU resources; and by virtue of the high parallel processing capacity of the graphics processor, rendering efficiency is effectively improved and the occupied bandwidth is reduced.
Based on the description of the method embodiment and the apparatus embodiment, the embodiment of the application further provides a computer device. Referring to fig. 9, the computer device comprises at least a processor 901, an input interface 902, an output interface 903, and a computer storage medium 904, which may be connected within the computer device by a bus or by other means. The computer storage medium 904 may be stored in a memory of the computer device and is used for storing a computer program comprising program instructions; the processor 901 is used for executing the program instructions stored in the computer storage medium 904. The processor 901 (or central processing unit, CPU) is the computing core and control core of the computer device, and is adapted to implement one or more instructions, in particular to load and execute the one or more instructions so as to implement the corresponding method flow or function.
In an embodiment, the processor 901 according to the embodiment of the present application may be configured to perform a series of image processing, which specifically includes: calling a graphics processor to obtain skin resources of skeleton animation of a target object from a video memory, wherein the target object comprises a plurality of skeletons, and the skin resources comprise vertexes of a plurality of grids and vertex information of each vertex; the vertex information of any vertex comprises initial coordinates, associated skeleton information and rendering information; the coordinates of any vertex are used for determining a pixel block corresponding to the mesh to which the vertex belongs; acquiring target space information of the associated skeleton of each vertex in a target image frame from the video memory according to the associated skeleton information of each vertex, wherein the target image frame is any image frame in the skeleton animation; performing coordinate transformation on the initial coordinate of each vertex by adopting the target space information of the associated skeleton of each vertex to obtain a target coordinate of each vertex; and performing image rendering on a target pixel block corresponding to the mesh to which each vertex belongs based on the target coordinates of each vertex and the rendering information of each vertex to obtain the target image frame, and the like.
An embodiment of the present application further provides a computer storage medium (memory), which is a memory device in a computer device used to store programs and data. It is understood that the computer storage medium here may include both a built-in storage medium of the computer device and an extended storage medium supported by the computer device. The computer storage medium provides storage space that stores the operating system of the computer device. One or more instructions suitable for loading and execution by the processor 901, which may be one or more computer programs (including program code), are also stored in this storage space. It should be noted that the computer storage medium here may be a high-speed RAM memory or a non-volatile memory, such as at least one disk storage; optionally, it may also be at least one computer storage medium located remotely from the aforementioned processor.
In one embodiment, one or more instructions stored in a computer storage medium may be loaded and executed by processor 901 to implement the corresponding steps of the methods described above with respect to the image processing method embodiments shown in FIG. 4 or FIG. 6; in particular implementations, one or more instructions in the computer storage medium are loaded by processor 901 and perform the following steps:
calling a graphics processor to obtain skin resources of bone animation of a target object from a video memory, wherein the target object comprises a plurality of bones, and the skin resources comprise vertexes of a plurality of grids and vertex information of each vertex; the vertex information of any vertex comprises initial coordinates, associated skeleton information and rendering information; the coordinates of any vertex are used for determining a pixel block corresponding to the mesh to which the vertex belongs;
acquiring target space information of the associated skeleton of each vertex in a target image frame from the video memory according to the associated skeleton information of each vertex, wherein the target image frame is any image frame in the skeleton animation;
performing coordinate transformation on the initial coordinate of each vertex by adopting the target space information of the associated skeleton of each vertex to obtain a target coordinate of each vertex;
and performing image rendering on a target pixel block corresponding to the mesh to which each vertex belongs based on the target coordinate of each vertex and the rendering information of each vertex to obtain the target image frame.
In one embodiment, the associated bone information for any vertex includes a bone identification of the associated bone for said any vertex; correspondingly, when the target space information of the associated bone of each vertex in the target image frame is acquired from the video memory according to the associated bone information of each vertex, the one or more instructions may be loaded and specifically executed by the processor 901:
obtaining a map texture from the video memory, wherein the map texture comprises: spatial information of each bone in image frames of the bone animation;
reading target space information of each bone in a target image frame of the bone animation from the map texture;
and traversing the vertexes of the grids, and acquiring the target space information of the relevant skeleton of the current vertex from the read target space information according to the skeleton identification of the relevant skeleton of the current vertex traversed currently.
In still another embodiment, the spatial information of any bone in any image frame of the bone animation comprises a dual quaternion, wherein a dual quaternion is a set of numerical values representing a translation parameter and a rotation parameter;

the map texture comprises a plurality of pixel points, each pixel point comprises one or more color channels, and the dual quaternion of any skeleton in any image frame is stored in the color channels of the pixel point associated with that skeleton in the map texture;

accordingly, when reading the target space information of each bone in the target image frame of the bone animation from the map texture, the one or more instructions may be loaded and specifically executed by the processor 901:

traversing the plurality of skeletons of the target object, and determining the associated pixel point of the currently traversed current skeleton from the map texture;

reading the dual quaternion of the current skeleton in the target image frame from the color channels of the associated pixel point of the current skeleton as the target space information of the current skeleton in the target image frame.
In another embodiment, each pixel point in the map texture has a two-dimensional texture coordinate; accordingly, when determining the associated pixel point of the current skeleton from the map texture, the one or more instructions may be loaded and specifically executed by the processor 901:
acquiring a bone identifier of the current bone;
calculating a one-dimensional reference coordinate corresponding to the current skeleton according to the frame identification of the target image frame, the number of the skeletons of the target object and the skeleton identification of the current skeleton;
mapping the one-dimensional reference coordinate into the map texture to obtain a target texture coordinate corresponding to the current skeleton;
and taking the pixel point positioned at the target texture coordinate as the associated pixel point of the current skeleton.
In another embodiment, when performing coordinate transformation on the initial coordinates of each vertex according to the target space information of the associated bone of each vertex to obtain the target coordinates of each vertex, the one or more instructions may be loaded and specifically executed by the processor 901:
traversing vertices of the plurality of meshes and determining a number of associated bones of a currently traversed current vertex;
if the number is 1, performing coordinate transformation on the initial coordinate of the current vertex by adopting the target space information of the associated skeleton of the current vertex to obtain a target coordinate of the current vertex;

if the number is greater than or equal to 2, fusing the target space information of each associated bone of the current vertex to obtain fused space information; and performing coordinate transformation on the initial coordinate of the current vertex by adopting the fused space information to obtain the target coordinate of the current vertex.
In yet another embodiment, the associated bone information of any vertex includes the bone weights of the respective associated bones of said any vertex; the target space information of any associated bone comprises a target dual quaternion of that bone in the target image frame; correspondingly, when the target space information of each associated bone of the current vertex is fused to obtain fused space information, the one or more instructions may be loaded and specifically executed by the processor 901:
obtaining the bone weight of each associated bone of the current vertex from the associated bone information of the current vertex;
adopting the bone weight of each associated bone of the current vertex to perform linear fusion on the target dual quaternion of each associated bone of the current vertex to obtain a fusion dual quaternion; wherein the fused spatial information comprises the fused dual quaternion.
In yet another embodiment, the one or more instructions may be further loaded and specifically executed by the processor 901:
acquiring resource data of the skeleton animation, and caching the resource data into a system memory; wherein the resource data comprises the skinning resources and a map texture comprising the spatial information of each bone in each image frame of the bone animation;
and if the rendering trigger event aiming at the skeletal animation is detected, calling a central processing unit to transmit the resource data from the system memory to the video memory, and executing the step of calling a graphics processor to acquire skin resources of the skeletal animation of the target object from the video memory.
In yet another embodiment, the one or more instructions may be further loaded and specifically executed by the processor 901:
if the target image frame is detected to meet the resource mounting condition, acquiring mounting point information of a target mounting point in the target image frame and a target auxiliary resource mounted by the target mounting point; the hanging point information of the target hanging point comprises a skeleton identification of a target skeleton corresponding to the target hanging point and relative posture information between the target hanging point and the target skeleton;
determining the target skeleton from the plurality of skeletons according to the skeleton identification in the hanging point information of the target hanging point;
when the target image frame is displayed, outputting the target auxiliary resource based on the position of the target skeleton in the target image frame and the relative posture information between the target hanging point and the target skeleton.
In yet another embodiment, the target auxiliary resource includes a target auxiliary image; accordingly, when the target auxiliary resource is output based on the position of the target bone in the target image frame and the relative posture information between the target hanging point and the target bone when the target image frame is displayed, the one or more instructions may be loaded and specifically executed by the processor 901:
determining a hanging point position of the target hanging point in the target image frame based on the position of the target skeleton in the target image frame and the relative posture information between the target hanging point and the target skeleton when the target image frame is displayed;
determining the target position of the target auxiliary image in the target image frame according to the hanging point position and the relative position information between the target hanging point and the target auxiliary image;
rendering and displaying the target auxiliary image at the target position in the target image frame.
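The two position computations above can be sketched as follows. This is illustrative only: the helper names are assumptions, and a 3x3 rotation matrix plus an offset vector stand in for the relative posture information.

```python
import numpy as np

def mount_point_position(bone_position, bone_rotation, relative_offset):
    """Hanging (mount) point position in the frame: the bone's position
    plus the relative offset rotated into the bone's current orientation."""
    return np.asarray(bone_position) + np.asarray(bone_rotation) @ np.asarray(relative_offset)

def auxiliary_image_position(mount_position, relative_image_offset):
    """Target position of the auxiliary image, offset from the hanging point."""
    return np.asarray(mount_position) + np.asarray(relative_image_offset)
```

Because the hanging point is expressed relative to the bone, it automatically follows the bone as the skeleton animates, and the auxiliary image follows the hanging point in turn.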
In yet another embodiment, the target auxiliary resource further includes three-dimensional audio data associated with the target auxiliary image; accordingly, the one or more instructions may also be loaded and specifically executed by processor 901:
determining a target hearing point in the target image frame and determining a target distance between the target hearing point and the target position;
searching a target volume corresponding to the target distance from a corresponding relation table between the distance and the volume;
and playing the three-dimensional audio data according to the target volume.
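A minimal sketch of this lookup (the table format and the behavior beyond the last entry are assumptions; the patent only specifies a correspondence table between distance and volume):

```python
import bisect

def volume_for_distance(distance, table):
    """Look up the playback volume for a hearing-point distance.

    table: list of (max_distance, volume) pairs sorted by distance.
    The first band whose max_distance is >= the target distance wins;
    beyond the last band the audio is treated as inaudible (0.0).
    """
    distances = [d for d, _ in table]
    i = bisect.bisect_left(distances, distance)
    return table[i][1] if i < len(table) else 0.0
```

A table lookup keeps the attenuation curve data-driven, so designers can tune it without changing code.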
In another embodiment, if the target hanging point corresponds to an image frame sequence in the skeletal animation, the image frame sequence includes the target image frame and one or more associated image frames, where an associated image frame is an image frame, other than the target image frame, of the skeletal animation in which the target hanging point exists;
the target auxiliary image moves with the target hanging point when the target hanging point moves with the target bone in each image frame in the image frame sequence.
In another embodiment, the target object is any virtual character object in any game, and the target auxiliary resource is a game resource of that game;
wherein the game resources include at least one of: game special effects, game prop images and three-dimensional game audio data.
After a plurality of bones are set for a target object in advance, the skin resources of the skeletal animation of the target object and the spatial information of each bone in each image frame of the animation can be configured offline. Thus, when the skeletal animation needs to be rendered, the graphics processor can be called to obtain the skin resources directly from the video memory, and to obtain the target space information of the associated bone of each vertex in the target image frame directly from the video memory according to the skin resources; this effectively reduces the amount of calculation in the image rendering process, saving processing resources and accelerating rendering. Then, the graphics processor is called to perform coordinate transformation on the initial coordinates of each vertex using the target space information of the associated bone of that vertex, so as to obtain the target coordinates of each vertex; and image rendering is performed on the target pixel block corresponding to the mesh to which each vertex belongs, based on the target coordinates and the rendering information of each vertex, to obtain the target image frame. Because the whole image rendering process is implemented in the graphics processor, no central processing unit is needed for the animation calculation, which effectively saves central processing unit resources; moreover, by virtue of the graphics processor's high parallel processing capacity, rendering efficiency can be effectively improved and the occupied bandwidth reduced.
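The per-vertex coordinate transformation summarized above — a bone's dual quaternion applied to a vertex's initial coordinates — can be sketched in scalar code as follows. This is illustrative only: the patent performs this step inside the graphics processor (typically a vertex shader), and the function names here are assumptions.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def dq_transform_point(dq, p):
    """Apply a unit dual quaternion (real part qr, dual part qd) to a
    3D point: rotate by qr, then translate by 2 * qd * conj(qr)."""
    dq = np.asarray(dq, dtype=np.float64)
    qr, qd = dq[:4], dq[4:]
    conj = qr * np.array([1.0, -1.0, -1.0, -1.0])
    rotated = quat_mul(quat_mul(qr, np.r_[0.0, np.asarray(p, float)]), conj)[1:]
    translation = 2.0 * quat_mul(qd, conj)[1:]
    return rotated + translation
```

Encoding both rotation and translation in a single normalized dual quaternion is what lets the fused spatial information of several associated bones be applied to a vertex in one transform.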
It should be noted that according to an aspect of the present application, a computer program product or a computer program is also provided, and the computer program product or the computer program includes computer instructions, and the computer instructions are stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided in the various alternatives in the aspect of the embodiment of the image processing method shown in fig. 4 or fig. 6 described above.
It should be understood that the above disclosure describes only preferred embodiments of the present application and is not intended to limit the scope of the present application; therefore, equivalent variations made according to the claims of the present application still fall within the scope of the present application.

Claims (14)

1. An image processing method, characterized by comprising:
calling a graphics processor to obtain skin resources of skeleton animation of a target object from a video memory, wherein the target object comprises a plurality of skeletons, and the skin resources comprise vertexes of a plurality of grids and vertex information of each vertex; the vertex information of any vertex comprises initial coordinates, associated bone information and rendering information; the coordinates of any vertex are used for determining a pixel block corresponding to the mesh to which the vertex belongs;
acquiring target space information of the associated skeleton of each vertex in a target image frame from the video memory according to the associated skeleton information of each vertex, wherein the target image frame is any image frame in the skeleton animation;
performing coordinate transformation on the initial coordinate of each vertex by adopting the target space information of the associated skeleton of each vertex to obtain a target coordinate of each vertex;
performing image rendering on a target pixel block corresponding to the mesh to which each vertex belongs based on the target coordinate of each vertex and the rendering information of each vertex to obtain the target image frame;
if it is detected that the target image frame meets a resource mounting condition, acquiring hanging point information of a target hanging point in the target image frame and a target auxiliary resource mounted at the target hanging point; the hanging point information of the target hanging point comprises a skeleton identification of a target skeleton corresponding to the target hanging point and relative posture information between the target hanging point and the target skeleton;
determining the target skeleton from the plurality of skeletons according to the skeleton identification in the hanging point information of the target hanging point;
when the target image frame is displayed, outputting the target auxiliary resource based on the position of the target skeleton in the target image frame and the relative posture information between the target hanging point and the target skeleton.
2. The method of claim 1, wherein the associated bone information of any vertex includes a bone identification of the associated bone of said any vertex; the obtaining, from the video memory, target space information of the associated skeleton of each vertex in a target image frame according to the associated skeleton information of each vertex includes:
obtaining a map texture from the video memory, wherein the map texture comprises: spatial information of each bone in each image frame of the bone animation;
reading target space information of each bone in a target image frame of the bone animation from the map texture;
and traversing the vertexes of the grids, and acquiring the target space information of the associated bone of the currently traversed current vertex from the read target space information according to the bone identification of the associated bone of the current vertex.
3. The method of claim 2, wherein spatial information of any bone in any image frame of the bone animation comprises a dual quaternion, the dual quaternion being a value representing a translation parameter and a rotation parameter;
the map texture comprises a plurality of pixel points, each pixel point comprises one or more color channels, and the dual quaternion of any skeleton in any image frame is stored in the color channels of the pixel point associated with that skeleton in the map texture;
the reading of the target space information of each bone in the target image frame of the bone animation from the map texture comprises:
traversing the plurality of skeletons of the target object, and determining the associated pixel point of the currently traversed current skeleton from the map texture;
reading the dual quaternion of the current skeleton in the target image frame from the color channel of the associated pixel point of the current skeleton, and taking the dual quaternion as the target space information of the current skeleton in the target image frame.
4. The method of claim 3, wherein each pixel point in the map texture has a two-dimensional texture coordinate; the determining of the associated pixel point of the current skeleton from the map texture comprises:
acquiring a skeleton identification of the current skeleton;
calculating a one-dimensional reference coordinate corresponding to the current skeleton according to the frame identification of the target image frame, the number of the skeletons of the target object and the skeleton identification of the current skeleton;
mapping the one-dimensional reference coordinate into the map texture to obtain a target texture coordinate corresponding to the current skeleton;
and taking the pixel point positioned at the target texture coordinate as the associated pixel point of the current skeleton.
5. The method of claim 1, wherein the coordinate transforming the initial coordinates of each vertex according to the target space information of the associated bone of each vertex to obtain the target coordinates of each vertex comprises:
traversing the vertexes of the multiple grids, and determining the number of associated bones of the currently traversed current vertex;
if the number is 1, performing coordinate transformation on the initial coordinate of the current vertex by adopting the target space information of the related skeleton of the current vertex to obtain a target coordinate of the current vertex;
if the number is greater than or equal to 2, fusing target space information of each associated bone of the current vertex to obtain fused space information; and carrying out coordinate transformation on the initial coordinate of the current vertex by adopting the fusion space information to obtain the target coordinate of the current vertex.
6. The method of claim 5, wherein the associated bone information of any vertex includes a bone weight of each associated bone of the vertex; the target space information of any associated bone comprises a target dual quaternion of that associated bone in the target image frame;
the fusing the target space information of each relevant bone of the current vertex to obtain fused space information comprises the following steps:
obtaining the bone weight of each associated bone of the current vertex from the associated bone information of the current vertex;
performing linear fusion on the target dual quaternions of the associated bones of the current vertex by using the bone weight of each associated bone of the current vertex, so as to obtain a fused dual quaternion; wherein the fused spatial information comprises the fused dual quaternion.
7. The method of any one of claims 1-6, further comprising:
acquiring resource data of the skeleton animation, and caching the resource data into a system memory; wherein the resource data comprises the skin resources and a map texture, the map texture comprising spatial information of each bone in each image frame of the bone animation;
and if a rendering trigger event for the skeletal animation is detected, calling a central processing unit to transmit the resource data from the system memory to the video memory, and executing the step of calling a graphics processor to acquire skin resources of the skeletal animation of the target object from the video memory.
8. The method of claim 1, wherein the target auxiliary resource comprises a target auxiliary image; the outputting the target auxiliary resource based on the position of the target bone in the target image frame and the relative posture information between the target hanging point and the target bone when the target image frame is displayed comprises:
when the target image frame is displayed, determining the hanging point position of the target hanging point in the target image frame based on the position of the target skeleton in the target image frame and the relative posture information between the target hanging point and the target skeleton;
determining the target position of the target auxiliary image in the target image frame according to the hanging point position and the relative position information between the target hanging point and the target auxiliary image;
rendering and displaying the target auxiliary image at the target position in the target image frame.
9. The method of claim 8, wherein the target auxiliary resource further comprises three-dimensional audio data associated with the target auxiliary image; the method further comprises the following steps:
determining a target hearing point in the target image frame and determining a target distance between the target hearing point and the target position;
searching a target volume corresponding to the target distance from a corresponding relation table between the distance and the volume;
and playing the three-dimensional audio data according to the target volume.
10. The method of claim 8 or 9, wherein if the target hanging point corresponds to an image frame sequence in the skeletal animation, the image frame sequence comprises the target image frame and one or more associated image frames, an associated image frame being an image frame, other than the target image frame, of the skeletal animation in which the target hanging point exists;
the target auxiliary image moves with the target hanging point when the target hanging point moves with the target bone in each image frame in the image frame sequence.
11. The method of claim 1, wherein the target object is any virtual character object in any game, and the target auxiliary resource is a game resource of that game;
wherein the game resources include at least one of: game special effects, game prop images and three-dimensional game audio data.
12. An image processing apparatus characterized by comprising:
an acquisition unit, configured to call a graphics processor to acquire skin resources of a skeletal animation of a target object from a video memory, wherein the target object comprises a plurality of skeletons, and the skin resources comprise vertexes of a plurality of grids and vertex information of each vertex; the vertex information of any vertex comprises initial coordinates, associated skeleton information and rendering information; the coordinates of any vertex are used for determining a pixel block corresponding to the mesh to which the vertex belongs;
the acquisition unit being further configured to obtain, from the video memory, target space information of the associated skeleton of each vertex in a target image frame according to the associated skeleton information of each vertex, wherein the target image frame is any image frame in the skeletal animation;
a processing unit, configured to perform coordinate transformation on the initial coordinates of each vertex using the target space information of the associated skeleton of each vertex to obtain target coordinates of each vertex;
a rendering unit, configured to perform image rendering on a target pixel block corresponding to the mesh to which each vertex belongs, based on the target coordinates of each vertex and the rendering information of each vertex, to obtain the target image frame;
the processing unit being further configured to: if it is detected that the target image frame meets a resource mounting condition, acquire hanging point information of a target hanging point in the target image frame and a target auxiliary resource mounted at the target hanging point, the hanging point information of the target hanging point comprising a skeleton identification of a target skeleton corresponding to the target hanging point and relative posture information between the target hanging point and the target skeleton; determine the target skeleton from the plurality of skeletons according to the skeleton identification in the hanging point information of the target hanging point; and when the target image frame is displayed, output the target auxiliary resource based on the position of the target skeleton in the target image frame and the relative posture information between the target hanging point and the target skeleton.
13. A computer device comprising an input interface and an output interface, further comprising:
a processor adapted to implement one or more instructions; and
a computer storage medium having stored thereon one or more instructions adapted to be loaded by the processor and to perform the image processing method according to any of claims 1-11.
14. A computer storage medium having stored thereon one or more instructions adapted to be loaded by a processor and to perform the image processing method according to any of claims 1-11.
CN202110284798.7A 2021-03-16 2021-03-16 Image processing method, image processing device, computer equipment and storage medium Active CN112933597B (en)

Publications (2)

Publication Number Publication Date
CN112933597A CN112933597A (en) 2021-06-11
CN112933597B true CN112933597B (en) 2022-10-14



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40045894

Country of ref document: HK

GR01 Patent grant