CN117224963A - Virtual asset processing method and device, storage medium and electronic device - Google Patents


Info

Publication number
CN117224963A
Authority
CN
China
Prior art keywords
virtual assets
coordinate system
original
data
original virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310968972.9A
Other languages
Chinese (zh)
Inventor
施雨宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202310968972.9A
Publication of CN117224963A
Legal status: Pending


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application discloses a virtual asset processing method and apparatus, a storage medium, and an electronic device. The method comprises the following steps: acquiring an original virtual asset sequence; performing center-point mapping on any two original virtual assets in the sequence, based on origin information of the sequence, to obtain mapping data for those two assets, wherein the origin information represents the position obtained by mapping the origin of a world coordinate system to a screen coordinate system, and the mapping data comprises one or more of the following: scaling data for scaling the original virtual assets, offset data for offsetting them, and trimming data for trimming them; and modifying the original virtual assets based on the mapping data to obtain target virtual assets. The application thereby addresses the technical problem of low virtual asset processing efficiency in the related art.

Description

Virtual asset processing method and device, storage medium and electronic device
Technical Field
The present disclosure relates to the field of computers, and in particular to a virtual asset processing method and apparatus, a storage medium, and an electronic device.
Background
At present, in game production that uses sequence-frame animation as its art carrier, multiple sets of sequence-frame assets often need to have their center points mapped to one another in batches. The prevailing way to meet this need is to develop a tool chain inside a digital content production tool for baking sequence-frame assets and then modify the rendering environment used for baking in batches, but this approach is cumbersome to operate and processes virtual assets inefficiently.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
At least some embodiments of the present disclosure provide a method, an apparatus, a storage medium, and an electronic device for processing a virtual asset, so as to at least solve a technical problem in the related art that the efficiency of processing the virtual asset is low.
According to one embodiment of the present disclosure, there is provided a method for processing a virtual asset, including: acquiring an original virtual asset sequence, wherein a plurality of original virtual assets in the original virtual asset sequence are used for generating videos; performing center point mapping on any two original virtual assets in the original virtual asset sequence based on origin information of the original virtual asset sequence to obtain mapping data of any two original virtual assets, wherein the origin information is used for representing position information obtained by mapping origins of a world coordinate system to a screen coordinate system, and the mapping data comprises one or more of the following: scaling data for scaling the plurality of original virtual assets, offset data for offset processing the plurality of original virtual assets, and trimming data for trimming the plurality of original virtual assets; and modifying the plurality of original virtual assets based on the mapping data to obtain a plurality of target virtual assets.
There is also provided, in accordance with an embodiment of the present disclosure, a processing apparatus for a virtual asset, including: the system comprises a sequence acquisition module, a video generation module and a video generation module, wherein the sequence acquisition module is used for acquiring an original virtual asset sequence, and a plurality of original virtual assets in the original virtual asset sequence are used for generating videos; the asset mapping module is used for performing center point mapping on any two original virtual assets in the original virtual asset sequence based on the original point information of the original virtual asset sequence to obtain mapping data of the any two original virtual assets, wherein the original point information is used for representing position information obtained by mapping an original point of a world coordinate system to a screen coordinate system, and the mapping data comprises one or more of the following: scaling data for scaling the plurality of original virtual assets, offset data for offset processing the plurality of original virtual assets, and trimming data for trimming the plurality of original virtual assets; and the asset modification module is used for modifying the plurality of original virtual assets based on the mapping data to obtain a plurality of target virtual assets.
According to one embodiment of the present disclosure, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured, when run, to perform any one of the above virtual asset processing methods.
There is further provided, in accordance with an embodiment of the present disclosure, an electronic device including a memory having a computer program stored therein and a processor configured to run the computer program to perform the method of processing a virtual asset in any of the above.
In at least some embodiments of the present disclosure, an original virtual asset sequence is acquired; center-point mapping is performed on any two original virtual assets in the sequence, based on its origin information, to obtain mapping data for those two assets; and the original virtual assets are modified based on the mapping data to obtain target virtual assets. The origin information represents the position obtained by mapping the origin of the world coordinate system to the screen coordinate system, and the mapping data comprises one or more of the following: scaling data for scaling the original virtual assets, offset data for offsetting them, and trimming data for trimming them. Because the assets can be modified directly from the mapping data, the content production tool is needed only when the original virtual asset sequence is generated, and there is no need to re-enter it to process and output the assets. This simplifies the virtual asset processing flow, improves processing efficiency, and thereby solves the technical problem of low virtual asset processing efficiency in the related art.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate and explain the present disclosure, and together with the description serve to explain the present disclosure. In the drawings:
FIG. 1 is a block diagram of a hardware architecture of a mobile terminal of a method of processing a virtual asset according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of a method of processing a virtual asset according to one embodiment of the present disclosure;
FIG. 3a is a schematic diagram of a virtual asset according to one embodiment of the present disclosure;
FIG. 3b is a schematic diagram of a scaled virtual asset according to one embodiment of the present disclosure;
FIG. 3c is a schematic diagram of a virtual asset after correction is successful, according to one embodiment of the present disclosure;
FIG. 4a is a schematic illustration of a virtual asset with texture boundary overflow according to one embodiment of the present disclosure;
FIG. 4b is a schematic diagram of a trimmed virtual asset in accordance with one embodiment of the disclosure;
FIG. 5 is a block diagram of a processing device of a virtual asset according to one embodiment of the present disclosure;
fig. 6 is a schematic diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
To help those skilled in the art better understand the present disclosure, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings. The described embodiments are clearly only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by one of ordinary skill in the art from the embodiments in this disclosure without inventive effort shall fall within the scope of the present disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In one possible implementation, after careful practice and study of the technical problem commonly encountered in the field of virtual asset processing — that processing virtual assets is inefficient — the inventor provides a virtual asset processing method. The game scenario to which embodiments of the disclosure apply may be a game that uses sequence-frame animation as its art carrier, typically of the role-playing type. The method acquires an original virtual asset sequence; performs center-point mapping on any two original virtual assets in the sequence, based on its origin information, to obtain mapping data for those two assets; and modifies the original virtual assets based on the mapping data to obtain target virtual assets. The origin information represents the position obtained by mapping the origin of the world coordinate system to the screen coordinate system, and the mapping data comprises one or more of the following: scaling data for scaling the original virtual assets, offset data for offsetting them, and trimming data for trimming them. Because the assets can be modified directly from the mapping data, the content production tool is needed only when the original virtual asset sequence is generated; there is no need to re-enter it to process and output the assets, which simplifies the processing flow, improves processing efficiency, and solves the technical problem of low virtual asset processing efficiency in the related art.
The above-described method embodiments of the present disclosure may be performed on a mobile terminal, a computer terminal, or a similar computing device. Taking a mobile terminal as an example, the mobile terminal may be a terminal device such as a smartphone, tablet computer, palmtop computer, mobile internet device, or game console. Fig. 1 is a hardware configuration block diagram of a mobile terminal for a virtual asset processing method according to an embodiment of the present disclosure. As shown in fig. 1, the mobile terminal may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processing (DSP) chip, a microcontroller unit (MCU), a field-programmable gate array (FPGA), a neural processing unit (NPU), a tensor processing unit (TPU), an artificial intelligence (AI) processor, etc.) and a memory 104 for storing data, and, in one embodiment of the present disclosure, may further include an input/output device 108 and a display device 110.
In some optional, game-scenario-based embodiments, the device may further provide a human-machine interaction interface with a touch-sensitive surface that senses finger contacts and/or gestures to interact with a graphical user interface (GUI). The human-machine interaction functions may include interactions such as creating web pages, drawing, word processing, producing electronic documents, games, video conferencing, instant messaging, sending and receiving e-mail, call interfaces, playing digital video, playing digital music, and/or web browsing. The executable instructions for performing these human-machine interaction functions are configured/stored in a computer program product or readable storage medium executable by one or more processors.
It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and not limiting of the structure of the mobile terminal described above. For example, the mobile terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
In accordance with one embodiment of the present disclosure, an embodiment of a virtual asset processing method is provided. It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, such as with a set of computer-executable instructions, and that, although a logical order is shown in the flowchart, in some cases the steps may be performed in an order different from that illustrated or described herein.
FIG. 2 is a flow chart of a method of processing a virtual asset, as shown in FIG. 2, according to one embodiment of the present disclosure, the method comprising the steps of:
step S202, an original virtual asset sequence is acquired, wherein a plurality of original virtual assets in the original virtual asset sequence are used for generating videos.
Here, the original virtual asset sequence may be understood as a sequence of multiple original virtual assets generated directly by the content production tool. A virtual asset is a digitized asset that exists in the virtual world, which may include, but is not limited to, virtual currency, virtual props, virtual property, and virtual identities; virtual assets can be traded, transferred, and used in the virtual world. In embodiments of the present disclosure, the multiple original virtual assets may be frame images used to generate a video.
It will be appreciated that, since each virtual asset is produced individually by the content production tool, there will be differences between virtual assets; that is, the space and resolution corresponding to different virtual assets will differ.
It should be noted that, since all the virtual assets correspond to the same virtual space and differ only in how they are rendered and displayed on screen, the space here can be understood as the display space after rendering.
In an alternative embodiment, multiple original virtual assets may be video generated by video editing software.
Step S204, performing center point mapping on any two original virtual assets in the original virtual asset sequence based on the original point information of the original virtual asset sequence to obtain mapping data of any two original virtual assets, wherein the original point information is used for representing position information obtained by mapping an original point of a world coordinate system to a screen coordinate system, and the mapping data comprises one or more of the following: scaling data for scaling the plurality of original virtual assets, offset data for offset processing the plurality of original virtual assets, and trimming data for trimming the plurality of original virtual assets.
The origin information may be understood as the coordinate information of the origin of the virtual space corresponding to the original virtual asset sequence, that is, the world-coordinate origin of that virtual space. Center-point mapping maps the center point of one original virtual asset onto a point that coincides with the center point of another original virtual asset. The mapping data is the data that fulfils this requirement — that is, data which can map the center point of one original virtual asset to the center point of another — and may include, but is not limited to, scaling data, offset data, and trimming data. The world coordinate system is a coordinate system describing the position and direction of objects in a virtual scene; it is typically a three-dimensional coordinate system whose three axes (usually X, Y, and Z) represent position in space, and every object in the scene has a position and direction relative to it. The screen coordinate system may be understood as the screen coordinate system of a camera. The scaling data is data for scaling a virtual asset, the offset data is data for offsetting it, and the trimming data is data for trimming it.
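As a minimal illustration of the three kinds of mapping data described above, the structure below is an assumption: the field names and the four-component tuples (mirroring the `Vector4` pseudocode later in this description) are illustrative only and not part of the patent.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MappingData:
    """Mapping data between two original virtual assets.

    Per the description, it may hold one or more of: scaling data,
    offset data, and trimming data. Shapes are illustrative.
    """
    scale: Optional[Tuple[float, float, float, float]] = None
    offset: Optional[Tuple[float, float, float, float]] = None
    trim: Optional[Tuple[int, int, int, int]] = None

# A mapping that only scales (no offset or trimming needed)
data = MappingData(scale=(2.0, 2.0, 0.0, 0.0))
```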
In an alternative embodiment, the origin of the world coordinate system may be represented as float3(0, 0, 0).
It should be noted that recording the origin information of the original virtual asset sequence facilitates the subsequent mapping of the remaining original virtual assets; if the origin information is not recorded, the subsequent flow cannot be supported.
It can be understood that performing center-point mapping on any two original virtual assets in the original virtual asset sequence enables alignment and matching between them, which is very important for operations such as animation synthesis, special-effect production, and scene construction.
Specifically, in a virtual scene, center-point mapping of any two original virtual assets in the sequence can have the following effects. Different virtual assets in a sequence may come from different sources, so their center points may not coincide; mapping the center points aligns them to the same position, making animation transitions smoother. Certain special effects need to interact with the virtual asset sequence — for example, an explosion effect must match the position of the explosion center precisely — and center-point mapping keeps the effect and the asset sequence consistent in position, making the result more realistic. In scene construction, different virtual assets may need to be grouped together, such as combining characters and backgrounds; mapping the center points makes it easy to align the positions of multiple assets and improves construction efficiency. In short, mapping the center points of different virtual assets in one virtual asset sequence improves their alignment and matching, making animation production, special-effect production, and scene construction more accurate and efficient.
Step S206, modifying the plurality of original virtual assets based on the mapping data to obtain a plurality of target virtual assets.
Wherein, the plurality of target virtual assets may be understood as virtual assets obtained by modifying the plurality of original virtual assets based on the mapping data.
It can be appreciated that, through the above process, a tool independent of the digital content production tool can be built that accurately maps the center points of multiple sets of sequence-frame assets to one another. The approach is also highly general, greatly improves the efficiency of modifying assets, and reduces game development cost.
Through the above steps, an original virtual asset sequence is acquired; center-point mapping is performed on any two original virtual assets in the sequence, based on its origin information, to obtain mapping data for those two assets; and the original virtual assets are modified based on the mapping data to obtain target virtual assets. The origin information represents the position obtained by mapping the origin of the world coordinate system to the screen coordinate system, and the mapping data comprises one or more of the following: scaling data for scaling the original virtual assets, offset data for offsetting them, and trimming data for trimming them. Because the assets can be modified directly from the mapping data, the content production tool is needed only when the original virtual asset sequence is generated, and there is no need to re-enter it to process and output the assets. This simplifies the virtual asset processing flow, improves processing efficiency, and thereby solves the technical problem of low virtual asset processing efficiency in the related art.
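The patent does not give an implementation of steps S202–S206, but a toy sketch may help fix ideas. Everything below is an assumption: frames are reduced to (width, height) pairs, and only the scaling part of the mapping data is applied.

```python
def process_assets(frames, scale):
    """Toy sketch of steps S202-S206: `frames` is an acquired sequence
    of (width, height) original assets; `scale` is precomputed scaling
    mapping data of the form (w_scale, h_scale, 0, 0). Offset and
    trimming handling are omitted for brevity."""
    return [(int(w * scale[0]), int(h * scale[1])) for (w, h) in frames]

# Modify two acquired frames using the scaling factors in the mapping data
targets = process_assets([(512, 512), (512, 256)], (2.0, 2.0, 0.0, 0.0))
```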
Optionally, performing center-point mapping on any two original virtual assets in the original virtual asset sequence based on its origin information, to obtain mapping data for those two assets, includes: acquiring attribute information of the two original virtual assets, the attribute information at least comprising pixel size and coordinate system information; determining, based on the pixel size, a virtual asset to be mapped and a standard virtual asset among the two virtual assets; mapping the coordinate system information to be mapped of the virtual asset to be mapped, based on the standard coordinate system information of the standard virtual asset, to obtain the scaling data; restoring the coordinate system information to be mapped, based on the origin information, to obtain the offset data; and mapping the pixel size to be mapped of the virtual asset to be mapped, based on the standard coordinate system information, to obtain the trimming data.
The attribute information may be understood as information reflecting the specific characteristics of a virtual asset and may include, but is not limited to, pixel size and coordinate system information. The virtual asset to be mapped is the asset, of the two, that is to be mapped; the standard virtual asset is the asset that serves as the mapping reference. The standard coordinate system information is the coordinate system information of the standard virtual asset; the coordinate system information to be mapped is that of the virtual asset to be mapped; and the pixel size to be mapped is the pixel size of the virtual asset to be mapped.
It will be appreciated that, because multiple sets of assets may have different pixel precision, the pixel units must be unified — that is, units are unified toward the asset with lower pixel precision. The virtual asset to be mapped and the standard virtual asset are therefore determined from the two virtual assets based on pixel size: for example, the virtual asset with the smaller pixel size may serve as the standard virtual asset and the one with the larger pixel size as the virtual asset to be mapped, in which case the virtual asset to be mapped is modified using the pixel size of the standard virtual asset as the reference.
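The selection rule above (smaller pixel size becomes the standard) can be sketched as follows. The dict shape and names are illustrative assumptions, not the patent's data model.

```python
def split_pair(asset_a, asset_b):
    """Pick the standard asset and the asset to be mapped.

    Per the text above, the asset with the smaller pixel area serves as
    the standard; the larger one is modified against it. Asset dicts
    ({'name', 'width', 'height'}) are illustrative only.
    """
    area = lambda a: a["width"] * a["height"]
    if area(asset_a) <= area(asset_b):
        return asset_a, asset_b  # (standard, to_be_mapped)
    return asset_b, asset_a

std, to_map = split_pair(
    {"name": "frame_small", "width": 512, "height": 512},
    {"name": "frame_big", "width": 1024, "height": 1024},
)
```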
Specifically, after the scaling data, the offset data, and the trimming data are obtained as above, the original virtual assets can be modified based on that data to obtain the target virtual assets. The content production tool is thus used only when the original virtual asset sequence is generated, and the assets need not be processed and output by re-entering it, which simplifies the processing flow, improves processing efficiency, and solves the technical problem of low virtual asset processing efficiency in the related art.
In an alternative embodiment, the attribute information for any two original virtual assets may be obtained through the use of a computer graphics tool.
Optionally, mapping the coordinate system information to be mapped of the virtual asset to be mapped based on the standard coordinate system information of the standard virtual asset to obtain scaling data, including: determining a unit length in a standard coordinate system based on first coordinate axis information in the coordinate system information to be mapped and standard coordinate axis information in the standard coordinate system; and determining scaling data based on second coordinate axis information and unit length, wherein the second coordinate axis information is used for representing coordinate axis information except the first coordinate axis information in the coordinate system information to be mapped.
The first coordinate axis information may be understood as related information of a coordinate axis in the coordinate system to be mapped, including but not limited to a width and a height of the first coordinate axis, the standard coordinate axis information may be understood as related information of a coordinate axis in the standard coordinate system, including but not limited to a width and a height of the standard coordinate axis, the unit length may be understood as an actual length or a numerical value corresponding to a unit length represented on the standard coordinate axis, and the second coordinate axis information may be understood as related information of other coordinate axes except the first coordinate axis information in the coordinate system to be mapped.
Specifically, the above procedure may be represented using pseudo code as follows:
ReScale = new Vector4(width / standardPixel.x, height / standardPixel.y, 0, 0)
This pseudo-code creates a Vector4 object named ReScale containing four elements: the first is the width of the coordinate system to be mapped divided by the unit length of the x-axis of the standard coordinate system (standardPixel.x), representing the width scale; the second is the height divided by the unit length of the y-axis of the standard coordinate system (standardPixel.y), representing the height scale; the third and fourth are both 0, indicating that the scaling changes neither depth nor position. The purpose of this pseudo-code is to compute the width and height scale factors relative to standardPixel and store them in ReScale, which subsequent operations use to scale the width and height.
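The scaling step can be transcribed into a runnable Python sketch. The function name and the tuple standing in for `Vector4` are assumptions mirroring the pseudocode, not the patent's implementation.

```python
def compute_rescale(width, height, standard_pixel):
    """Return a 4-component scale vector (w_scale, h_scale, 0, 0).

    `standard_pixel` is the (x, y) unit length of the standard
    coordinate system, as in the ReScale pseudocode; the last two
    components stay 0 because only width and height are scaled.
    """
    w_scale = width / standard_pixel[0]
    h_scale = height / standard_pixel[1]
    return (w_scale, h_scale, 0.0, 0.0)

# A 2048x1024 coordinate system against a 1024x1024 standard unit
rescale = compute_rescale(2048, 1024, (1024, 1024))
```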
Optionally, restoring the coordinate system information to be mapped based on the origin information to obtain offset data, including: determining a unit length in a standard coordinate system based on first coordinate axis information in the coordinate system information to be mapped and standard coordinate axis information in the standard coordinate system; determining a first origin coordinate of an origin of a standard coordinate system in a coordinate system to be mapped; determining a second origin coordinate of the origin information in the standard coordinate system; the offset data is determined based on the first origin coordinates, the second origin coordinates, the coordinate system information to be mapped, and the unit length in the standard coordinate system.
The first origin coordinate may be understood as a coordinate of an origin of the standard coordinate system in the coordinate system to be mapped, and the second origin coordinate may be understood as a coordinate of an origin to be converted in the standard coordinate system.
It can be appreciated that, because unifying pixel precision offsets the position of the virtual asset in the canvas, the position needs to be corrected with an origin-resetting algorithm; by computing an offset value, points in the scaled coordinate system to be mapped can be translated into the standard coordinate system.
In particular, the specific flow of determining the offset data may be represented by the following pseudo code:
OriginPointOffset=new Vector4(-((standardOx-((standardPixel.x-width)/2+Ox))/standardPixel.x/(width/standardPixel.x)),((standardOy-((standardPixel.y-height)/2+Oy))/standardPixel.y/(height/standardPixel.y)),0,0),
wherein standardOx and standardOy represent the coordinates of the origin of the standard coordinate system in the coordinate system to be mapped, namely the first origin coordinate; standardPixel.x and standardPixel.y represent the unit lengths of the standard coordinate system in the coordinate system to be mapped; width and height represent the width and height of the coordinate system to be mapped; and Ox and Oy represent the coordinates of the origin to be converted in the standard coordinate system, namely the second origin coordinate.
The purpose of this section of pseudo code is to calculate the offset of the origin of the coordinate system to be mapped relative to the origin of the standard coordinate system. The specific calculation process is as follows:
First, the offset in the X-axis direction is calculated: half of the difference between the standard pixel width and the width of the coordinate system to be mapped is added to the X coordinate of the origin to be converted (Ox), yielding that origin's X position in the coordinate system to be mapped. This value is subtracted from the X coordinate of the standard origin (standardOx), the difference is divided by standardPixel.x, then divided by the ratio of the width of the coordinate system to be mapped to standardPixel.x, and the sign is negated; the result is the offset in the X-axis direction. The offset in the Y-axis direction is calculated in the same way using height, Oy, standardOy and standardPixel.y, without the final negation. The offsets in the X-axis and Y-axis directions are then combined into a vector whose Z and W components are both 0; the resulting vector is OriginPointOffset. That is, the pseudo-code computes the offset of the origin of the coordinate system to be mapped relative to the origin of the standard coordinate system and stores the result in a vector, and the calculation of the offset takes into account the size and position of the coordinate system to be mapped as well as the origin and pixel size of the standard coordinate system.
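The offset formula can be transcribed directly into a small Python helper; parameter names follow the pseudo-code, and the function is a sketch of the calculation, not engine code:

```python
def origin_point_offset(standard_ox, standard_oy, ox, oy,
                        width, height, standard_px_x, standard_px_y):
    """OriginPointOffset as a 4-tuple: offset of the to-be-mapped
    origin relative to the standard origin (z and w unused)."""
    x = -((standard_ox - ((standard_px_x - width) / 2 + ox))
          / standard_px_x / (width / standard_px_x))
    y = ((standard_oy - ((standard_px_y - height) / 2 + oy))
         / standard_px_y / (height / standard_px_y))
    return (x, y, 0.0, 0.0)

# When both origins coincide at the center of equal-sized coordinate
# systems, the computed offset is zero.
off = origin_point_offset(512, 512, 512, 512, 1024, 1024, 1024, 1024)
```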
Fig. 3a is a schematic diagram of a virtual asset according to an embodiment of the disclosure, as shown in fig. 3a, a hollow rectangle indicates a virtual asset to be mapped, a solid rectangle indicates a standard virtual asset, an oval solid line indicates a range where the virtual asset is located, and as can be seen from fig. 3a, there may be a case where pixel precision is not uniform between two virtual assets, which may be represented by that pixel sizes of the two virtual assets are different.
FIG. 3b is a schematic diagram of a scaled virtual asset, as shown in FIG. 3b, in which a hollow rectangle represents a virtual asset to be mapped, a solid rectangle represents a standard virtual asset, and an oval solid line represents a range of the virtual asset, as can be seen from FIG. 3b, a virtual asset with a larger pixel size, i.e., a virtual asset to be mapped, can be scaled down with a virtual asset with a smaller pixel size as a standard, and in this way, pixel accuracy of the two virtual assets can be unified, but a shift of the position of the virtual asset in the canvas may be caused.
FIG. 3c is a schematic diagram of a successfully modified virtual asset, as shown in FIG. 3c, with a dashed rectangle representing the successfully modified virtual asset and an oval solid line representing the range of the virtual asset, where the effect of the offset of the virtual asset in the canvas caused by the scaling of the virtual asset has been eliminated, according to one embodiment of the present disclosure.
Optionally, mapping the pixel size to be mapped of the virtual asset to be mapped to obtain the clipping data, including: determining a unit length in a standard coordinate system based on first coordinate axis information in the coordinate system information to be mapped and standard coordinate axis information in the standard coordinate system; the clipping data is determined based on the pixel size to be mapped, the standard coordinate system information, and the unit length in the standard coordinate system.
It will be appreciated that, by calculating a clipping value through an algorithm, the pixel rendering that extends past the virtual asset boundary due to the texture addressing mode can be clipped away.
In particular, the specific flow of determining the cutoff data may be represented by the following pseudo code:
cutValue=new Vector4((standardPixel.x-bodyImages[0].width)/2/standardPixel.x,(standardPixel.y-bodyImages[0].height)/2/standardPixel.y,0,0),
cutTerm=(saturate((uv.x-cutValue.x)*10000000)*(1-saturate((uv.x-(1-cutValue.x))*10000000)))*(saturate((uv.y-cutValue.y)*10000000)*(1-saturate((uv.y-(1-cutValue.y))*10000000))),
final=lerp(float4(0,0,0,0),final,cutTerm),
cutValue is a four-dimensional vector representing the clipping data; the xy components of the vector represent the texture coordinates of the upper-left corner of the image in the standard coordinate system, and the computation of cutTerm and final can be understood as the process of clipping the virtual asset based on the clipping data. The pseudo-code first subtracts the width and height of the image from the width and height of the standard coordinate system and divides by 2 to obtain cutValue, then passes cutValue into the shader and compares it with the texture coordinates. The saturate function limits the comparison result to [0,1] to judge whether a pixel region is clipped: if the pixel region is within the kept range, the clipping coefficient cutTerm is set to 1; if it is outside the kept range, cutTerm is set to 0. cutTerm is then used to interpolate between the image color and the background color to achieve the clipping effect. The aim is to calculate a clipping factor (cutTerm) and apply it to the final result (final).
The specific process is as follows: first, a new vector (cutValue) is obtained by dividing the difference between the original image (body images 0) and the standard pixel (standard pixel) by the value of 2 times the standard pixel, the vector represents the clipping amount of the image on the x and y axes, namely the clipping data, then a clipping coefficient (cutTerm) is obtained by calculating the difference between uv coordinates and the cutValue, multiplying the result by a large number (10000000), and limiting the result between 0 and 1 by a saturate function. The calculation mode of the clipping coefficient is that the offset of the uv coordinate on the x and y axes is limited, so that the value in the cutValue range is 1, the value exceeding the cutValue range is 0, finally the clipping coefficient is applied to a final result (final) through a lerp function, the pixel value exceeding the cutValue range is set to 0, and the pixel value remaining in the cutValue range is unchanged. In short, the purpose of this section of pseudo code is to set the pixel value out of the specified range to 0 by calculating the cropping coefficient to achieve the cropping effect of the image.
FIG. 4a is a schematic diagram of a virtual asset with texture boundary overflow, as shown in FIG. 4a, where the diagonal line filled portion represents texture boundary overflow errors, the open rectangle represents the virtual asset, the dashed rectangle represents the projection of the virtual asset onto the canvas, and the oval solid line represents the range of the virtual asset, as can be seen from FIG. 4a, the texture addressing mode may result in texture boundary overflow errors.
FIG. 4b is a schematic diagram of a trimmed virtual asset, as shown in FIG. 4b, with open rectangles representing virtual assets, dashed rectangles representing projection of virtual assets onto canvas, and oval solid lines representing the extent of the virtual assets, as can be seen from FIG. 4b, to eliminate texture boundary overflow errors caused by texture addressing mode after trimming.
Optionally, obtaining the original virtual asset sequence includes: acquiring a file directory corresponding to an original virtual asset sequence through a script; reading virtual assets corresponding to the file catalogue to obtain a plurality of first virtual assets; unifying texture formats of the plurality of first virtual assets to obtain a plurality of second virtual assets; and storing the plurality of second virtual assets into a preset texture list to obtain an original virtual asset sequence.
The script may be understood as a section of computer program code for executing a specific task. The file directory may be understood as a structural manner of organizing and storing files; by writing a script, the file directory corresponding to the original virtual asset sequence may be obtained, and various operations may be performed on the file directory. The first virtual assets may be understood as all virtual assets under the file directory, the second virtual assets may be understood as the virtual assets after the texture format is unified, and the preset texture list may be understood as a list set in advance for performing batch texture processing.
It will be appreciated that obtaining the file directory corresponding to the original virtual asset sequence may be achieved by writing a script. This script may read the data of the virtual asset sequence and parse it according to specific rules to determine the structure and path of the file directory, which the script may then process and manipulate as needed, such as creating, deleting, moving, copying, or renaming files and folders. The above process may be implemented in the real-time three-dimensional development platform and editor Unity 3D (simply Unity) engine.
In an alternative embodiment, the local sequence frame asset file directory may be obtained through an application programming interface, and all sequence frame assets under the file directory, that is, the plurality of first virtual assets, may be read.
Specifically, the above procedure can be understood as: acquiring a local sequence-frame asset file directory, reading all sequence-frame assets under the file directory, and adding the processed assets to the two-dimensional texture list in a loop.
It will be appreciated that merging multiple assets into one texture reduces the number of rendering calls and thereby improves rendering performance, so adding the processed assets to the two-dimensional texture list in a loop makes it possible to merge multiple small assets into one large texture, reducing the load on the graphics processor (Graphics Processing Unit, simply referred to as GPU) and improving rendering efficiency.
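The directory-reading loop can be sketched as follows; the file extensions and function name are assumptions for illustration, since the patent does not fix a texture file format:

```python
import os

def build_texture_list(asset_dir, extensions=(".png", ".tga")):
    """Collect sequence-frame asset paths under asset_dir into an
    ordered list, mirroring the loop that appends each processed
    asset to the two-dimensional texture list."""
    textures = []
    for name in sorted(os.listdir(asset_dir)):
        if name.lower().endswith(extensions):
            textures.append(os.path.join(asset_dir, name))
    return textures
```

Keeping all frames in one list in this way allows the later scaling, offset and clipping steps to run over the whole sequence in a single batch.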
It should be noted that, storing the plurality of second virtual assets in the preset texture list to obtain the original virtual asset sequence can facilitate the subsequent batch processing of all the two-dimensional textures, if the operation is not performed, all the textures need to be processed one by one, so that the processing efficiency of the method is low.
Optionally, unifying the texture formats of the plurality of first virtual assets to obtain a plurality of second virtual assets, including: adjusting the linear interpolation modes of the textures of the plurality of first virtual assets to preset modes, wherein the preset modes are used for representing that the linear interpolation of the textures of the plurality of first virtual assets is forbidden; the texture addressing mode of the plurality of first virtual assets is adjusted to a clamp mode.
Texture coordinates in texture mapping are generally non-integer; the texture linear interpolation mode may be understood as a mode in which the color value of an intermediate point of a texture is obtained by interpolating over the texture coordinates. The preset mode may be understood as a mode in which no linear interpolation is performed. The texture addressing mode may be understood as the method in computer graphics for determining the position of texture coordinates within a texture image; it determines how texture sampling is handled when the texture coordinates exceed the boundary of the texture image. The clamp mode may be understood as a mode in which the portion exceeding the texture coordinate range is limited to the boundary of the texture.
It can be appreciated that the texture linear interpolation mode may provide a smooth texture mapping effect, especially suitable for the case where the texture coordinate changes greatly, but in the texture linear interpolation mode, interpolation calculation may be performed between pixel points, which results in inaccurate values and blurred images, so in the embodiment of the present disclosure, the texture linear interpolation mode may be adjusted to a mode in which linear interpolation is not performed.
It should be noted that, in the clamping mode, the portion beyond the range of the texture coordinates is limited to the boundary of the texture, and the processing such as repetition or mirroring is not performed, for example, the range of the texture coordinates may be limited to be between [0,1], and the portion beyond the range may be cut. The clamping mode can avoid the situation of boundary repetition or mirroring in texture mapping, so that the texture can still be correctly displayed when the texture coordinates are out of range.
In addition to the clamp mode, common texture addressing modes may include: repeat, in which texture coordinates beyond the range [0,1] are mapped back into [0,1], i.e., the excess portion is filled by repeating the texture image; mirrored repeat, in which texture coordinates beyond the range [0,1] are mapped back into [0,1], but the excess portion is filled in a mirror-symmetric manner; mirror clamp, in which texture coordinates beyond the range [0,1] are limited to [0,1], but the excess portion is cut off in a mirror-symmetric manner; and border, in which texture coordinates beyond the range [0,1] are mapped to a specified border color. Different texture addressing modes are suitable for different application scenarios, and an appropriate mode can be selected according to actual requirements to handle the portion of the texture that is out of range.
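These addressing modes can be illustrated with a small Python helper that resolves a single out-of-range coordinate; the mode names are descriptive labels, not a specific engine's enumeration:

```python
def address(u, mode):
    """Resolve an out-of-range texture coordinate u under a given
    addressing mode, returning the coordinate actually sampled."""
    if mode == "clamp":
        # Limit to the texture boundary.
        return min(max(u, 0.0), 1.0)
    if mode == "repeat":
        # Wrap back into [0, 1] by tiling the texture.
        return u % 1.0
    if mode == "mirror":
        # Reflect back into [0, 1] with period 2.
        t = u % 2.0
        return 2.0 - t if t > 1.0 else t
    raise ValueError("unknown addressing mode: " + mode)
```

For example, a coordinate of 1.25 samples the edge pixel under clamp, wraps to 0.25 under repeat, and reflects to 0.75 under mirror, which is why clamp avoids the repeated or mirrored borders described above.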
Optionally, modifying the plurality of original virtual assets based on the mapping data to obtain a plurality of target virtual assets, including: modifying the plurality of original virtual assets based on the mapping data through a preset interface of the content production tool to obtain a plurality of modified virtual assets; drawing the plurality of modified virtual assets onto the render texture target; and performing format conversion on the rendering texture target to obtain a plurality of target virtual assets.
The preset interface may be understood as an application programming interface preset in advance, and the rendering texture target may be used to capture, store and reuse rendering results in graphics rendering, and may store the rendered image data in one texture object for use in a subsequent rendering process.
In an alternative embodiment, the render texture target is formatted, which may be converted into a portable network graphics (Portable Network Graphics, simply referred to as PNG) format.
Specifically, the above process can be understood as applying the calculated offset value, scaling value and clipping value to the virtual assets through the application programming interface, then drawing the resulting assets onto the rendering texture target and storing it in memory, and finally, in a batch loop, converting the rendering texture targets in memory into PNG format and saving them locally.
Optionally, modifying the plurality of original virtual assets based on the mapping data to obtain a plurality of modified virtual assets, including: scaling the original texture coordinates of the plurality of original virtual assets based on the scaling data to obtain first texture coordinates; performing offset processing on the first texture coordinates based on the offset data to obtain second texture coordinates; and cutting the plurality of original virtual assets based on the cutting data and the second texture coordinates to obtain a plurality of modified virtual assets.
The original texture coordinates may be understood as texture coordinates of a plurality of original virtual assets, the first texture coordinates may be understood as coordinates obtained by scaling the original texture coordinates, and the second texture coordinates may be understood as coordinates obtained by offsetting the first texture coordinates.
It can be understood that, through the above process, the original texture coordinates are first scaled, and because pixel precision is required to be unified in the scaling process, the position of the virtual asset in the canvas is offset, so that the offset process is required to be continued after the scaling process to obtain the second texture coordinates, and finally, the plurality of original virtual assets are cut based on the cutting data and the second texture coordinates, so that the overflow error of the texture boundary caused by the texture addressing mode can be corrected, and finally, a plurality of modified virtual assets are obtained.
Optionally, scaling the original texture coordinates of the plurality of original virtual assets based on the scaling data to obtain first texture coordinates, including: determining a scaling factor and a translation factor based on the scaling data; obtaining the product of the scaling coefficient and the original texture coordinate to obtain a scaling texture coordinate; and obtaining the sum of the scaled texture coordinates and the translation coefficients to obtain first texture coordinates.
Where the scaling coefficient may be understood as a coefficient describing the size change of the graphic, the translation coefficient may be understood as a coefficient describing the position movement of the graphic (for example, the -0.5 term in the pseudo-code), and the scaled texture coordinates may be understood as the coordinates obtained by scaling the original texture coordinates of the plurality of original virtual assets.
Specifically, the above procedure can be expressed by pseudo code as:
uvScale=1/ReScale.xy*uv+(1/ReScale.xy*-0.5)+0.5,
The function of this pseudo-code is to scale and translate the texture coordinates to fit the new size. First, uvScale is a new texture coordinate whose value equals the original texture coordinate multiplied by 1/ReScale.xy, which scales the x and y components of the original texture coordinate to the new size. uvScale then requires a translation, implemented by adding 1/ReScale.xy * -0.5 to it; this operation translates the origin (0,0) of the texture coordinates to a new position. Finally, to guarantee that the texture coordinate range lies in [0,1], an offset of 0.5 is added to uvScale, which maps the range from [-0.5,0.5] to [0,1]. Through these operations, the original texture coordinates are scaled, translated and mapped to the new size to fit the new texture coordinate range. That is, after the scaling value required for converting the pixel coordinate system into the standard coordinate system has been calculated, the asset to be modified is operated on in the standard coordinate system in the shader: the scaled texture coordinates are translated by 1/ReScale.xy * -0.5 to align the texture center with the center point of the standard coordinate system, and finally 0.5 is added so that the texture coordinate range changes from [-0.5,0.5] to [0,1].
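The uvScale expression can be checked numerically with a direct Python transcription (a sketch; the ReScale components are passed as plain floats):

```python
def uv_scale(u, v, rescale_x, rescale_y):
    """uvScale = 1/ReScale.xy * uv + (1/ReScale.xy * -0.5) + 0.5,
    i.e. a scale about the texture center (0.5, 0.5)."""
    su = (1.0 / rescale_x) * u + (1.0 / rescale_x) * -0.5 + 0.5
    sv = (1.0 / rescale_y) * v + (1.0 / rescale_y) * -0.5 + 0.5
    return (su, sv)
```

With ReScale = (1, 1) the transform is the identity, and the center (0.5, 0.5) maps to itself for any ReScale, which is exactly the center-alignment property the translation terms are meant to guarantee.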
Optionally, cropping the plurality of original virtual assets based on the cropping data and the second texture coordinates to obtain a plurality of modified virtual assets, including: comparing the second texture coordinates with the cutting data to determine a cutting range; comparing the pixels of the plurality of original virtual assets with the cutting range to obtain the cutting coefficients of the pixels; and interpolating the colors of the plurality of original virtual assets based on the clipping coefficients of the pixels to obtain a plurality of modified virtual assets.
The clipping range may be understood as a reasonable range for clipping the pixel region, and the clipping coefficient may be understood as a coefficient describing clipping of the pixel region, which is determined based on the second texture coordinates and clipping data.
The above-mentioned comparison of the pixels of the plurality of original virtual assets with the clipping range to obtain the clipping coefficient of the pixel may be understood as that if the pixels of the plurality of original virtual assets are within the clipping range, the clipping coefficient may be regarded as 1, and if the pixels of the plurality of original virtual assets are outside the clipping range, the clipping coefficient may be regarded as 0.
Specifically, the above procedure can be understood as: comparing the texture coordinates with the clipping range and limiting the calculation result to [0,1] to judge whether the pixel region is clipped; if the pixel region is within the clipping range, the clipping coefficient is set to 1, and if it is outside the clipping range, the clipping coefficient is set to 0; the clipping coefficient is then used to interpolate between the image color and the background color to achieve the clipping effect.
Optionally, the method further comprises: origin information is acquired from the content production tool through a preset interface of the content production tool.
A content production tool may be understood as software, an application, or a platform for creating, editing, and publishing digital content, and may include, but is not limited to, image editing software, video editing software, audio editing software, web page design software, social media management tools, etc., such as Microsoft Office and Adobe Creative Cloud; such tools help a user easily create and modify digital content to meet different authoring requirements.
Specifically, the above procedure can be understood as: the position of the world coordinate origin float3(0,0,0) is mapped to the screen coordinates of the camera through an application programming interface in the digital content production tool, recorded, and output to a JavaScript Object Notation (JSON) file.
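Recording the mapped origin to a JSON file can be sketched as below; the key names are hypothetical, since the patent does not specify the JSON layout:

```python
import json

def record_origin(screen_x, screen_y, path):
    """Serialize the screen-space position of the world origin to a
    JSON file (key names are illustrative, not from the patent)."""
    with open(path, "w") as f:
        json.dump({"originScreenPos": {"x": screen_x, "y": screen_y}}, f)

def load_origin(path):
    """Read the recorded origin information back as an (x, y) pair."""
    with open(path) as f:
        d = json.load(f)["originScreenPos"]
    return (d["x"], d["y"])
```

The loaded pair is the origin information later used for center-point mapping between virtual assets.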
It will be appreciated that by the above process, the origin can be recorded within the digital content production tool via the programming interface in preparation for the subsequent mapping of the plurality of virtual assets to each other.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present disclosure may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method described in the embodiments of the present disclosure.
The embodiment also provides a device for processing the virtual asset, which is used for implementing the above embodiment and the preferred implementation, and is not described in detail. As used below, the terms "subunit," "unit," "module" may be a combination of software and/or hardware that implements the predetermined functionality. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
FIG. 5 is a block diagram of a virtual asset processing device according to one embodiment of the disclosure, as shown, the device comprising: a sequence acquisition module 502, configured to acquire an original virtual asset sequence, where a plurality of original virtual assets in the original virtual asset sequence are used to generate a video; the asset mapping module 504 is configured to perform center point mapping on any two original virtual assets in the original virtual asset sequence based on origin information of the original virtual asset sequence, so as to obtain mapping data of any two original virtual assets, where the origin information is used to characterize position information obtained by mapping an origin of a world coordinate system to a screen coordinate system, and the mapping data includes one or more of the following: scaling data for scaling the plurality of original virtual assets, offset data for offset processing the plurality of original virtual assets, and trimming data for trimming the plurality of original virtual assets; the asset modification module 506 is configured to modify the plurality of original virtual assets based on the mapping data to obtain a plurality of target virtual assets.
The asset mapping module 504 includes: the attribute acquisition unit is used for acquiring attribute information of any two original virtual assets, wherein the attribute information at least comprises: pixel size and coordinate system information; an asset determining unit configured to determine a virtual asset to be mapped and a standard virtual asset from any two virtual assets based on the pixel size; the first mapping unit is used for mapping the coordinate system information to be mapped of the virtual asset to be mapped based on the standard coordinate system information of the standard virtual asset to obtain scaling data; the restoring unit is used for restoring the coordinate system information to be mapped based on the original point information to obtain offset data; and the second mapping unit is used for mapping the pixel size to be mapped of the virtual asset to be mapped based on the standard coordinate system information to obtain the cutting data.
The first mapping unit includes: a first length determining subunit, configured to determine a unit length in a standard coordinate system based on first coordinate axis information in the coordinate system information to be mapped and standard coordinate axis information in the standard coordinate system; and the scaling data determining subunit is used for determining scaling data based on second coordinate axis information and unit length, wherein the second coordinate axis information is used for representing coordinate axis information except the first coordinate axis information in the coordinate system information to be mapped.
The reduction unit includes: a second length determining subunit configured to determine a unit length in a standard coordinate system based on the first coordinate axis information in the coordinate system information to be mapped and the standard coordinate axis information in the standard coordinate system; the first coordinate determination subunit is used for determining a first origin coordinate of an origin of the standard coordinate system in the coordinate system to be mapped; a second coordinate determination subunit, configured to determine a second origin coordinate of the origin information in the standard coordinate system; and the offset data determining subunit is used for determining the offset data based on the first origin coordinate, the second origin coordinate, the coordinate system information to be mapped and the unit length in the standard coordinate system.
The second mapping unit includes: a third length determining subunit, configured to determine a unit length in a standard coordinate system based on the first coordinate axis information in the coordinate system information to be mapped and the standard coordinate axis information in the standard coordinate system; and the clipping data determining subunit is used for determining clipping data based on the size of the pixel to be mapped, the standard coordinate system information and the unit length in the standard coordinate system.
The sequence acquisition module 502 includes: the catalog acquisition unit is used for acquiring a file catalog corresponding to the original virtual asset sequence through a script; the directory reading unit is used for reading the virtual assets corresponding to the file directory to obtain a plurality of first virtual assets; the format unifying unit is used for unifying the texture formats of the plurality of first virtual assets to obtain a plurality of second virtual assets; and the asset storage unit is used for storing the plurality of second virtual assets into a preset texture list to obtain an original virtual asset sequence.
The format unification unit includes: a first mode adjustment subunit, configured to adjust a texture linear interpolation mode of the plurality of first virtual assets to a preset mode, where the preset mode is used to characterize prohibition of linear interpolation of textures of the plurality of first virtual assets; and a second mode adjustment subunit for adjusting the texture addressing mode of the plurality of first virtual assets to a clamp mode.
The asset modification module 506 includes: the asset modification unit is used for modifying the plurality of original virtual assets based on the mapping data through a preset interface of the content production tool to obtain a plurality of modified virtual assets; a drawing unit for drawing the plurality of modified virtual assets onto the rendering texture target; and the format conversion unit is used for carrying out format conversion on the rendering texture target to obtain a plurality of target virtual assets.
The asset modification unit includes: the scaling subunit is used for scaling the original texture coordinates of the plurality of original virtual assets based on the scaling data to obtain first texture coordinates; the offset subunit is used for performing offset processing on the first texture coordinate based on the offset data to obtain a second texture coordinate; and the cutting subunit is used for cutting the plurality of original virtual assets based on the cutting data and the second texture coordinates to obtain a plurality of modified virtual assets.
The scaling subunit may be implemented by: determining a scaling factor and a translation factor based on the scaling data; obtaining the product of the scaling coefficient and the original texture coordinate to obtain a scaling texture coordinate; and obtaining the sum of the scaled texture coordinates and the translation coefficients to obtain first texture coordinates.
The cutting subunit may be implemented by the following procedure: comparing the second texture coordinates with the cutting data to determine a cutting range; comparing the pixels of the plurality of original virtual assets with the cutting range to obtain the cutting coefficients of the pixels; and interpolating the colors of the plurality of original virtual assets based on the clipping coefficients of the pixels to obtain a plurality of modified virtual assets.
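The cutting subunit's per-pixel logic can be sketched as a step coefficient that interpolates each color toward a background value. The transparent background default and the RGBA tuple representation are assumptions not stated in the patent:

```python
def crop_color(color, uv, crop_min, crop_max, background=(0.0, 0.0, 0.0, 0.0)):
    # Clipping coefficient: 1 for pixels inside the cutting range,
    # 0 outside. The output color is interpolated between the source
    # color and the background with that coefficient.
    inside = (crop_min[0] <= uv[0] <= crop_max[0] and
              crop_min[1] <= uv[1] <= crop_max[1])
    k = 1.0 if inside else 0.0
    return tuple(k * c + (1.0 - k) * b for c, b in zip(color, background))
```

A hard 0/1 coefficient gives a sharp crop edge; a shader implementation could instead ramp the coefficient near the boundary for a soft edge.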
The device further comprises: an information acquisition module, which is used for acquiring the origin information from the content production tool through a preset interface of the content production tool.
It should be noted that each of the above subunits, units, and modules may be implemented by software or hardware. In the case of hardware, implementation may include, but is not limited to, the following: the subunits, units, and modules are all located in the same processor; alternatively, the subunits, units, and modules are located in different processors in any combination.
Embodiments of the present disclosure also provide a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described computer-readable storage medium may include, but is not limited to: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other various media capable of storing a computer program.
Alternatively, in this embodiment, the above-mentioned computer-readable storage medium may be located in any one of the computer terminals in the computer terminal group in the computer network, or in any one of the mobile terminals in the mobile terminal group.
Alternatively, in the present embodiment, the above-described computer-readable storage medium may be configured to store a computer program for performing the steps of:
acquiring an original virtual asset sequence, wherein a plurality of original virtual assets in the original virtual asset sequence are used for generating videos;
performing center point mapping on any two original virtual assets in the original virtual asset sequence based on origin information of the original virtual asset sequence to obtain mapping data of any two original virtual assets, wherein the origin information is used for representing position information obtained by mapping origins of a world coordinate system to a screen coordinate system, and the mapping data comprises one or more of the following: scaling data for scaling the plurality of original virtual assets, offset data for offset processing the plurality of original virtual assets, and trimming data for trimming the plurality of original virtual assets;
And modifying the plurality of original virtual assets based on the mapping data to obtain a plurality of target virtual assets.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: acquiring attribute information of any two original virtual assets, wherein the attribute information at least comprises: pixel size and coordinate system information; determining a virtual asset to be mapped and a standard virtual asset in the any two original virtual assets based on the pixel size; mapping the coordinate system information to be mapped of the virtual asset to be mapped based on the standard coordinate system information of the standard virtual asset to obtain scaling data; restoring the coordinate system information to be mapped based on the origin information to obtain offset data; and mapping the pixel size to be mapped of the virtual asset to be mapped based on the standard coordinate system information to obtain the clipping data.
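One plausible reading of "determining a virtual asset to be mapped and a standard virtual asset ... based on the pixel size" is to treat the asset with the larger pixel area as the standard. The dict representation and the tie-breaking rule below are assumptions:

```python
def pick_standard(asset_a, asset_b):
    # Choose the asset with the larger pixel area as the standard and
    # the other as the asset to be mapped. Each asset is a dict with a
    # "size" key holding (width, height) in pixels (an assumed layout).
    area = lambda a: a["size"][0] * a["size"][1]
    if area(asset_a) >= area(asset_b):
        return asset_a, asset_b  # (standard, to_map)
    return asset_b, asset_a
```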
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: determining a unit length in a standard coordinate system based on first coordinate axis information in the coordinate system information to be mapped and standard coordinate axis information in the standard coordinate system; and determining scaling data based on second coordinate axis information and unit length, wherein the second coordinate axis information is used for representing coordinate axis information except the first coordinate axis information in the coordinate system information to be mapped.
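Assuming each coordinate axis is summarized by a single extent value (a simplification the patent does not mandate), the two steps above — deriving a unit length from the first axes and rescaling the second axis by it — can be sketched as:

```python
def unit_length(to_map_extent, standard_extent):
    # Unit length of the standard coordinate system, taken here as the
    # ratio of the matching first-axis extents.
    return standard_extent / to_map_extent

def scaling_data(second_axis_extent, unit):
    # Scaling data: the remaining axis extent re-expressed in standard
    # units via the unit length.
    return second_axis_extent * unit
```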
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: determining a unit length in a standard coordinate system based on first coordinate axis information in the coordinate system information to be mapped and standard coordinate axis information in the standard coordinate system; determining a first origin coordinate of an origin of a standard coordinate system in a coordinate system to be mapped; determining a second origin coordinate of the origin information in the standard coordinate system; the offset data is determined based on the first origin coordinates, the second origin coordinates, the coordinate system information to be mapped, and the unit length in the standard coordinate system.
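The patent lists four inputs for the offset data without fixing how they combine. One hedged reduction — converting the first origin coordinate into standard units with the unit length and taking its displacement from the second origin coordinate — is sketched below; the exact formula and parameter names are assumptions:

```python
def offset_data(first_origin, second_origin, unit):
    # first_origin: the standard system's origin expressed in the
    # to-be-mapped coordinate system; second_origin: the mapped world
    # origin ("origin information") in the standard system; unit: the
    # unit length derived from the first coordinate axes.
    return tuple(s - f * unit for f, s in zip(first_origin, second_origin))
```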
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: determining a unit length in a standard coordinate system based on first coordinate axis information in the coordinate system information to be mapped and standard coordinate axis information in the standard coordinate system; the clipping data is determined based on the pixel size to be mapped, the standard coordinate system information, and the unit length in the standard coordinate system.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: acquiring a file directory corresponding to an original virtual asset sequence through a script; reading virtual assets corresponding to the file directory to obtain a plurality of first virtual assets; unifying texture formats of the plurality of first virtual assets to obtain a plurality of second virtual assets; and storing the plurality of second virtual assets into a preset texture list to obtain an original virtual asset sequence.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: adjusting the linear interpolation modes of the textures of the plurality of first virtual assets to preset modes, wherein the preset modes are used for representing that the linear interpolation of the textures of the plurality of first virtual assets is forbidden; the texture addressing mode of the plurality of first virtual assets is adjusted to a clamp mode.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: modifying the plurality of original virtual assets based on the mapping data through a preset interface of the content production tool to obtain a plurality of modified virtual assets; drawing the plurality of modified virtual assets onto the render texture target; and performing format conversion on the rendering texture target to obtain a plurality of target virtual assets.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: scaling the original texture coordinates of the plurality of original virtual assets based on the scaling data to obtain first texture coordinates; performing offset processing on the first texture coordinates based on the offset data to obtain second texture coordinates; and cutting the plurality of original virtual assets based on the cutting data and the second texture coordinates to obtain a plurality of modified virtual assets.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: determining a scaling factor and a translation factor based on the scaling data; obtaining the product of the scaling coefficient and the original texture coordinate to obtain a scaling texture coordinate; and obtaining the sum of the scaled texture coordinates and the translation coefficients to obtain first texture coordinates.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: comparing the second texture coordinates with the cutting data to determine a cutting range; comparing the pixels of the plurality of original virtual assets with the cutting range to obtain the cutting coefficients of the pixels; and interpolating the colors of the plurality of original virtual assets based on the clipping coefficients of the pixels to obtain a plurality of modified virtual assets.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: origin information is acquired from the content production tool through a preset interface of the content production tool.
In the computer-readable storage medium of this embodiment, a technical solution for processing virtual assets is provided: obtaining an original virtual asset sequence; performing center point mapping on any two original virtual assets in the original virtual asset sequence based on the origin information of the original virtual asset sequence to obtain mapping data of the any two original virtual assets; and modifying the plurality of original virtual assets based on the mapping data to obtain a plurality of target virtual assets, wherein the origin information is used for representing position information obtained by mapping the origin of a world coordinate system to a screen coordinate system, and the mapping data includes one or more of the following: scaling data for scaling the plurality of original virtual assets, offset data for offset processing the plurality of original virtual assets, and clipping data for clipping the plurality of original virtual assets. In this way, when modifying the virtual assets, the plurality of original virtual assets can be modified directly based on the mapping data without using a content production tool, so that the content production tool is used only when generating the original virtual asset sequence and there is no need to re-enter the content production tool to process and output the virtual assets. This simplifies the processing flow of the virtual assets, improves the processing efficiency, and thereby solves the technical problem in the related art of low efficiency in processing virtual assets.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a computer readable storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present application, a program product capable of implementing the method described above in this embodiment is stored on a computer-readable storage medium. In some possible implementations, aspects of the disclosed embodiments may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the disclosure as described in the "exemplary methods" section of the disclosure, when the program product is run on the terminal device.
A program product for implementing the above-described method according to an embodiment of the present disclosure may employ a portable compact disc read-only memory (CD-ROM), include program code, and be run on a terminal device, such as a personal computer. However, the program product of the embodiments of the present disclosure is not limited thereto; in the embodiments of the present disclosure, the computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Any combination of one or more computer readable media may be employed by the program product described above. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that the program code embodied on the computer-readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, Radio Frequency (RF), and the like, or any suitable combination of the foregoing.
Embodiments of the present disclosure also provide an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
acquiring an original virtual asset sequence, wherein a plurality of original virtual assets in the original virtual asset sequence are used for generating videos;
performing center point mapping on any two original virtual assets in the original virtual asset sequence based on origin information of the original virtual asset sequence to obtain mapping data of any two original virtual assets, wherein the origin information is used for representing position information obtained by mapping origins of a world coordinate system to a screen coordinate system, and the mapping data comprises one or more of the following: scaling data for scaling the plurality of original virtual assets, offset data for offset processing the plurality of original virtual assets, and trimming data for trimming the plurality of original virtual assets;
And modifying the plurality of original virtual assets based on the mapping data to obtain a plurality of target virtual assets.
Optionally, the above processor may be further configured to perform the following steps by a computer program: acquiring attribute information of any two original virtual assets, wherein the attribute information at least comprises: pixel size and coordinate system information; determining a virtual asset to be mapped and a standard virtual asset in the any two original virtual assets based on the pixel size; mapping the coordinate system information to be mapped of the virtual asset to be mapped based on the standard coordinate system information of the standard virtual asset to obtain scaling data; restoring the coordinate system information to be mapped based on the origin information to obtain offset data; and mapping the pixel size to be mapped of the virtual asset to be mapped based on the standard coordinate system information to obtain the clipping data.
Optionally, the above processor may be further configured to perform the following steps by a computer program: determining a unit length in a standard coordinate system based on first coordinate axis information in the coordinate system information to be mapped and standard coordinate axis information in the standard coordinate system; and determining scaling data based on second coordinate axis information and unit length, wherein the second coordinate axis information is used for representing coordinate axis information except the first coordinate axis information in the coordinate system information to be mapped.
Optionally, the above processor may be further configured to perform the following steps by a computer program: determining a unit length in a standard coordinate system based on first coordinate axis information in the coordinate system information to be mapped and standard coordinate axis information in the standard coordinate system; determining a first origin coordinate of an origin of a standard coordinate system in a coordinate system to be mapped; determining a second origin coordinate of the origin information in the standard coordinate system; the offset data is determined based on the first origin coordinates, the second origin coordinates, the coordinate system information to be mapped, and the unit length in the standard coordinate system.
Optionally, the above processor may be further configured to perform the following steps by a computer program: determining a unit length in a standard coordinate system based on first coordinate axis information in the coordinate system information to be mapped and standard coordinate axis information in the standard coordinate system; the clipping data is determined based on the pixel size to be mapped, the standard coordinate system information, and the unit length in the standard coordinate system.
Optionally, the above processor may be further configured to perform the following steps by a computer program: acquiring a file directory corresponding to an original virtual asset sequence through a script; reading virtual assets corresponding to the file directory to obtain a plurality of first virtual assets; unifying texture formats of the plurality of first virtual assets to obtain a plurality of second virtual assets; and storing the plurality of second virtual assets into a preset texture list to obtain an original virtual asset sequence.
Optionally, the above processor may be further configured to perform the following steps by a computer program: adjusting the linear interpolation modes of the textures of the plurality of first virtual assets to preset modes, wherein the preset modes are used for representing that the linear interpolation of the textures of the plurality of first virtual assets is forbidden; the texture addressing mode of the plurality of first virtual assets is adjusted to a clamp mode.
Optionally, the above processor may be further configured to perform the following steps by a computer program: modifying the plurality of original virtual assets based on the mapping data through a preset interface of the content production tool to obtain a plurality of modified virtual assets; drawing the plurality of modified virtual assets onto the render texture target; and performing format conversion on the rendering texture target to obtain a plurality of target virtual assets.
Optionally, the above processor may be further configured to perform the following steps by a computer program: scaling the original texture coordinates of the plurality of original virtual assets based on the scaling data to obtain first texture coordinates; performing offset processing on the first texture coordinates based on the offset data to obtain second texture coordinates; and cutting the plurality of original virtual assets based on the cutting data and the second texture coordinates to obtain a plurality of modified virtual assets.
Optionally, the above processor may be further configured to perform the following steps by a computer program: determining a scaling factor and a translation factor based on the scaling data; obtaining the product of the scaling coefficient and the original texture coordinate to obtain a scaling texture coordinate; and obtaining the sum of the scaled texture coordinates and the translation coefficients to obtain first texture coordinates.
Optionally, the above processor may be further configured to perform the following steps by a computer program: comparing the second texture coordinates with the cutting data to determine a cutting range; comparing the pixels of the plurality of original virtual assets with the cutting range to obtain the cutting coefficients of the pixels; and interpolating the colors of the plurality of original virtual assets based on the clipping coefficients of the pixels to obtain a plurality of modified virtual assets.
Optionally, the above processor may be further configured to perform the following steps by a computer program: origin information is acquired from the content production tool through a preset interface of the content production tool.
In the electronic device of this embodiment, a technical solution for processing virtual assets is provided: obtaining an original virtual asset sequence; performing center point mapping on any two original virtual assets in the original virtual asset sequence based on the origin information of the original virtual asset sequence to obtain mapping data of the any two original virtual assets; and modifying the plurality of original virtual assets based on the mapping data to obtain a plurality of target virtual assets, wherein the origin information is used for representing position information obtained by mapping the origin of a world coordinate system to a screen coordinate system, and the mapping data includes one or more of the following: scaling data for scaling the plurality of original virtual assets, offset data for offset processing the plurality of original virtual assets, and clipping data for clipping the plurality of original virtual assets. In this way, when modifying the virtual assets, the plurality of original virtual assets can be modified directly based on the mapping data without using a content production tool, so that the content production tool is used only when generating the original virtual asset sequence and there is no need to re-enter the content production tool to process and output the virtual assets. This simplifies the processing flow of the virtual assets, improves the processing efficiency, and thereby solves the technical problem in the related art of low efficiency in processing virtual assets.
Fig. 6 is a schematic diagram of an electronic device according to an embodiment of the disclosure. As shown in fig. 6, the electronic device 600 is merely an example, and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 6, the electronic apparatus 600 is embodied in the form of a general purpose computing device. The components of the electronic device 600 may include, but are not limited to: the at least one processor 610, the at least one memory 620, a bus 630 connecting the different system components (including the memory 620 and the processor 610), and a display 640.
Wherein the memory 620 stores program code that can be executed by the processor 610 to cause the processor 610 to perform the steps according to various exemplary implementations of the present disclosure described in the above method section of the embodiment of the present application.
The memory 620 may include readable media in the form of volatile memory units, such as Random Access Memory (RAM) 6201 and/or cache memory 6202, and may further include Read Only Memory (ROM) 6203, and may also include nonvolatile memory, such as one or more magnetic storage devices, flash memory, or other nonvolatile solid state memory.
In some examples, memory 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. The memory 620 may further include memory remotely located relative to the processor 610, which may be connected to the electronic device 600 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Bus 630 represents one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, or a local bus using any of a variety of bus architectures.
Display 640 may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of electronic device 600.
Optionally, the electronic device 600 may also communicate with one or more external devices 700 (e.g., a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 600, and/or with any device (e.g., a router, a modem, etc.) that enables the electronic device 600 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 650. Also, the electronic device 600 may communicate with one or more networks, such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet, through the network adapter 660. As shown in fig. 6, the network adapter 660 communicates with the other modules of the electronic device 600 over the bus 630. It should be appreciated that although not shown in fig. 6, other hardware and/or software modules may be used in connection with the electronic device 600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, Redundant Array of Independent Disks (RAID) systems, tape drives, data backup storage systems, and the like.
The electronic device 600 may further include: a keyboard, a cursor control device (e.g., a mouse), an input/output interface (I/O interface), a network interface, a power supply, and/or a camera.
It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 6 is merely illustrative and is not intended to limit the configuration of the electronic device described above. For example, the electronic device 600 may also include more or fewer components than shown in fig. 6, or have a different configuration than that shown in fig. 6. The memory 620 may be used to store computer programs and corresponding data, such as those corresponding to the method of processing a virtual asset in embodiments of the present disclosure. The processor 610 executes a computer program stored in the memory 620 to perform various functional applications and data processing, i.e., to implement the above-described virtual asset processing method.
In the foregoing embodiments of the present disclosure, the descriptions of the various embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary, and the division of the units, for example, may be a logic function division, and may be implemented in another manner, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present disclosure may be embodied, in essence or in the part contributing to the prior art or in whole or in part, in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present disclosure. The aforementioned storage medium includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other various media capable of storing program code.
The foregoing is merely a preferred embodiment of the present disclosure. It should be noted that various modifications and improvements may be made by those skilled in the art without departing from the principles of the present disclosure, and such modifications and improvements are also intended to fall within the scope of the present disclosure.

Claims (15)

1. A method of processing a virtual asset, comprising:
acquiring an original virtual asset sequence, wherein a plurality of original virtual assets in the original virtual asset sequence are used for generating videos;
performing center point mapping on any two original virtual assets in the original virtual asset sequence based on origin information of the original virtual asset sequence to obtain mapping data of the any two original virtual assets, wherein the origin information is used for representing position information obtained by mapping origins of a world coordinate system to a screen coordinate system, and the mapping data comprises one or more of the following: scaling data for scaling the plurality of original virtual assets, offset data for offset processing the plurality of original virtual assets, and trimming data for trimming the plurality of original virtual assets;
And modifying the plurality of original virtual assets based on the mapping data to obtain a plurality of target virtual assets.
2. The method of claim 1, wherein performing center point mapping on the arbitrary two original virtual assets in the original virtual asset sequence based on the origin information of the original virtual asset sequence to obtain the mapping data of the arbitrary two original virtual assets, comprises:
acquiring attribute information of any two original virtual assets, wherein the attribute information at least comprises: pixel size and coordinate system information;
determining a virtual asset to be mapped and a standard virtual asset among the any two original virtual assets based on the pixel size;
mapping the coordinate system information to be mapped of the virtual asset to be mapped based on the standard coordinate system information of the standard virtual asset to obtain the scaling data;
restoring the coordinate system information to be mapped based on the origin information to obtain the offset data;
and mapping the pixel size to be mapped of the virtual asset to be mapped based on the standard coordinate system information to obtain the cropping data.
3. The method of claim 2, wherein mapping the coordinate system information to be mapped for the virtual asset to be mapped based on the standard coordinate system information for the standard virtual asset to obtain the scaling data comprises:
determining a unit length in the standard coordinate system based on first coordinate axis information in the coordinate system information to be mapped and standard coordinate axis information in the standard coordinate system;
and determining the scaling data based on second coordinate axis information and the unit length, wherein the second coordinate axis information is used for representing coordinate axis information except the first coordinate axis information in the coordinate system information to be mapped.
4. The method according to claim 2, wherein restoring the coordinate system information to be mapped based on the origin information to obtain the offset data includes:
determining a unit length in the standard coordinate system based on first coordinate axis information in the coordinate system information to be mapped and standard coordinate axis information in the standard coordinate system;
determining a first origin coordinate of the origin of the standard coordinate system in the coordinate system to be mapped;
determining a second origin coordinate of the origin information in the standard coordinate system;
the offset data is determined based on the first origin coordinates, the second origin coordinates, the coordinate system information to be mapped, and a unit length in the standard coordinate system.
5. The method of claim 2, wherein mapping the pixel size to be mapped of the virtual asset to be mapped based on the standard coordinate system information to obtain the cropping data comprises:
determining a unit length in the standard coordinate system based on first coordinate axis information in the coordinate system information to be mapped and standard coordinate axis information in the standard coordinate system;
and determining the cropping data based on the pixel size to be mapped, the standard coordinate system information, and the unit length in the standard coordinate system.
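For illustration only, the mapping-data derivation recited in claims 3 to 5 can be sketched as follows. The claims do not fix the exact arithmetic, so every function name and formula below is an assumption, not the claimed implementation:

```python
# Hypothetical sketch of deriving the mapping data of claims 3-5.
# All names and formulas are illustrative assumptions.

def unit_length(axis_len_to_map: float, axis_len_standard: float) -> float:
    # Step shared by claims 3-5: how many "to-map" units one
    # standard-coordinate unit spans.
    return axis_len_to_map / axis_len_standard

def scaling_data(second_axis_len: float, unit: float) -> float:
    # Claim 3: express the remaining axis length in standard units.
    return second_axis_len / unit

def offset_data(origin_std_in_map, origin_info_in_std, unit: float):
    # Claim 4: offset between the two origin coordinates, per component,
    # converted into standard-coordinate units.
    return tuple((a / unit) - b
                 for a, b in zip(origin_std_in_map, origin_info_in_std))

def cropping_data(pixel_size, unit: float):
    # Claim 5: rescale the pixel extents into the standard coordinate system.
    return tuple(p / unit for p in pixel_size)
```

A caller would first compute `unit_length` once and feed it to the other three helpers, mirroring how each dependent claim repeats the unit-length step.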
6. The method of claim 1, wherein obtaining the original virtual asset sequence comprises:
acquiring a file directory corresponding to the original virtual asset sequence through a script;
reading the virtual assets corresponding to the file catalogue to obtain a plurality of first virtual assets;
unifying texture formats of the plurality of first virtual assets to obtain a plurality of second virtual assets;
and storing the plurality of second virtual assets to a preset texture list to obtain the original virtual asset sequence.
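The script-driven directory scan of claim 6 can be sketched as below; the file extensions, the helper name, and sorting by filename are all illustrative assumptions, since the claim only requires reading the assets corresponding to the file directory:

```python
import os

def load_asset_sequence(directory: str, extensions=(".png", ".tga")):
    """Read every texture file under `directory` into an ordered list,
    approximating the directory scan of claim 6 (assumed extensions)."""
    names = sorted(f for f in os.listdir(directory)
                   if f.lower().endswith(extensions))
    return [os.path.join(directory, f) for f in names]
```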
7. The method of claim 6, wherein unifying the texture formats of the plurality of first virtual assets to obtain the plurality of second virtual assets comprises:
adjusting the linear interpolation modes of the textures of the plurality of first virtual assets to preset modes, wherein the preset modes are used for representing that the linear interpolation of the textures of the plurality of first virtual assets is forbidden;
the texture addressing mode of the plurality of first virtual assets is adjusted to a clamp mode.
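The two adjustments of claim 7 (disable linear interpolation, force clamp addressing) amount to normalizing per-texture import settings. A minimal sketch, with an assumed `TextureSettings` structure standing in for whatever the content production tool exposes:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class TextureSettings:
    filter_mode: str  # e.g. "bilinear" or "point"
    wrap_mode: str    # e.g. "repeat" or "clamp"

def unify(settings: TextureSettings) -> TextureSettings:
    # Claim 7: point filtering disables linear interpolation of the texture,
    # and clamp addressing stops samples from wrapping past the borders.
    return replace(settings, filter_mode="point", wrap_mode="clamp")
```

Applying `unify` to every first virtual asset would yield the plurality of second virtual assets with a uniform texture format.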
8. The method of claim 1, wherein modifying the plurality of original virtual assets based on the mapping data results in the plurality of target virtual assets, comprising:
modifying the plurality of original virtual assets based on the mapping data through a preset interface of a content production tool to obtain a plurality of modified virtual assets;
drawing the plurality of modified virtual assets onto a render texture target;
and performing format conversion on the rendering texture target to obtain the plurality of target virtual assets.
9. The method of claim 8, wherein modifying the plurality of original virtual assets based on the mapping data results in the plurality of modified virtual assets, comprising:
scaling the original texture coordinates of the plurality of original virtual assets based on the scaling data to obtain first texture coordinates;
performing offset processing on the first texture coordinates based on the offset data to obtain second texture coordinates;
and cropping the plurality of original virtual assets based on the cropping data and the second texture coordinates to obtain the plurality of modified virtual assets.
10. The method of claim 9, wherein scaling the original texture coordinates of the plurality of original virtual assets based on the scaling data to obtain the first texture coordinates comprises:
determining a scaling factor and a panning factor based on the scaling data;
obtaining the product of the scaling coefficient and the original texture coordinate to obtain a scaling texture coordinate;
and obtaining the sum of the scaled texture coordinates and the translation coefficients to obtain the first texture coordinates.
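Claim 10 reduces to the per-component affine map uv' = scale x uv + translate. A one-function sketch (names assumed):

```python
def first_texture_coord(uv, scale, translate):
    # Claim 10: scale the original texture coordinate, then add the
    # translation coefficient, component by component.
    return tuple(s * u + t for u, s, t in zip(uv, scale, translate))
```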
11. The method of claim 9, wherein cropping the plurality of original virtual assets based on the cropping data and the second texture coordinates results in the plurality of modified virtual assets, comprising:
comparing the second texture coordinates with the cropping data to determine a cropping range;
comparing the pixels of the plurality of original virtual assets with the cropping range to obtain cropping coefficients of the pixels;
and interpolating the colors of the plurality of original virtual assets based on the cropping coefficients of the pixels to obtain the plurality of modified virtual assets.
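One plausible reading of the per-pixel comparison and color interpolation in claim 11: each pixel gets a coefficient of 1 inside the cropping range and 0 outside, and the asset color is blended against a background color by that coefficient. The hard step and the transparent background are assumptions (a shader might instead use a smooth transition):

```python
def crop_coefficient(pixel, crop_rect):
    # 1.0 inside the cropping range, 0.0 outside (assumed hard step).
    x0, y0, x1, y1 = crop_rect
    return 1.0 if (x0 <= pixel[0] <= x1 and y0 <= pixel[1] <= y1) else 0.0

def apply_crop(color, coeff, background=(0.0, 0.0, 0.0, 0.0)):
    # Interpolate the asset color against a background color using the
    # per-pixel cropping coefficient: out = lerp(background, color, coeff).
    return tuple(b + (c - b) * coeff for c, b in zip(color, background))
```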
12. The method according to claim 1, wherein the method further comprises:
and acquiring the origin information from the content production tool through a preset interface of the content production tool.
13. A virtual asset processing apparatus, comprising:
a sequence acquisition module for acquiring an original virtual asset sequence, wherein a plurality of original virtual assets in the original virtual asset sequence are used for generating videos;
the asset mapping module is configured to perform center point mapping on any two original virtual assets in the original virtual asset sequence based on origin information of the original virtual asset sequence, so as to obtain mapping data of the any two original virtual assets, where the origin information is used to represent position information obtained by mapping an origin of a world coordinate system to a screen coordinate system, and the mapping data includes one or more of the following: scaling data for scaling the plurality of original virtual assets, offset data for offset processing the plurality of original virtual assets, and cropping data for cropping the plurality of original virtual assets;
and the asset modification module is configured to modify the plurality of original virtual assets based on the mapping data to obtain a plurality of target virtual assets.
14. A computer-readable storage medium, characterized in that the computer-readable storage medium has a computer program stored therein, wherein the computer program is arranged to perform the method of any one of claims 1 to 12 when run by a processor.
15. An electronic device comprising a memory and a processor, characterized in that the memory has a computer program stored therein, and the processor is arranged to run the computer program to perform the method of any one of claims 1 to 12.
CN202310968972.9A 2023-08-02 2023-08-02 Virtual asset processing method and device, storage medium and electronic device Pending CN117224963A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310968972.9A CN117224963A (en) 2023-08-02 2023-08-02 Virtual asset processing method and device, storage medium and electronic device


Publications (1)

Publication Number Publication Date
CN117224963A 2023-12-15

Family

ID=89097463


Country Status (1)

Country Link
CN (1) CN117224963A (en)

Similar Documents

Publication Publication Date Title
CN111815755B (en) Method and device for determining blocked area of virtual object and terminal equipment
US20230053462A1 (en) Image rendering method and apparatus, device, medium, and computer program product
CN112288665A (en) Image fusion method and device, storage medium and electronic equipment
US10733793B2 (en) Indexed value blending for use in image rendering
JPWO2003001457A1 (en) Information processing equipment
JP2023029984A (en) Method, device, electronic apparatus, and readable storage medium for generating virtual image
WO2023226371A1 (en) Target object interactive reproduction control method and apparatus, device and storage medium
CN109448123B (en) Model control method and device, storage medium and electronic equipment
CN111462205B (en) Image data deformation, live broadcast method and device, electronic equipment and storage medium
CN114742931A (en) Method and device for rendering image, electronic equipment and storage medium
CN112860839A (en) Water environment quality real-time monitoring method and device based on Unity3D
CN109829963B (en) Image drawing method and device, computing equipment and storage medium
CN114041111A (en) Handwriting drawing method, apparatus, electronic device, medium, and program product
CN109598672B (en) Map road rendering method and device
CN111950057A (en) Loading method and device of Building Information Model (BIM)
CN110378948B (en) 3D model reconstruction method and device and electronic equipment
CN114979592B (en) Image curved surface geometric correction method and device, electronic equipment and storage medium
WO2023056879A1 (en) Model processing method and apparatus, device, and medium
CN117224963A (en) Virtual asset processing method and device, storage medium and electronic device
CN114022616B (en) Model processing method and device, electronic equipment and storage medium
CN114119831A (en) Snow accumulation model rendering method and device, electronic equipment and readable medium
WO2020192212A1 (en) Picture processing method, picture set processing method, computer device, and storage medium
CN117351126A (en) Method and device for generating special effects of rain and snow in virtual scene and electronic equipment
CN116883575A (en) Building group rendering method, device, computer equipment and storage medium
CN117745892A (en) Particle generation performance control method, device, storage medium, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination