CN117576287A - Model processing method and device, electronic device and computer storage medium - Google Patents
- Publication number
- CN117576287A (application CN202311572882.4A)
- Authority
- CN
- China
- Prior art keywords
- highlight
- model
- map
- vertex
- coordinates
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/60—Methods for processing data by generating or executing the game program
- A63F2300/66—Methods for processing data by generating or executing the game program for rendering three dimensional images
Abstract
The embodiment of the invention provides a model processing method and apparatus, an electronic device, and a computer storage medium, wherein the method comprises the following steps: acquiring a highlight model and a specified object model required to be subjected to highlight baking, and acquiring a plurality of triangle meshes of the highlight model; determining, on the specified object model, the target triangle mesh to which each triangle mesh is mapped; and merging the highlight map of the highlight model and the object map of the specified object model based on the barycentric coordinates of the target triangle mesh to obtain the target model after highlight baking. By applying barycentric coordinates inside triangles, the highlight map is recombined into the object map, completing the merge of the highlight map and the object map and automating the baking of highlights onto the specified object. This reduces the performance pressure on the processor when a large number of characters are rendered on the same screen, facilitates highlight baking under same-screen rendering on weaker processors, and improves the rendering performance of same-screen characters.
Description
Technical Field
The present invention relates to the field of rendering technologies, and in particular, to a model processing method, a model processing apparatus, a corresponding electronic device, and a corresponding computer-readable storage medium.
Background
In cartoon game production, hair with a strong hand-painted look is usually drawn for a character, so that a clear sense of layering exists between the highlight and the cartoon hair during movement. Specifically, the highlight generally wraps around the hair, and it jogs and flickers locally as the hair swings, forming an interlaced whole with the hair.
The main way to bake cartoon-hair highlights is to split the highlight into a separate model used for cartoon rendering. However, during same-screen cartoon rendering, for example when a large number of characters appear on the same screen, cartoon hair easily causes serious performance problems.
Disclosure of Invention
In view of the above, embodiments of the present invention have been made to provide a model processing method, a model processing apparatus, a corresponding electronic device, and a corresponding computer-readable storage medium that overcome or at least partially solve the above problems.
The embodiment of the invention discloses a model processing method, which comprises the following steps:
acquiring a highlight model and a specified object model required to be subjected to highlight baking, and acquiring a plurality of triangular grids of the highlight model;
determining target triangle grids mapped by each triangle grid on the specified object model;
and merging the highlight map of the highlight model and the object map of the specified object model based on the barycentric coordinates of the target triangle mesh to obtain the target model after highlight baking.
The embodiment of the invention also discloses a model processing device, which comprises:
the triangular grid acquisition module is used for acquiring a highlight model and a specified object model required to be subjected to highlight baking, and acquiring a plurality of triangular grids of the highlight model;
the grid mapping module is used for determining a target triangle grid mapped by each triangle grid on the specified object model;
and the map merging module is used for merging the highlight map of the highlight model and the object map of the specified object model based on the barycentric coordinates of the target triangle mesh to obtain the target model after highlight baking.
The embodiment of the invention also discloses an electronic device, comprising: a processor, a memory, and a computer program stored in the memory and executable on the processor, the computer program, when executed by the processor, implementing any one of the model processing methods above.
The embodiment of the invention also discloses a computer-readable storage medium storing a computer program which, when executed by a processor, implements the model processing method above.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, a plurality of triangle meshes in the highlight model are determined, the target triangle meshes to which they are mapped in the specified object model are found, and the highlight map and the object map are merged based on the barycentric coordinates of the target triangle meshes to obtain the model after highlight baking. That is, by applying barycentric coordinates inside triangles, the highlight map is recombined into the object map, completing the merge of the highlight map and the object map and automating the baking of highlights onto the specified object. Furthermore, because the highlight map and the object map are merged into one model, the performance pressure on the processor can be reduced when a large number of characters are rendered on the same screen, which facilitates highlight baking under same-screen rendering on weaker processors and improves the rendering performance of same-screen characters.
Drawings
FIG. 1 is a flow chart of steps of an embodiment of a model processing method of the present invention;
FIG. 2 is a flow chart of the steps of another embodiment of the model processing method of the present invention;
FIG. 3 is a schematic process diagram of a KD-Tree nearest point algorithm provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of a calculation process of barycentric coordinates according to an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating the conversion of a highlight map according to an embodiment of the present invention;
FIG. 6 is a block diagram showing the structure of an embodiment of a model processing apparatus of the present invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
To facilitate an understanding of the invention by those skilled in the art, the following terms used in the embodiments of the invention described below are explained:
triangle barycentric coordinates: a coordinate representation used for interpolating points inside a triangle.
KD Tree nearest point algorithm: an algorithm for accelerating nearest-point searches on a mesh.
UV coordinates: texture map coordinates (horizontal U, vertical V), a coordinate representation for locating any pixel on an image.
For the baking of cartoon-hair highlights, as one example, the highlight can be split into a model used for cartoon rendering. Specifically, a separate model can be made and wrapped around the hair, so that the highlight jogs and flickers locally as the hair swings and forms an interlaced whole with the hair. Splitting the highlight into a model provides more choices for cartoon rendering and is convenient for animation production, but during same-screen cartoon rendering, for example when a large number of characters appear on the same screen and processor performance is relatively poor, serious performance problems easily arise. As another example, the highlight model may be deleted, only the hair model used, and the highlight effect drawn manually on the hair map to achieve cartoon-highlight baking; but this manual process leads to a large amount of repetitive work when the hair of hundreds of characters requires manually drawn highlights.
Based on the application of barycentric coordinates inside triangles, the embodiment of the invention recombines the highlight map into the object map, completing the merge of the highlight map and the object map and automating the baking of highlights onto the specified object. Furthermore, the highlight map and the object map can be merged into one model, reducing the performance pressure on the processor when a large number of characters are rendered on the same screen, facilitating highlight baking under same-screen rendering on weaker processors, and improving the rendering performance of same-screen characters. In addition, baking highlights onto the specified object is fully automatic: no map needs to be drawn manually and no model needs to be adjusted. This one-click automatic processing avoids a large amount of repetitive work, saves the production time of character cartoon highlights, improves the efficiency of highlight baking, and thereby improves production efficiency and shortens the resource iteration cycle.
Referring to fig. 1, a flowchart illustrating steps of an embodiment of a model processing method according to the present invention may specifically include the following steps:
step 101, acquiring a highlight model and a specified object model required to be subjected to highlight baking, and acquiring a plurality of triangular grids of the highlight model;
The highlight model is mainly used for describing how light interacts with the surface of an object and is typically used in scenes such as cartoon rendering, for example, highlight baking of cartoon hair. The specified object model may be any model that requires highlight baking; illustratively, in a scene of highlight-baking cartoon hair, the specified object model may be a hair model. Embodiments of the present invention are not limited in this regard.
In graphics rendering, the mesh provides the shape and structure of the 3D model, while the texture map, which is image data applied to the mesh surface, provides visual effects and details, making the 3D model more realistic and detailed in appearance. The mesh comprises a plurality of vertices, and the texture map comprises a plurality of pixels. During texture mapping, the model surface is divided into many small triangles or quadrilaterals, whose vertices are mapped onto pixels of the texture image; the position of a pixel on the texture image is determined mainly from texture coordinates.
In one embodiment of the present invention, the highlight model may be assembled from triangle meshes based on the application of barycentric coordinates inside triangles, and the specified object model may likewise be assembled from triangle meshes. A plurality of triangle meshes of the highlight model may therefore be obtained, so as to determine the target triangle mesh corresponding to each triangle mesh on the specified object model and the mapping point of each triangle vertex of the highlight model on the specified object model.
Since the highlight model is composed of triangle meshes, acquiring a plurality of triangle meshes of the highlight model can be expressed as breaking the highlight model up into triangle meshes, where the break-up strategy can be the reverse of model assembly; the embodiment of the invention is not limited in this regard.
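This break-up step can be sketched as follows, assuming a simple indexed-triangle layout (the data layout is an illustrative assumption, not the patent's):

```python
def scatter_into_triangles(vertices, indices):
    """Break an indexed triangle mesh into a flat list of triangles:
    every consecutive group of three indices selects three corner vertices."""
    assert len(indices) % 3 == 0
    return [tuple(vertices[i] for i in indices[k:k + 3])
            for k in range(0, len(indices), 3)]

# A quad built from two triangles:
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
tris = scatter_into_triangles(verts, [0, 1, 2, 0, 2, 3])
```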
step 102, determining the target triangle mesh to which each triangle mesh is mapped in the specified object model;
in order to map all the highlight vertices of the highlight model onto the specified object model, a target triangle mesh to which each triangle mesh of the highlight model is mapped on the specified object model may be determined at this time.
In practical application, each triangle mesh of the highlight model can be traversed, the highlight vertices forming the triangle mesh obtained, and the target triangle mesh in the specified object model determined for the triangle formed by those highlight vertices, thereby determining the mapping point of each highlight vertex in the specified object model.
step 103, merging the highlight map of the highlight model and the object map of the specified object model based on the barycentric coordinates of the target triangle mesh to obtain the target model after highlight baking.
After determining the target triangle mesh mapped by each triangle mesh in the specified object model, the barycentric coordinates of the target triangle mesh may be obtained, where the barycentric coordinates may be used to indicate the mapping points of each highlight vertex in the specified object model, i.e. the texture coordinates of the highlight vertices may be mapped onto the object texture map.
In one embodiment of the invention, the mapping points can be used as substitute points of the highlight vertices on the specified object model, so as to reuse the texel information of the corresponding highlight vertices in the original highlight map. This realizes the conversion of the highlight map's texture coordinates, allowing them to be merged directly into the object map of the specified object model; the highlight map and the object map are thus combined into the same model, reducing the performance pressure on the processor when a large number of characters are rendered on the same screen.
Specifically, a new highlight map can be generated from the original highlight map of the highlight model using the barycentric coordinates; this can be expressed as breaking the original highlight map apart and recomposing it into a new highlight map, where the breaking apart and recomposition are realized mainly through texture coordinates.
In the generated new highlight map, the highlight texture coordinates of the corresponding pixels have been converted from the coordinate system of the highlight model to highlight texture coordinates in the coordinate system of the specified object model, and can be used directly when rendering the specified object model.
In an exemplary scene of highlight-baking cartoon hair, the specified object model may be a hair model and the object map may be a hair map. The new highlight map and the object map, that is, the hair map, can then be used to render the hair model directly, obtaining the target model after highlight baking.
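The texture-coordinate conversion described above can be sketched end to end on toy 2D data (the flat 2D setting and all names are simplifying assumptions; the patent works with 3D meshes): solve each highlight vertex's barycentric coordinates in its target triangle, then apply the same weights to that triangle's UVs in the object map's coordinate system.

```python
def barycentric_2d(p, a, b, c):
    """Barycentric coordinates (alpha, beta, gamma) of point p in triangle (a, b, c)."""
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    alpha = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    beta = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return alpha, beta, 1.0 - alpha - beta

def remap_uvs(highlight_points, tri, tri_uvs):
    """Map each highlight point into the object map: compute its barycentric
    weights in the target triangle, then weight the triangle's object UVs."""
    out = []
    for p in highlight_points:
        w = barycentric_2d(p, *tri)
        out.append((sum(wi * uv[0] for wi, uv in zip(w, tri_uvs)),
                    sum(wi * uv[1] for wi, uv in zip(w, tri_uvs))))
    return out

tri = ((0.0, 0.0), (2.0, 0.0), (0.0, 2.0))          # target triangle positions
tri_uvs = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))      # its UVs in the object map
remapped = remap_uvs([(0.0, 0.0), (1.0, 1.0)], tri, tri_uvs)
```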
In the embodiment of the invention, a plurality of triangle meshes in the highlight model are determined, the target triangle meshes to which they are mapped in the specified object model are found, and the highlight map and the object map are merged based on the barycentric coordinates of the target triangle meshes to obtain the model after highlight baking. That is, by applying barycentric coordinates inside triangles, the highlight map is recombined into the object map, completing the merge of the highlight map and the object map and automating the baking of highlights onto the specified object. Furthermore, because the highlight map and the object map are merged into one model, the performance pressure on the processor can be reduced when a large number of characters are rendered on the same screen, which facilitates highlight baking under same-screen rendering on weaker processors and improves the rendering performance of same-screen characters.
Referring to fig. 2, a flowchart illustrating steps of another embodiment of a model processing method of the present invention may specifically include the following steps:
step 201, traversing each triangle mesh of the highlight model, and determining the target triangle mesh of each triangle mesh on the specified object model;
in the embodiment of the invention, in order to map all the highlight vertices of the highlight model onto the specified object model, the target triangle mesh mapped by each triangle mesh of the highlight model on the specified object model can be determined.
Specifically, since the triangle meshes of the highlight model are formed by highlight vertices, and in order to map the highlight vertices in batches and to facilitate determining their mapping points, each triangle mesh of the highlight model can be traversed, the highlight vertices forming the triangle mesh obtained, and the mapping point of each highlight vertex in the specified object model determined, thereby mapping all highlight vertices of the highlight model onto the specified object model.
The mapping point may be a projection point whose barycentric coordinates, calculated for the highlight vertex on the specified object model, satisfy a preset condition, where the preset condition may be that the calculated barycentric coordinates are non-negative.
In a specific implementation, the highlight vertices may be projected onto the specified object model to obtain barycentric coordinates of the highlight vertices on the specified object model, where the barycentric coordinates may be used to indicate mapping points of the highlight vertices in the specified object model, and at this time, object texture coordinates of vertices of the triangle mesh on the specified object model may be determined based on the barycentric coordinates, so as to determine the target triangle mesh mapped in the specified object model based on the object texture coordinates.
In the process of projecting the vertex coordinates to the specified object model, the object vertex closest to the highlight vertex may be first determined on the specified object model, so that the highlight vertex is projected.
As for determining the object vertex closest to a highlight vertex on the specified object model: to avoid the huge amount of computation required by a brute-force method that computes the distances between all highlight vertices on the highlight model and all object vertices on the specified object model, a mesh nearest-point search algorithm, such as the KD Tree nearest point algorithm, can be used to search for the object vertex closest to the highlight vertex. Specifically, a node tree can be generated from the object vertices of the specified object model, and the node tree then searched with the nearest-point search algorithm to determine, on the specified object model, the object vertex closest to the highlight vertex. For example, in a scene of highlight-baking cartoon hair, all hair vertices on the hair model may be passed into a KD-Tree to generate a node tree, which is then used to compute, for each highlight vertex on the highlight model, the nearest hair vertex on the hair model.
Specifically, the specified object model can be split along the X, Y and Z axes with the KD Tree; then, by traversing the highlight vertices of each triangle mesh of the highlight model, each highlight vertex is compared only with the median object vertex at each step, which speeds up the distance computation.
For example, as shown in fig. 3, take a 2D-plane KD Tree as an example, with X-axis coordinate values increasing to the right and Y-axis coordinate values increasing downward; for 3D space, only a Z direction needs to be added correspondingly. Assume there are 7 object vertices A-G, and M is the highlight vertex to search for. First, a node tree is generated from the object vertices. Concretely, the ranges in the X and Y directions are computed: the horizontal range is X(G)-X(D), the vertical range is Y(A)-Y(D), and the horizontal range is larger. Sorting the points from small to large in X gives [D, E, B, A, C, F, G], whose middle point is A (the middle one of the seven), so A is taken as the root node, with [D, E, B] on A's left as candidates for the left subtree and [C, F, G] on A's right as candidates for the right subtree. Then [D, E, B] on A's left is sorted in the Y direction into [D, B, E]; B is the middle node and becomes the root of the left subtree, with D and E as its left and right child nodes. Likewise, [C, F, G] on A's right is sorted in the Y direction into [F, C, G]; C is the middle node and becomes the root of the right subtree, with F and G assigned as its left and right child nodes. The node tree is thus generated; the right-hand diagram in fig. 3 shows the tree nodes of the computed KD Tree.
After obtaining the node tree, it can be used to search for the object vertex closest to the highlight vertex, that is, to start the shortest-distance computation and find the point closest to point M. Illustratively, with the conventional approach, where the distance between M and every object vertex is computed and the smallest then selected, 7 distance computations are required and the complexity is O(n). With the KD Tree algorithm, since M.x < A.x, only the left subtree [D, E, B] needs to be searched in the X direction; then, since M.y > B.y, only the right side is searched, giving E as the candidate nearest point. E, however, is only a point close to M, not necessarily the closest, so a backward pass along the search path E -> B -> A is required. Concretely, first backtrack to B: draw a circle centered at M with radius |ME|; the circle does not intersect the horizontal splitting line through B, so B and its left child subtree [D] must be farther from M, and their distances need not be computed. Then backtrack to A: draw a circle centered at M with radius |ME|; the circle does not intersect the vertical splitting line through A, so A and the right subtree [F, C, G] must be farther from M, and the distance from A to M need not be computed either.
At this point it can be determined that, of the seven points A-G, the point closest to M is E. The number of distance computations drops to 3, and the complexity is O(log n); that is, the KD Tree reduces the amount of computation compared with the conventional method.
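The build and search just described can be sketched as a compact 2D KD-tree (the point set below is made up and does not reproduce the A-G figure; this sketch alternates split axes with depth, whereas the example above picks the wider axis first):

```python
import math

class Node:
    def __init__(self, point, axis, left=None, right=None):
        self.point, self.axis, self.left, self.right = point, axis, left, right

def build_kdtree(points, depth=0):
    """Median-split KD-tree, alternating the X and Y axes with depth."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return Node(points[mid], axis,
                build_kdtree(points[:mid], depth + 1),
                build_kdtree(points[mid + 1:], depth + 1))

def nearest(node, query, best=None):
    """Descend toward the query, then backtrack; a subtree is skipped when the
    circle around the query (radius = best distance) cannot cross its split line."""
    if node is None:
        return best
    d = math.dist(node.point, query)
    if best is None or d < best[1]:
        best = (node.point, d)
    diff = query[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
    best = nearest(near, query, best)
    if abs(diff) < best[1]:  # pruning test: the circle crosses the split line
        best = nearest(far, query, best)
    return best

pts = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]
tree = build_kdtree(pts)
point, dist = nearest(tree, (9, 2))
```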
In one embodiment of the invention, the object vertex determined as closest to the highlight vertex on the specified object model is only the closest vertex, not necessarily the closest surface point on the surface of the specified object model; the surface point can then be determined by calculation with the aid of barycentric coordinates.
Specifically, the vertex coordinates of a highlight vertex, which indicate its position in the three-dimensional space of the highlight model, may first be obtained; the vertex coordinates may then be projected onto a preset triangle on the specified object model to obtain a projection point, where the preset triangle is a triangle within a preset range of the object vertex determined to be closest to the highlight vertex. After the projection point is determined, the barycentric coordinates of the projection point in the preset triangle can be computed. It should be noted that a triangle within the preset range of the closest object vertex may be the triangle containing that object vertex or a triangle adjacent to it, where the triangle containing the object vertex may be, for example, a triangle forming the surface on which the object vertex lies or a triangle having the object vertex as one of its corners; the embodiment of the present invention is not limited in this regard.
As for the calculation of the barycentric coordinates, as shown in fig. 4, assume the three vertices of the preset triangle are A, B, C; then any point (x, y) in the plane can be written as a linear combination of these three vertex coordinates, i.e., (x, y) = αA + βB + γC with α + β + γ = 1. In this example, the point (x, y) may refer to the object texture coordinate of the projection point on the object map; the weights α, β, γ of the three coordinates A, B, C can then be computed from the known coordinate values, and the values of α, β, γ are called the barycentric coordinates of the point (x, y).
The barycentric coordinates may be used to determine whether the projection points (x, y) are located inside the triangle, and in a preferred embodiment, if the calculated barycentric coordinates satisfy a predetermined condition, the corresponding projection point may be determined to be the surface point closest to the highlight vertex on the surface of the specified object model. The preset condition may be that the calculated barycentric coordinates are non-negative numbers, that is, if the calculated values of α, β, γ are all non-negative numbers, it may be determined that the projected point is inside the triangle for the two-dimensional coordinate point (x, y), and then the point (x, y) may be determined as a surface point closest to the highlight vertex on the surface of the specified object model.
For example, in a scene of highlight-baking cartoon hair, assume the object vertex on the hair model closest to a highlight vertex P(highlight) is found to be P(hair). P(hair) is merely a vertex of a triangle mesh; besides vertices, the model also has the surfaces, typically triangles, that connect them. Let P(nearest) be the point on a triangle surface closest to the highlight vertex; the distance to such a surface point is usually smaller than the distance to a vertex, i.e., P(hair) is not necessarily the point on the hair model nearest to P(highlight). To determine the point P(nearest) closest to P(highlight) on the whole hair-model surface, barycentric coordinates can be introduced, since P(nearest) must lie inside some triangle around P(hair). Specifically, the projection point P(proj) is obtained by projecting P(highlight) onto a triangle around the hair vertex P(hair); the barycentric coordinates of P(proj) in that triangle are then computed, and if the computed α, β, γ are all non-negative, P(proj) can be determined to be the surface point P(nearest) on the hair-model surface closest to P(highlight).
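The P(highlight) -> P(proj) -> barycentric test above might be sketched as follows in 3D; the helper names are illustrative assumptions, not the patent's code:

```python
def sub(u, v): return (u[0] - v[0], u[1] - v[1], u[2] - v[2])
def dot(u, v): return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def project_and_barycentric(p, a, b, c):
    """Project p onto the plane of triangle (a, b, c) and return the projection's
    barycentric coordinates; all weights non-negative means P(proj) lies inside.
    The dot-product formulation implicitly discards p's out-of-plane component."""
    ab, ac, ap = sub(b, a), sub(c, a), sub(p, a)
    d00, d01, d11 = dot(ab, ab), dot(ab, ac), dot(ac, ac)
    d20, d21 = dot(ap, ab), dot(ap, ac)
    denom = d00 * d11 - d01 * d01
    beta = (d11 * d20 - d01 * d21) / denom
    gamma = (d00 * d21 - d01 * d20) / denom
    return 1.0 - beta - gamma, beta, gamma

# A point hovering above the triangle's interior projects inside it:
bc = project_and_barycentric((0.25, 0.25, 5.0), (0, 0, 0), (1, 0, 0), (0, 1, 0))
inside = all(w >= 0 for w in bc)
```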
The barycentric coordinates can be used not only to judge whether a projection point lies inside the triangle, but also for the subsequent texture-coordinate calculation when the highlight map is applied to the specified object model: given the texture coordinates of the triangle's three vertices, the texture coordinates of any point inside the triangle can be interpolated using its barycentric coordinates.
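That interpolation can be sketched as follows (an illustrative snippet, not the patent's code): given the barycentric weights of an interior point and the UVs of the triangle's three vertices, the point's UV is simply the weighted sum.

```python
def interpolate_uv(bary, uv_a, uv_b, uv_c):
    """Interpolate a texture coordinate from the triangle's vertex UVs,
    weighted by the point's barycentric coordinates (alpha, beta, gamma)."""
    alpha, beta, gamma = bary
    return (alpha * uv_a[0] + beta * uv_b[0] + gamma * uv_c[0],
            alpha * uv_a[1] + beta * uv_b[1] + gamma * uv_c[1])
```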
Specifically, after the barycentric coordinates are calculated, the object texture coordinates of the vertices of the triangle mesh on the specified object model may be determined from the barycentric coordinates, so that the target triangle mesh mapped on the specified object model can be determined from those object texture coordinates. The object texture coordinates may be calculated by first obtaining the highlight texture coordinates of the triangle mesh's vertices on the highlight model, that is, the UV coordinates of the triangle's three vertices, which may be expressed as follows:
uv0=highlightModel.uvs[i].uv0;
uv1=highlightModel.uvs[i].uv1;
uv2=highlightModel.uvs[i].uv2;
The object texture coordinates may then be calculated from the barycentric coordinates and the highlight texture coordinates, for example by computing the UV coordinates of the triangle's three vertices on the hair model according to the barycentric coordinates, which may be expressed as follows:
uv0_in_hair=getUVByBarycentricCoordinates(uv0,bc0);
uv1_in_hair=getUVByBarycentricCoordinates(uv1,bc1);
uv2_in_hair=getUVByBarycentricCoordinates(uv2,bc2);
Here bc0, bc1, and bc2 may denote the barycentric coordinates of the projection points obtained by projecting the three vertex coordinates of a triangle mesh of the highlight model onto the specified object model, for example onto the hair model. The calculation of bc0, bc1, and bc2 may be expressed as follows:
bc0=getProjectionBarycentricCoordinates(pos0);
bc1=getProjectionBarycentricCoordinates(pos1);
bc2=getProjectionBarycentricCoordinates(pos2);
Wherein pos0, pos1, and pos2 may refer to three vertex coordinates of a triangle mesh of the highlight model, respectively, which may be expressed as follows:
pos0=highlightModel.vertex[i].pos0;
pos1=highlightModel.vertex[i].pos1;
pos2=highlightModel.vertex[i].pos2;
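A possible body for the projection helper is sketched below. The name getProjectionBarycentricCoordinates comes from the text above, but this pure-Python implementation is an assumption, not the patent's code: it projects a highlight vertex onto the plane of a candidate triangle and solves for the barycentric coordinates of the foot of the perpendicular.

```python
def sub(u, v):
    return (u[0] - v[0], u[1] - v[1], u[2] - v[2])

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def project_barycentric(p, a, b, c):
    """Project 3D point p onto the plane of triangle (a, b, c) and return
    (projection, (alpha, beta, gamma)) -- the barycentric coordinates of
    the foot of the perpendicular within the triangle."""
    n = cross(sub(b, a), sub(c, a))            # plane normal (unnormalized)
    t = dot(sub(p, a), n) / dot(n, n)
    proj = (p[0] - t * n[0], p[1] - t * n[1], p[2] - t * n[2])
    # Solve proj = alpha*a + beta*b + gamma*c via the standard edge method
    v0, v1, v2 = sub(b, a), sub(c, a), sub(proj, a)
    d00, d01, d11 = dot(v0, v0), dot(v0, v1), dot(v1, v1)
    d20, d21 = dot(v2, v0), dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    beta = (d11 * d20 - d01 * d21) / denom
    gamma = (d00 * d21 - d01 * d20) / denom
    return proj, (1.0 - beta - gamma, beta, gamma)
```

If all three returned weights are non-negative, the projection lies inside the candidate triangle, which is exactly the preset condition the method tests.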
Step 202, generating a new highlight map from the original highlight map of the highlight model based on the barycentric coordinates of the target triangle mesh;
wherein the barycentric coordinates of the target triangle mesh are the barycentric coordinates of the projection points of the highlight vertices of the highlight model's triangle meshes on the specified object model.
The barycentric coordinates may be used to indicate the mapping point of each highlight vertex on the specified object model. In one embodiment of the present invention, these mapping points may serve as substitute points for the highlight vertices on the specified object model, so that the texel information of the corresponding highlight vertices on the original highlight map can be reused. This implements the conversion of the highlight map's texture coordinates so that they can be merged directly onto the object map of the specified object model, combining the highlight map and the object map into the same model and reducing the performance pressure on the processor when rendering a large number of characters on the same screen.
Specifically, a new highlight map may be generated from the original highlight map of the highlight model using the barycentric coordinates. In other words, the original highlight map is broken apart and recomposed into a new highlight map, and this scattering and recomposition is driven mainly by the texture coordinates.
In a specific implementation, texel information corresponding to the target triangle mesh can be sampled from the original highlight map based on the barycentric coordinates, and a new highlight map is generated based on the texel information.
In one embodiment of the present invention, if the barycentric coordinates satisfy the preset condition, that is, α, β, and γ are all non-negative, the projection point may be determined to be the surface point on the triangular face formed by the target triangle mesh that is closest to the highlight vertex. The object texture coordinates of that closest surface point may then be taken as the object texture coordinates of the highlight vertex on the specified object model, and a correspondence may be established between the object texture coordinates of the highlight vertex on the specified object model and the highlight texture coordinates of the highlight vertex on the highlight model. Then, when the texel information is sampled, the original highlight texture coordinates of the corresponding pixels inside the target triangle on the original highlight map can be obtained from this correspondence, and the pixel values at those original highlight texture coordinates can be sampled from the original highlight map.
For example, in a scene where cartoon hair is highlight-baked, assume the surface point on the hair model closest to the highlight vertex P(highlight) is calculated to be P(nearest). The mapping conversion of texture coordinates can then be performed using this pair of points: label the UV coordinates of P(highlight) as UV(highlight) and those of P(nearest) as UV(nearest). Since the highlight map can be regarded as a rectangular grid composed of triangles, the useful triangles in the highlight map correspond one-to-one with the triangle meshes on the highlight model, and each texel on the highlight map can therefore be mapped to a point on the model surface through its barycentric coordinates within the target triangle. Concretely, the method iterates over all triangle meshes of the highlight model, samples the pixel information of all corresponding pixels inside each triangle mesh from the original highlight map (the left part of fig. 5), and copies it into the new highlight map (the right part of fig. 5); that is, the pixel values are filled into the new highlight map and assigned to the pixels at the positions given by the same barycentric coordinates, completing the generation of the new highlight map.
In a preferred embodiment, a rectangular box surrounding the target triangle mesh may be found, and each pixel it contains may be iterated over to extract and fill in the texel information.
Illustratively, to help those skilled in the art understand the complete process of generating a new highlight map in an embodiment of the present invention, in a scene where cartoon hair is highlight-baked and rendered, the complete pseudo code to be executed may be as follows:
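The pseudo code itself is not reproduced in this excerpt. As a hedged Python sketch of the loop described above (the function names and the map representation are assumptions, not the patent's code): iterate over the triangle meshes, take the rectangular box around each target triangle in the new map, and for every pixel whose barycentric coordinates are all non-negative, sample the original highlight map at the UV interpolated with the same weights.

```python
def bary2d(p, a, b, c):
    # Barycentric coordinates (alpha, beta, gamma) of 2D point p in triangle (a, b, c)
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    al = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    be = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return al, be, 1.0 - al - be

def bake_highlight(orig_map, size, triangles):
    """Hedged sketch of the bake loop. `triangles` is a list of
    (uv_src, uv_dst) pairs: a triangle's three UVs on the original
    highlight map and on the new object-space map, all in [0, 1]^2.
    `orig_map(u, v)` samples a pixel value; the new size x size map is
    returned as a dict keyed by pixel position (px, py)."""
    new_map = {}
    for uv_src, uv_dst in triangles:
        # Rectangular box surrounding the target triangle in the new map
        xs = [round(u * (size - 1)) for u, _ in uv_dst]
        ys = [round(v * (size - 1)) for _, v in uv_dst]
        for px in range(min(xs), max(xs) + 1):
            for py in range(min(ys), max(ys) + 1):
                p = (px / (size - 1), py / (size - 1))
                al, be, ga = bary2d(p, *uv_dst)
                if al < 0 or be < 0 or ga < 0:   # pixel outside the triangle
                    continue
                # The same barycentric weights index the original highlight map
                u = al * uv_src[0][0] + be * uv_src[1][0] + ga * uv_src[2][0]
                v = al * uv_src[0][1] + be * uv_src[1][1] + ga * uv_src[2][1]
                new_map[(px, py)] = orig_map(u, v)
    return new_map
```

In a real tool the maps would be texture buffers rather than a dict, but the control flow — per-triangle bounding box, barycentric rejection test, weight-shared sampling — is the part the text describes.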
Step 203, rendering the specified object model by adopting the new highlight map and the object map to obtain a target model.
In the generated new highlight map, the highlight texture coordinates of the corresponding pixels have already been converted from the coordinate system of the highlight model to the coordinate system of the specified object model. The highlight map can therefore be read directly for that model; that is, the new highlight map can be used directly when the specified object model is rendered.
In one embodiment of the invention, the specified object model may be rendered using the new highlight map and the object map to obtain the target model.
For example, in a scene where cartoon hair is highlight-baked and rendered, the specified object model may be a hair model and the object map may be a hair map. Since the corresponding highlight texture coordinates in the generated new highlight map have already been converted from the coordinate system of the highlight model to the coordinate system of the hair model, the newly generated highlight map may be sampled and superimposed into the hair rendering when the hair model is rendered, yielding the target model after highlight baking. In this example, once highlight baking has been applied to the hair model, the highlight model can be discarded and the hair model rendered with the new highlight map and the hair map, which reduces the processor's performance cost compared with the prior art while ensuring the rendering effect.
In the embodiment of the invention, the highlight map is recombined into the object map through the application of barycentric coordinates within triangles, completing the merging of the highlight map and the object map and automating the baking of highlights onto the specified object. Furthermore, because the highlight map and the object map are combined into one model, the performance pressure on the processor is reduced when a large number of characters are rendered on the same screen, which facilitates highlight baking under same-screen rendering and on processors with limited performance, and improves the rendering performance for same-screen characters. In addition, this automatic baking of highlights onto the specified object requires neither manual drawing of maps nor adjustment of models; such one-click automatic processing avoids a great deal of repeated work, saves the production time of cartoon character highlights, improves baking efficiency, and thereby improves production efficiency and shortens the resource iteration cycle.
It should be noted that, for simplicity of description, the method embodiments are shown as a series of acts, but it should be understood by those skilled in the art that the embodiments are not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred embodiments, and that the acts are not necessarily required by the embodiments of the invention.
Referring to fig. 6, a block diagram of an embodiment of a model processing device of the present invention is shown, and may specifically include the following modules:
the triangle mesh acquisition module 601 is configured to acquire a highlight model and a specified object model required to be subjected to highlight baking, and acquire a plurality of triangle meshes of the highlight model;
a mesh mapping module 602, configured to determine a target triangle mesh mapped by each triangle mesh on the specified object model;
and a map merging module 603, configured to merge the highlight map of the highlight model and the object map of the specified object model based on the barycentric coordinates of the target triangle mesh, to obtain a target model after highlight baking.
In one embodiment of the invention, grid mapping module 602 may include the following sub-modules:
the target triangle mesh determining submodule is used for traversing each triangle mesh of the highlight model to obtain the highlight vertices forming the triangle mesh; projecting the highlight vertices onto the specified object model to obtain the barycentric coordinates of the highlight vertices on the specified object model; determining, according to the barycentric coordinates of the highlight vertices on the specified object model, the object texture coordinates of the vertices of the triangle mesh on the specified object model; and determining the target triangle mesh mapped in the specified object model based on the object texture coordinates.
In one embodiment of the invention, the target triangle mesh determination submodule may include the following units:
a barycentric coordinate calculation unit, configured to obtain vertex coordinates of the highlight vertex, and determine an object vertex closest to the highlight vertex on the specified object model; projecting and calculating the vertex coordinates to a preset triangle on the appointed object model to obtain projection points; the preset triangle is a triangle which is determined to be in a preset range of the object vertex closest to the highlight vertex on the appointed object model; and calculating the barycenter coordinates of the projection points in the preset triangle.
In one embodiment of the present invention, the barycentric coordinates calculation unit may include the following sub-units:
an object vertex searching subunit, configured to obtain an object vertex of the specified object model, and generate a node tree using the object vertex; and searching the node tree by adopting a grid closest point searching algorithm to obtain an object vertex which is determined to be closest to the highlight vertex on the appointed object model.
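The node tree and closest-point search above can be sketched as a simple KD-tree over the object model's vertices. This is an illustrative assumption — the patent does not fix a particular tree structure or search algorithm:

```python
import math

def build_kdtree(points, depth=0):
    """Build a simple KD-tree over 3D vertices; a stand-in for the
    'node tree' generated from the object model's vertices."""
    if not points:
        return None
    axis = depth % 3
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, target, best=None):
    """Recursive nearest-neighbor search; `best` is (point, distance).
    Branches that cannot contain a closer vertex are pruned."""
    if node is None:
        return best
    d = math.dist(node["point"], target)
    if best is None or d < best[1]:
        best = (node["point"], d)
    axis = node["axis"]
    diff = target[axis] - node["point"][axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, target, best)
    if abs(diff) < best[1]:  # the far side may still hold a closer vertex
        best = nearest(far, target, best)
    return best
```

Querying the tree with a highlight vertex's position returns the object vertex used as the starting point for the triangle-projection step.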
In one embodiment of the invention, the target triangle mesh determination submodule may include the following units:
an object texture coordinate calculation unit, configured to obtain the highlight texture coordinates of the vertices of the triangle mesh on the highlight model; and calculate the object texture coordinates using the barycentric coordinates and the highlight texture coordinates.
In one embodiment of the invention, the map merging module 603 may include the following sub-modules:
the mapping merging sub-module is used for generating a new highlight mapping from the original highlight mapping of the highlight model based on the barycentric coordinates; and rendering the specified object model by adopting the new highlight map and the object map to obtain the target model.
In one embodiment of the invention, the map merging sub-module may comprise the following units:
the highlight map conversion unit is used for sampling from the original highlight map based on the barycentric coordinates to obtain texel information corresponding to the target triangle mesh; and generating a new highlight map based on the texel information.
In one embodiment of the present invention, the barycentric coordinates of the target triangle mesh are barycentric coordinates of projection points of highlight vertices of the triangle mesh on the specified object model; the high light map conversion unit may include the following sub-units:
The highlight mapping conversion subunit is used for determining the projection point as a surface point closest to the highlight vertex on a triangular surface formed by the target triangular grid when the barycentric coordinates of the target triangular grid meet a preset condition; determining the object texture coordinates of the surface point closest to the surface point as the object texture coordinates of the highlight vertex on the appointed object model; establishing a corresponding relation between an object texture coordinate of the highlight vertex on the appointed object model and a highlight texture coordinate of the highlight vertex on the highlight model; based on the corresponding relation, acquiring an original highlight texture coordinate of a corresponding pixel point in the target triangle on the original highlight texture map, and sampling a pixel value of a corresponding pixel point of the original highlight texture coordinate from the original highlight texture map; and filling the pixel value into the new highlight map, and assigning the pixel value to a corresponding pixel point at the same position as the barycentric coordinate.
In the embodiment of the invention, the model processing device obtains the target model after highlight baking by determining a plurality of triangle meshes in the highlight model, mapping them to target triangle meshes on the specified object model, and merging the highlight map and the object map based on the barycentric coordinates of the target triangle meshes. That is, the highlight map is recombined into the object map through the application of barycentric coordinates within triangles, completing the merging of the highlight map and the object map and automating the baking of highlights onto the specified object. Furthermore, because the highlight map and the object map are combined into one model, the performance pressure on the processor can be reduced when a large number of characters are rendered on the same screen, which facilitates highlight baking under same-screen rendering and on processors with limited performance, and improves the rendering performance for same-screen characters.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
The embodiment of the invention also provides an electronic device, comprising:
a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the processes of the above model processing method embodiment and can achieve the same technical effects, which are not repeated here.
The embodiment of the invention also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, realizes the processes of the above model processing method embodiment, and can achieve the same technical effects, and in order to avoid repetition, the description is omitted here.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described by differences from other embodiments, and identical and similar parts between the embodiments are all enough to be referred to each other.
It will be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the invention may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or fully authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region, and provide corresponding operation entries for the user to select authorization or rejection.
It is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or terminal device comprising the element.
The foregoing has described in detail a model processing method and device, a corresponding electronic device, and a corresponding computer-readable storage medium provided by the present invention. Specific examples are used herein to illustrate the principles and embodiments of the present invention, and the above examples are intended only to aid understanding of the method and its core concept. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope in accordance with the ideas of the present invention; accordingly, the content of this specification should not be construed as limiting the present invention.
Claims (12)
1. A method of model processing, the method comprising:
acquiring a highlight model and a specified object model required to be subjected to highlight baking, and acquiring a plurality of triangular grids of the highlight model;
determining target triangle grids mapped by each triangle grid on the specified object model;
and combining the highlight map of the highlight model and the object map of the appointed object model based on the barycentric coordinates of the target triangle mesh to obtain the target model after highlight baking.
2. The method of claim 1, wherein the determining the target triangle mesh for each triangle mesh mapped at the specified object model comprises:
traversing each triangular mesh of the highlight model to obtain highlight vertexes forming the triangular mesh;
projecting the highlight vertex onto the appointed object model to obtain the barycentric coordinate of the highlight vertex on the appointed object model;
according to the barycentric coordinates of the highlight vertexes on the appointed object model, determining the object texture coordinates of the vertexes of the triangle meshes on the appointed object model;
a target triangle mesh mapped in the specified object model is determined based on the object texture coordinates.
3. The method of claim 2, wherein the projecting the highlight vertex onto the specified object model to obtain the barycentric coordinates of the highlight vertex on the specified object model comprises:
obtaining vertex coordinates of the highlight vertex, and determining an object vertex closest to the highlight vertex on the appointed object model;
projecting and calculating the vertex coordinates to a preset triangle on the appointed object model to obtain projection points; the preset triangle is a triangle which is determined to be in a preset range of the object vertex closest to the highlight vertex on the appointed object model;
and calculating the barycenter coordinates of the projection points in the preset triangle.
4. A method according to claim 3, wherein said determining on said specified object model the object vertex closest to said highlight vertex comprises:
obtaining an object vertex of the appointed object model, and generating a node tree by adopting the object vertex;
and searching the node tree by adopting a grid closest point searching algorithm to obtain an object vertex which is determined to be closest to the highlight vertex on the appointed object model.
5. The method of claim 2, wherein the determining the object texture coordinates of the vertices of the triangular mesh on the specified object model from the barycentric coordinates of the highlight vertices on the specified object model comprises:
acquiring highlight texture coordinates of vertexes of the triangular meshes on the highlight model;
and calculating the object texture coordinates by using the barycentric coordinates of the highlight vertices on the specified object model and the highlight texture coordinates.
6. The method according to any one of claims 1 to 5, wherein the merging the highlight map of the highlight model with the object map of the specified object model based on the barycentric coordinates of the target triangle to obtain the target model after highlight baking includes:
generating a new highlight map from the original highlight map of the highlight model based on the barycentric coordinates of the target triangle;
and rendering the specified object model by adopting the new highlight map and the object map to obtain the target model.
7. The method of claim 6, wherein the generating the new highlight map from the original highlight map for the highlight model based on the barycentric coordinates of the target triangle comprises:
Sampling from the original highlight map based on the barycentric coordinates of the target triangle to obtain texel information corresponding to the target triangle mesh;
and generating a new highlight map based on the texel information.
8. The method of claim 7, wherein the barycentric coordinates of the target triangle mesh are barycentric coordinates of points on the specified object model projected by the highlight vertices of the triangle mesh;
the sampling from the original highlight map based on the barycentric coordinates of the target triangle to obtain texel information corresponding to the target triangle mesh includes:
if the barycentric coordinates of the target triangle meet a preset condition, determining the projection point as a surface point closest to the highlight vertex on a triangular surface formed by the target triangle mesh;
determining the object texture coordinates of the surface point closest to the surface point as the object texture coordinates of the highlight vertex on the appointed object model;
establishing a corresponding relation between an object texture coordinate of the highlight vertex on the appointed object model and a highlight texture coordinate of the highlight vertex on the highlight model;
And acquiring original highlight texture coordinates of corresponding pixel points in the target triangle on the original highlight map based on the corresponding relation, and sampling pixel values of corresponding pixel points of the original highlight texture coordinates from the original highlight map.
9. The method of claim 8, wherein generating a new highlight map based on the texel information comprises:
and filling the pixel value into the new highlight map, and assigning the pixel value to a corresponding pixel point at the same position as the barycentric coordinate.
10. A model processing apparatus, characterized in that the apparatus comprises:
the triangular grid acquisition module is used for acquiring a highlight model and a specified object model required to be subjected to highlight baking, and acquiring a plurality of triangular grids of the highlight model;
the grid mapping module is used for determining a target triangle grid mapped by each triangle grid on the specified object model;
and the mapping merging module is used for merging the highlight mapping of the highlight model and the object mapping of the appointed object model based on the barycentric coordinates of the target triangle mesh to obtain the target model after highlight baking.
11. An electronic device, comprising: a processor, a memory and a computer program stored on the memory and capable of running on the processor, which computer program, when executed by the processor, implements the model processing method according to any one of claims 1 to 9.
12. A computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the model processing method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311572882.4A CN117576287A (en) | 2023-11-22 | 2023-11-22 | Model processing method and device, electronic equipment and computing storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117576287A true CN117576287A (en) | 2024-02-20 |
Family
ID=89885848
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |