CN117893668A - Virtual scene processing method and device, computer equipment and storage medium - Google Patents

Virtual scene processing method and device, computer equipment and storage medium

Info

Publication number
CN117893668A
CN117893668A CN202410064597.XA CN202410064597A
Authority
CN
China
Prior art keywords
white
virtual object
coordinate system
world coordinate
texture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410064597.XA
Other languages
Chinese (zh)
Inventor
鲁雪磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202410064597.XA priority Critical patent/CN117893668A/en
Publication of CN117893668A publication Critical patent/CN117893668A/en
Pending legal-status Critical Current

Landscapes

  • Image Generation (AREA)

Abstract

The embodiment of the present application discloses a virtual scene processing method, a virtual scene processing apparatus, a computer device, and a computer readable storage medium. In the method, a preset grid map is added to a white-model virtual object in a white-model virtual scene; the coordinates of the model vertices of the white-model virtual object in the object coordinate system and the size of the white-model virtual object in the world coordinate system are then acquired; the white-model virtual object is converted from the object coordinate system to the world coordinate system according to the coordinates, the size, and the unit grid size of the grid map; finally, texture sampling is performed on the white-model virtual object in the world coordinate system on each coordinate axis plane of the world coordinate system to obtain a plurality of pieces of measurement texture information, and the measurement texture map corresponding to the white-model virtual object is obtained by superposing the plurality of pieces of measurement texture information, so that the positional relationship of the white-model virtual object in the white-model virtual scene can be measured rapidly. In this way, the processing efficiency of white-model virtual objects in a white-model virtual scene can be improved.

Description

Virtual scene processing method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and apparatus for processing a virtual scene, a computer device, and a computer readable storage medium.
Background
In the process of editing a virtual scene, the white-model stage is very important. In the white-model stage, details such as model materials and specific shapes need not be considered; only the overall layout and lighting atmosphere of the scene are emphasized, and the corresponding models and materials are simply substituted in at a later stage, which improves the efficiency of scene editing. The white-model stage places high demands on the spatial coordinate relationships between objects in the scene.
In the related art, the spatial size relationships of a white-model scene may be measured using a grid texture map. This mainly involves directly assigning a grid ruler map to the white-model material of the scene, adapting the map to the spatial dimensions of the scene, and measuring spatial relationships against the grid scale. However, to adapt the grid size of the grid texture map to the spatial dimensions, parameters must be adjusted manually for different objects; when the types of white-model objects in the scene are complex, this is inconvenient and time-consuming, which affects the editing efficiency of the virtual scene.
Disclosure of Invention
The embodiment of the present application provides a virtual scene processing method, a processing apparatus, a computer device, and a computer readable storage medium, which can improve the processing efficiency of white-model virtual objects in a white-model virtual scene.
The embodiment of the application provides a processing method of a virtual scene, which comprises the following steps:
adding a preset grid map to a white-model virtual object in a white-model virtual scene;
acquiring first position information of model vertices of the white-model virtual object in an object coordinate system, and acquiring size information of the white-model virtual object in a world coordinate system;
converting the white-model virtual object from the object coordinate system to the world coordinate system based on the first position information, the size information, and a unit grid size of the grid map;
performing texture sampling on the white-model virtual object in the world coordinate system on each coordinate axis plane of the world coordinate system, respectively, to obtain a plurality of pieces of measurement texture information;
and determining a measurement texture map corresponding to the white-model virtual object based on the plurality of pieces of measurement texture information.
Correspondingly, the embodiment of the application also provides a processing device of the virtual scene, which comprises:
an adding unit, configured to add a preset grid map to a white-model virtual object in a white-model virtual scene;
an acquisition unit, configured to acquire first position information of model vertices of the white-model virtual object in an object coordinate system, and to acquire size information of the white-model virtual object in a world coordinate system;
a conversion unit, configured to convert the white-model virtual object from the object coordinate system to the world coordinate system based on the first position information, the size information, and a unit grid size of the grid map;
a sampling unit, configured to perform texture sampling on the white-model virtual object in the world coordinate system on each coordinate axis plane of the world coordinate system, respectively, to obtain a plurality of pieces of measurement texture information;
and a determining unit, configured to determine a measurement texture map corresponding to the white-model virtual object based on the plurality of pieces of measurement texture information.
Correspondingly, the embodiment of the present application also provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the computer program to perform the virtual scene processing method provided by any embodiment of the present application.
Correspondingly, the embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores a plurality of instructions, and the instructions are suitable for being loaded by a processor to execute the processing method of the virtual scene.
According to the embodiment of the present application, a preset grid map is added to a white-model virtual object in a white-model virtual scene; the coordinates of the model vertices of the white-model virtual object in the object coordinate system and the size of the white-model virtual object in the world coordinate system are then acquired; the white-model virtual object is converted from the object coordinate system to the world coordinate system according to the coordinates, the size, and the unit grid size of the grid map; finally, texture sampling is performed on the white-model virtual object in the world coordinate system on each coordinate axis plane of the world coordinate system to obtain a plurality of pieces of measurement texture information, and the measurement texture map corresponding to the white-model virtual object is obtained by superposing the plurality of pieces of measurement texture information, so that the positional relationship of the white-model virtual object in the white-model virtual scene can be measured rapidly. In this way, the processing efficiency of white-model virtual objects in a white-model virtual scene can be improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a processing method of a virtual scene according to an embodiment of the present application.
Fig. 2 is an application scenario schematic diagram of a virtual scenario processing method provided in an embodiment of the present application.
Fig. 3 is an application scenario schematic diagram of another virtual scenario processing method according to an embodiment of the present application.
Fig. 4 is an application scenario schematic diagram of another virtual scenario processing method according to an embodiment of the present application.
Fig. 5 is an application scenario schematic diagram of another virtual scenario processing method according to an embodiment of the present application.
Fig. 6 is an application scenario schematic diagram of another virtual scenario processing method according to an embodiment of the present application.
Fig. 7 is an application scenario schematic diagram of another virtual scenario processing method according to an embodiment of the present application.
Fig. 8 is an application scenario schematic diagram of another virtual scenario processing method according to an embodiment of the present application.
Fig. 9 is an application scenario schematic diagram of another virtual scenario processing method according to an embodiment of the present application.
Fig. 10 is an application scenario schematic diagram of another virtual scenario processing method according to an embodiment of the present application.
Fig. 11 is a block diagram of a processing device for a virtual scene according to an embodiment of the present application.
Fig. 12 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
The embodiment of the present application provides a virtual scene processing method, a virtual scene processing apparatus, a computer readable storage medium, and a computer device. Specifically, the virtual scene processing method in the embodiment of the present application may be executed by a computer device, where the computer device may be a terminal, a server, or another device. The terminal may be a terminal device such as a smart phone, a tablet computer, a notebook computer, a touch screen device, a personal computer (PC), or a personal digital assistant (PDA). The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Networks), big data, and artificial intelligence platforms.
For example, the computer device may be a terminal that adds a preset grid map to a white-model virtual object in a white-model virtual scene; acquires first position information of model vertices of the white-model virtual object in an object coordinate system, and acquires size information of the white-model virtual object in a world coordinate system; converts the white-model virtual object from the object coordinate system to the world coordinate system based on the first position information, the size information, and the unit grid size of the grid map; performs texture sampling on the white-model virtual object in the world coordinate system on each coordinate axis plane of the world coordinate system, respectively, to obtain a plurality of pieces of measurement texture information; and determines a measurement texture map corresponding to the white-model virtual object based on the plurality of pieces of measurement texture information.
Based on the above problems, embodiments of the present application provide a method, an apparatus, a computer device, and a computer readable storage medium for processing a virtual scene, which can improve the processing efficiency of a white-mode virtual object in a white-mode virtual scene.
Detailed descriptions are given below. Note that the order in which the following embodiments are described is not intended as a limitation on the preferred order of the embodiments.
The embodiment of the application provides a processing method of a virtual scene, which can be executed by a terminal or a server.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for processing a virtual scene according to an embodiment of the present application. The specific flow of the processing method of the virtual scene can be as follows:
101. A preset grid map is added to a white-model virtual object in a white-model virtual scene.
In the embodiment of the present application, a white-model virtual scene refers to a three-dimensional scene model generated in the white-model stage of the virtual scene editing workflow. In the white-model stage, virtual objects in the virtual scene can be represented by creating geometric models, and the positional relationships between the virtual objects can be controlled by adjusting the positional relationships between the geometric models. The white-model stage does not need to consider details such as model materials and specific shapes; only the overall layout and lighting atmosphere of the scene are emphasized, and the created geometric models can be replaced with the corresponding virtual object models and materials at a later stage.
A virtual scene refers to a digital scene simulating reality that is displayed through a terminal; for example, the virtual scene may be a game scene, and different display effects of the game scene can be achieved by editing the virtual scene.
A white-model virtual object refers to a basic model created in the white-model virtual scene, to which no texture information has been added.
For example, referring to fig. 2, fig. 2 is an application scenario schematic diagram of a virtual scenario processing method according to an embodiment of the present application. Fig. 2 shows part of a white-model virtual scene, which includes a plurality of white-model virtual objects, such as a sphere white model, a cone white model, and a cube white model.
In some embodiments, the step of adding a preset grid map to the white-model virtual object in the white-model virtual scene may include the following operations:
creating the white-model virtual object in the white-model virtual scene;
and obtaining a material to which the grid map has been added, and assigning the material to the white-model virtual object.
Geometric bodies of different shapes can be created in the white-model virtual scene, and each geometric body can serve as a white-model virtual object. For example, cubes, spheres, cylinders, and the like may be created as white-model virtual objects.
For example, referring to fig. 3, fig. 3 is an application scenario schematic diagram of another virtual scenario processing method according to an embodiment of the present application. In the white-model virtual scene 10 shown in fig. 3, a plurality of white-model virtual objects 11 are created, and the white-model virtual objects 11 may have different shapes, for example, a sphere, a cube, a cylinder, or the like.
Wherein the grid map may be a two-dimensional picture comprising a plurality of identical cells. The mesh map may be pre-made or obtained from an existing mesh map.
For example, referring to fig. 4, fig. 4 is an application scenario diagram of another virtual scenario processing method according to an embodiment of the present application. Fig. 4 shows a grid map comprising a plurality of cells of the same size.
Specifically, a material is created in a material editor and the obtained grid map is added to it, yielding a material that carries the grid map; this material is then assigned to a white-model virtual object, so that a white-model virtual object with the grid map added is obtained.
Wherein, in the rendering program, the material is the combination of visual properties of the model surface, and the visual properties refer to the color, texture, smoothness, transparency, reflectivity, refractive index, luminosity and the like of the surface.
For example, referring to fig. 5, fig. 5 is an application scenario schematic diagram of another virtual scenario processing method according to an embodiment of the present application. In the white-mold virtual scene 10 shown in fig. 5, a material to which a mesh map is added is given to each white-mold virtual object 11, so that a mesh texture is formed on the surface of each white-mold virtual object 11.
102. First position information of the model vertices of the white-model virtual object in an object coordinate system is acquired, and size information of the white-model virtual object in a world coordinate system is acquired.
A model vertex of the white-model virtual object refers to a point on the surface of the white-model virtual object.
The object coordinate system is associated with a specific object, and each object has its own coordinate system. The object coordinate systems of different objects are independent of one another and may be the same or different; there is no inherent connection between them. Moreover, the object coordinate system is bound to its object: when the object moves or rotates, the object coordinate system translates or rotates in the same way, so the object and its coordinate system move synchronously.
In the embodiment of the application, the object coordinate system may be a coordinate system associated with a white-mode virtual object, and a certain point in the white-mode virtual object is taken as a coordinate origin.
For example, referring to fig. 6, fig. 6 is an application scenario schematic diagram of another virtual scenario processing method according to an embodiment of the present application. Fig. 6 shows an object coordinate system 20 corresponding to the white-model virtual object 11; the object coordinate system 20 takes a point in the white-model virtual object 11 as its coordinate origin O, and the object coordinate system 20 changes as the white-model virtual object changes.
The first position information may include coordinates of each model vertex of the white-mode virtual object in an object coordinate system.
For example, referring to fig. 7, fig. 7 is an application scenario schematic diagram of another virtual scenario processing method according to an embodiment of the present application. In the object coordinate system 20 shown in fig. 7, the white-model virtual object 11 may be a cube whose size in the X-axis direction is x1, whose size in the Y-axis direction is y1, and whose size in the Z-axis direction is z1. The model vertices of the white-model virtual object 11 may include a vertex A, a vertex B, and a vertex C, and the coordinates of each model vertex can be obtained: the coordinates of vertex A in the object coordinate system may be (0, y1, z1), the coordinates of vertex B may be (x1, 0, z1), and the coordinates of vertex C may be (x1, y1, z1). In this way, the coordinates of each model vertex of the white-model virtual object 11 in the object coordinate system 20 can be obtained.
The size information refers to the size values of the white-model virtual object in the world coordinate system, that is, the dimensions of the white-model virtual object measured in the world coordinate system.
Wherein the world coordinate system establishes the reference system needed to describe the other coordinate systems. The world coordinate system may be used to describe all other coordinate systems or the position of the object. The world coordinate system does not change with changes in the objects, each object corresponding to an absolute position in the world coordinate system.
For example, referring to fig. 8, fig. 8 is an application scenario diagram of another virtual scenario processing method according to an embodiment of the present application. Fig. 8 shows a world coordinate system 30, in which world coordinate system 30 the white-mould virtual object 11 corresponds to an absolute position, and the world coordinate system 30 does not change with the change of the white-mould virtual object 11.
In some embodiments, the step of "obtaining size information of the white-mode virtual object in the world coordinate system" may include the following operations:
Converting the first position information through a model transformation matrix to obtain second position information of the white-model virtual object under a world coordinate system;
And calculating the size of the white model virtual object under the world coordinate system based on the second position information to obtain size information.
The model transformation matrix refers to the Model (M) matrix in the MVP transformation, where the MVP transformation maps points on a model to screen space through the MVP matrices. The MVP transformation consists of the Model transformation, the View transformation, and the Projection transformation: the Model transformation uses the Model matrix, the View transformation uses the View matrix, and the Projection transformation uses the Projection matrix.
Wherein the second position information may include coordinates of respective model vertices of the white-model virtual object in a world coordinate system.
In the embodiment of the application, after the coordinates of each model vertex of the white-mold virtual object in the object coordinate system are obtained, the coordinates of each model vertex in the world coordinate system can be obtained as the second position information by transforming the coordinates through the M matrix.
Further, the size of the white-model virtual object in the world coordinate system can be calculated from the world-space coordinates of the model vertices. Specifically, the size value of the white-model virtual object along each coordinate axis of the world coordinate system can be calculated from the coordinate differences between the model vertices and taken as the size information.
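To make these two operations concrete, the following Python sketch is purely illustrative (the function and variable names, the use of NumPy, and the example model matrix are assumptions, not part of the application): it transforms object-space vertices into world space with a model (M) matrix and derives the per-axis size from the coordinate differences, as described above.

```python
import numpy as np

def world_space_size(vertices_os, model_matrix):
    """Transform object-space vertices with the model (M) matrix and return
    the per-axis size of the object in the world coordinate system."""
    # Homogeneous coordinates: (x, y, z) -> (x, y, z, 1)
    verts_h = np.hstack([np.asarray(vertices_os, dtype=float),
                         np.ones((len(vertices_os), 1))])
    # Second position information: vertex coordinates in the world coordinate system
    verts_ws = (np.asarray(model_matrix, dtype=float) @ verts_h.T).T[:, :3]
    # Size value along each coordinate axis = max coordinate - min coordinate
    size = verts_ws.max(axis=0) - verts_ws.min(axis=0)
    return verts_ws, size

# Hypothetical example: a unit cube scaled by (2, 1, 3) in world space.
cube_os = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
M = np.diag([2.0, 1.0, 3.0, 1.0])
world_vertices, size_ws = world_space_size(cube_os, M)
print(size_ws)  # -> [2. 1. 3.]
```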
103. The white-model virtual object is converted from the object coordinate system to the world coordinate system based on the first position information, the size information, and the unit grid size of the grid map.
Wherein the white-model virtual object is converted from the object coordinate system to the world coordinate system, that is, each model vertex of the white-model virtual object added with the grid map is mapped to the world coordinate system.
In some embodiments, the step of converting the white-mode virtual object from the object coordinate system to the world coordinate system based on the first position information, the size information, and the unit mesh size of the mesh map may include the operations of:
Obtaining the size configured for the unit grid in the grid map to obtain the unit grid size;
based on the first position information, the size information and the unit grid size, texture sampling coordinates of model vertexes of the white-model virtual object under a world coordinate system are calculated.
A unit grid in the grid map refers to a single cell of the grid map; all unit grids in the grid map have the same size.
In the embodiment of the present application, the length represented by one unit grid of the grid map can be configured as the unit grid size, which can then be used to measure the size of the white-model virtual object.
In some embodiments, the step of calculating texture sampling coordinates of model vertices of the white-model virtual object in a world coordinate system based on the first position information, the size information, and the unit mesh size may include the following operations:
calculating the product of the first coordinate and the size value to obtain a second coordinate of the model vertex converted to the world coordinate system;
And calculating the ratio of the second coordinate to the unit grid size to obtain texture sampling coordinates.
The first position information may include a first coordinate of a model vertex of the white-mode virtual object in an object coordinate system, and the size information may include a size value of the white-mode virtual object corresponding to each coordinate axis direction of the world coordinate system.
In the embodiment of the present application, in order to map the texture of the white-model virtual object carrying the grid map into the world coordinate system, the position and size information of the model vertices on the surface of the white-model virtual object can be mapped from the object coordinate system to the world coordinate system while also taking the measuring unit of the grid map into account. The specific calculation formula can be as follows:
UV_Measure = pos_OS * Scale_obj / Size_Measure
Here, UV_Measure is the coordinate used for the measurement texture, i.e., the texture sampling coordinate, where the measurement texture refers to the texture obtained by sampling the grid map; pos_OS is the position of the model vertex of the white-model virtual object in the object coordinate system; Scale_obj is the size of the white-model virtual object in the world coordinate system; and Size_Measure is the unit grid size.
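A rough Python sketch of this formula follows; it is illustrative only (the names, the NumPy dependence, and the example values are assumptions rather than the application's actual shader code):

```python
import numpy as np

def measure_uv(pos_os, scale_obj, size_measure):
    """UV_Measure = pos_OS * Scale_obj / Size_Measure, computed component-wise.

    pos_os       -- vertex position in the object coordinate system, shape (3,)
    scale_obj    -- size of the white-model object along each world axis, shape (3,)
    size_measure -- length represented by one unit grid (e.g. 1 meter)
    """
    return np.asarray(pos_os, dtype=float) * np.asarray(scale_obj, dtype=float) / size_measure

# Hypothetical example: a vertex at (1, 1, 1) in object space on an object whose
# world-space size is (2, 1, 3) meters, with each unit grid representing 1 meter.
uv_measure = measure_uv((1.0, 1.0, 1.0), (2.0, 1.0, 3.0), 1.0)
print(uv_measure)  # -> [2. 1. 3.]  (a 3D coordinate, split per coordinate axis plane in step 104)
```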
104. Texture sampling is performed on the white-model virtual object in the world coordinate system on each coordinate axis plane of the world coordinate system, respectively, to obtain a plurality of pieces of measurement texture information.
In the embodiment of the present application, after the model vertices on the surface of the white-model virtual object are converted into the world coordinate system, texture sampling can be performed on these surface model vertices in the world coordinate system.
Converting the model vertices of the white-model virtual object from the object coordinate system to the world coordinate system yields a texture sampling coordinate for each model vertex in the world coordinate system. These texture sampling coordinates are three-dimensional and cannot be used directly as UV coordinates for sampling the grid map, so further processing is needed.
In some embodiments, the step of performing texture sampling on the white-mode virtual object in the world coordinate system and on the coordinate axis plane of the world coordinate system to obtain a plurality of measured texture information may include the following operations:
determining a plurality of coordinate axis planes according to coordinate axes of a world coordinate system;
dividing the texture sampling coordinate into a plurality of two-dimensional sampling coordinates according to a plurality of coordinate axis planes;
and performing texture sampling on the white-mode virtual object under the world coordinate system based on each two-dimensional sampling coordinate to obtain measurement texture information corresponding to each coordinate axis plane.
Wherein the coordinate axis plane refers to a plane formed by two coordinate axes.
For example, referring to fig. 9, fig. 9 is an application scenario diagram of another virtual scenario processing method according to an embodiment of the present application. The left side of fig. 9 is a world coordinate system 30, and a coordinate axis XY plane 31 is formed based on coordinate axes X and Y in the world coordinate system 30; the coordinate axis YZ plane 32 is configured based on the coordinate axis Y and the coordinate axis Z in the world coordinate system 30, and the coordinate axis XZ plane 33 is configured based on the coordinate axis X and the coordinate axis Z in the world coordinate system 30.
Further, from the texture sampling coordinate of each model vertex, a two-dimensional coordinate is split out along the coordinate axis XY plane as the two-dimensional sampling coordinate in the XY plane; a two-dimensional coordinate is split out along the coordinate axis YZ plane as the two-dimensional sampling coordinate in the YZ plane; and a two-dimensional coordinate is split out along the coordinate axis XZ plane as the two-dimensional sampling coordinate in the XZ plane.
In some embodiments, the step of "texture sampling the white-mode virtual object in the world coordinate system based on each two-dimensional sampling coordinate to obtain the measured texture information corresponding to each coordinate axis plane" may include the following operations:
and sampling the grid map of the white-mode virtual object based on each two-dimensional sampling coordinate to obtain the measured texture information of the white-mode virtual object under each coordinate axis plane.
Specifically, according to the two-dimensional sampling coordinate in the coordinate axis XY plane, the grid map on the surface of the white-model virtual object is sampled in the XY plane to obtain the texture information on the XY plane, which is used as the measurement texture information on the XY plane; according to the two-dimensional sampling coordinate in the YZ plane, the grid map on the surface of the white-model virtual object is sampled in the YZ plane to obtain the texture information on the YZ plane, which is used as the measurement texture information on the YZ plane; and according to the two-dimensional sampling coordinate in the XZ plane, the grid map on the surface of the white-model virtual object is sampled in the XZ plane to obtain the texture information on the XZ plane, which is used as the measurement texture information on the XZ plane.
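The following Python sketch illustrates this tri-planar splitting and sampling under simplifying assumptions (the `sample_grid_map` helper is a stand-in for the renderer's actual texture sampler, and all names here are hypothetical):

```python
def split_planar_uvs(uv_measure):
    """Split the 3D texture sampling coordinate (x, y, z) into three 2D sampling
    coordinates, one per coordinate axis plane of the world coordinate system."""
    x, y, z = uv_measure
    return {"xy": (x, y), "yz": (y, z), "xz": (x, z)}

def sample_grid_map(grid_map, uv):
    """Hypothetical nearest-neighbour sampler: wraps the 2D coordinate into [0, 1)
    and reads one texel; a real renderer would use its own bilinear sampling."""
    height, width = len(grid_map), len(grid_map[0])
    u, v = uv[0] % 1.0, uv[1] % 1.0
    return grid_map[int(v * height)][int(u * width)]

def planar_measure_textures(grid_map, uv_measure):
    """Measurement texture information of the white-model object on each plane."""
    return {plane: sample_grid_map(grid_map, uv)
            for plane, uv in split_planar_uvs(uv_measure).items()}
```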
105. The measurement texture map corresponding to the white-model virtual object is determined based on the plurality of pieces of measurement texture information.
In some embodiments, the step of determining a measured texture map corresponding to the white-mode virtual object based on the plurality of measured texture information may include the following operations:
and superposing the plurality of measured texture information to obtain a measured texture map.
In the embodiment of the present application, the calculation formula for obtaining the final measured texture map may be as follows:
Tex_Measure = Tex_xy + Tex_xz + Tex_yz
Here, Tex_xy refers to the texture information of the white-model virtual object in the coordinate axis XY plane, Tex_xz refers to the texture information in the coordinate axis XZ plane, and Tex_yz refers to the texture information in the coordinate axis YZ plane. The texture information on the coordinate axis planes is added together to obtain the measurement texture map of the white-model virtual object. The grid in the measurement texture map does not deform as the size of the white-model virtual object changes, so it can represent the real coordinates and size information of the white-model virtual object in the white-model virtual scene.
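Continuing the previous sketch (still illustrative and hypothetical), the superposition step then reduces to summing the three planar samples for each surface point:

```python
def measure_texture(grid_map, uv_measure):
    """Tex_Measure = Tex_xy + Tex_xz + Tex_yz for one surface point.

    The embodiment describes a plain superposition of the three planar samples,
    so no normal-based blend weights are applied in this sketch."""
    # Uses planar_measure_textures from the preceding sketch.
    samples = planar_measure_textures(grid_map, uv_measure)
    return samples["xy"] + samples["xz"] + samples["yz"]

# Hypothetical 2x2 single-channel grid map: grid lines are 1.0, cell interiors 0.0.
grid_map = [[1.0, 0.0],
            [0.0, 0.0]]
print(measure_texture(grid_map, (2.0, 1.0, 3.0)))
```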
In some embodiments, the method may further comprise the steps of:
and measuring the position relation of the white-mode virtual object in the white-mode virtual scene according to the measured texture map.
Each unit grid in the measurement texture map corresponds to a known size, so the size and position information of white-model virtual objects in the white-model virtual scene can be measured from the measurement texture map on the surfaces of the white-model virtual objects.
By adding a measurement texture map to each white-model virtual object in the white-model virtual scene, the positional relationships among the white-model virtual objects in the scene are represented intuitively.
In some embodiments, the measurement grid parameters of the white-model scene may be adjusted after the measurement texture map has been applied to the white-model virtual object, so that different measurement grids can be set according to different requirements.
For example, referring to fig. 10, fig. 10 is an application scenario schematic diagram of another virtual scenario processing method according to an embodiment of the present application. The renderer operation interface 40 shown in fig. 10 includes a plurality of parameter adjustment items, which may include a parameter adjustment item for the measurement grid size, a parameter adjustment item for the measurement grid pattern, a parameter adjustment item for the color, and the like.
The parameter adjustment item for the measurement grid size may be provided with a size adjustment slider, through which the measurement grid size can be adjusted; the adjusted size value is displayed in the parameter adjustment item, for example "1", where the unit may be meters.
A selection button can be provided in the parameter adjustment item for the measurement grid pattern, and this selection button can be used to select the grid pattern of the measurement grid.
A selection button may likewise be provided in the parameter adjustment item for the color, which can be used to select the grid color of the measurement grid, and so on.
The measurement grid parameters of the white-model scene are adjusted in the renderer operation interface shown in fig. 10. The grid style is adjusted by modifying the "Measure Tex" map, the grid size is modified by adjusting the "Measure Size" parameter (in meters), and the white-model material color is adjusted through the "Color" parameter.
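As an illustration of how these adjustable items could be grouped together in code (a hedged sketch only; the field names mirror the "Measure Tex", "Measure Size", and "Color" items described above, while the structure and default values are assumptions, not the application's actual material interface):

```python
from dataclasses import dataclass

@dataclass
class MeasureGridParams:
    measure_tex: str = "measure_grid.png"   # grid pattern map ("Measure Tex"); hypothetical file name
    measure_size: float = 1.0               # length of one unit grid, in meters ("Measure Size")
    color: tuple = (1.0, 1.0, 1.0)          # white-model material color ("Color"), RGB in [0, 1]

# Example: a 0.5-meter measurement grid with a light grey white-model material.
params = MeasureGridParams(measure_size=0.5, color=(0.8, 0.8, 0.8))
```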
The embodiment of the present application discloses a virtual scene processing method, which includes: adding a preset grid map to a white-model virtual object in a white-model virtual scene; acquiring first position information of the model vertices of the white-model virtual object in an object coordinate system, and acquiring size information of the white-model virtual object in a world coordinate system; converting the white-model virtual object from the object coordinate system to the world coordinate system based on the first position information, the size information, and the unit grid size of the grid map; performing texture sampling on the white-model virtual object in the world coordinate system on each coordinate axis plane of the world coordinate system, respectively, to obtain a plurality of pieces of measurement texture information; and determining the measurement texture map corresponding to the white-model virtual object based on the plurality of pieces of measurement texture information. In this way, the processing efficiency of white-model virtual objects in a white-model virtual scene can be improved.
In order to facilitate better implementation of the virtual scene processing method provided by the embodiment of the present application, the embodiment of the present application also provides a virtual scene processing apparatus based on the virtual scene processing method. The terms have the same meanings as in the virtual scene processing method, and for specific implementation details reference may be made to the description in the method embodiment.
Referring to fig. 11, fig. 11 is a block diagram of a virtual scene processing apparatus according to an embodiment of the present application, where the apparatus includes:
An adding unit 301, configured to add a preset mesh map to a white-mode virtual object in a white-mode virtual scene;
an acquisition unit 302, configured to obtain first position information of the model vertices of the white-model virtual object in an object coordinate system, and to obtain size information of the white-model virtual object in a world coordinate system;
A conversion unit 303 for converting the white-mode virtual object from the object coordinate system to a world coordinate system based on the first position information, the size information, and a unit mesh size of the mesh map;
The sampling unit 304 is configured to sample textures of the white-mode virtual object in the world coordinate system on coordinate axis planes of the world coordinate system, so as to obtain a plurality of measurement texture information;
A determining unit 305, configured to determine a measured texture map corresponding to the white-mode virtual object based on the plurality of measured texture information.
In some embodiments, the conversion unit 303 may include:
A first obtaining subunit, configured to obtain a size configured for a unit grid in the grid map, so as to obtain the unit grid size;
and the first calculating subunit is used for calculating texture sampling coordinates of the model vertex of the white-model virtual object under the world coordinate system based on the first position information, the size information and the unit grid size.
In some embodiments, the first computing subunit may be specifically configured to:
Calculating the product of the first coordinate and the size value to obtain a second coordinate of the model vertex converted to the world coordinate system;
And calculating the ratio of the second coordinate to the unit grid size to obtain the texture sampling coordinate.
In some embodiments, sampling unit 304 may include:
a first determining subunit, configured to determine a plurality of coordinate axis planes according to coordinate axes of the world coordinate system;
The splitting subunit is used for splitting the texture sampling coordinate into a plurality of two-dimensional sampling coordinates according to the coordinate axis planes;
And the sampling subunit is used for sampling textures of the white-mode virtual object in the world coordinate system based on each two-dimensional sampling coordinate to obtain measurement texture information corresponding to each coordinate axis plane.
In some embodiments, the sampling subunit may be specifically configured to:
and sampling the grid map of the white-mode virtual object based on each two-dimensional sampling coordinate to obtain the measured texture information of the white-mode virtual object in each coordinate axis plane.
In some embodiments, the determining unit 305 may include:
and the superposition subunit is used for superposing the plurality of measurement texture information to obtain the measurement texture map.
In some embodiments, the acquisition unit 302 may include:
The conversion subunit is used for converting the first position information through a model transformation matrix to obtain second position information of the white-model virtual object under the world coordinate system;
And the second calculating subunit is used for calculating the size of the white mould virtual object under the world coordinate system based on the second position information to obtain the size information.
In some embodiments, the adding unit 301 may include:
A creating subunit, configured to create the white-mode virtual object in the white-mode virtual scene;
and the second acquisition subunit is used for acquiring the material added with the grid map and endowing the white-mold virtual object with the material.
In some embodiments, the apparatus may further comprise:
And the measuring unit is used for measuring the position relation of the white-mold virtual object in the white-mold virtual scene according to the measurement texture map.
The embodiment of the present application discloses a virtual scene processing apparatus. The adding unit 301 adds a preset grid map to a white-model virtual object in a white-model virtual scene; the acquisition unit 302 obtains first position information of the model vertices of the white-model virtual object in an object coordinate system and obtains size information of the white-model virtual object in a world coordinate system; the conversion unit 303 converts the white-model virtual object from the object coordinate system to the world coordinate system based on the first position information, the size information, and the unit grid size of the grid map; the sampling unit 304 performs texture sampling on the white-model virtual object in the world coordinate system on each coordinate axis plane of the world coordinate system, respectively, to obtain a plurality of pieces of measurement texture information; and the determining unit 305 determines a measurement texture map corresponding to the white-model virtual object based on the plurality of pieces of measurement texture information. In this way, the processing efficiency of white-model virtual objects in a white-model virtual scene can be improved.
Correspondingly, the embodiment of the present application also provides a computer device, which may be a terminal. Fig. 12 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 12, the computer device 500 includes a processor 501 having one or more processing cores, a memory 502 having one or more computer readable storage media, and a computer program stored in the memory 502 and executable on the processor. The processor 501 is electrically connected to the memory 502. Those skilled in the art will appreciate that the computer device structure shown in the figure does not limit the computer device, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
The processor 501 is a control center of the computer device 500, connects various parts of the entire computer device 500 using various interfaces and lines, and performs various functions of the computer device 500 and processes data by running or loading software programs and/or modules stored in the memory 502, and calling data stored in the memory 502, thereby performing overall monitoring of the computer device 500.
In the embodiment of the present application, the processor 501 in the computer device 500 loads the instructions corresponding to the processes of one or more application programs into the memory 502 according to the following steps, and the processor 501 executes the application programs stored in the memory 502, so as to implement various functions:
adding a preset grid map for a white-mode virtual object in a white-mode virtual scene;
Acquiring first position information of a model vertex of a white-model virtual object under an object coordinate system, and acquiring size information of the white-model virtual object under a world coordinate system;
Converting the white-mode virtual object from the object coordinate system to the world coordinate system based on the first position information, the size information and the unit grid size of the grid map;
Respectively performing texture sampling on a white-mode virtual object in a world coordinate system on a coordinate axis plane of the world coordinate system to obtain a plurality of measurement texture information;
and determining the measured texture map corresponding to the white-mode virtual object based on the measured texture information.
In some embodiments, converting the white-model virtual object from the object coordinate system to the world coordinate system based on the first position information, the size information, and the unit grid size of the grid map includes:
Obtaining the size configured for the unit grid in the grid map to obtain the unit grid size;
based on the first position information, the size information and the unit grid size, texture sampling coordinates of model vertexes of the white-model virtual object under a world coordinate system are calculated.
In some embodiments, the first position information includes a first coordinate of a model vertex of the white-mold virtual object in an object coordinate system, and the size information includes a size value of the white-mold virtual object corresponding to each coordinate axis direction of the world coordinate system;
Based on the first position information, the size information and the unit grid size, calculating texture sampling coordinates of model vertexes of the white-model virtual object under a world coordinate system, including:
calculating the product of the first coordinate and the size value to obtain a second coordinate of the model vertex converted to the world coordinate system;
And calculating the ratio of the second coordinate to the unit grid size to obtain texture sampling coordinates.
In some embodiments, texture sampling is performed on a white-mode virtual object in a world coordinate system on a coordinate axis plane of the world coordinate system to obtain a plurality of measured texture information, including:
determining a plurality of coordinate axis planes according to coordinate axes of a world coordinate system;
dividing the texture sampling coordinate into a plurality of two-dimensional sampling coordinates according to a plurality of coordinate axis planes;
and performing texture sampling on the white-mode virtual object under the world coordinate system based on each two-dimensional sampling coordinate to obtain measurement texture information corresponding to each coordinate axis plane.
In some embodiments, texture sampling is performed on a white-mode virtual object in a world coordinate system based on each two-dimensional sampling coordinate to obtain measured texture information corresponding to each coordinate axis plane, including:
and sampling the grid map of the white-mode virtual object based on each two-dimensional sampling coordinate to obtain the measured texture information of the white-mode virtual object under each coordinate axis plane.
In some embodiments, determining a measured texture map corresponding to a white-mode virtual object based on a plurality of measured texture information comprises:
and superposing the plurality of measured texture information to obtain a measured texture map.
In some embodiments, obtaining size information of the white-mode virtual object in the world coordinate system includes:
Converting the first position information through a model transformation matrix to obtain second position information of the white-model virtual object under a world coordinate system;
And calculating the size of the white model virtual object under the world coordinate system based on the second position information to obtain size information.
In some embodiments, adding a preset mesh map for a white-mode virtual object in a white-mode virtual scene includes:
creating a white-mold virtual object in a white-mold virtual scene;
and obtaining the material added with the grid map and giving the material to the white-mode virtual object.
In some embodiments, the method further comprises:
and measuring the position relation of the white-mode virtual object in the white-mode virtual scene according to the measured texture map.
According to the method, a preset grid map is added to the white-model virtual object in the white-model virtual scene; the coordinates of the model vertices of the white-model virtual object in the object coordinate system and the size of the white-model virtual object in the world coordinate system are then acquired; the white-model virtual object is converted from the object coordinate system to the world coordinate system according to the coordinates, the size, and the unit grid size of the grid map; finally, texture sampling is performed on the white-model virtual object in the world coordinate system on each coordinate axis plane of the world coordinate system to obtain a plurality of pieces of measurement texture information, and the measurement texture map corresponding to the white-model virtual object is obtained by superposing the plurality of pieces of measurement texture information, so that the positional relationship of the white-model virtual object in the white-model virtual scene can be measured rapidly. In this way, the processing efficiency of white-model virtual objects in a white-model virtual scene can be improved.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Optionally, as shown in fig. 12, the computer device 500 further includes: a touch display screen 503, a radio frequency circuit 504, an audio circuit 505, an input unit 506, and a power supply 507. The processor 501 is electrically connected to the touch display 503, the radio frequency circuit 504, the audio circuit 505, the input unit 506, and the power supply 507, respectively. Those skilled in the art will appreciate that the computer device structure shown in FIG. 12 is not limiting of the computer device and may include more or fewer components than shown, or may be combined with certain components, or a different arrangement of components.
The touch display screen 503 may be used to display a graphical user interface and receive operation instructions generated by a user acting on the graphical user interface. The touch display screen 503 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user, as well as various graphical user interfaces of the computer device, which may be composed of graphics, guidance information, icons, video, and any combination thereof. Optionally, the display panel may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. The touch panel may be used to collect touch operations by the user on or near it (such as operations performed on or near the touch panel with a finger, a stylus, or any other suitable object or accessory) and to generate corresponding operation instructions that trigger the corresponding programs. Optionally, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 501, and it can also receive commands from the processor 501 and execute them. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, the operation is passed to the processor 501 to determine the type of touch event, and the processor 501 then provides a corresponding visual output on the display panel based on the type of touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 503 to realize the input and output functions. In some embodiments, however, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions respectively. That is, the touch display screen 503 may also implement an input function as part of the input unit 506.
The radio frequency circuitry 504 may be used to transceive radio frequency signals to establish wireless communications with a network device or other computer device via wireless communications.
The audio circuitry 505 may be used to provide an audio interface between a user and the computer device through a speaker, a microphone, and so on. The audio circuit 505 may convert received audio data into an electrical signal and transmit it to the speaker, which converts it into a sound signal for output; conversely, the microphone converts collected sound signals into electrical signals, which are received by the audio circuit 505 and converted into audio data. The audio data are then processed by the processor 501 and transmitted, for example, to another computer device via the radio frequency circuit 504, or output to the memory 502 for further processing. The audio circuit 505 may also include an earphone jack to provide communication between a peripheral earphone and the computer device.
The input unit 506 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 507 is used to power the various components of the computer device 500. Alternatively, the power supply 507 may be logically connected to the processor 501 through a power management system, so as to implement functions of managing charging, discharging, and power consumption management through the power management system. The power supply 507 may also include one or more of any components, such as a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown in fig. 12, the computer device 500 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described herein.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
As can be seen from the above, the computer device provided in this embodiment may add a preset mesh map to a white-mode virtual object in a white-mode virtual scene; acquiring first position information of a model vertex of a white-model virtual object under an object coordinate system, and acquiring size information of the white-model virtual object under a world coordinate system; converting the white-mode virtual object from the object coordinate system to the world coordinate system based on the first position information, the size information and the unit grid size of the grid map; respectively performing texture sampling on a white-mode virtual object in a world coordinate system on a coordinate axis plane of the world coordinate system to obtain a plurality of measurement texture information; and determining the measured texture map corresponding to the white-mode virtual object based on the measured texture information.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer readable storage medium storing a plurality of computer programs capable of being loaded by a processor to execute steps in any one of the virtual scene processing methods provided by the embodiment of the present application. For example, the computer program may perform the steps of:
adding a preset grid map for a white-mold virtual object in a white-mold virtual scene;
acquiring first position information of a model vertex of the white-mold virtual object under an object coordinate system, and acquiring size information of the white-mold virtual object under a world coordinate system;
converting the white-mold virtual object from the object coordinate system to the world coordinate system based on the first position information, the size information and the unit grid size of the grid map;
respectively performing texture sampling on the white-mold virtual object under the world coordinate system on the coordinate axis planes of the world coordinate system to obtain a plurality of pieces of measured texture information;
and determining the measured texture map corresponding to the white-mold virtual object based on the plurality of measured texture information.
In some embodiments, converting the white-mold virtual object from the object coordinate system to the world coordinate system based on the first position information, the size information and the unit grid size of the grid map includes:
obtaining the size configured for the unit grid in the grid map to obtain the unit grid size;
and calculating, based on the first position information, the size information and the unit grid size, texture sampling coordinates of the model vertices of the white-mold virtual object under the world coordinate system.
In some embodiments, the first position information includes a first coordinate of a model vertex of the white-mold virtual object in the object coordinate system, and the size information includes a size value of the white-mold virtual object corresponding to each coordinate axis direction of the world coordinate system;
calculating, based on the first position information, the size information and the unit grid size, the texture sampling coordinates of the model vertices of the white-mold virtual object under the world coordinate system includes:
calculating the product of the first coordinate and the size value to obtain a second coordinate of the model vertex converted to the world coordinate system;
and calculating the ratio of the second coordinate to the unit grid size to obtain the texture sampling coordinates (a numerical sketch of this product-then-ratio computation follows below).
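The following Python sketch illustrates that computation. The names (first_coord, size_values, unit_grid_size) are illustrative rather than taken from the application, and the normalized object-space vertex coordinates are an assumption made so that multiplying by the per-axis size yields world-space units.

```python
def texture_sampling_coord(first_coord, size_values, unit_grid_size):
    """first_coord: (x, y, z) vertex coordinate in the object coordinate system.
    size_values: (sx, sy, sz) size of the white-mold object along each world axis.
    unit_grid_size: edge length of one grid cell of the grid map, in world units."""
    # Product of the first coordinate and the size value: the second coordinate,
    # i.e. the vertex expressed in world-space units.
    second_coord = tuple(c * s for c, s in zip(first_coord, size_values))
    # Ratio of the second coordinate to the unit grid size: one grid cell of the
    # map then corresponds to exactly one texture repetition.
    return tuple(c / unit_grid_size for c in second_coord)

# Example: a 4 x 2 x 3 block with a 1-unit grid cell.
print(texture_sampling_coord((0.5, 1.0, 0.25), (4.0, 2.0, 3.0), 1.0))
# -> (2.0, 2.0, 0.75)
```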
In some embodiments, performing texture sampling on the white-mold virtual object under the world coordinate system on the coordinate axis planes of the world coordinate system to obtain a plurality of pieces of measured texture information includes:
determining a plurality of coordinate axis planes according to the coordinate axes of the world coordinate system;
dividing the texture sampling coordinates into a plurality of two-dimensional sampling coordinates according to the plurality of coordinate axis planes (see the splitting sketch below);
and performing texture sampling on the white-mold virtual object under the world coordinate system based on each two-dimensional sampling coordinate to obtain the measured texture information corresponding to each coordinate axis plane.
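A minimal sketch of the splitting step, assuming the three coordinate axis planes are the XY, XZ and YZ planes (a common choice; the application does not fix their order or naming):

```python
def split_into_plane_coords(tex_coord):
    """tex_coord: (x, y, z) texture sampling coordinate of a model vertex.
    Returns one two-dimensional sampling coordinate per coordinate axis plane."""
    x, y, z = tex_coord
    return {
        "xy": (x, y),  # plane spanned by the X and Y axes
        "xz": (x, z),  # plane spanned by the X and Z axes
        "yz": (y, z),  # plane spanned by the Y and Z axes
    }

print(split_into_plane_coords((2.0, 2.0, 0.75)))
# -> {'xy': (2.0, 2.0), 'xz': (2.0, 0.75), 'yz': (2.0, 0.75)}
```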
In some embodiments, performing texture sampling on the white-mold virtual object under the world coordinate system based on each two-dimensional sampling coordinate to obtain the measured texture information corresponding to each coordinate axis plane includes:
sampling the grid map of the white-mold virtual object based on each two-dimensional sampling coordinate to obtain the measured texture information of the white-mold virtual object under each coordinate axis plane (a per-plane sampling sketch follows below).
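Since the actual grid map is a preset texture asset, the procedural line pattern and wrap-around addressing in the sketch below are assumptions made only to keep the example self-contained; it stands in for a real texture fetch on one coordinate axis plane.

```python
def sample_grid_map(uv, line_width=0.05):
    """uv: two-dimensional sampling coordinate on one coordinate axis plane.
    Returns 1.0 on a grid line and 0.0 inside a cell (stand-in for a texture fetch)."""
    u, v = (c % 1.0 for c in uv)  # wrap addressing: one repetition per grid cell
    on_line = min(u, 1.0 - u) < line_width or min(v, 1.0 - v) < line_width
    return 1.0 if on_line else 0.0

# The vertex from the earlier example lies on a grid line in the XY plane.
print(sample_grid_map((2.0, 2.0)))  # 1.0
print(sample_grid_map((2.5, 2.5)))  # 0.0
```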
In some embodiments, determining the measured texture map corresponding to the white-mold virtual object based on the plurality of measured texture information includes:
superposing the plurality of measured texture information to obtain the measured texture map (a blending sketch follows below).
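The application only states that the pieces of measured texture information are superposed; the equal-weight blend below is one straightforward reading of that step (weighting the planes by the surface normal, as in common tri-planar mapping, would be another), so treat the weights as an assumption.

```python
def superpose(plane_samples, weights=None):
    """plane_samples: per-plane texture samples, each an (r, g, b) tuple.
    weights: optional per-plane blend weights; equal weights if omitted.
    Returns the blended texel of the measured texture map."""
    if weights is None:
        weights = [1.0 / len(plane_samples)] * len(plane_samples)
    return tuple(
        sum(w * sample[i] for w, sample in zip(weights, plane_samples))
        for i in range(3)
    )

# Blending three per-plane samples with equal weights.
print(superpose([(1.0, 1.0, 1.0), (0.0, 0.0, 0.0), (0.5, 0.5, 0.5)]))
# -> (0.5, 0.5, 0.5)
```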
In some embodiments, obtaining the size information of the white-mold virtual object under the world coordinate system includes:
converting the first position information through a model transformation matrix to obtain second position information of the white-mold virtual object under the world coordinate system;
and calculating the size of the white-mold virtual object under the world coordinate system based on the second position information to obtain the size information (a sketch of one possible computation follows below).
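How the size is derived from the second position information is not spelled out; the sketch below assumes it is taken as the extent of the transformed model vertices along each world axis, i.e. an axis-aligned bounding box, which is one plausible reading.

```python
def world_space_size(vertices, model_matrix):
    """vertices: (x, y, z) model vertices in the object coordinate system.
    model_matrix: 4x4 model transformation matrix as nested lists, row-major.
    Returns the object's extent along each world coordinate axis."""
    def transform(v):
        p = (v[0], v[1], v[2], 1.0)
        # Second position information: the vertex in the world coordinate system.
        return tuple(sum(model_matrix[r][c] * p[c] for c in range(4)) for r in range(3))

    world = [transform(v) for v in vertices]
    mins = [min(p[i] for p in world) for i in range(3)]
    maxs = [max(p[i] for p in world) for i in range(3)]
    return tuple(mx - mn for mn, mx in zip(mins, maxs))

# A unit cube scaled by (4, 2, 3) has a world-space size of (4.0, 2.0, 3.0).
cube = [(x, y, z) for x in (0.0, 1.0) for y in (0.0, 1.0) for z in (0.0, 1.0)]
scale = [[4.0, 0, 0, 0], [0, 2.0, 0, 0], [0, 0, 3.0, 0], [0, 0, 0, 1.0]]
print(world_space_size(cube, scale))
```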
In some embodiments, adding a preset grid map for the white-mold virtual object in the white-mold virtual scene includes:
creating the white-mold virtual object in the white-mold virtual scene;
and obtaining a material to which the grid map has been added, and assigning the material to the white-mold virtual object.
In some embodiments, the method further includes:
measuring the positional relationship of the white-mold virtual object in the white-mold virtual scene according to the measured texture map (a cell-counting sketch follows below).
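One way to read that measurement step, sketched below under the assumption that the grid lines of the measured texture map act as measuring marks: the separation between two white-mold objects, divided by the unit grid size, is the number of grid cells a scene editor can count between them.

```python
def cells_between(pos_a, pos_b, unit_grid_size):
    """pos_a, pos_b: (x, y, z) world positions of two white-mold virtual objects.
    Returns their separation along each axis expressed in grid cells."""
    return tuple(abs(a - b) / unit_grid_size for a, b in zip(pos_a, pos_b))

# Two objects 6 world units apart along X with a 1-unit grid: 6 cells to count.
print(cells_between((0.0, 0.0, 0.0), (6.0, 0.0, 0.0), 1.0))
# -> (6.0, 0.0, 0.0)
```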
According to the method, a preset grid map is first added for the white-mold virtual object in the white-mold virtual scene. The coordinates of the model vertices of the white-mold virtual object under the object coordinate system and the size of the white-mold virtual object under the world coordinate system are then obtained, and the white-mold virtual object is converted from the object coordinate system to the world coordinate system according to the coordinates, the size and the unit grid size of the grid map. Finally, texture sampling is performed on the white-mold virtual object under the world coordinate system on each coordinate axis plane of the world coordinate system to obtain a plurality of pieces of measured texture information, and the measured texture map corresponding to the white-mold virtual object is obtained by superposing the plurality of measured texture information, so that the positional relationship of the white-mold virtual object in the white-mold virtual scene can be measured quickly. In this way, the processing efficiency of the white-mold virtual object in the white-mold virtual scene can be improved.
For the specific implementation of each of the above operations, reference may be made to the previous embodiments, and details are not repeated herein.
The computer-readable storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
Because the computer program stored in the computer-readable storage medium can execute the steps in any one of the virtual scene processing methods provided by the embodiments of the present application, it can achieve the beneficial effects of any such method, which are detailed in the previous embodiments and are not repeated herein.
The virtual scene processing method, apparatus, computer-readable storage medium and computer device provided by the embodiments of the present application are described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and its core ideas. Meanwhile, those skilled in the art may make changes to the specific implementations and application scope according to the ideas of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (12)

1. A method for processing a virtual scene, the method comprising:
adding a preset grid map for a white-mode virtual object in a white-mode virtual scene;
acquiring first position information of a model vertex of the white-mold virtual object under an object coordinate system, and acquiring size information of the white-mold virtual object under a world coordinate system;
converting the white-mode virtual object from the object coordinate system to the world coordinate system based on the first position information, the size information, and a unit grid size of the grid map;
respectively performing texture sampling on the white-mode virtual object in the world coordinate system on coordinate axis planes of the world coordinate system to obtain a plurality of pieces of measured texture information;
and determining a measured texture map corresponding to the white-mode virtual object based on the plurality of pieces of measured texture information.
2. The method of claim 1, wherein the converting the white-mode virtual object from the object coordinate system to the world coordinate system based on the first position information, the size information, and the unit grid size of the grid map comprises:
obtaining the size configured for the unit grid in the grid map to obtain the unit grid size;
and calculating texture sampling coordinates of the model vertex of the white-model virtual object under the world coordinate system based on the first position information, the size information and the unit grid size.
3. The method according to claim 2, wherein the first position information includes a first coordinate of a model vertex of the white-mold virtual object in the object coordinate system, and the size information includes a size value of the white-mold virtual object corresponding to each coordinate axis direction of the world coordinate system;
the calculating, based on the first position information, the size information and the unit grid size, texture sampling coordinates of the model vertex of the white-model virtual object in the world coordinate system includes:
calculating the product of the first coordinate and the size value to obtain a second coordinate of the model vertex converted to the world coordinate system;
and calculating the ratio of the second coordinate to the unit grid size to obtain the texture sampling coordinates.
4. The method according to claim 2, wherein the performing texture sampling on the white-mode virtual object in the world coordinate system on coordinate axis planes of the world coordinate system to obtain a plurality of measured texture information includes:
determining a plurality of coordinate axis planes according to coordinate axes of the world coordinate system;
splitting the texture sampling coordinates into a plurality of two-dimensional sampling coordinates according to the coordinate axis planes;
and performing texture sampling on the white-mode virtual object under the world coordinate system based on each two-dimensional sampling coordinate to obtain measurement texture information corresponding to each coordinate axis plane.
5. The method according to claim 4, wherein the performing texture sampling on the white-mode virtual object in the world coordinate system based on each two-dimensional sampling coordinate to obtain the measured texture information corresponding to each coordinate axis plane includes:
and sampling the grid map of the white-mode virtual object based on each two-dimensional sampling coordinate to obtain the measured texture information of the white-mode virtual object in each coordinate axis plane.
6. The method of claim 1, wherein determining a measured texture map corresponding to the white-mode virtual object based on the plurality of measured texture information comprises:
and superposing the plurality of measured texture information to obtain the measured texture map.
7. The method of claim 1, wherein the obtaining size information of the white-mold virtual object in the world coordinate system comprises:
converting the first position information through a model transformation matrix to obtain second position information of the white-mold virtual object under the world coordinate system;
and calculating the size of the white-mold virtual object under the world coordinate system based on the second position information to obtain the size information.
8. The method of claim 1, wherein adding a preset mesh map for a white-mode virtual object in a white-mode virtual scene comprises:
creating the white-mold virtual object in the white-mold virtual scene;
and obtaining a material to which the grid map is added, and assigning the material to the white-mold virtual object.
9. The method according to any one of claims 1-8, further comprising:
and measuring the position relation of the white-mold virtual object in the white-mold virtual scene according to the measured texture map.
10. A processing apparatus for a virtual scene, the apparatus comprising:
an adding unit, used for adding a preset grid map for a white-mode virtual object in a white-mode virtual scene;
an acquisition unit, used for acquiring first position information of a model vertex of the white-mode virtual object under an object coordinate system and acquiring size information of the white-mode virtual object under a world coordinate system;
a conversion unit, used for converting the white-mode virtual object from the object coordinate system to the world coordinate system based on the first position information, the size information, and a unit grid size of the grid map;
a sampling unit, used for respectively performing texture sampling on the white-mode virtual object in the world coordinate system on the coordinate axis planes of the world coordinate system to obtain a plurality of pieces of measured texture information;
and a determining unit, used for determining a measured texture map corresponding to the white-mode virtual object based on the plurality of pieces of measured texture information.
11. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the method of processing a virtual scene according to any one of claims 1 to 9.
12. A computer readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the method of processing a virtual scene according to any of claims 1 to 9.
CN202410064597.XA 2024-01-16 2024-01-16 Virtual scene processing method and device, computer equipment and storage medium Pending CN117893668A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410064597.XA CN117893668A (en) 2024-01-16 2024-01-16 Virtual scene processing method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410064597.XA CN117893668A (en) 2024-01-16 2024-01-16 Virtual scene processing method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117893668A true CN117893668A (en) 2024-04-16

Family

ID=90645393

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410064597.XA Pending CN117893668A (en) 2024-01-16 2024-01-16 Virtual scene processing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117893668A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination