CN115661318A - Method, device, storage medium and electronic device for rendering model


Info

Publication number: CN115661318A
Application number: CN202211412554.3A
Authority: CN (China)
Other languages: Chinese (zh)
Inventor: 林琳 (Lin Lin)
Current Assignee: Netease Hangzhou Network Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Netease Hangzhou Network Co Ltd
Prior art keywords: model, map, target, rendered, rendering
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy; the priority date is likewise an assumption)
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202211412554.3A
Publication of CN115661318A


Abstract

The present disclosure provides a method, a device, a storage medium and an electronic device for rendering a model. The method includes the following steps: acquiring an initial material map and an initial parallax map; performing material mapping processing on the initial material map according to the model characteristics of the model to be rendered to obtain a target material map, where the model to be rendered is a virtual model made of a non-transparent material; performing parallax adjustment on the initial parallax map according to the scene characteristics corresponding to the model to be rendered to obtain a target parallax map; and rendering the model to be rendered based on the target material map and the target parallax map to obtain a target rendering model. The method and the device solve the technical problem in the prior art that system performance consumption is high when a virtual model is rendered.

Description

Method, device, storage medium and electronic device for rendering model
Technical Field
The present disclosure relates to the field of computers, and in particular, to a method, an apparatus, a storage medium, and an electronic apparatus for rendering a model.
Background
Conventionally, when a virtual model of a transparent material (for example, the transparent glass bottle shown in fig. 1) is rendered, a translucent material is generally used as the base material. In the process of rendering the virtual model, both the virtual model itself and the background behind it must be rendered, which increases the complexity of the materials and the rendering workload of the virtual model.
In addition, in the process of rendering the virtual model, a refraction function is used to realize the refraction effect of the virtual model, so as to improve its aesthetic appeal. In the related art, implementing the refraction effect of the virtual model consumes a large amount of system memory, and the system performance of a mobile terminal (e.g., a mobile phone) is far inferior to that of a non-mobile terminal (e.g., a computer), so that the refraction effect of the virtual model cannot be implemented on the mobile terminal.
Moreover, in practical applications, the fine details of the virtual model may need to be refined by artists, and this operation further increases the performance consumption of the virtual model rendering system.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
At least some embodiments of the present disclosure provide a method, an apparatus, a storage medium, and an electronic apparatus for rendering a model, so as to at least solve the technical problem in the prior art that system performance consumption is large when a virtual model is rendered.
According to an embodiment of the present disclosure, there is provided a method of rendering a model, including: acquiring an initial material map and an initial parallax map; performing material mapping processing on the initial material map according to the model characteristics of the model to be rendered to obtain a target material map, where the model to be rendered is a virtual model made of a non-transparent material; performing parallax adjustment on the initial parallax map according to the scene characteristics corresponding to the model to be rendered to obtain a target parallax map; and rendering the model to be rendered based on the target material map and the target parallax map to obtain a target rendering model.
According to an embodiment of the present disclosure, there is also provided an apparatus for rendering a model, including: the map acquisition module is used for acquiring an initial material map and an initial parallax map; the material mapping module is used for carrying out material mapping processing on the initial material map according to the model characteristics of the model to be rendered to obtain a target material map, wherein the model to be rendered is a virtual model made of non-transparent materials; the parallax adjustment module is used for performing parallax adjustment on the initial parallax map according to the scene characteristics corresponding to the model to be rendered to obtain a target parallax map; and the model rendering module is used for rendering the model to be rendered based on the target material map and the target parallax map to obtain a target rendering model.
There is further provided, according to an embodiment of the present disclosure, a computer-readable storage medium having a computer program stored therein, where the computer program is configured to execute the above method of rendering a model when running.
There is further provided, according to an embodiment of the present disclosure, an electronic apparatus including a memory and a processor, where the memory stores a computer program, and the processor is configured to execute the computer program to perform the method for rendering a model described above.
In at least some embodiments of the present disclosure, a combination of material mapping and parallax is adopted: after an initial material map and an initial parallax map are obtained, material mapping processing is performed on the initial material map according to the model characteristics of the model to be rendered to obtain a target material map; meanwhile, parallax adjustment is performed on the initial parallax map according to the scene characteristics corresponding to the model to be rendered to obtain a target parallax map; and finally, the model to be rendered is rendered based on the target material map and the target parallax map to obtain a target rendering model, where the model to be rendered is a non-transparent virtual model.
In this process, the model to be rendered is rendered by combining the material map and the parallax map, so that the refraction effect of the model to be rendered can be realized while the memory consumption of the rendering system is reduced, enabling the mobile terminal to render the refraction effect of the virtual model. In addition, in the present disclosure, the model to be rendered is a virtual model made of a non-transparent material, which reduces the rendering consumption compared with a virtual model made of a transparent material and further reduces the performance consumption of the system.
Based on the above analysis, the scheme provided by the present disclosure achieves the purpose of rendering the virtual model made of non-transparent material, thereby achieving the technical effect of reducing the performance consumption of the system, and further solving the technical problem of large system performance consumption in rendering the virtual model in the prior art.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this disclosure, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure and not to limit the disclosure. In the drawings:
FIG. 1 is a schematic view of a related art clear glass bottle;
FIG. 2 is a block diagram of a hardware configuration of a mobile terminal of a method of rendering a model according to an embodiment of the present disclosure;
FIG. 3 is a flow diagram of a method of rendering a model according to one embodiment of the present disclosure;
FIG. 4 is a schematic view of a cola bottle according to one embodiment of the present disclosure;
FIG. 5 is a schematic diagram of target rendering models for different lens angles according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a target rendering effect according to an embodiment of the present disclosure;
FIG. 7 is a schematic view of a glass bottle model according to one embodiment of the present disclosure;
FIG. 8 is a schematic view of zoning for a model to be rendered according to an embodiment of the present disclosure;
FIG. 9 is a schematic diagram illustrating generation of a target texture map according to one embodiment of the present disclosure;
FIG. 10 is a schematic diagram of a texture map after sampling according to one embodiment of the present disclosure;
FIG. 11 is a schematic diagram of a final texture map according to one embodiment of the present disclosure;
FIG. 12 is a schematic view of a circular map according to one embodiment of the present disclosure;
FIG. 13 is a schematic view of an ellipse map according to one embodiment of the present disclosure;
FIG. 14 is a schematic diagram of target parallax map generation according to an embodiment of the present disclosure;
FIG. 15 is a schematic diagram of generation nodes of a parallax map according to one embodiment of the present disclosure;
FIG. 16 is a block diagram of an apparatus for rendering a model according to an embodiment of the present disclosure;
FIG. 17 is a schematic view of an electronic device according to an alternative embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those skilled in the art, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only some embodiments of the present disclosure, not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present disclosure without making creative efforts shall fall within the protection scope of the present disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in other sequences than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In a possible implementation manner, with respect to the commonly adopted scheme of rendering a virtual model made of a transparent material in the field of computer graphics, the inventor found, after practice and careful research, that such rendering still suffers from the technical problem of high system performance consumption.
The above method embodiments related to the present disclosure may be executed in a mobile terminal, a computer terminal or a similar computing device. Taking the mobile terminal as an example, the mobile terminal may be a smart phone, a tablet computer, a palmtop computer, a mobile Internet device, a PAD, a game console, or other terminal devices. Fig. 2 is a block diagram of a hardware structure of a mobile terminal of a method of rendering a model according to an embodiment of the present disclosure. As shown in fig. 2, the mobile terminal may include one or more processors 202 (only one is shown in fig. 2; the processor 202 may include, but is not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Digital Signal Processing (DSP) chip, a Microcontroller Unit (MCU), a Field-Programmable Gate Array (FPGA), a Neural-network Processing Unit (NPU), a Tensor Processing Unit (TPU), an Artificial Intelligence (AI) processor, etc.) and a memory 204 for storing data; in one embodiment of the present disclosure, the mobile terminal may further include an input-output device 208 and a display device 210.
In some optional embodiments, taking a game scene as an example, the device may further provide a human-machine interaction interface with a touch-sensitive surface, which senses finger contact and/or gestures to interact with a Graphical User Interface (GUI). The human-machine interaction functions may include the following interactions: creating web pages, drawing, word processing, making electronic documents, games, video conferencing, instant messaging, emailing, call interfacing, playing digital video, playing digital music, and/or web browsing; the executable instructions for performing these human-machine interaction functions are configured/stored in one or more processor-executable computer program products or readable storage media.
It will be understood by those skilled in the art that the structure shown in fig. 2 is only an illustration, and does not limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in FIG. 2, or have a different configuration than shown in FIG. 2.
In accordance with one embodiment of the present disclosure, there is provided an embodiment of a method of rendering a model. The steps illustrated in the flowchart of the figure may be performed in a computer system, such as one executing a set of computer-executable instructions; and although a logical order is illustrated in the flowchart, in some cases the steps shown or described may be performed in an order different from the one presented herein.
In a possible implementation manner, an embodiment of the present disclosure provides a method for rendering a model, which is implemented by a terminal device; the terminal device may be a local terminal device, and may also be a client device in a cloud interaction system. FIG. 3 is a flow chart of a method of rendering a model according to one embodiment of the present disclosure. As shown in FIG. 3, the method includes the following steps:
Step S302: an initial material map and an initial parallax map are obtained.
In step S302, the initial material map may be a Material Capture (Matcap) map, which may be a circular map. In practical applications, the terminal device projects the material map in the normal direction of the model to be rendered, thereby adding a material effect to the virtual model to be rendered.
In addition, in step S302, parallax denotes the difference in direction that results from observing the same target object from two points separated by a certain distance. The angle subtended at the target object by the two points is the parallax angle, and the line connecting the two points is the baseline. In practical applications, once the parallax angle and the length of the baseline are known, the distance between the target object and the observer can be calculated. In graphics, the sense of depth and the stereoscopic impression of a virtual model are generally increased by parallax mapping.
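For example, with a baseline of length b and a parallax angle θ (in radians), elementary trigonometry gives the distance D from the observer to the target object as

    D = (b / 2) / tan(θ / 2) ≈ b / θ,

where the approximation holds for small θ. This formula is given here only as background on the geometric meaning of parallax; the scheme of the present disclosure uses parallax mapping rather than an explicit distance computation.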
In an optional embodiment, different virtual models correspond to different material maps and parallax maps. After acquiring the model to be rendered, the terminal device may select the initial material map and the initial parallax map according to the model type of the model to be rendered (e.g., a transparent virtual model, a semi-transparent virtual model, a non-transparent virtual model, etc.) and the scene in which the model to be rendered is located.
In another optional embodiment, different virtual models correspond to the same material map and parallax map. When it is determined that a model needs to be rendered, the terminal device may directly obtain the initial material map and the initial parallax map from the map library; during rendering, the terminal device adaptively adjusts the initial material map and the initial parallax map according to the model type of the model to be rendered and the scene in which the model is located.
Step S304: material mapping processing is performed on the initial material map according to the model characteristics of the model to be rendered to obtain a target material map.
In step S304, the model to be rendered is a virtual model made of a non-transparent material; for example, the model to be rendered may be the cola bottle shown in fig. 4. It should be noted that, in the present disclosure, rendering a virtual model made of a non-transparent material avoids the high rendering-system consumption that arises in the related art when a virtual model made of a transparent material is rendered, since in that case the virtual model and the background must be rendered simultaneously.
In addition, it should be noted that, in mobile-terminal games, the Matcap map may be used to simulate effects from computer-terminal games, for example, virtual-character effects such as rim light, directional light, fill light, jade and paint materials. That is, in this embodiment, rendering the model to be rendered with the Matcap map allows the rendering effect of the virtual model to be realized on the mobile terminal.
Step S306: parallax adjustment is performed on the initial parallax map according to the scene characteristics corresponding to the model to be rendered to obtain a target parallax map.
In step S306, the scene characteristics corresponding to the model to be rendered may include, but are not limited to, the light source position, the illumination intensity, the shooting angle of the virtual camera, and the like, of the scene in which the model to be rendered is located. In the present disclosure, the initial parallax map is adjusted according to the scene characteristics of the practical application, so that the parallax map fits the actual application scene of the model to be rendered.
It should be noted that, when the model to be rendered is rendered, the target parallax map can be used to increase the sense of depth and the stereoscopic impression of the model to be rendered. In addition, in practical applications, adjusting the parallax map according to actual requirements can add an irregular parallax effect to the model to be rendered, so as to present the color-wobble effect caused by refraction when the glass shakes, which reduces the memory consumption of the system and improves the visual experience of the model to be rendered.
In addition, it should be noted that the target material map and the target parallax map may be generated in parallel; that is, the terminal device may process the initial material map and the initial parallax map at the same time to obtain the target material map and the target parallax map. In other words, in this embodiment, step S304 and step S306 are executed in parallel.
Step S308: the model to be rendered is rendered based on the target material map and the target parallax map to obtain a target rendering model.
In step S308, after obtaining the target material map and the target parallax map in steps S304 and S306, the terminal device may render the model to be rendered using the two maps, so as to obtain the target rendering model.
In an optional embodiment, the terminal device may superimpose the target material map and the target parallax map to obtain the target map. The terminal device renders the model to be rendered by using the target map, so as to obtain a target rendering model, such as the target rendering models with different lens angles shown in fig. 5. For example, in the schematic diagram of the rendering process of the target rendering model shown in fig. 6, the terminal device superimposes the Matcap effect corresponding to the target material map and the parallax effect corresponding to the target parallax map, so as to implement rendering of the model to be rendered, and thus the model to be rendered has the target rendering effect shown in fig. 6.
Based on the solutions defined in steps S302 to S308, in at least some embodiments of the present disclosure, a combination of material mapping and parallax is adopted: after an initial material map and an initial parallax map are obtained, material mapping processing is performed on the initial material map according to the model characteristics of the model to be rendered to obtain a target material map; meanwhile, parallax adjustment is performed on the initial parallax map according to the scene characteristics corresponding to the model to be rendered to obtain a target parallax map; and finally, the model to be rendered is rendered based on the target material map and the target parallax map to obtain a target rendering model, where the model to be rendered is a non-transparent virtual model.
It is easy to notice that, in the above process, the model to be rendered is rendered by combining the material map and the parallax map, so that the refraction effect of the model to be rendered can be realized while the memory consumption of the rendering system is reduced, enabling the mobile terminal to render the refraction effect of the virtual model. In addition, in the present disclosure, the model to be rendered is a virtual model made of a non-transparent material, which reduces the rendering consumption compared with a virtual model made of a transparent material and further reduces the performance consumption of the system.
Based on the above analysis, the scheme provided by the present disclosure achieves the purpose of rendering the virtual model made of non-transparent material, thereby achieving the technical effect of reducing the performance consumption of the system, and further solving the technical problem of large system performance consumption in rendering the virtual model in the prior art.
In an optional embodiment, before rendering the model to be rendered, the terminal device needs to obtain the model to be rendered without the Matcap effect and the parallax effect added (as shown in fig. 4, the model to be rendered is a glass cola bottle model); this model only has base colors and normals and is initially rendered using an RMA map, where the RMA map is a map formed by merging a Roughness map, a Metallic map and an Ambient Occlusion (AO) map. Then, the terminal device distinguishes the main body part from the non-main-body part of the model to be rendered; for example, in the schematic diagram of the glass bottle model shown in fig. 7, the main body part is the glass bottle except for the bottle cap and the label, and the non-main-body part includes the bottle cap, the label, the pull ring, the line connected to the pull ring, and the like. Further, the terminal device distinguishes the main body part from the non-main-body part via a Lerp node (interpolation node) in the game engine, as illustrated in the sketch after this paragraph. As shown in fig. 8, the Lerp node has three interfaces A, B and Alpha; by reading the value of the Alpha interface (e.g., a texture sampling graph in fig. 8), the terminal device determines, for each part of the model to be rendered, whether that part displays the picture (e.g., picture 1 in fig. 8) or effect corresponding to interface A or the picture or effect corresponding to interface B.
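As an illustration of the interpolation performed by the Lerp node, the following minimal sketch (Python with NumPy; the function name and signature are illustrative, not the engine's actual API) blends two per-pixel effects under an Alpha mask:

    import numpy as np

    def lerp(a, b, alpha):
        # Per-pixel blend: alpha = 0 shows input A, alpha = 1 shows input B;
        # alpha is the value read from the Alpha interface (e.g., a mask texture).
        return a * (1.0 - alpha) + b * alpha

    # Example: a mask of 0 on the main body part and 1 on the non-body part
    body_effect = np.array([0.2, 0.5, 0.8])
    non_body_effect = np.array([0.9, 0.1, 0.1])
    print(lerp(body_effect, non_body_effect, 0.0))  # -> body effect

With alpha set per pixel, each region of the model displays the effect wired to the corresponding interface.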
It should be noted that, since the present disclosure uses the Matcap technique to render the model to be rendered, only the default shading model, i.e., the most common default illumination model, is needed; that is, after determining the main body part and the non-main-body part of the model to be rendered, the terminal device renders the model to be rendered using the default illumination model. Since the performance consumption of this shading model is lower than that of a semi-transparent illumination model, the consumption of system performance by rendering the model to be rendered is further reduced.
In an alternative embodiment, as shown in fig. 3, after the initial material map and the initial parallax map are obtained, the terminal device executes step S304, that is, performs material mapping processing on the initial material map according to the model features of the model to be rendered, so as to obtain a target material map.
Specifically, the terminal device first converts the initial material map from the normal space of the model to be rendered to the view space of the model to be rendered, then samples the initial material map in the view space according to the model characteristics of the model to be rendered to obtain a sampled material map, and performs scaling processing and/or offset processing on the sampled material map to obtain the target material map.
Optionally, as shown in the schematic diagram for generating the target material map in fig. 9, the terminal device first inputs the initial material map through the PixelNormalWS input node, and then converts it from the normal space of the model to be rendered to the view space of the model to be rendered through the TransformVector space-conversion node, so that the Matcap circular map can be directly mapped to the view space and projected onto the model to be rendered. In addition, because the normal of the model to be rendered is used in the space conversion, the Matcap map fits the concave and convex parts of the model to be rendered, making the rendering effect more natural.
It should be noted that, in this embodiment, the execution order of the scaling processing and the offset processing on the sampled material map is not specifically limited; in practical applications, the order may be set according to actual requirements, and the two may also be performed synchronously to streamline the rendering process. In addition, if the size of the sampled material map already meets the actual requirement, the scaling processing may be skipped; similarly, if the position of the sampled material map already meets the actual requirement, the offset processing may be skipped.
Further, after the space mapping is completed, the terminal device samples the initial material map to obtain the sampled material map. Since normal components range over [-1,1] while texture coordinates in the view space range over [0,1], scaling and offset operations are required for the sampled material map (as shown in fig. 10). Specifically, as shown in fig. 9, the terminal device first multiplies the sampled material map by a preset parameter MF_Matcap_Scale (the Multiply node in fig. 9) to scale it, where MF_Matcap_Scale is a scaling factor whose value may be set by the user according to actual requirements and may be, but is not limited to, 0.5. After obtaining the scaled material map, the terminal device performs an offset operation on it; for example, in fig. 9, the offset is implemented by an Add node, and the offset value may likewise be set by the user according to actual requirements and may be, but is not limited to, 0.5. After the scaling and offset of the sampled material map are finished, the target material map is obtained.
Namely, the above process satisfies the following equation:

    S₁ = S₂ · λ + K

where S₁ is the target material map, S₂ is the sampled material map, λ is the scaling factor (e.g., MF_Matcap_Scale = 0.5), and K is the offset factor (e.g., 0.5).
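The following sketch (Python with NumPy; all names are illustrative, and the use of the view matrix's rotation part for the normal transform is an assumption) summarizes the sampling-position computation described above:

    import numpy as np

    def matcap_uv(normal_ws, view_matrix, scale=0.5, offset=0.5):
        # normal_ws: world-space normal of the pixel (the PixelNormalWS input);
        # view_matrix: assumed 4x4 camera matrix whose rotation part plays the
        # role of the TransformVector node.
        n_vs = view_matrix[:3, :3] @ normal_ws   # normal -> view space
        n_vs = n_vs / np.linalg.norm(n_vs)       # keep it a unit vector
        # Map the view-space XY from the normal range [-1, 1] into the
        # texture range [0, 1]: S1 = S2 * lambda + K with lambda = K = 0.5.
        return n_vs[:2] * scale + offset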
It should be noted that, in the process of sampling the initial material map, generally only the circular region of the initial material map is sampled, and the four corners are not sampled; therefore, in the present disclosure, a circular map is used as the initial material map.
Further, after the sampled material map is subjected to scaling processing and/or offset processing to obtain the target material map, the terminal device further performs channel splitting processing on the target material map. Specifically, the terminal device splits the target material map into a first channel and a second channel, performs flip processing on the first channel to obtain a processed first channel, and performs channel merging processing on the processed first channel and the second channel to obtain a processed target material map. The first channel and the second channel correspond to different texture directions.
Optionally, as shown in fig. 9, the terminal device splits the target material map into two channels, namely a Mask (G) (i.e., a first channel) and a Mask (R) (i.e., a second channel), where the Mask (G) represents textures in the left-right direction, and the Mask (R) represents textures in the up-down direction.
In addition, as shown in fig. 9, after the channel splitting of the target material map, the terminal device applies a 1-x flip to the Mask (G) channel, and finally merges the Mask (R) channel and the flipped Mask (G) channel through the Append node to obtain the final material map MF_matcap_Mask, as shown in fig. 11. Without the 1-x flip, the Matcap map would be displayed upside down on the screen; applying the 1-x flip to the Mask (G) channel avoids this, as the sketch below also illustrates.
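A sketch of the split-flip-merge step (Python with NumPy; which UV component each Mask channel corresponds to is engine-dependent and assumed here):

    import numpy as np

    def build_matcap_mask(mask_r, mask_g):
        # mask_r, mask_g: the Mask (R) and Mask (G) channels split from the
        # target material map.
        mask_g_flipped = 1.0 - mask_g              # the 1-x flip; without it
                                                   # the Matcap appears upside down
        return np.stack([mask_r, mask_g_flipped])  # Append(Mask(R), flipped Mask(G))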
Furthermore, after the initial material map is subjected to material mapping processing according to the model characteristics of the model to be rendered to obtain the target material map, the terminal device needs to adjust the target material map in order to draw the thickness effect of the model to be rendered. Specifically, the terminal device first performs initial rendering on the model to be rendered according to the target material map to obtain an initial rendering model; when the rendering result of the initial rendering model does not match the preset rendering effect, the sampling position of the initial material map is adjusted according to the model characteristics of the model to be rendered to obtain a target sampling position, the initial material map is resampled based on the target sampling position to obtain a first material map, and the first material map is then subjected to scaling processing and/or offset processing to obtain an updated target material map.
Optionally, after the target material map is obtained, the terminal device may perform initial rendering on the model to be rendered using the target material map, so that the user can judge from the rendering result whether it meets the expected requirement. For example, taking the glass bottle model as the model to be rendered: since the glass bottle model has two layers of thickness, the glass effect can be simulated by drawing an outer-layer circular map and an inner-layer circular map. However, if only regular inner and outer circular maps are adopted, then, because a general Matcap map is a perfect circle, the top area of the Matcap map is seen when the model to be rendered is viewed from the bottom, and there the two layers merge into one, so that, as shown in fig. 12, an erroneous see-through effect appears. To better match the reality that the neck of a physical glass bottle has no double-layer section, the terminal device may modify the general Matcap map from the original perfect circle into the ellipse map shown in fig. 13, then sample the ellipse map to obtain the first material map, and perform scaling processing and offset processing on it; after the resulting updated target material map is used to render the model to be rendered, the erroneous effect is no longer visible when the model is viewed from the top.
It should be noted that the above modification may also be applied to repair circular maps for the openings of glass bottles, vessels and the like that lack the double-layer effect; no further examples are given here.
In addition, in this embodiment, the execution order of the scaling processing and the offset processing on the first material map is not specifically limited; in practical applications, the order may be set according to actual requirements, and the two may also be performed synchronously to streamline the rendering process. If the size of the first material map already meets the actual requirement, the scaling processing may be skipped; similarly, if the position of the first material map already meets the actual requirement, the offset processing may be skipped.
Furthermore, after the initial material map is subjected to material mapping processing according to the model characteristics of the model to be rendered to obtain the target material map, the terminal device further performs color adjustment and/or depth adjustment on the target material map according to the model characteristics of the model to be rendered to obtain an adjusted target material map. For example, the terminal device may draw a circular edge map for the Matcap mapping, enhancing the contrast of the dark parts and the color vividness at the edge of the model to be rendered, so as to simulate a depth effect at the model's edge. In addition, since the Matcap circular edge map is hand-drawn, the model to be rendered looks more cartoon-like, which solves the problem in the related art that the model to be rendered cannot have a cartoon effect.
In order to present the color-wobble effect produced by refraction when the glass in the glass bottle model shakes, an irregular parallax effect needs to be superimposed. Since the parallax effect gives the impression of an extra layer of depth inside the object, in the present disclosure the parallax effect is mixed with the Matcap effect, which enhances the impression of refraction through glass of different thicknesses.
In addition, it should be noted that, in this embodiment, the execution order of the color adjustment and the depth adjustment on the target material map is not specifically limited; in practical applications, the order may be set according to actual requirements, and the two may also be performed synchronously to streamline the rendering process. If the color information of the target material map already meets the actual requirement, the color adjustment may be skipped; similarly, if the depth information of the target material map already meets the actual requirement, the depth adjustment may be skipped.
In an optional embodiment, before mixing the parallax effect with the Matcap effect, the terminal device generates the target parallax map. Specifically, the terminal device splits the channels of the initial parallax map along the camera direction according to the scene information corresponding to the model to be rendered to obtain a color channel and a depth channel, calculates the product of the color channel and a preset value to obtain a first value, divides the first value by the negated depth channel to obtain a second value, and finally superimposes the second value on the initial parallax map to obtain the target parallax map.
Optionally, as shown in the schematic diagram of target parallax map generation in fig. 14, the camera direction is used in the present disclosure: the camera direction is converted into tangent space through a TransformVector node; then, according to the scene information corresponding to the model to be rendered, the initial parallax map is split into an RG channel (i.e., the color channel) and a B channel (i.e., the depth channel); the product of the RG channel and a preset value (the parameter of the Offset node) is calculated through a Multiply node, the B channel is negated through a Multiply(-1) node, the product is divided by the negated B channel through a Divide node, and the result is superimposed on the initial parallax map through an Add node, thereby obtaining the target parallax map.
In order to make the parallax distortion effect more obvious, the user may also adjust the parallax map generation nodes shown in fig. 14 according to actual requirements to obtain the generation nodes shown in fig. 15. Compared with fig. 14, after the product is divided by the negated B channel through the Divide node, the result is separated into an R channel and a G channel, the R channel is subjected to a 1-x flip, the G channel and the flipped R channel are then combined through a MakeFloat2 node, and the combined result is superimposed on the initial parallax map through the Add node to obtain the target parallax map. Both variants are sketched below.
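A sketch of both variants of the parallax computation (Python with NumPy; height_scale stands in for the Offset node's parameter, and the channel ordering is an assumption):

    import numpy as np

    def parallax_uv(base_uv, view_dir_ts, height_scale):
        # Fig. 14 variant: offset the map UV along the tangent-space view direction.
        rg = view_dir_ts[:2]                     # RG ("color") channels
        b = view_dir_ts[2]                       # B ("depth") channel
        offset = (rg * height_scale) / (-b)      # Multiply, Multiply(-1), Divide nodes
        return base_uv + offset                  # Add node: superimpose on initial map

    def parallax_uv_twisted(base_uv, view_dir_ts, height_scale):
        # Fig. 15 variant: split the offset, 1-x flip the R component, and
        # recombine as (G, 1 - R) for a more pronounced distortion.
        off = (view_dir_ts[:2] * height_scale) / (-view_dir_ts[2])
        r, g = off
        return base_uv + np.array([g, 1.0 - r])  # MakeFloat2(G, 1 - R), then Add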
Before rendering the model to be rendered based on the target material map and the target parallax map to obtain the target rendering model, the terminal device respectively adjusts the brightness of the target material map and the target parallax map to obtain an adjusted target material map and an adjusted target parallax map, and superimposes the two to obtain a target map. Finally, the model to be rendered is rendered based on the target map to obtain the target rendering model.
Optionally, the terminal device doubles the brightness of the Matcap effect through the color superposition of an Add node, while the parallax effect is taken negatively; that is, the final color equals 2 x (Matcap effect color) - (irregular parallax effect color). Increasing the brightness of the Matcap effect keeps the color body of the model to be rendered the Matcap color; negating the parallax effect makes the dark parts at the edge of the model more pronounced and preserves, in the bright parts, the faux glass-refraction effect, i.e., a refraction effect simulated by a wobble that moves with the viewing angle.
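The final mix can be summarized as the following one-line sketch (Python; the function name is illustrative):

    def final_color(matcap_color, parallax_color):
        # Brighten the Matcap contribution and subtract the parallax one:
        # final = 2 * Matcap effect color - irregular parallax effect color.
        return 2.0 * matcap_color - parallax_color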
Based on the above, the present disclosure replaces the semi-transparent shader model and the refraction function with a virtual model made of a non-transparent material, which reduces the performance consumption of the system and makes the scheme applicable to mobile terminals such as mobile phones. The production approach of superimposing the Matcap effect and the parallax effect simulates the cartoon effect of a glass bottle, can be used for various glass materials, simulates hollow and interior effects, and has been approved by the artists.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present disclosure.
In this embodiment, a device for rendering a model is further provided. The device is used to implement the foregoing embodiments and preferred implementations; what has already been described is not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 16 is a block diagram of an apparatus for rendering a model according to an embodiment of the present disclosure, as shown in fig. 16, the apparatus including: a map acquisition module 1601, a material mapping module 1603, a parallax adjustment module 1605, and a model rendering module 1607.
The map acquisition module 1601 is used for acquiring an initial material map and an initial parallax map; the material mapping module 1603 is used for performing material mapping processing on the initial material map according to the model characteristics of the model to be rendered to obtain a target material map, where the model to be rendered is a virtual model made of a non-transparent material; the parallax adjustment module 1605 is used for performing parallax adjustment on the initial parallax map according to the scene characteristics corresponding to the model to be rendered to obtain a target parallax map; and the model rendering module 1607 is used for rendering the model to be rendered based on the target material map and the target parallax map to obtain a target rendering model.
Optionally, the material mapping module includes: a first conversion module, a first sampling module, and a scaling and offset module. The first conversion module is used for converting the initial material map from the normal space of the model to be rendered to the view space of the model to be rendered; the first sampling module is used for sampling the initial material map in the view space according to the model characteristics of the model to be rendered to obtain a sampled material map; and the scaling and offset module is used for performing scaling processing and/or offset processing on the sampled material map to obtain the target material map.
Optionally, the device for rendering a model further includes: a first splitting module, a flip processing module and a channel merging module. The first splitting module is used for, after the sampled material map is subjected to scaling processing and/or offset processing to obtain the target material map, performing channel splitting processing on the target material map to obtain a first channel and a second channel, where the first channel and the second channel correspond to different texture directions; the flip processing module is used for performing flip processing on the first channel to obtain a processed first channel; and the channel merging module is used for performing channel merging processing on the processed first channel and the second channel to obtain a processed target material map.
Optionally, the device for rendering a model further includes: an initial rendering module, a first adjusting module, a second sampling module and a processing module. The initial rendering module is used for, after the initial material map is subjected to material mapping processing according to the model characteristics of the model to be rendered to obtain the target material map, performing initial rendering on the model to be rendered according to the target material map to obtain an initial rendering model; the first adjusting module is used for, when the rendering result of the initial rendering model does not match the preset rendering effect, adjusting the sampling position of the initial material map according to the model characteristics of the model to be rendered to obtain a target sampling position; the second sampling module is used for resampling the initial material map based on the target sampling position to obtain a first material map; and the processing module is used for performing scaling processing and/or offset processing on the first material map to obtain an updated target material map.
Optionally, the device for rendering a model further includes: a second adjusting module, used for, after the initial material map is subjected to material mapping processing according to the model characteristics of the model to be rendered to obtain the target material map, performing color adjustment and/or depth adjustment on the target material map according to the model characteristics of the model to be rendered to obtain an adjusted target material map.
Optionally, the parallax adjustment module includes: a second splitting module, a first calculation module, a second calculation module and a first superposition module. The second splitting module is used for splitting the channels of the initial parallax map along the camera direction according to the scene information corresponding to the model to be rendered to obtain a color channel and a depth channel; the first calculation module is used for calculating the product of the color channel and a preset value to obtain a first value; the second calculation module is used for dividing the first value by the negated depth channel to obtain a second value; and the first superposition module is used for superimposing the second value on the initial parallax map to obtain the target parallax map.
Optionally, the model rendering module includes: the system comprises a third adjusting module, a second superposition module and a target rendering module. The third adjusting module is used for respectively adjusting the brightness of the target material map and the target parallax map to obtain an adjusted target material map and an adjusted target parallax map; the second superposition module is used for superposing the adjusted target material map and the adjusted target parallax map to obtain a target map; and the target rendering module is used for rendering the model to be rendered based on the target map to obtain a target rendering model.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Embodiments of the present disclosure also provide a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured to perform the steps in any of the above method embodiments when executed.
Optionally, in this embodiment, the computer-readable storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Optionally, in this embodiment, the computer-readable storage medium may be located in any one of a group of computer terminals in a computer network, or in any one of a group of mobile terminals.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a computer-readable storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, a program product capable of implementing the above-described method of the present embodiment is stored on a computer-readable storage medium. In some possible implementations, various aspects of the embodiments of the present disclosure may also be implemented in the form of a program product including program code for causing a terminal device to perform the steps according to various exemplary implementations of the present disclosure described in the above section "exemplary method" of this embodiment, when the program product is run on the terminal device.
According to the program product for implementing the above method of the embodiments of the present disclosure, it may employ a portable compact disc read only memory (CD-ROM) and include program codes, and may be run on a terminal device, such as a personal computer. However, the program product of the disclosed embodiments is not limited in this respect, and in the disclosed embodiments, the computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product described above may employ any combination of one or more computer-readable media. The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that the program code embodied on the computer readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Embodiments of the present disclosure also provide an electronic device comprising a memory having a computer program stored therein and a processor configured to execute the computer program to perform the steps in the above method embodiments of rendering a model.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Fig. 17 is a schematic diagram of an electronic device according to an embodiment of the disclosure. As shown in fig. 17, the electronic device 1700 is only an example and should not bring any limitation to the function and the scope of use of the embodiments of the present disclosure.
As shown in fig. 17, the electronic apparatus 1700 is represented in the form of a general purpose computing device. The components of the electronic device 1700 may include, but are not limited to: the at least one processor 1710, the at least one memory 1720, the bus 1730 that connects the various system components (including the memory 1720 and the processor 1710), and the display 1740.
Wherein the memory 1720 described above stores program code that may be executed by the processor 1710 to cause the processor 1710 to perform steps according to various exemplary embodiments of the present disclosure as described in the method section above of an embodiment of the present disclosure.
The memory 1720 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 17201 and/or a cache memory unit 17202, may further include a read-only memory unit (ROM) 17203, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
In some examples, memory 1720 may also include programs/utilities 17204 with a set (at least one) of program modules 17205, such program modules 17205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which or some combination thereof may comprise an implementation of a network environment. The memory 1720 may further include memory located remotely from the processor 1710, and such remote memory may be coupled to the electronic device 1700 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Bus 1730 may represent one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures.
Display 1740 may be, for example, a touch screen Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of electronic device 1700.
Optionally, the electronic apparatus 1700 may also communicate with one or more external devices 1800 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic apparatus 1700, and/or with any devices (e.g., router, modem, etc.) that enable the electronic apparatus 1700 to communicate with one or more other computing devices. Such communication can occur via an input/output (I/O) interface 1750. Also, the electronic device 1700 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 1760. As shown in FIG. 17, the network adapter 1760 communicates with the other modules of the electronic device 1700 over the bus 1730. It should be appreciated that although not shown in FIG. 17, other hardware and/or software modules may be used in conjunction with electronic device 1700, which may include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The electronic device 1700 may further include: a keyboard, a cursor control device (e.g., a mouse), an input/output interface (I/O interface), a network interface, a power source, and/or a camera.
It will be understood by those skilled in the art that the structure shown in fig. 17 is merely illustrative and does not limit the structure of the electronic device. For example, the electronic device 1700 may include more or fewer components than shown in fig. 17, or have a different configuration from that shown in fig. 17. The memory 1720 may be used to store computer programs and corresponding data, such as the computer programs and data corresponding to the method of rendering a model in the embodiments of the present disclosure. By executing the computer programs stored in the memory 1720, the processor 1710 performs various functional applications and data processing, that is, implements the method of rendering a model described above.
The serial numbers of the above embodiments of the present disclosure are for description only and do not indicate the relative merits of the embodiments.
In the above embodiments of the present disclosure, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
In the embodiments provided in the present disclosure, it should be understood that the disclosed technology can be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division into units may be a division by logical function only; in actual implementation there may be other divisions, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, units, or modules, and may be electrical or take other forms.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed across a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present disclosure. The aforementioned storage media include various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is illustrative of the preferred embodiments of the present disclosure, and it will be appreciated by those skilled in the art that various modifications and adaptations can be made without departing from the principles of the disclosure, and such modifications and adaptations are intended to be within the scope of the disclosure.

Claims (10)

1. A method of rendering a model, comprising:
acquiring an initial material map and an initial parallax map;
performing material mapping processing on the initial material map according to the model characteristics of the model to be rendered to obtain a target material map, wherein the model to be rendered is a virtual model made of non-transparent materials;
performing parallax adjustment on the initial parallax map according to the scene characteristics corresponding to the model to be rendered to obtain a target parallax map;
and rendering the model to be rendered based on the target material map and the target parallax map to obtain a target rendering model.
2. The method of claim 1, wherein performing material mapping processing on the initial material map according to the model characteristics of the model to be rendered to obtain a target material map comprises:
converting the initial material map from the normal space of the model to be rendered to the view space of the model to be rendered;
sampling the initial material map in the view space according to the model characteristics of the model to be rendered to obtain a sampled material map;
and performing scaling processing and/or offset processing on the sampled material map to obtain the target material map.
3. The method according to claim 2, wherein after performing scaling processing and/or offset processing on the sampled material map to obtain the target material map, the method further comprises:
performing channel splitting processing on the target material map to obtain a first channel and a second channel, wherein the first channel and the second channel correspond to different texture directions;
performing flip processing on the first channel to obtain a processed first channel;
and performing channel merging processing on the processed first channel and the second channel to obtain a processed target material map.
4. The method of claim 2, wherein after performing material mapping processing on the initial material map according to the model characteristics of the model to be rendered to obtain the target material map, the method further comprises:
performing initial rendering on the model to be rendered according to the target material map to obtain an initial rendering model;
when a rendering result of the initial rendering model does not match a preset rendering effect, adjusting a sampling position of the initial material map according to the model characteristics of the model to be rendered to obtain a target sampling position;
resampling the initial material map based on the target sampling position to obtain a first material map;
and performing scaling processing and/or offset processing on the first material map to obtain an updated target material map.
5. The method of claim 2, wherein after performing material mapping processing on the initial material map according to the model characteristics of the model to be rendered to obtain the target material map, the method further comprises:
performing color adjustment and/or depth adjustment on the target material map according to the model characteristics of the model to be rendered to obtain an adjusted target material map.
6. The method of claim 1, wherein performing parallax adjustment on the initial parallax map according to the scene characteristics corresponding to the model to be rendered to obtain a target parallax map comprises:
performing channel splitting on the initial parallax map in the camera direction according to the scene information corresponding to the model to be rendered, to obtain a color channel and a depth channel;
calculating the product of the color channel and a preset numerical value to obtain a first numerical value;
calculating a difference value between the first numerical value and the depth channel to obtain a second numerical value;
and superposing the second numerical value and the initial parallax map to obtain the target parallax map.
7. The method of claim 1, wherein rendering the model to be rendered based on the target material map and the target disparity map to obtain a target rendering model comprises:
respectively adjusting the brightness of the target material map and the target parallax map to obtain an adjusted target material map and an adjusted target parallax map;
superposing the adjusted target material map and the adjusted target parallax map to obtain a target map;
and rendering the model to be rendered based on the target map to obtain the target rendering model.
8. An apparatus for rendering a model, comprising:
the map acquisition module is used for acquiring an initial material map and an initial parallax map;
the material mapping module is used for performing material mapping processing on the initial material map according to the model characteristics of the model to be rendered to obtain a target material map, wherein the model to be rendered is a virtual model made of non-transparent materials;
the parallax adjustment module is used for performing parallax adjustment on the initial parallax map according to the scene characteristics corresponding to the model to be rendered to obtain a target parallax map;
and the model rendering module is used for rendering the model to be rendered based on the target material map and the target parallax map to obtain a target rendering model.
9. A computer-readable storage medium having a computer program stored therein, wherein the computer program, when executed by a processor, performs the method of rendering a model according to any one of claims 1 to 7.
10. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program to perform the method of rendering a model according to any one of claims 1 to 7.
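
The four claimed steps of claim 1 can be made concrete with a short sketch. The following Python/NumPy fragment is an illustration only: the patent discloses no concrete signatures, so the helper names (material_mapping, parallax_adjustment), the feature dictionaries, and the placeholder transforms inside them are all assumptions, and maps are modeled as H x W x C float arrays in [0, 1].

    import numpy as np

    # Hypothetical helpers; the patent does not disclose concrete signatures.
    def material_mapping(initial_material, model_features):
        # Placeholder transform: scale texel values by a feature-derived factor.
        return np.clip(initial_material * model_features.get("scale", 1.0), 0.0, 1.0)

    def parallax_adjustment(initial_parallax, scene_features):
        # Placeholder transform: bias texel values by a feature-derived offset.
        return np.clip(initial_parallax + scene_features.get("offset", 0.0), 0.0, 1.0)

    initial_material = np.random.rand(256, 256, 3)   # initial material map
    initial_parallax = np.random.rand(256, 256, 3)   # initial parallax map

    target_material = material_mapping(initial_material, {"scale": 0.8})
    target_parallax = parallax_adjustment(initial_parallax, {"offset": 0.1})

    # A real renderer would sample both target maps in a shader pass over the
    # opaque model; additive blending stands in for that final pass here.
    target_render = np.clip(target_material + target_parallax, 0.0, 1.0)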
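
A minimal sketch of the material mapping of claim 2, assuming direction-encoded texels, a fixed normal-to-view rotation, and nearest-neighbour sampling; the claim leaves all three open, so these specifics are assumptions.

    import numpy as np

    # Hypothetical 3x3 rotation taking the model's normal space into view
    # space; in practice this would come from the camera transform.
    normal_to_view = np.array([[1.0, 0.0,  0.0],
                               [0.0, 0.8, -0.6],
                               [0.0, 0.6,  0.8]])

    material_map = np.random.rand(128, 128, 3)         # initial material map
    view_dirs = np.random.rand(64, 64, 3) * 2.0 - 1.0  # per-pixel view vectors

    # Convert the map's direction-encoded texels from normal space to view space.
    view_space_map = (material_map.reshape(-1, 3) @ normal_to_view.T)
    view_space_map = view_space_map.reshape(material_map.shape)

    # Sample in view space: derive texel coordinates from the view directions.
    uv = (view_dirs[..., :2] * 0.5 + 0.5) * (np.array(material_map.shape[:2]) - 1)
    uv = uv.astype(int)
    sampled = view_space_map[uv[..., 1], uv[..., 0]]

    # Scale and/or offset the sampled map to obtain the target material map.
    scale, offset = 1.2, -0.05
    target_material = np.clip(sampled * scale + offset, 0.0, 1.0)

Nearest-neighbour lookup is used only to keep the sketch short; a real sampler would filter bilinearly.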
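
A minimal sketch of the channel processing of claim 3. The claim fixes neither how the two texture directions are encoded nor what "flip" means; a two-channel map with value inversion on the first channel is one plausible reading, assumed here.

    import numpy as np

    # Target material map with two channels, one per texture direction
    # (the encoding is an assumption; the claim only requires that the
    # channels correspond to different texture directions).
    target_material = np.random.rand(128, 128, 2)

    # Channel splitting: one channel per texture direction.
    first_channel = target_material[..., 0]
    second_channel = target_material[..., 1]

    # Flip processing on the first channel, read here as value inversion,
    # which mirrors the direction that channel encodes.
    first_channel = 1.0 - first_channel

    # Channel merging: recombine into the processed target material map.
    processed_material = np.stack([first_channel, second_channel], axis=-1)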
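
A minimal sketch of the conditional resampling of claim 4. Both the match test against the preset rendering effect and the feature-driven position adjustment are unspecified in the claim; a mean-brightness check and a fixed texel shift stand in for them here.

    import numpy as np

    def sample(material_map, positions):
        # Nearest-neighbour lookup at integer texel positions.
        return material_map[positions[..., 1], positions[..., 0]]

    initial_material = np.random.rand(128, 128, 3)
    positions = np.random.randint(0, 128, size=(64, 64, 2))

    # Initial rendering stand-in: sample the material map.
    initial_render = sample(initial_material, positions)

    # Match test against a preset rendering effect (assumed: mean brightness).
    preset_brightness = 0.6
    if abs(initial_render.mean() - preset_brightness) > 0.05:
        # Adjust the sampling position (assumed: a fixed texel shift).
        target_positions = (positions + np.array([4, 0])) % 128
        # Resample at the target position to obtain the first material map.
        first_material = sample(initial_material, target_positions)
        # Scale and/or offset to obtain the updated target material map.
        target_material = np.clip(first_material * 1.1 + 0.02, 0.0, 1.0)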
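
The parallax adjustment of claim 6 reduces to per-pixel arithmetic: target = initial + (color x preset - depth). A minimal sketch, assuming the color and depth channels are the first two channels of the map and that "superposing" means addition:

    import numpy as np

    initial_parallax = np.random.rand(128, 128, 2)

    # Channel splitting along the camera direction (the channel layout
    # below is an assumption; the claim does not fix it).
    color_channel = initial_parallax[..., 0]
    depth_channel = initial_parallax[..., 1]

    preset_value = 0.5                             # preset numerical value
    first_value = color_channel * preset_value     # product    -> first value
    second_value = first_value - depth_channel     # difference -> second value

    # Superpose the second value onto the initial parallax map.
    target_parallax = np.clip(
        initial_parallax + second_value[..., None], 0.0, 1.0)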
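
A minimal sketch of the compositing step of claim 7, assuming brightness adjustment is a simple gain and superposition is additive blending; the claim leaves both operations open.

    import numpy as np

    target_material = np.random.rand(128, 128, 3)  # target material map
    target_parallax = np.random.rand(128, 128, 3)  # target parallax map

    # Respectively adjust the brightness of the two maps (assumed: gains).
    adjusted_material = np.clip(target_material * 0.9, 0.0, 1.0)
    adjusted_parallax = np.clip(target_parallax * 1.1, 0.0, 1.0)

    # Superpose the adjusted maps to obtain the target map.
    target_map = np.clip(adjusted_material + adjusted_parallax, 0.0, 1.0)

    # target_map would then drive the final shading pass over the model
    # to be rendered, yielding the target rendering model.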
CN202211412554.3A 2022-11-11 2022-11-11 Method, device, storage medium and electronic device for rendering model Pending CN115661318A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211412554.3A CN115661318A (en) 2022-11-11 2022-11-11 Method, device, storage medium and electronic device for rendering model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211412554.3A CN115661318A (en) 2022-11-11 2022-11-11 Method, device, storage medium and electronic device for rendering model

Publications (1)

Publication Number Publication Date
CN115661318A true CN115661318A (en) 2023-01-31

Family

ID=85020314

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211412554.3A Pending CN115661318A (en) 2022-11-11 2022-11-11 Method, device, storage medium and electronic device for rendering model

Country Status (1)

Country Link
CN (1) CN115661318A (en)

Similar Documents

Publication Publication Date Title
JP6967043B2 (en) Virtual element modality based on location in 3D content
WO2020125785A1 (en) Hair rendering method, device, electronic apparatus, and storage medium
US20120013613A1 (en) Tools for Use within a Three Dimensional Scene
US20160004300A1 (en) System, Method, Device and Computer Readable Medium for Use with Virtual Environments
TW202008328A (en) Data processing method and device for map region merging
US11587280B2 (en) Augmented reality-based display method and device, and storage medium
US11430192B2 (en) Placement and manipulation of objects in augmented reality environment
US11195323B2 (en) Managing multi-modal rendering of application content
CN112907716B (en) Cloud rendering method, device, equipment and storage medium in virtual environment
CN112053423A (en) Model rendering method and device, storage medium and computer equipment
CN110377220B (en) Instruction response method and device, storage medium and electronic equipment
CN115738249A (en) Method and device for displaying three-dimensional model of game role and electronic device
CN115375822A (en) Cloud model rendering method and device, storage medium and electronic device
JP2021190098A (en) Image preprocessing method, device, electronic apparatus, and storage medium
CN109065001A (en) A kind of down-sampled method, apparatus, terminal device and the medium of image
WO2024002086A1 (en) Image processing method and apparatus, electronic device and readable storage medium
CN117252982A (en) Material attribute generation method and device for virtual three-dimensional model and storage medium
CN108604367B (en) Display method and handheld electronic device
CN104952100B (en) The streaming of delay coloring compresses antialiasing method
CN115661318A (en) Method, device, storage medium and electronic device for rendering model
CN114299203A (en) Processing method and device of virtual model
US10311130B1 (en) Dynamic page transitions in electronic content
CN109857244A (en) A kind of gesture identification method, device, terminal device, storage medium and VR glasses
KR20220126257A (en) Method and system for providing realistic virtual exhibition space
CN114742970A (en) Processing method of virtual three-dimensional model, nonvolatile storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination