CN114299207A - Virtual object rendering method and device, readable storage medium and electronic device


Info

Publication number
CN114299207A
Authority
CN
China
Prior art keywords: target, virtual object, determining, angle, rendering
Prior art date
Legal status
Pending
Application number
CN202111527719.7A
Other languages
Chinese (zh)
Inventor
钱静
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202111527719.7A
Publication of CN114299207A

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a virtual object rendering method and device, a readable storage medium and an electronic device. The method includes the following steps: determining an observation angle of a game scene; performing dissolving processing on a first target map based on the observation angle to obtain a dissolving result, wherein the first target map is used for representing a texture image of a virtual object; determining a target transparency value of the virtual object based on the dissolving result; and rendering and displaying the virtual object according to the target transparency value. The invention solves the technical problem of low flexibility in presenting the lunar eclipse effect of a virtual moon object.

Description

Virtual object rendering method and device, readable storage medium and electronic device
Technical Field
The invention relates to the field of computers, in particular to a virtual object rendering method, a virtual object rendering device, a readable storage medium and an electronic device.
Background
In the related art, the lunar eclipse effect in a game scene is achieved by sliding a black moon object over the original moon object through a flow of model map coordinates (UV). However, the eclipse image cannot change as the field of view changes, so there is a problem of low flexibility in presenting the lunar eclipse effect of a virtual moon object.
For the problem in the related art that the flexibility of presenting the lunar eclipse effect of a virtual moon object is low, no effective solution has been proposed so far.
Disclosure of Invention
At least some embodiments of the present invention provide a virtual object rendering method and apparatus, a readable storage medium, and an electronic apparatus, so as to at least solve the technical problem of low flexibility in presenting the lunar eclipse effect of a virtual moon object.
To achieve the above object, according to an embodiment of the present invention, a virtual object rendering method is provided. A graphical user interface is provided by a terminal device; the graphical user interface at least partially displays a game scene, and the game scene at least partially includes a virtual object. The method may include the following steps: determining an observation angle of the game scene; performing dissolving processing on a first target map based on the observation angle to obtain a dissolving result, wherein the first target map is used for representing a texture image of the virtual object; determining a target transparency value of the virtual object based on the dissolving result; and rendering and displaying the virtual object according to the target transparency value.
Optionally, obtaining the observation angle of the game scene includes: acquiring, in real time, an observation field of view of a game character in the game scene, wherein the game scene includes the game character controlled by the terminal device; and determining the observation angle of the virtual object according to the observation field of view.
Optionally, a second target map is obtained, where the second target map is used to represent a texture image of a virtual cloud layer in the game scene and changes over time; a target color of the virtual object is determined based on the second target map; and rendering and displaying the virtual object according to the target transparency value includes: rendering the virtual object according to the target transparency value and the target color.
Optionally, rendering the virtual object according to the target transparency value and the target color includes: rendering the virtual object according to the target transparency value and the target color corresponding to the target texture coordinates on the virtual object.
Optionally, performing a dissolution process on the first target map based on the observation angle to obtain a dissolution result, including: determining a target smoothing parameter based on the observation angle; and smoothing the first target map based on the target smoothing parameters to obtain a dissolution result.
Optionally, determining a target smoothing parameter based on the viewing angle comprises: determining a target position corresponding to the observation angle, wherein a preset camera is used for shooting a game scene at the target position to obtain a scene picture of the observation angle in the game scene; determining a first rotation angle of the target position relative to the position of the virtual object; a target smoothing parameter is determined based on the first rotation angle.
Optionally, determining the target smoothing parameter based on the first rotation angle comprises: converting the first rotation angle to a second rotation angle in the first value range, wherein when the second rotation angle is the upper limit value of the first value range, the virtual object after rendering display presents a first display state, and when the second rotation angle is the lower limit value of the first value range, the virtual object after rendering display presents a second display state; a target smoothing parameter is determined based on the second rotation angle.
Optionally, determining the target smoothing parameter based on a second rotation angle comprises: converting the second rotation angle into a third rotation angle in a second value range, wherein the second value range is smaller than the first value range; determining a first smoothing parameter and a second smoothing parameter of a smoothing function based on the third rotation angle; smoothing the first target map based on the target smoothing parameters to obtain a dissolution result, comprising: and inputting the first smoothing parameter, the second smoothing parameter and the first target map into a smoothing function for smoothing to obtain a dissolving result, wherein the dissolving result is between the first smoothing parameter and the second smoothing parameter.
Optionally, determining a target position corresponding to the viewing angle includes: the target position corresponding to the viewing angle is determined in the model space.
In order to achieve the above object, according to another aspect of the present invention, there is also provided a virtual object rendering apparatus, where a graphical user interface is provided through a terminal device, the graphical user interface at least partially displays a game scene, and the game scene at least partially includes a virtual object. The apparatus may include: a first determination unit for determining an observation angle of the game scene; a dissolving unit for performing dissolving processing on a first target map based on the observation angle to obtain a dissolving result, wherein the first target map is used for representing a texture image of the virtual object; a second determination unit for determining a target transparency value of the virtual object based on the dissolving result; and a rendering unit for rendering and displaying the virtual object according to the target transparency value.
In order to achieve the above object, according to another aspect of the present invention, there is also provided a computer-readable storage medium. The computer readable storage medium stores a computer program, wherein when the computer program is executed by a processor, the apparatus where the computer readable storage medium is located is controlled to execute the virtual object rendering method according to the embodiment of the present invention.
In order to achieve the above object, according to another aspect of the present invention, there is also provided an electronic device. The electronic device may include a memory and a processor, where the memory stores a computer program and the processor is configured to run the computer program to perform the virtual object rendering method according to the embodiments of the present invention.
In at least some embodiments of the invention, an observation angle of a game scene is determined; dissolving processing is performed on a first target map based on the observation angle to obtain a dissolving result, where the first target map is used for representing a texture image of the virtual object; a target transparency value of the virtual object is determined based on the dissolving result; and the virtual object is rendered and displayed according to the target transparency value. That is to say, by making the transparency of the virtual object depend on the observation angle, the present application solves the technical problem of low flexibility in presenting the lunar eclipse effect of a virtual moon object and achieves the technical effect of improving the flexibility of presenting the lunar eclipse effect of the moon object.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a block diagram of a hardware structure of a mobile terminal according to a virtual object rendering method according to an embodiment of the present invention;
FIG. 2 is a flow diagram of a method of virtual object rendering according to one embodiment of the invention;
fig. 3 is a schematic diagram of a lunar eclipse effect in the related art;
FIG. 4 is a schematic diagram of a lunar eclipse effect according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of coordinates of a point on a sphere in model space, according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of coordinates of a point on a sphere in world space, according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a conversion result for calculating an angle according to one embodiment of the present invention;
FIG. 8 is a schematic diagram of the result of a value range conversion according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of the result of another value range conversion according to an embodiment of the present invention;
FIG. 10 is a schematic illustration of a dissolve map according to an embodiment of the present invention;
fig. 11 is a block diagram of a virtual object rendering apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some of the nouns or terms appearing in the description of the embodiments of the present application are explained as follows:
a lighting map (Lightmap), a texture in which the lighting data of a scene is recorded, often used to enhance the lighting atmosphere and artistic effect of a scene;
baking, the general name for the process of generating a lightmap; game engines such as Unity/UE4 have this capability built in;
a model space (model space), which is also called object space (object space) or local space (local space) in different game engines or software, for example, as shown in fig. 5, fig. 5 is a schematic diagram of coordinates of a point on a sphere in a model space according to an embodiment of the present invention;
world space (world space), which is a macroscopic special coordinate system that represents the largest coordinate system we are interested in, for example, as shown in fig. 6, fig. 6 is a schematic diagram of the coordinates of a point on the sphere in world space according to an embodiment of the present invention;
normalization, namely scaling a vector so that its length becomes 1;
UV flow, also called UV translation, refers to moving the UV coordinates of a texture along the horizontal (U) direction or the vertical (V) direction to create the illusion of complex animation; UV flow can create effects such as flames, running water, or smoke (see the hedged sketch below);
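As an illustration of the UV flow term only, a minimal HLSL-style sketch is given below; the texture, sampler, and parameter names are assumptions made for this example and are not identifiers from this patent:
// Hypothetical UV-flow sketch; Tex (Texture2D), TexSampler (SamplerState), Time, and FlowSpeed are assumed to be declared.
float2 flowedUV = input.uv + float2(Time * FlowSpeed, 0.0); // translate the U coordinate each frame
float4 flowColor = Tex.Sample(TexSampler, flowedUV); // sampling at the shifted UV makes the texture appear to flow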
In accordance with one embodiment of the present invention, an embodiment of a virtual object rendering method is provided. It should be noted that the steps illustrated in the flowchart of the accompanying drawings may be performed in a computer system, such as a set of computer-executable instructions, and that although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different from that herein.
The method embodiments may be executed in a mobile terminal, a computer terminal, or a similar computing device. Taking running on a mobile terminal as an example, the mobile terminal may be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palm computer, a mobile internet device (MID), a PAD, a game machine, etc. Fig. 1 is a block diagram of a hardware structure of a mobile terminal to which a virtual object rendering method according to an embodiment of the present invention is applied. As shown in fig. 1, the mobile terminal may include one or more processors 102 (only one is shown in fig. 1; the processors 102 may include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processing (DSP) chip, a microcontroller unit (MCU), a programmable logic device (FPGA), a neural network processor (NPU), a tensor processor (TPU), an artificial intelligence (AI) processor, etc.) and a memory 104 for storing data. Optionally, the mobile terminal may further include a transmission device 106 for communication functions, an input/output device 108, and a display device 110. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and does not limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in fig. 1, or have a different configuration from that shown in fig. 1.
The memory 104 may be used to store computer programs, for example, software programs and modules of application software, such as computer programs corresponding to the virtual object rendering method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer programs stored in the memory 104, that is, implements the object processing method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The inputs in the input output Device 108 may come from a plurality of Human Interface Devices (HIDs). For example: keyboard and mouse, game pad, other special game controller (such as steering wheel, fishing rod, dance mat, remote controller, etc.). Some human interface devices may provide output functions in addition to input functions, such as: force feedback and vibration of the gamepad, audio output of the controller, etc.
The display device 110 may be, for example, a head-up display (HUD), a touch screen type Liquid Crystal Display (LCD), and a touch display (also referred to as a "touch screen" or "touch display screen"). The liquid crystal display may enable a user to interact with a user interface of the mobile terminal. In some embodiments, the mobile terminal has a Graphical User Interface (GUI) with which a user can interact by touching finger contacts and/or gestures on a touch-sensitive surface, where the human-machine interaction function optionally includes the following interactions: executable instructions for creating web pages, drawing, word processing, making electronic documents, games, video conferencing, instant messaging, emailing, call interfacing, playing digital video, playing digital music, and/or web browsing, etc., for performing the above-described human-computer interaction functions, are configured/stored in one or more processor-executable computer program products or readable storage media.
The virtual object rendering method in one embodiment of the disclosure can run on a local terminal device or a server. When the virtual object rendering method is executed on a server, the method can be implemented and executed based on a cloud interaction system, wherein the cloud interaction system comprises the server and a client device.
In an optional embodiment, various cloud applications may be run under the cloud interaction system, for example, cloud games. Taking a cloud game as an example, a cloud game refers to a game mode based on cloud computing. In the cloud game operation mode, the running body of the game program and the game picture presentation body are separated: the storage and running of the virtual object rendering method are completed on the cloud game server, while the client device is used for receiving and sending data and presenting the game picture. For example, the client device may be a display device with a data transmission function close to the user side, such as a mobile terminal, a television, a computer, or a palm computer, while the cloud game server that performs the information processing resides in the cloud. When a game is played, the player operates the client device to send an operation instruction to the cloud game server; the cloud game server runs the game according to the operation instruction, encodes and compresses data such as the game pictures, returns the data to the client device through the network, and finally the client device decodes the data and outputs the game pictures.
In an optional implementation manner, taking a game as an example, the local terminal device stores a game program and is used for presenting the game screen. The local terminal device interacts with the player through a graphical user interface, that is, the game program is conventionally downloaded, installed, and run on an electronic device. The manner in which the local terminal device provides the graphical user interface to the player may include a variety of ways; for example, the interface may be rendered for display on a display screen of the terminal or provided to the player through holographic projection. For example, the local terminal device may include a display screen for presenting a graphical user interface including a game screen, and a processor for running the game, generating the graphical user interface, and controlling display of the graphical user interface on the display screen.
In this embodiment, a virtual object rendering method running on the mobile terminal is provided, where a graphical user interface is provided by a terminal device, where the terminal device may be the aforementioned local terminal device, or may also be the aforementioned client device in the cloud interaction system, and the graphical user interface at least partially displays a game scene, where the game scene at least partially includes a virtual object, where the virtual object may be a moon, a sun, or another model.
Fig. 2 is a flowchart of a virtual object rendering method for providing a graphical user interface through a terminal device according to an embodiment of the present invention, as shown in fig. 2, the method includes the following steps:
step S202, the observation angle of the game scene is determined.
In the technical solution provided by step S202 of the present invention, the picture of the game scene may be obtained by shooting with a preset camera in the game scene, and the game scene may include the virtual object. The observation angle may be an angle at which the game scene is observed, and it may change as the virtual game character moves.
In this embodiment, the virtual camera may be bound to the orientation of the virtual character so as to control the observation angle through the character: controlling the orientation causes the virtual camera to rotate, yielding a changed game scene. If the virtual camera is not bound to the orientation, the observation angle of the game scene can only be changed by operating on the virtual camera directly.
Alternatively, the observation angle of the virtual object in the game scene may be changed by moving the position of the virtual camera. For example, on a mobile terminal, sliding the screen up, down, left, or right changes the observation angle of the virtual object in the game scene; on a computer, the game scene picture can be dragged up, down, left, or right by long-pressing with the mouse.
Optionally, a preset camera is used for shooting in a game scene to obtain a scene picture, and the observation angle of the virtual game character in the scene picture is determined by acquiring information in the game scene.
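A minimal sketch of one way such an observation direction can be formed is shown below; the world-space position names are illustrative assumptions, not identifiers from this patent:
// Hypothetical sketch: the direction from the scene camera to the moon model encodes the observation angle.
float3 viewDirWS = normalize(moonPosWS - cameraPosWS); // cameraPosWS and moonPosWS are assumed world-space positions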
And step S204, carrying out dissolving processing on the first target map based on the observation angle to obtain a dissolving result, wherein the first target map is used for representing the texture image of the virtual object.
In the technical solution provided by step S204 of the present invention, dissolving processing is performed on the first target map based on the determined observation angle of the game scene to obtain a dissolving result, where the first target map may be a dissolve map used to generate the dissolving picture of the virtual object and may be represented by dissolve_tex. The dissolving processing may be smoothstep-function (smoothstep) dissolving of the target map based on the game field-of-view information, thereby obtaining the dissolving result. The dissolving result is used to make graphics disappear from the scene picture and may be represented by step_dissolve.
In step S206, a target transparency value of the virtual object is determined based on the dissolution result.
In the technical solution provided in step S206 of the present invention, the target transparency value may be the dissolve value and may serve as the target transparency channel, that is, an Alpha channel, which represents the transparency information of pixel points in an image. The effect of making the virtual object disappear can be achieved by changing the transparency channel of the virtual object, and the target transparency channel may be the transparency channel of a moon with a cloud layer.
Optionally, the dissolving result obtained by dissolving the first target map based on the observation angle is processed to obtain the target transparency value of the virtual object, and the determined target transparency value is used as the transparency channel of the virtual object (see the sketch below).
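A minimal sketch of this assignment follows; moonColor is an assumed base-color variable, while step_dissolve follows the naming used later in this description:
// Hedged sketch: use the dissolving result directly as the moon's alpha (transparency) channel.
float4 outColor = float4(moonColor.rgb, moonColor.a * step_dissolve); // transparency now varies with the observation angle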
And step S208, rendering and displaying the virtual object according to the target transparent value.
In the technical solution provided in step S208 of the present invention, the calculated target transparency value is used as the transparency channel of the virtual object, and the virtual object with this transparency channel is rendered and displayed, so as to obtain a virtual object that changes with the observation angle.
In this embodiment, the obtained dissolving result may be used as the transparency value of the virtual object, and this value may serve as the transparency channel of the virtual object, so as to render and display the virtual object.
Through the above steps S202 to S208, the game field-of-view information of the scene picture is obtained; the target map is dissolved based on the game field-of-view information to obtain a dissolving result, where the target map is used for generating a lunar eclipse image of the virtual moon object; and a target transparency channel of the virtual moon object is determined based on the dissolving result so as to convert the image of the virtual moon object from a first moon image to a second moon image. That is to say, the present application solves the technical problem of low flexibility in presenting the lunar eclipse effect of a virtual moon object and achieves the technical effect of improving the flexibility of presenting the lunar eclipse effect of the moon object.
The above method of this embodiment is further described below.
As an alternative implementation, step S202 of acquiring the observation angle of the game scene includes: acquiring, in real time, an observation field of view of a game character in the game scene, where the game scene includes the game character controlled by the terminal device; and determining the observation angle of the virtual object according to the observation field of view.
In this embodiment, as the game character (virtual game character) moves, the observation field of view of the game character in the game scene is acquired in real time, and the observation angle of the virtual object is determined based on the acquired field of view, where the observation angle of the game character's field of view can be controlled by the terminal device.
Alternatively, the field of view may be adjusted through the free viewing angle in the game, thereby changing the observation angle of the game scene; or, when the field of view is tied to the orientation of the game character, adjusting the field of view of the game character also adjusts the observation angle of the game scene.
As an optional implementation, a second target map is obtained, where the second target map is used to represent a texture image of a virtual cloud layer in the game scene and changes over time; a target color of the virtual object is determined based on the second target map; and rendering and displaying the virtual object according to the target transparency value includes: rendering the virtual object according to the target transparency value and the target color.
Optionally, the second target map may be a cloud layer texture map used for adjusting the target color of the virtual object, and it may change over time; the target color obtained from the second target map is combined with the target transparency value, and the combined result is rendered and displayed, so as to obtain a virtual object picture that changes over time.
Optionally, the texture map of the cloud layer and the moon map are superimposed to obtain the lunar eclipse change, where the two maps correspond to the same UV coordinates on the virtual object, thereby realizing the rendering and display of the virtual object; a hedged sketch follows.
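A minimal sketch of this superimposition at shared UV coordinates follows; the texture, sampler, and speed names are assumptions for illustration, and the patent's complete blend formula appears later in this description:
// Hypothetical sketch: sample the moon map and the flowing cloud map at the same UV, then blend.
float4 moonColor = MoonTex.Sample(LinearSampler, input.uv);
float4 cloudColor = CloudTex.Sample(LinearSampler, input.uv + float2(Time * CloudSpeed, 0.0));
float3 blendedRGB = moonColor.rgb * cloudColor.rgb; // a simple multiplicative overlay, one possible choice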
As an optional implementation, rendering the virtual object according to the target transparency value and the target color includes: and rendering the virtual object according to the target transparent value and the target color corresponding to the target texture coordinate on the virtual object.
In this embodiment, the target texture information is used to represent texture coordinates of the cloud image that vary according to a target speed within a target time; determining the target transparency channel of the virtual moon object based on the dissolving result includes: converting the original transparency channel of the virtual moon object into the target transparency channel based on the target texture information and the dissolving result.
Optionally, the target texture information is used to indicate the texture coordinates of the cloud layer image that change according to the target speed within the target time, and may be represented by Tex2 and input.uv, where the texture coordinates may be UV coordinates used to locate any pixel on the image; the vertices of a polygon can be associated with pixels of the image file through texture coordinates so as to locate the texture map on the polygon surface, and the U and V coordinates of the map may range from 0 to 1.
Alternatively, the target texture information may be obtained by adding the product of the generation time of each frame picture within the target time (FrameTime) and the target speed of cloud layer movement (CloudMoveSpeed) to the texture coordinates of the virtual moon object, where the target texture information may be represented by tex2Color and expressed by the following formula:
HALF4 tex2Color = SAMPLE_TEXTURE_LEVEL(Tex2, input.uv + float2(FrameTime * CloudMoveSpeed, 0), -1);
wherein SAMPLE_TEXTURE_LEVEL denotes the texture sampling used for the flowing-cloud UV.
In this embodiment, the original transparency channel of the virtual moon object, which may be represented by AlphaMt1, is converted into the target transparency value based on the target texture information and the dissolving result, and the virtual object is rendered according to the target transparency value and the target color corresponding to the target texture coordinates on the virtual object.
As an alternative embodiment, performing a dissolution process on the first target map based on the observation angle to obtain a dissolution result includes: determining a target smoothing parameter based on the observation angle; and smoothing the first target map based on the target smoothing parameters to obtain a dissolution result.
In this embodiment, converting the original transparency channel of the virtual moon object into the target transparency channel based on the target texture information and the dissolving result includes: adding the cloud layer image corresponding to the target texture information to the original transparency channel; and performing dissolving processing on the original transparency channel carrying the cloud layer image based on the dissolving result to obtain the target transparency channel.
Optionally, the cloud layer image corresponding to the target texture information is added to the original transparency channel; for example, the original transparency channel of the moon is given a cloud UV flow effect, and the original transparency channel carrying the target texture information is subjected to dissolving processing to obtain the target transparency channel, where the final result may be represented by finalColor and expressed by the following formula:
HALF4 finalColor = HALF4((tex0Color.rgb * lerp(float3(1.0), tex2Color.rgb, CloudIntensity) + tex3Color.rgb * tex3Color.a * EmissiveIntensity), tex0Color.a * step_dissolve * AlphaMt1) * ColorFactor;
wherein tex0Color.rgb denotes the red, green, and blue channels of the base color; lerp(float3(1.0), tex2Color.rgb, CloudIntensity) determines the effect of the cloud according to CloudIntensity, the cloud layer intensity parameter; tex3Color.rgb denotes the red, green, and blue channels of the cloud image; tex3Color.a denotes the alpha channel of the cloud image; EmissiveIntensity denotes the luminous intensity parameter of the target object; and ColorFactor is used to control the color grayscale.
In this embodiment, the smoothing parameters are used to make the image change more smoothly and may include a minimum smoothing parameter s_min and a maximum smoothing parameter s_max; the smoothing processing may be smoothstep dissolving of the target map. Optionally, the target smoothing parameters are used as the first two parameters of the smoothstep function and the target map is input as the last parameter so as to obtain the dissolving result, which may be:
Float step_dissolve=smoothstep(s_min,s_max,dissolve_tex)
as an alternative embodiment, the determining the target smoothing parameter based on the observation angle includes: determining a target position corresponding to the observation angle, wherein a preset camera is used for shooting a game scene at the target position to obtain a scene picture of the observation angle in the game scene; determining a first rotation angle of the target position relative to the position of the virtual moon object; a target smoothing parameter is determined based on the first rotation angle.
In this embodiment, the preset camera is used for shooting the game scene at the target position to obtain the scene picture, so as to determine the target position corresponding to the game field-of-view information. The target position may be the camera position converted into the model space and may be represented by local_camera_position, which may be expressed by the following formulas:
float4 local_camera_position=mul(FLOAT4(CameraPosition.xyz,1.0),World);
frag.local_camera_position=local_camera_position.xyz
alternatively, the first rotation angle of the target position relative to the position of the virtual moon object is determined, where cam_object may represent the normalized camera position in the model space (the direction from the model center to the camera, computed per vertex); the first rotation angle may be the angle from the target position to the center point of the virtual moon object, may be represented by distance, and may be calculated by the atan2 function.
Alternatively, the first rotation angle may be converted into the range of 0 to 1 by adding PI/2 to the angle calculated by atan2 and dividing the sum by PI, which may be expressed by the following formulas:
Float3 cam_object=normalize(input.local_camera_position)
Float distance=(atan2(cam_object.z,abs(cam_object.x))+PI/2.0)/PI;//1-0.5-0
where normalize is a normalization function that scales the camera position vector of the virtual moon object to unit length, the abs(cam_object.x) function takes the absolute value of the target position on the x-axis, and cam_object.z is the value of the target position on the z-axis.
As an optional implementation, the determining the target smoothing parameter based on the first rotation angle includes: converting the first rotation angle to a second rotation angle in the first value range, wherein when the second rotation angle is the upper limit value of the first value range, the virtual object after rendering display presents a first display state, and when the second rotation angle is the lower limit value of the first value range, the virtual object after rendering display presents a second display state; a target smoothing parameter is determined based on the second rotation angle.
In this embodiment, since the requirement is that the virtual moon object disappears when the view turns 90 degrees to the side and reappears when it turns to 180 degrees, the calculation of the second rotation angle converts the first rotation angle into the first value range, where the first value range may be 0-1; in the conversion, values less than 0 are changed to 0 and values greater than 1 are changed to 1, so as to obtain the second rotation angle, which may be represented by the following formulas:
Float distance=(atan2(cam_object.z,abs(cam_object.x))+PI/2.0)/PI;//1-0.5-0;
Float distance_fix=saturate(abs((distance-0.5)*2.0));//1-0-1
where distance_fix is used to indicate the second rotation angle, and saturate is used to limit a number to the range of 0-1: a value less than 0 becomes 0, and a value greater than 1 becomes 1.
Optionally, when the second rotation angle is the upper limit value of the first value range, the rendered and displayed virtual object is in the first display state; for example, when the virtual object is a moon, the first display state may be a full-moon state. When the second rotation angle is the lower limit value of the first value range, the rendered and displayed virtual object is in the second display state; for example, when the virtual object is a moon, the second display state may be a state in which the moon has completely disappeared. The target smoothing parameter is determined based on the second rotation angle.
As an optional implementation, the determining the target smoothing parameter based on the second rotation angle includes: converting the second rotation angle into a third rotation angle in a second value range, wherein the second value range is smaller than the first value range; determining a first smoothing parameter and a second smoothing parameter of a smoothing function based on the third rotation angle; smoothing the first target map based on the target smoothing parameters to obtain a dissolution result, comprising: and inputting the first smoothing parameter, the second smoothing parameter and the first target map into a smoothing function for smoothing to obtain a dissolving result, wherein the dissolving result is between the first smoothing parameter and the second smoothing parameter.
In this embodiment, since the change near 0 and 1 is slow when rotating the camera angle, the first value range needs to be slightly narrowed to make the angle change smoother. Optionally, the second rotation angle is converted into a third rotation angle within a second value range, where the second value range is smaller than the first value range. For example, the first value range may be 0-1, and the second value range may be 0.1-0.9 or 0.05-0.95, which is not limited herein.
Optionally, the second rotation angle is adjusted by using the smoothstep function to obtain the third rotation angle, which may be represented by distance_fix_smooth and expressed by the following formula:
Float distance_fix=saturate(abs((distance-0.5)*2.0));//1-0-1;
Float distance_fix_smooth=smoothstep(0.05,0.95,distance_fix);//0.9-1-0.9
wherein the smoothstep function is used as smoothstep(a, b, c) and remaps the value of c relative to the edges a and b, so that the overall value range is narrowed from 0-1 to, for example, 0.1-0.9 or 0.05-0.95. The standard definition is sketched below.
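For reference, the sketch below shows the Hermite-interpolation definition that shader languages commonly use for smoothstep; this reflects general shader semantics rather than text from this patent:
// Reference sketch of the usual smoothstep(a, b, c) behavior (HLSL-style).
float my_smoothstep(float a, float b, float c)
{
    float t = saturate((c - a) / (b - a)); // position of c between the edges a and b, clamped to 0-1
    return t * t * (3.0 - 2.0 * t); // cubic Hermite curve, smooth at both ends
}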
In this embodiment, the result of the third rotation angle is converted into a first smoothing parameter and a second smoothing parameter, where the first smoothing parameter may be represented by s_min and the second smoothing parameter by s_max; the first smoothing parameter, the second smoothing parameter, and the first target map are input into the smoothing function for smoothing, so as to obtain the dissolving result, which may be represented by the following formulas:
Float s_min=saturate(distance_fix_smooth*(-2.0)+1.0);//saturate(-1~1)
Float s_max=1.0-(saturate(distance_fix_smooth-0.5)*2.0);
Float step_dissolve=smoothstep(s_min,s_max,dissolve_tex)
alternatively, the smoothing function may be smoothstep(a, b, c), which remaps the value of c relative to the range a-b, so the dissolving result lies between the first smoothing parameter and the second smoothing parameter; a worked check follows.
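As a worked check of these formulas (plain arithmetic, not text from the patent):
// distance_fix_smooth = 0.95 (view near the front or back):
//   s_min = saturate(0.95*(-2.0)+1.0) = saturate(-0.9) = 0.0
//   s_max = 1.0-(saturate(0.95-0.5)*2.0) = 1.0-0.9 = 0.1
//   smoothstep(0.0, 0.1, dissolve_tex) is close to 1 for most texels, so the moon is almost fully visible.
// distance_fix_smooth = 0.05 (view near the side):
//   s_min = saturate(0.05*(-2.0)+1.0) = saturate(0.9) = 0.9
//   s_max = 1.0-(saturate(0.05-0.5)*2.0) = 1.0-0.0 = 1.0
//   smoothstep(0.9, 1.0, dissolve_tex) is close to 0 for most texels, so the moon almost disappears.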
As an alternative embodiment, determining the target position corresponding to the observation angle includes: determining, in the model space, the target position corresponding to the observation angle.
In this embodiment, the position corresponding to the observation angle may be converted into the model space, thereby obtaining the target position in the model space.
It should be noted that, regarding the model space and the world space, a conversion between relative coordinates and world coordinates may be used. A model space coordinate system is usually established to facilitate real-time adjustment and control of the angle and orientation of the virtual model; however, model-space coordinates need to be converted into the world coordinate system before they can be displayed on the graphical user interface, and the world coordinate system is then converted into the screen (graphical user interface) coordinate system to complete the screen-mapping display. A sketch of this chain is given below.
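A minimal vertex-stage sketch of this conversion chain, using the row-vector mul(vector, matrix) order that the code elsewhere in this description uses; the matrix and variable names are generic assumptions rather than identifiers from this patent:
// Hypothetical sketch: model space -> world space -> clip space; screen mapping follows the perspective divide.
float4 worldPos = mul(float4(positionMS, 1.0), World); // model -> world (positionMS is an assumed model-space position)
float4 clipPos = mul(mul(worldPos, View), Projection); // world -> view -> clip; the hardware then maps to screen coordinates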
In this embodiment, the game field-of-view information of the scene picture is acquired; the target map is dissolved based on the game field-of-view information to obtain a dissolving result, where the target map is used for generating a lunar eclipse image of the virtual moon object; and the target transparency channel of the virtual moon object is determined based on the dissolving result so as to convert the image of the virtual moon object from a first moon image to a second moon image. That is to say, the present application solves the technical problem of low flexibility in presenting the lunar eclipse effect of a virtual moon object and achieves the technical effect of improving the flexibility of presenting the lunar eclipse effect of the moon object.
The technical solutions of the embodiments of the present invention are further described below with reference to preferred embodiments.
In the prior art, the lunar eclipse effect is achieved by covering the original moon with a black moon that moves via UV flow, as shown in fig. 3; fig. 3 is a schematic diagram of the lunar eclipse effect in the related art. This prior art cannot realize an eclipse that changes as the line of sight changes.
Therefore, this solution provides a lunar eclipse effect in which the moon wanes as the line of sight changes, as shown in fig. 4; fig. 4 is a schematic diagram of a lunar eclipse effect according to an embodiment of the present invention. The moon is a perfect circle when viewed from the pure front or the pure back; when it dissolves, it does not vanish as a uniform sheet, and the change along the Z axis resembles a lunar eclipse.
The above-described method of this embodiment is further described below.
First, in the vertex stage, the camera position is converted into the model space. This may be:
float4 local_camera_position=mul(FLOAT4(CameraPosition.xyz,1.0),World);
frag.local_camera_position=local_camera_position.xyz;
Second, the angle from the camera to the center point of the model is calculated, including: calculating the angle using the atan2 function (in radians here) and converting it into the range 0-1 (specifically 0-0.5-1), as shown in fig. 7; fig. 7 is a schematic diagram of the result of the angle conversion according to an embodiment of the present invention. This may be:
Float3 cam_object=normalize(input.local_camera_position)
Float distance=(atan2(cam_object.z,abs(cam_object.x))+PI/2.0)/PI;
Third, since the requirement is that the moon disappears when turning 90 degrees to the side and reappears when turning to 180 degrees, the value range is converted by calculation, as shown in fig. 8; fig. 8 is a schematic diagram of a value range conversion result according to an embodiment of the present invention. This may be:
Float distance=(atan2(cam_object.z,abs(cam_object.x))+PI/2.0)/PI;
Float distance_fix=saturate(abs((distance-0.5)*2.0));
when the center of the object is located at the center point of the cross, and the camera is located at each position, the corresponding angle value linearly and uniformly changes, and is calculated as Saturate (Abs ((distance-0.5) × 2)), Abs is the absolute value of a number, Saturate is the absolute value of a number which limits a number to be in the range of 0-1, changes to 0 when being less than 0, and changes to 1 when being more than 1, so that the calculated result realizes the change from left to right in fig. 8, namely, the value range is changed from 0-0.5-1 to be in the range of 1-0-1.
Fourth, the value range of 0-1 is narrowed. Since the change near 0 and 1 is slow when the camera angle is rotated, the threshold range is slightly reduced by setting parameters so that the angle change is smoother, using smoothstep-function adjustment, as shown in fig. 9; fig. 9 is a schematic diagram of another value range conversion result according to an embodiment of the present invention. This may be:
Float distance_fix=saturate(abs((distance-0.5)*2.0));
Float distance_fix_smooth=smoothstep(0.05,0.95,distance_fix);
wherein the moon is complete when the value approaches 1 and fully eclipsed when it approaches 0; the smoothstep function is used to remap the overall 1-0-1 value range with edges such as 0.05 and 0.95 (or 0.1 and 0.9), so that the value range becomes the right-hand picture of fig. 9. The function is used as smoothstep(a, b, c), remapping the value of c relative to the range a-b.
Fifth, the dissolve map is dissolved using the smoothstep function; fig. 10 is a schematic diagram of a dissolve map according to an embodiment of the present invention. As shown in fig. 10, this includes: converting the result of the fourth step into the two values s_min and s_max, which serve as the first two parameters of smoothstep; inputting the map as the last parameter; and using the dissolve value as the transparency channel of the moon to realize the dissolving effect, which may be:
Float s_min=saturate(distance_fix_smooth*(-2.0)+1.0);//saturate(-1~1)
Float s_max=1.0-(saturate(distance_fix_smooth-0.5)*2.0);
Float step_dissolve=smoothstep(s_min,s_max,dissolve_tex)
Sixth, the moon map is given the effect of cloud UV flow, that is, UV + time * speed, so that a simple UV flow effect can be achieved, which may be:
HALF4 tex2Color = SAMPLE_TEXTURE_LEVEL(Tex2, input.uv + float2(FrameTime * CloudMoveSpeed, 0), -1);
Seventh, the transparency channel of the moon with the cloud layer is dissolved according to the foregoing dissolving result, thereby realizing a lunar eclipse effect that changes with the line of sight, which may be:
HALF4 finalColor = HALF4((tex0Color.rgb * lerp(float3(1.0), tex2Color.rgb, CloudIntensity) + tex3Color.rgb * tex3Color.a * EmissiveIntensity), tex0Color.a * step_dissolve * AlphaMt1) * ColorFactor;
In the embodiment, the camera position is converted into the model space, the rotation angle of the camera relative to the model center point is calculated, the angle value is converted to between 0 and 1, the dissolve map is dissolved according to this value, and finally the dissolved value is used as the transparency channel of the moon. This solves the technical problem of low flexibility in displaying the lunar eclipse effect of a virtual moon object and achieves the technical effect of improving the flexibility of displaying the lunar eclipse effect of the moon object.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The embodiment of the present invention further provides a virtual object rendering apparatus, which is used to implement the foregoing embodiments and preferred embodiments, and the description of the apparatus is omitted here for brevity. As used below, the term "unit" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 11 is a block diagram of a virtual object rendering apparatus according to an embodiment of the present invention, in which a terminal device provides a graphical user interface, the graphical user interface at least partially displays a game scene, and the game scene at least partially includes a virtual object. As shown in fig. 11, the virtual object rendering apparatus 110 may include: a first determination unit 111, a dissolving unit 112, a second determination unit 113, and a rendering unit 114.
A first determination unit 111 for determining an observation angle of the game scene.
And a dissolving unit 112, configured to perform dissolving processing on the first target map based on the observation angle to obtain a dissolving result, where the first target map is used to represent a texture image of the virtual object.
A second determination unit 113 for determining a target transparency value of the virtual object based on the dissolution result.
And the rendering unit 114 is configured to render and display the virtual object according to the target transparency value.
In the virtual object rendering apparatus of this embodiment, the first determination unit determines the observation angle of the game scene; the dissolving unit performs dissolving processing on the first target map based on the observation angle to obtain a dissolving result, where the first target map is used for representing the texture image of the virtual object; the second determination unit determines the target transparency value of the virtual object based on the dissolving result; and the rendering unit renders and displays the virtual object according to the target transparency value. In this way, the target transparency channel of the virtual moon object is determined based on the dissolving result, achieving the purpose of the moon waning as the line of sight changes, thereby solving the technical problem of low flexibility in displaying the lunar eclipse effect of a virtual moon object and achieving the technical effect of improving the flexibility of displaying the lunar eclipse effect of the moon object.
It should be noted that, the above units may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the units are all positioned in the same processor; or, the above units may be located in different processors in any combination.
Embodiments of the present invention also provide a non-volatile storage medium having a computer program stored therein, where the computer program is configured to be executed by a processor to perform the virtual object rendering method according to the embodiments of the present invention.
Alternatively, in the present embodiment, the above-mentioned nonvolatile storage medium may be configured to store a computer program for executing the steps of:
s1, determining the observation angle of the game scene;
s2, performing dissolving processing on the first target map based on the observation angle to obtain a dissolving result, wherein the first target map is used for representing a texture image of the virtual object;
s3, determining the target transparency value of the virtual object based on the dissolution result.
And S4, according to the target transparency value, rendering and displaying the virtual object.
Optionally, in this embodiment, the nonvolatile storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, determining the observation angle of the game scene;
s2, performing dissolving processing on the first target map based on the observation angle to obtain a dissolving result, wherein the first target map is used for representing a texture image of the virtual object;
s3, determining the target transparency value of the virtual object based on the dissolution result.
And S4, according to the target transparency value, rendering and displaying the virtual object.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
The serial numbers of the above embodiments of the present invention are for description only and do not indicate the relative merits of the embodiments.
In the above embodiments of the present invention, each embodiment is described with its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of the units may be a logical functional division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be in electrical or other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
The foregoing is only the preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art may make various improvements and refinements without departing from the principle of the present invention, and such improvements and refinements shall also fall within the protection scope of the present invention.

Claims (12)

1. A virtual object rendering method, wherein a graphical user interface is provided by a terminal device, the graphical user interface at least partially displaying a game scene, the game scene at least partially including a virtual object, the method comprising:
determining an observation angle of the game scene;
performing dissolving processing on a first target map based on the observation angle to obtain a dissolution result, wherein the first target map is used for representing a texture image of the virtual object;
determining a target transparency value of the virtual object based on the dissolution result;
and rendering and displaying the virtual object according to the target transparency value.
2. The method of claim 1, wherein determining the observation angle of the game scene comprises:
acquiring, in real time, an observation field of view of a game character in the game scene, wherein the game scene comprises the game character controlled by the terminal device;
determining the observation angle of the virtual object according to the observation field of view.
3. The method of claim 1, wherein the method further comprises: acquiring a second target map, wherein the second target map is used for representing a texture image of a virtual cloud layer in the game scene and changes over time; and determining a target color of the virtual object based on the second target map;
wherein rendering and displaying the virtual object according to the target transparency value comprises: rendering and displaying the virtual object according to the target transparency value and the target color.
4. The method of claim 3, wherein rendering and displaying the virtual object according to the target transparency value and the target color comprises:
rendering and displaying the virtual object according to the target transparency value and the target color corresponding to a target texture coordinate on the virtual object.
5. The method of claim 1, wherein performing dissolving processing on the first target map based on the observation angle to obtain the dissolution result comprises:
determining a target smoothing parameter based on the observation angle;
smoothing the first target map based on the target smoothing parameter to obtain the dissolution result.
6. The method of claim 5, wherein determining the target smoothing parameter based on the observation angle comprises:
determining a target position corresponding to the observation angle, wherein a preset camera captures the game scene at the target position to obtain a scene picture of the observation angle in the game scene;
determining a first rotation angle of the target position relative to the position of the virtual object;
determining the target smoothing parameter based on the first rotation angle.
7. The method of claim 6, wherein determining the target smoothing parameter based on the first rotation angle comprises:
converting the first rotation angle into a second rotation angle within a first value range, wherein when the second rotation angle is the upper limit value of the first value range, the rendered virtual object presents a first display state, and when the second rotation angle is the lower limit value of the first value range, the rendered virtual object presents a second display state;
determining the target smoothing parameter based on the second rotation angle.
8. The method of claim 7, wherein determining the target smoothing parameter based on the second rotation angle comprises: converting the second rotation angle into a third rotation angle within a second value range, wherein the second value range is smaller than the first value range; and determining a first smoothing parameter and a second smoothing parameter of a smoothing function based on the third rotation angle;
and wherein smoothing the first target map based on the target smoothing parameter to obtain the dissolution result comprises: inputting the first smoothing parameter, the second smoothing parameter, and the first target map into the smoothing function for smoothing processing to obtain the dissolution result, wherein the dissolution result lies between the first smoothing parameter and the second smoothing parameter.
9. The method of claim 6, wherein determining the target position corresponding to the observation angle comprises:
determining the target position corresponding to the observation angle in a model space.
10. A virtual object rendering apparatus, wherein a graphical user interface is provided by a terminal device, the graphical user interface at least partially displaying a game scene, and the game scene at least partially including a virtual object, the apparatus comprising:
a first determining unit, configured to determine an observation angle of the game scene;
a dissolving unit, configured to perform dissolving processing on a first target map based on the observation angle to obtain a dissolution result, wherein the first target map is used for representing a texture image of the virtual object;
a second determining unit, configured to determine a target transparency value of the virtual object based on the dissolution result;
and a rendering unit, configured to render and display the virtual object according to the target transparency value.
11. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is arranged to, when executed by a processor, perform the method of any one of claims 1 to 9.
12. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 9.
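Purely as an illustration of the chain of conversions recited in claims 6 to 8, the sketch below maps a first rotation angle into a first value range, compresses it into a second value range, and derives the two smoothing parameters; the concrete ranges, the symmetric window, and all function names are assumptions of this description, since the claims fix only the structure of the computation:

```python
def to_second_rotation_angle(first_angle_deg: float,
                             lo: float = 0.0, hi: float = 1.0) -> float:
    """Convert the first rotation angle into a first value range [lo, hi];
    at hi the moon presents one display state, at lo the other (claim 7).
    The 360-degree wrap is an assumption, not fixed by the claims."""
    return lo + ((first_angle_deg % 360.0) / 360.0) * (hi - lo)

def to_third_rotation_angle(second_angle: float,
                            lo: float = 0.2, hi: float = 0.8) -> float:
    """Compress the second rotation angle into a narrower second value
    range [lo, hi] (claim 8); the bounds are illustrative."""
    return lo + second_angle * (hi - lo)

def smoothing_parameters(third_angle: float,
                         half_width: float = 0.1) -> tuple:
    """First and second smoothing parameters of the smoothing function,
    here an assumed symmetric window around the third rotation angle."""
    return third_angle - half_width, third_angle + half_width

def dissolve(map_value: float, first_angle_deg: float) -> float:
    """Smoothing per claims 5 and 8: the output is rescaled so that the
    dissolution result lies between the two smoothing parameters."""
    e0, e1 = smoothing_parameters(
        to_third_rotation_angle(to_second_rotation_angle(first_angle_deg)))
    t = max(0.0, min(1.0, (map_value - e0) / (e1 - e0)))
    s = t * t * (3.0 - 2.0 * t)   # standard smoothstep in [0, 1]
    return e0 + s * (e1 - e0)     # result between e0 and e1, as claim 8 states
```

For example, dissolve(0.5, 90.0) converts the 90-degree angle to 0.25 in the first value range, then to 0.35 in the second, yielding smoothing parameters (0.25, 0.45); the sampled map value 0.5 falls at the top of that window, so the dissolution result is 0.45, between the two parameters as claim 8 requires.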
CN202111527719.7A 2021-12-14 2021-12-14 Virtual object rendering method and device, readable storage medium and electronic device Pending CN114299207A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111527719.7A CN114299207A (en) 2021-12-14 2021-12-14 Virtual object rendering method and device, readable storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111527719.7A CN114299207A (en) 2021-12-14 2021-12-14 Virtual object rendering method and device, readable storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN114299207A true CN114299207A (en) 2022-04-08

Family

ID=80968518

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111527719.7A Pending CN114299207A (en) 2021-12-14 2021-12-14 Virtual object rendering method and device, readable storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN114299207A (en)

Similar Documents

Publication Publication Date Title
CN111145326B (en) Processing method of three-dimensional virtual cloud model, storage medium, processor and electronic device
CN109151540B (en) Interactive processing method and device for video image
US9717988B2 (en) Rendering system, rendering server, control method thereof, program, and recording medium
CN112215934A (en) Rendering method and device of game model, storage medium and electronic device
CN111368137A (en) Video generation method and device, electronic equipment and readable storage medium
US11430192B2 (en) Placement and manipulation of objects in augmented reality environment
US20230059116A1 (en) Mark processing method and apparatus, computer device, storage medium, and program product
WO2022247204A1 (en) Game display control method, non-volatile storage medium and electronic device
US20130225293A1 (en) System and method for efficient character animation
CN115082608A (en) Virtual character clothing rendering method and device, electronic equipment and storage medium
CN115375822A (en) Cloud model rendering method and device, storage medium and electronic device
CN114820915A (en) Method and device for rendering shading light, storage medium and electronic device
CN115082607A (en) Virtual character hair rendering method and device, electronic equipment and storage medium
CN114299207A (en) Virtual object rendering method and device, readable storage medium and electronic device
CN107452045B (en) Space point mapping method based on virtual reality application anti-distortion grid
CN115131489A (en) Cloud layer rendering method and device, storage medium and electronic device
CN114742970A (en) Processing method of virtual three-dimensional model, nonvolatile storage medium and electronic device
CN115501590A (en) Display method, display device, electronic equipment and storage medium
CN115713589A (en) Image generation method and device for virtual building group, storage medium and electronic device
CN114972466A (en) Image processing method, image processing device, electronic equipment and readable storage medium
CN111462007B (en) Image processing method, device, equipment and computer storage medium
GB2595445A (en) Digital sandtray
CN115546082A (en) Lens halo generation method and device and electronic device
CN115089964A (en) Method and device for rendering virtual fog model, storage medium and electronic device
CN115719305A (en) Image dissolving processing method, image dissolving processing device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination