CN112138387A - Image processing method, device, equipment and storage medium - Google Patents

Image processing method, device, equipment and storage medium

Info

Publication number
CN112138387A
CN112138387A (application number CN202011004924.0A)
Authority
CN
China
Prior art keywords
initial
current orientation
chartlet
map
shading
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011004924.0A
Other languages
Chinese (zh)
Inventor
程波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202011004924.0A
Publication of CN112138387A
Pending legal-status Critical Current

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 - Controlling the output signals based on the game progress
    • A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F 13/525 - Changing parameters of virtual cameras
    • A63F 13/5258 - Changing parameters of virtual cameras by dynamically adapting the position of the virtual camera to keep a game object or game character in its viewing frustum, e.g. for tracking a character or a ball
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/30 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F 2300/308 - Details of the user interface
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60 - Methods for processing data by generating or executing the game program
    • A63F 2300/66 - Methods for processing data by generating or executing the game program for rendering three dimensional images

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides an image processing method, apparatus, device, and storage medium. A graphical user interface is provided through a terminal device; the graphical user interface includes a scene image captured by a virtual camera, and the scene image includes a virtual object. The method includes: acquiring an initial map of the virtual object, the initial map having a base pattern; acquiring the current orientation of the virtual camera; acquiring a target shading map corresponding to the initial map in the current orientation of the virtual camera, the target shading map being a map obtained by performing shading processing on the initial map in the current orientation; and rendering the virtual object according to the current orientation, the target shading map, and the initial map. In the embodiments of the application, the virtual object is rendered according to the user's viewing angle, i.e. the current orientation of the virtual camera, the initial map of the virtual object, and the corresponding target shading map, so that the virtual object presents an ink-shading effect with low implementation complexity.

Description

Image processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an image processing device, and a storage medium.
Background
During game production, virtual characters of various kinds are usually created, and as user expectations rise, higher requirements are placed on the richness of the virtual characters in a game.
To increase this richness, the character's clothing, makeup, and the like usually need to be changed, and changing the clothing is the most intuitive expression. For example, in game products in the traditional Chinese painting and ink-wash styles, the clothing is usually a long robe with an ink-bleeding (shading) appearance. Changing the clothing effect involves a complex workflow and high production cost, while simply changing the color cannot reproduce the ink-shading effect. Therefore, those skilled in the art need a way of changing the clothing effect that preserves the ink-shading appearance.
Disclosure of Invention
The application provides an image processing method, apparatus, device, and storage medium, so as to improve the presentation effect of the clothing pattern of a virtual object.
In a first aspect, the present application provides an image processing method, in which a graphical user interface is provided through a terminal device, the graphical user interface includes a scene image captured by a virtual camera, and the scene image includes at least one virtual object; the method includes:
obtaining an initial map for the virtual object, the initial map having a base pattern;
acquiring the current orientation of the virtual camera;
acquiring a target shading map corresponding to the initial map in the current orientation of the virtual camera, wherein the target shading map is a map obtained by performing shading processing on the initial map in the current orientation;
rendering the virtual object according to the current orientation, the target shading map and the initial map.
In a second aspect, the present application provides an image processing apparatus applied to a terminal device, where a graphical user interface is provided through the terminal device, the graphical user interface includes a scene image captured by a virtual camera, and the scene image includes at least one virtual object; the apparatus includes:
an obtaining module, configured to obtain an initial map for the virtual object, where the initial map has a basic pattern;
the acquisition module is further used for acquiring the current orientation of the virtual camera;
a processing module, configured to acquire a target shading map corresponding to the initial map in the current orientation of the virtual camera, wherein the target shading map is a map obtained by performing shading processing on the initial map in the current orientation;
the processing module is further configured to render the virtual object according to the current orientation, the target shading map, and the initial map.
In a third aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the method of any one of the first aspect.
In a fourth aspect, an embodiment of the present application provides an electronic device, including:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of the first aspects via execution of the executable instructions.
In a fifth aspect, embodiments of the present application provide a program, which when executed by a processor, is configured to perform the method according to any one of the first aspect above.
In a sixth aspect, the present application provides a computer program product, which includes program instructions for implementing the method of any one of the first aspect.
According to the image processing method, apparatus, device, and storage medium provided by the application, the scene image captured by the virtual camera contains the virtual object; the initial map for the virtual object and the current orientation of the virtual camera are obtained, the initial map having a base pattern; and the virtual object is rendered according to the user's viewing angle, i.e. the current orientation of the virtual camera, the target shading map, and the initial map, so that the appearance of the virtual object changes dynamically with the viewing direction, that is, the virtual object presents different shading effects, which improves the pattern-change effect of the virtual object.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic diagram of a system architecture according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of an embodiment of an image processing method provided in the present application;
FIG. 3 is a schematic illustration of a map of an embodiment of a method provided herein;
FIG. 4 is a schematic diagram of the base pattern of the initial map of FIG. 3;
FIG. 5 is a schematic illustration of the shading of the base pattern of FIG. 4;
FIG. 6 is a schematic illustration of the base map corresponding to the map of FIG. 3;
FIG. 7 is an enlarged schematic view of a portion of the initial map of FIG. 3;
FIG. 8 is a shading map of the map of FIG. 7 in one orientation of the virtual camera;
FIG. 9 is a shading map of the map of FIG. 7 in another orientation of the virtual camera;
FIG. 10 is a shading map of the map of FIG. 7 in a further orientation of the virtual camera;
FIG. 11 is a schematic diagram of a fusion process according to an embodiment of the method provided herein;
FIG. 12 is a schematic diagram of target normal vector calculation according to an embodiment of the method provided by the present application;
FIG. 13 is a schematic structural diagram of an embodiment of an image processing apparatus provided in the present application;
fig. 14 is a schematic structural diagram of an embodiment of an electronic device provided in the present application.
With the foregoing drawings in mind, certain embodiments of the disclosure have been shown and described in more detail below. These drawings and written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terms "comprising" and "having," and any variations thereof, in the description and claims of this application and the drawings described herein are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
First, a part of vocabulary and application scenarios related to the embodiments of the present application will be described.
Lightness is the eye's perception of the brightness of a light source or of an object's surface; it is a visual experience determined mainly by the intensity of light.
Fig. 1 is a schematic diagram of a system architecture according to an embodiment of the present application. As shown in fig. 1, the system architecture of the embodiment of the present application may include, but is not limited to: a terminal device 11 and a server 12. The terminal device 11 is, for example, user equipment such as a mobile phone, a tablet computer, or a personal computer.
The terminal device 11 and the server 12 may be connected via a network.
One or more terminal devices may be included in the system architecture; one terminal device is illustrated in fig. 1 as an example. The server may be a game server, and the terminal device may run a game application.
In the related art, the schemes for changing clothing effects are as follows:
(1) Model replacement: the model and its corresponding map are replaced together, either partially or as a whole. Full replacement swaps the whole body except the head for another model to achieve the change of outfit, while partial replacement swaps parts such as the breastplate, trousers, or gloves. (2) Map replacement: the model is kept unchanged and a different map texture is substituted. (3) Local color change: for certain fixed parts of the character, the color of the masked region is changed through mask calculation.
Model replacement, whether of the whole model or of a partial model, involves new models, skeleton binding, and new maps, so the production workflow is complex and the cost is high. Map replacement is similar: the model is kept unchanged, but a new map is still required, which adds workflow and cost. Local color change modifies color through a mask, which is simple to implement and inexpensive, but changing only the color has limited effect and cannot present an ink-shading appearance.
In the present application, a target shading map corresponding to the initial map in the current orientation of the virtual camera is obtained, the target shading map being a map obtained by performing shading processing on the base pattern of the initial map in the current orientation. The virtual object is rendered according to the user's viewing angle, i.e. the current orientation of the virtual camera, the target shading map, and the initial map, so that the virtual object presents a shading effect in the current orientation, improving its presentation.
In one embodiment, the pattern on the virtual character's map exhibits different shading effects as the user's viewing angle, i.e. the orientation of the virtual camera, varies.
The image processing method in one embodiment of the present disclosure may be executed on a terminal device or a server. The terminal device may be a local terminal device. When the processing method is operated on a server, the method can be implemented and executed based on a cloud interaction system, wherein the cloud interaction system comprises the server and a client device.
In an optional embodiment, various cloud applications may run under the cloud interaction system, for example, cloud games. Taking a cloud game as an example, a cloud game refers to a game mode based on cloud computing. In the cloud game operation mode, the entity that runs the game program is separated from the entity that presents the game picture: the storage and execution of the image processing method are completed on a cloud game server, and the client device is used for receiving and sending data and presenting the game picture. For example, the client device may be a display device with a data transmission function close to the user side, such as a mobile terminal, a television, a computer, or a palm computer, whereas the terminal device performing the information processing is the cloud game server in the cloud. When a game is played, the player operates the client device to send an operation instruction to the cloud game server; the cloud game server runs the game according to the operation instruction, encodes and compresses data such as the game picture, and returns the data to the client device through the network; finally, the client device decodes the data and outputs the game picture.
In an alternative embodiment, the terminal device may be a local terminal device. Taking a game as an example, the local terminal device stores the game program and is used for presenting the game screen. The local terminal device interacts with the player through a graphical user interface, that is, the game program is conventionally downloaded, installed, and run on an electronic device. The manner in which the local terminal device provides the graphical user interface to the player may vary; for example, it may be rendered on a display screen of the terminal or provided to the player through holographic projection. For example, the local terminal device may include a display screen for presenting a graphical user interface including a game screen, and a processor for running the game, generating the graphical user interface, and controlling the display of the graphical user interface on the display screen.
In a possible implementation manner, an embodiment of the present application provides an image processing method in which a graphical user interface is provided by a terminal device, where the terminal device may be the aforementioned local terminal device or the aforementioned client device in a cloud interaction system.
The method is suitable for 3D games in the traditional Chinese painting and ink-wash styles, and is also suitable for 2D image processing with a fixed camera.
The technical solution of the present application will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 2 is a schematic flowchart of an embodiment of an image processing method provided in the present application. As shown in fig. 2, in this embodiment, a terminal device provides a graphical user interface, where the graphical user interface includes a scene image captured by a virtual camera, and the scene image includes at least one virtual object, and the method includes:
step 101, obtaining an initial map for a virtual object, wherein the initial map has a basic pattern.
Specifically, the scene image may be an image of a game scene, the virtual object may be a virtual character in the game scene, and the initial map of the virtual object may be the initial map corresponding to the clothing of the virtual object; for example, the virtual character's jacket corresponds to one map, the trousers correspond to one map, or the skirt corresponds to one map. The two circled regions in fig. 3 are initial maps of the virtual object. The initial map has a base pattern, such as the flower pattern in fig. 3.
step 102, acquiring the current orientation of the virtual camera.
Specifically, the current orientation of the virtual camera corresponds to the user's viewing angle: when the angle at which the user observes the virtual object in the game scene differs, the current orientation of the virtual camera differs accordingly.
step 103, acquiring a target shading map corresponding to the initial map in the current orientation of the virtual camera, where the target shading map is a map obtained by performing shading processing on the initial map in the current orientation.
Specifically, for each initial map, the shading effect presented may differ in different orientations of the virtual camera; therefore, a target shading map corresponding to the initial map in the current orientation of the virtual camera is obtained, the target shading map being a map obtained by performing shading processing on the initial map in the current orientation.
In one embodiment, the base pattern in the initial map may be subjected to shading processing to obtain a shading texture; for example, fig. 4 shows the base pattern of the initial map circled in fig. 3, and fig. 5 shows the shading texture obtained by shading the base pattern of fig. 4.
The shading texture is then fused with the base map of the initial map to obtain the target shading map.
The base map is a map obtained by removing all pattern details from the initial map; as shown in fig. 6, every other effect and detail of the base map is the same as in the initial map, except that the pattern is absent.
step 104, rendering the virtual object according to the current orientation, the target shading map, and the initial map.
Specifically, the virtual object is rendered according to the target shading map obtained for the initial map in the current orientation; for example, the target shading map and the initial map are fused according to the current orientation, and the shading effect of the virtual object in the current orientation is displayed on the graphical user interface.
For example, FIG. 7 is an enlarged view of a portion of FIG. 3; FIG. 8 is the target shading map of the initial map of FIG. 7 in orientation a of the virtual camera, FIG. 9 is the target shading map in orientation b, and FIG. 10 is the target shading map in orientation c.
In the method of this embodiment, the scene image captured by the virtual camera contains the virtual object; an initial map for the virtual object and the current orientation of the virtual camera are obtained, the initial map having a base pattern; and the virtual object is rendered according to the user's viewing angle, i.e. the current orientation of the virtual camera, the target shading map, and the initial map, so that the virtual object presents a shading effect in the current orientation, improving the presentation of the virtual object.
On the basis of the foregoing embodiment, in order to improve the response speed, that is, to quickly display the shading effect in the current orientation once the user adjusts the current orientation of the virtual camera, target shading maps of each initial map in different orientations may be obtained in advance. Step 103 in this embodiment may be specifically implemented as follows:
performing shading processing on the base pattern in the initial map to obtain a shading texture;
acquiring a shading effect parameter of the initial map in the current orientation according to the current orientation and a target normal vector, where the target normal vector is obtained from the shading texture and the vertex normals of the virtual object;
fusing the shading texture and the base map by using the shading effect parameter of the initial map in the current orientation, to obtain the target shading map corresponding to the initial map in the current orientation of the virtual camera, where the base map is a map obtained by removing the base pattern from the initial map.
Specifically, as shown in fig. 3, the circled portion is an initial map, that is, a complete ordinary map with the details of the base pattern. The base pattern is extracted from the initial map to obtain the pattern shown in fig. 4, and the base pattern is subjected to shading processing to obtain the shading texture shown in fig. 5.
The shading processing spreads shading details over the shape of the base pattern.
In one embodiment, the lightness of the shading texture changes gradually from the center of the shading texture toward its periphery.
For example, the lightness of the shading texture varies gradually within the range 0-1 from its center toward its periphery.
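By way of illustration only, the following Python sketch generates such a radially graded gray-scale shading texture. The resolution, the falloff exponent, and the use of NumPy are assumptions made for this example; in the described method the gradient follows the shape of the base pattern rather than a plain circle.

import numpy as np

def radial_shading_texture(size=256, falloff=1.5):
    """Single-channel shading texture whose lightness fades from 1.0 at the
    center to 0.0 at the periphery (all values stay within [0, 1])."""
    ys, xs = np.mgrid[0:size, 0:size]
    cx = cy = (size - 1) / 2.0
    # Normalized distance of every texel from the texture center.
    dist = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2) / (size / 2.0)
    # Clamp and shape the gradient with a falloff exponent.
    return (np.clip(1.0 - dist, 0.0, 1.0) ** falloff).astype(np.float32)

tex = radial_shading_texture()          # 256x256 gray-scale gradient
print(tex.shape, float(tex.min()), float(tex.max()))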
The base map is a map obtained by removing all pattern details from the initial map; as shown in fig. 6, every other effect and detail of the base map is the same as in the initial map, except that the pattern is absent.
In one embodiment, to reduce the amount of stored data, the shading texture resulting from the shading processing may be black and white.
For example, the shading texture data may be stored in the alpha channel of the base map, and the base map may be saved in the RGB channels of the original map.
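This channel layout can be illustrated with the following sketch, which packs a gray-scale shading texture into the alpha channel of a single RGBA texture whose RGB channels hold the base map. The function names and array shapes are assumptions for this example only.

import numpy as np

def pack_base_and_shading(base_rgb, shading):
    """base_rgb: (H, W, 3) base map in [0, 1]; shading: (H, W) in [0, 1].
    Returns an (H, W, 4) RGBA texture: base map in RGB, shading in alpha."""
    assert base_rgb.shape[:2] == shading.shape
    rgba = np.empty((*shading.shape, 4), dtype=np.float32)
    rgba[..., :3] = base_rgb        # base map (pattern details removed)
    rgba[..., 3] = shading          # black-and-white shading texture
    return rgba

def unpack_base_and_shading(rgba):
    """Recover the base map and the shading texture at render time."""
    return rgba[..., :3], rgba[..., 3]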
In one embodiment, the current orientation and the target normal vector may be used in a calculation to obtain a dynamic value as the shading effect parameter; for example, a dot product is computed between the direction vector of the current orientation and the target normal vector.
The target normal vector is obtained from the shading texture, used as a normal texture, and from the vertex normals of the virtual object.
Since the shading effect parameter is derived from the current orientation and the target normal vector, the finally displayed shading effect varies with the current orientation, i.e. with the angle presented to the user.
The shading texture and the base map are then fused using the shading effect parameter of the initial map in the current orientation, to obtain the target shading map with the shading effect corresponding to the initial map in the current orientation. For example, a Lerp function is used for the fusion, with the shading effect parameter in the current orientation as the interpolation parameter of the Lerp function.
Fusion processing here refers to an image blending (interpolation) operation.
In the above embodiment, the shading texture and the base map are fused using the shading effect parameter of the initial map in the current orientation to obtain a target shading map corresponding to the initial map in the current orientation of the virtual camera; the virtual object is then rendered according to the user's viewing angle, i.e. the current orientation of the virtual camera, the target shading map, and the initial map, so that the virtual object presents a shading effect in the current orientation, improving the presentation of the virtual object.
In an embodiment, fusing the shading texture and the base map by using the shading effect parameter of the initial map in the current orientation to obtain the target shading map corresponding to the initial map in the current orientation of the virtual camera may be implemented as follows:
adjusting the shading texture according to a preset color parameter and/or lightness parameter to obtain an adjusted shading texture;
fusing the adjusted shading texture and the base map by using the shading effect parameter of the initial map in the current orientation, to obtain the target shading map corresponding to the initial map in the current orientation of the virtual camera.
The color parameter and/or lightness parameter may be the same as, or different from, the color parameter and/or lightness parameter of the base pattern in the initial map.
The color parameter and/or lightness parameter may be set by the user.
Specifically, as shown in fig. 11, the adjusted shading texture may be obtained by multiplying the shading texture data by a vector of color parameters, for example a four-dimensional RGBA vector, and then multiplying by the lightness parameter value.
The base map and the adjusted shading texture are then fused, for example with a Lerp function, with the shading effect parameter in the current orientation as the interpolation parameter of the Lerp function.
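A minimal Python sketch of this step is given below; it tints the shading texture with a color and a lightness scalar and then Lerp-blends it with the base map, using the shading effect parameter as the interpolation weight. The three-channel tint, the function names, and the broadcasting details are assumptions made for illustration.

import numpy as np

def lerp(a, b, t):
    """Lerp(a, b, t) = a * (1 - t) + b * t, as in the engine's material editor."""
    return a * (1.0 - t) + b * t

def target_shading_map(base_rgb, shading, color, lightness, effect):
    """base_rgb: (H, W, 3) base map; shading: (H, W) gray-scale texture;
    color: length-3 ink tint; lightness: scalar multiplier;
    effect: shading effect parameter in [0, 1], scalar or (H, W) array."""
    # Adjusted shading texture: gray value * color vector * lightness.
    adjusted = shading[..., None] * np.asarray(color, np.float32) * lightness
    t = np.asarray(effect, np.float32)
    if t.ndim == 2:
        t = t[..., None]            # broadcast the weight over the color axis
    # Blend the base map toward the adjusted shading texture.
    return lerp(np.asarray(base_rgb, np.float32), adjusted, t)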
In an embodiment, step 104 may be specifically implemented as follows:
fusing the target shading map and the initial map according to the current orientation to obtain a target map;
and rendering the virtual object according to the target map.
Specifically, the initial map and the target shading map are fused using the shading effect parameter of the current orientation, to obtain the target map of the initial map in the current orientation.
The fusion processing, i.e. interpolation, yields the target map of the initial map in the current orientation.
As shown in fig. 11, for example, the fusion is performed with a Lerp function, with the shading effect parameter in the current orientation as the interpolation parameter. Whether the initial map or the target shading map is displayed, and to what extent the target shading map is displayed, is thus controlled by the Lerp function.
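For completeness, this second fusion step of FIG. 11 can be sketched in the same way: the initial map and the target shading map are Lerp-blended with the orientation-dependent shading effect parameter as the weight, so that a weight near 0 shows the original pattern and a weight near 1 shows the ink shading. Function names and shapes are again illustrative assumptions.

import numpy as np

def lerp(a, b, t):
    return a * (1.0 - t) + b * t

def target_map(initial_rgb, target_shading_rgb, effect):
    """initial_rgb, target_shading_rgb: (H, W, 3); effect: scalar or (H, W).
    Returns the per-pixel blend displayed for the current camera orientation."""
    t = np.asarray(effect, np.float32)
    if t.ndim == 2:
        t = t[..., None]
    return lerp(np.asarray(initial_rgb, np.float32),
                np.asarray(target_shading_rgb, np.float32), t)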
According to the target map of the initial map in the current orientation, the shading effect of the initial map of the virtual object in the current orientation of the virtual camera is displayed on the graphical user interface when the virtual object is presented to the user.
In the above embodiment, a dynamic alternation between the initial map and the target map with the shading effect is realized, and the pattern details on the virtual character's clothing differ under different orientations of the virtual camera, i.e. different viewing angles of the user, achieving the effect that the original pattern and the ink shading change dynamically with the user's viewing angle.
In one embodiment, the shading effect parameter for the current orientation may be obtained as follows:
adding the normal texture corresponding to the shading texture and the vertex normal of the virtual object to obtain the target normal vector;
performing a dot product of the current orientation and the target normal vector to obtain the shading effect parameter corresponding to the current orientation.
Specifically, as shown in fig. 12, the vector of the normal texture of the shading texture and the vertex normal vector of the virtual object are added and normalized to obtain the target normal vector, and the direction vector corresponding to the current orientation is then dotted with the target normal vector to obtain the shading effect parameter of the initial map in the current orientation.
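The computation of FIG. 12 can be written compactly as follows; clamping the dot product to [0, 1] so that it can serve directly as a Lerp weight is an assumption made for this sketch.

import numpy as np

def shading_effect_parameter(camera_dir, vertex_normal, normal_texture):
    """camera_dir: (3,) unit vector of the current camera orientation;
    vertex_normal, normal_texture: (..., 3) vertex normals of the object and
    normal vectors derived from the shading texture (used as a normal map)."""
    # Add the two normals and re-normalize to obtain the target normal vector.
    target = np.asarray(vertex_normal, np.float32) + np.asarray(normal_texture, np.float32)
    target /= np.linalg.norm(target, axis=-1, keepdims=True)
    # Dot product with the viewing direction gives the shading effect parameter.
    param = np.sum(target * np.asarray(camera_dir, np.float32), axis=-1)
    return np.clip(param, 0.0, 1.0)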
In the above embodiment, the shading effect parameter that governs the change of the shading effect is obtained from the current orientation of the virtual camera and the target normal of the shading texture, where the target normal is computed from the texture normal and the vertex normal; in this way, the clothing pattern of the virtual object varies as the viewing angle varies.
In one embodiment, in response to an adjustment operation on the virtual camera, the adjusted current orientation of the virtual camera is obtained;
a target shading map corresponding to the initial map in the adjusted current orientation of the virtual camera is acquired;
and the virtual object is rendered according to the adjusted current orientation, the target shading map, and the initial map.
In an embodiment of the present application, the material editor of the Unreal Engine may be used to author the material that produces the target shading maps of the initial map in different orientations of the virtual camera. Based on the user's viewing angle, i.e. the orientation of the virtual camera, the pattern of the initial map changes as the viewing angle changes, and the ink-shading effect also changes with the viewing angle.
In the above embodiment, as the user's viewing angle, i.e. the current orientation, differs, the clothing pattern of the virtual object presents different shading effects; that is, the pattern of the virtual character changes dynamically in different orientations and has an ink-shading effect, improving the clothing pattern-change effect.
In an embodiment, the method further includes:
responding to a clothing change operation on the virtual object, and obtaining an initial map of the virtual object after the clothing change;
acquiring a target shading map corresponding to the initial map after the clothing change in the current orientation of the virtual camera;
and rendering the virtual object according to the current orientation, the target shading map, and the initial map after the clothing change.
Specifically, the user may also change the clothing of the virtual object, for example its pattern or color, by performing a clothing change operation. For example, an operation control is provided on the graphical user interface; the user operates the control, and in response to the clothing change operation on the virtual object, the initial map of the virtual object after the clothing change is obtained, for example by changing the base pattern in the initial map. The pattern details may take any form, such as a change of the details of the base pattern, or simply a change of the color parameter and/or lightness parameter.
The changed base pattern may be customized by the user or selected by the user from patterns provided by the system.
A target shading map corresponding to the initial map after the clothing change is acquired in the current orientation of the virtual camera; the generation process of this target shading map is similar to that in the previous embodiment and is not repeated here. The base map corresponding to the initial map after the clothing change is unchanged; only the new base pattern needs shading processing, and the resulting shading texture is fused with the initial map and the base map.
The virtual object is then rendered according to the current orientation, the target shading map, and the initial map after the clothing change, and the shading effect of the virtual object in that orientation is displayed on the graphical user interface.
In the above embodiment, dynamic change of the pattern on the virtual character's clothing is realized. A new clothing pattern can be selected according to actual requirements; only the new pattern needs shading processing, and the initial map and the shading texture obtained from the shading processing are fused, without complex steps such as model replacement, which greatly reduces production cost and complexity.
In an embodiment, changes may also be made to the overall apparel of the virtual object, such as by model replacement, or to portions of the apparel of the virtual object, such as by way of a replacement map.
Fig. 13 is a structural diagram of an embodiment of an image processing apparatus provided in the present application. As shown in fig. 13, the apparatus of this embodiment is applied to a terminal device; a graphical user interface is provided through the terminal device, the graphical user interface includes a scene image captured by a virtual camera, and the scene image includes at least one virtual object. The image processing apparatus includes:
an obtaining module 141, configured to obtain an initial map for the virtual object, where the initial map has a basic pattern;
the obtaining module 141 is further configured to obtain a current orientation of the virtual camera;
a processing module 142, configured to acquire a target shading map corresponding to the initial map in the current orientation of the virtual camera, where the target shading map is a map obtained by performing shading processing on the initial map in the current orientation;
the processing module 142 is further configured to render the virtual object according to the current orientation, the target shading map, and the initial map.
In a possible implementation manner, the processing module 142 is specifically configured to:
performing shading processing on the base pattern in the initial map to obtain a shading texture;
acquiring a shading effect parameter of the initial map in the current orientation according to the current orientation and a target normal vector, where the target normal vector is obtained from the shading texture and the vertex normals of the virtual object;
fusing the shading texture and the base map by using the shading effect parameter of the initial map in the current orientation, to obtain the target shading map corresponding to the initial map in the current orientation of the virtual camera, where the base map is a map obtained by removing the base pattern from the initial map.
In one possible implementation, the lightness of the shading texture changes gradually from the center of the shading texture toward its periphery.
In one possible implementation, the shading texture is a black-and-white map.
In a possible implementation manner, the processing module 142 is specifically configured to:
adjusting the shading texture according to a preset color parameter and/or lightness parameter to obtain an adjusted shading texture;
and fusing the adjusted shading texture and the base map by using the shading effect parameter of the initial map in the current orientation, to obtain the target shading map corresponding to the initial map in the current orientation of the virtual camera.
In a possible implementation manner, the processing module 142 is specifically configured to:
adding the normal texture corresponding to the shading texture and the vertex normal of the virtual object to obtain the target normal vector;
and performing a dot product of the current orientation and the target normal vector to obtain the shading effect parameter corresponding to the current orientation.
In a possible implementation manner, the processing module 142 is specifically configured to:
fusing the target shading map and the initial map according to the current orientation to obtain a target map;
and rendering the virtual object according to the target map.
The apparatus of this embodiment may be configured to implement the technical solutions of the above method embodiments, and the implementation principles and technical effects are similar, which are not described herein again.
Fig. 14 is a block diagram of an embodiment of an electronic device provided in the present application, and as shown in fig. 14, the electronic device includes:
a processor 151, and a memory 152 for storing executable instructions of the processor 151.
Optionally, the method may further include: a display 153 for displaying a graphical user interface.
The above components may communicate over one or more buses.
The processor 151 is configured to execute the corresponding method in the foregoing method embodiments by executing the executable instructions; for the specific implementation process, reference may be made to the foregoing method embodiments, which are not repeated here.
The embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the method in the foregoing method embodiment is implemented.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An image processing method, characterized in that a graphical user interface is provided through a terminal device, the graphical user interface comprises a scene image captured by a virtual camera, and the scene image comprises at least one virtual object, the method comprising the following steps:
obtaining an initial map for the virtual object, the initial map having a base pattern;
acquiring the current orientation of the virtual camera;
acquiring a target shading map corresponding to the initial map in the current orientation of the virtual camera, wherein the target shading map is a map obtained by performing shading processing on the initial map in the current orientation;
rendering the virtual object according to the current orientation, the target shading map and the initial map.
2. The method of claim 1, wherein the acquiring a target shading map corresponding to the initial map in the current orientation of the virtual camera comprises:
performing shading processing on the base pattern in the initial map to obtain a shading texture;
acquiring a shading effect parameter of the initial map in the current orientation according to the current orientation and a target normal vector, wherein the target normal vector is obtained from the shading texture and the vertex normals of the virtual object;
fusing the shading texture and a base map by using the shading effect parameter of the initial map in the current orientation, to obtain the target shading map corresponding to the initial map in the current orientation of the virtual camera, wherein the base map is a map obtained by removing the base pattern from the initial map.
3. The method of claim 2, wherein the lightness of the shading texture changes gradually from the center of the shading texture toward its periphery.
4. The method of claim 2 or 3, wherein the shading texture is a black-and-white map.
5. The method of claim 2 or 3, wherein the fusing the shading texture and the base map by using the shading effect parameter of the initial map in the current orientation to obtain the target shading map corresponding to the initial map in the current orientation of the virtual camera comprises:
adjusting the shading texture according to a preset color parameter and/or lightness parameter to obtain an adjusted shading texture;
and fusing the adjusted shading texture and the base map by using the shading effect parameter of the initial map in the current orientation, to obtain the target shading map corresponding to the initial map in the current orientation of the virtual camera.
6. The method of claim 2 or 3, wherein the acquiring a shading effect parameter of the initial map in the current orientation according to the current orientation and a target normal vector comprises:
adding the normal texture corresponding to the shading texture and the vertex normal of the virtual object to obtain the target normal vector;
and performing a dot product of the current orientation and the target normal vector to obtain the shading effect parameter corresponding to the current orientation.
7. The method of any of claims 1-3, wherein rendering the virtual object according to the current orientation, the target shading map, and the initial map comprises:
fusing the target shading map and the initial map according to the current orientation to obtain a target map;
and rendering the virtual object according to the target map.
8. An image processing apparatus applied to a terminal device, wherein a graphical user interface is provided through the terminal device, the graphical user interface comprises a scene image captured by a virtual camera, and the scene image comprises at least one virtual object, the apparatus comprising:
an obtaining module, configured to obtain an initial map for the virtual object, where the initial map has a basic pattern;
the acquisition module is further used for acquiring the current orientation of the virtual camera;
acquiring a target shading map corresponding to the initial map in the current orientation of the virtual camera, wherein the target shading map is a map obtained by performing shading processing on the initial map in the current orientation;
and a processing module, configured to render the virtual object according to the current orientation, the target shading map, and the initial map.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1 to 7.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1-7 via execution of the executable instructions.
CN202011004924.0A 2020-09-22 2020-09-22 Image processing method, device, equipment and storage medium Pending CN112138387A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011004924.0A CN112138387A (en) 2020-09-22 2020-09-22 Image processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112138387A true CN112138387A (en) 2020-12-29

Family

ID=73897531

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011004924.0A Pending CN112138387A (en) 2020-09-22 2020-09-22 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112138387A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050023004A (en) * 2003-08-28 2005-03-09 인테크놀러지(주) Apparatus and Method for simulating Oriental Painting
CN105205846A (en) * 2015-07-24 2015-12-30 江苏音图文化发展有限公司 Water-and-ink animation production method
CN108616731A (en) * 2016-12-30 2018-10-02 艾迪普(北京)文化科技股份有限公司 360 degree of VR panoramic images images of one kind and video Real-time Generation
CN109939440A (en) * 2019-04-17 2019-06-28 网易(杭州)网络有限公司 Generation method, device, processor and the terminal of 3d gaming map
CN111127624A (en) * 2019-12-27 2020-05-08 珠海金山网络游戏科技有限公司 Illumination rendering method and device based on AR scene
CN111292405A (en) * 2020-02-06 2020-06-16 腾讯科技(深圳)有限公司 Image rendering method and related device
CN111402385A (en) * 2020-03-26 2020-07-10 网易(杭州)网络有限公司 Model processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination