CN110599577B - Method, device, equipment and medium for rendering skin of virtual character - Google Patents


Info

Publication number: CN110599577B (grant of application CN201910900572.8A; application published as CN110599577A)
Application number: CN201910900572.8A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: skin, rendering, sampling, dimensional, virtual character
Inventor: 刘杰
Assignee (original and current): Tencent Technology (Shenzhen) Co Ltd
Legal status: Active (granted)

Classifications

    • G06T 3/06
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/20 Indexing scheme for editing of 3D models
    • G06T 2219/2012 Colour editing, changing, or manipulating; Use of colour codes

Abstract

The invention belongs to the technical field of image processing, relates mainly to computer vision technology in artificial intelligence, and discloses a method, device, equipment and medium for rendering the skin of a virtual character, in which the sampling points around each target pixel are distributed along a preset spiral array rather than a cross. The "+"-shaped aliasing problem is thereby avoided, and the number of sampling points, and hence the time cost, is reduced while the skin of the virtual character retains the visual characteristics of real human skin.

Description

Method, device, equipment and medium for rendering skin of virtual character
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, a device, and a medium for rendering skin of a virtual character.
Background
In the field of computer graphics, skin rendering is an important topic, with important applications in fields such as medicine, film, and games.
When rendering the skin of a virtual character, the skin color is generally determined from the virtual character's skin material and illumination information, and rendering is performed with the determined skin color. The skin material is a description model that characterizes the illumination properties of the skin.
Specifically, during skin rendering, a two-pass (TwoPass) sampling mode is usually adopted: the skin is sampled to obtain sampling points, and the skin color is determined from the illumination distribution information of each sampling point. TwoPass sampling samples along two directions, one pass per direction.
However, the sampling points obtained with TwoPass sampling are distributed in a "+" (cross) shape, so the rendered skin of the virtual character may exhibit "+"-shaped aliasing.
Therefore, how to avoid aliasing when rendering the skin of a virtual character is a problem that urgently needs to be solved.
Disclosure of Invention
The embodiment of the invention provides a method, a device, equipment and a medium for skin rendering of a virtual character, which are used for avoiding aliasing when the skin rendering is carried out on the virtual character.
In one aspect, a method for rendering skin of a virtual character is provided, including:
acquiring a two-dimensional virtual portrait, in two-dimensional space, of a virtual character whose skin is to be rendered;
sampling within a designated area around each target pixel point of the skin of the two-dimensional virtual portrait according to a preset spiral-array sampling mode, in which the acquired sampling points are distributed along a designated spiral array;
obtaining, for each target pixel point, the skin color of that pixel point from the acquired illumination distribution information of its sampling points using a preset skin rendering algorithm, the skin rendering algorithm converting the illumination distribution information of the sampling points into a skin color by means of a Gaussian function;
and rendering the skin of the virtual character according to the skin color of each target pixel point of the two-dimensional virtual portrait.
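The steps above can be sketched in code. The following is a minimal Python sketch, not the patent's GPU shader implementation: the function names, the golden-angle (Fibonacci-spiral) placement of samples, and the isotropic Gaussian are illustrative assumptions; the patent only specifies that samples follow a designated spiral array and that a Gaussian function converts illumination distribution information into skin color.

```python
import math

def spiral_samples(cx, cy, n_samples, radius):
    """Distribute n_samples points over a spiral array (here a
    golden-angle / Fibonacci spiral) inside the designated area of
    radius `radius` around the target pixel (cx, cy)."""
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))  # ~137.5 degrees
    pts = []
    for i in range(n_samples):
        r = radius * math.sqrt((i + 0.5) / n_samples)  # even area coverage
        theta = i * golden_angle
        pts.append((cx + r * math.cos(theta), cy + r * math.sin(theta)))
    return pts

def gaussian_weight(dx, dy, sigma):
    """Gaussian falloff used to turn a sample's illumination
    distribution information into a contribution to the pixel color."""
    return math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))
```

A shader would evaluate the Gaussian weight per sample and accumulate the weighted illumination; because the spiral covers the area in one pass, fewer samples are needed than the two passes of the conventional cross.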
In one aspect, an apparatus for skin rendering of a virtual character is provided, including:
the system comprises an acquisition unit, a rendering unit and a rendering unit, wherein the acquisition unit is used for acquiring a two-dimensional virtual portrait of a virtual character of skin to be rendered in a two-dimensional space;
the sampling unit is used for sampling in a specified area around each target pixel point of the skin of the two-dimensional virtual portrait according to a preset spiral array sampling mode, wherein the spiral array sampling mode is that acquired sampling points are distributed according to a specified spiral array;
the obtaining unit is used for obtaining the skin color of the corresponding target pixel point by adopting a preset skin rendering algorithm according to the obtained illumination distribution information of each sampling point of each target pixel point, and the skin rendering algorithm is used for converting the illumination distribution information of each sampling point into the skin color by adopting a Gaussian function;
and the rendering unit is used for performing skin rendering on the virtual character according to the skin color of each target pixel point of the two-dimensional virtual portrait.
Preferably, the physical scattering model is determined from the skin material parameters set correspondingly to the skin identification information, the skin material parameters being used to characterize the skin of the virtual character.
In one aspect, a control device is provided, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor executing the program to perform the steps of the method for skin rendering of any of the above-mentioned virtual characters.
In one aspect, a computer readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, performs the steps of any of the above-described methods of skin rendering of a virtual character.
In the method, device, equipment and medium for skin rendering of a virtual character provided by the embodiments of the invention, a two-dimensional virtual portrait, in two-dimensional space, of a virtual character whose skin is to be rendered is obtained. Sampling is performed within a designated area around each target pixel point of the skin of the two-dimensional virtual portrait according to a preset spiral-array sampling mode. The skin color of each target pixel point is obtained from the acquired illumination distribution information of its sampling points using a preset skin rendering algorithm, and the skin of the virtual character is rendered according to the skin color of each target pixel point of the two-dimensional virtual portrait. The "+"-shaped aliasing of the conventional mode is thereby avoided, fewer sampling points are needed than in the conventional mode, and the time cost is lower, while the skin of the virtual character retains the visual characteristics of real human skin.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is an application scenario of skin rendering of a virtual character according to an embodiment of the present invention;
FIG. 2 is a flowchart of an embodiment of a method for rendering a skin of a virtual character according to an embodiment of the present invention;
FIG. 3a is an exemplary illustration of a Fibonacci spiral curve in an embodiment of the invention;
FIG. 3b is an exemplary graph of an Archimedes spiral curve in an embodiment of the invention;
FIG. 3c is a schematic diagram of a skin rendering architecture according to an embodiment of the present invention;
FIG. 4a is a diagram illustrating the effect of traditional skin rendering of a virtual character according to an embodiment of the present invention;
FIG. 4b is a diagram illustrating the effect of the skin rendering of an avatar according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating a detailed implementation of the skin rendering of a virtual character according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an apparatus for rendering skin of a virtual character according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a control device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
First, some terms related to the embodiments of the present invention are explained to facilitate understanding by those skilled in the art.
Artificial Intelligence (AI): the theory, method, technology and application system of using a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, involving both hardware-level and software-level technologies. Basic artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big-data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technology mainly comprises computer vision, speech processing, natural language processing, and machine learning/deep learning.
Computer Vision technology (Computer Vision, CV): computer vision is the science of how to make a machine "see"; more specifically, using cameras and computers in place of human eyes to identify, track and measure targets, and to further process images so that they become more suitable for human observation or for transmission to instruments for detection. Theories and techniques of computer vision research attempt to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technologies, virtual reality, augmented reality, and simultaneous localization and mapping, and also include common biometric technologies such as face recognition and fingerprint recognition.
Control device: an electronic device, mobile or fixed, on which various applications can be installed and which can display the objects provided in the installed applications. For example, a mobile phone, tablet computer, various wearable devices, a vehicle-mounted device, a Personal Digital Assistant (PDA), a point-of-sale (POS) terminal, or other electronic devices capable of implementing the above functions.
Application: i.e., an application program, a computer program that performs one or more services and typically has a visual display interface for interacting with a user; electronic maps and WeChat, for example, are applications.
Three-dimensional (3 Dimensions, 3D): three dimensions, i.e., three coordinates: length, width, and height. In other words, it is the concept of space, the space spanned by the X, Y and Z axes; relative to a plane (2D), which has only length and width, and a line (1D), which has only length, 3D is stereoscopic, and still higher dimensions are denoted 4D+.
3D engine: a module that encapsulates various algorithms, including 3D graphics algorithms, and provides a convenient Software Development Kit (SDK) interface so that others can develop games on top of it. A 3D engine packages complex graphics algorithms stably and efficiently inside the module and exposes a simple, effective SDK interface; these SDKs are easy to learn and use, yet can fully satisfy the functional requirements of complex 3D games. A 3D engine typically also provides a powerful editor, including scene editing, model editing, animation editing and particle editing; artists in game development can greatly improve their working efficiency and quality with these tools. A 3D engine may further provide third-party plug-in components as well as network, database, and scripting functions.
SDK: a collection of development tools used by software engineers to create application software for a particular software package, software framework, hardware platform, operating system, etc.; in general usage, the term often refers to the SDK used to develop applications for the Windows platform. It may simply be some files providing an Application Programming Interface (API) for a certain programming language, but it may also include complex hardware that can communicate with a certain embedded system. Typical tools include utilities for debugging and other purposes. SDKs also often include example code, supporting technical notes, or other supporting documentation to resolve points of doubt in the basic reference material.
API: the call interface that an operating system exposes to application programs; by calling the API of the operating system, an application causes the operating system to execute its commands (actions).
Virtual character: a virtual portrait with a high degree of realism drawn by modeling and rendering technology. A virtual character is not limited to a human figure and may also be, for example, an animal.
Skin material: a description model used to describe the skin of a virtual character; it describes the illumination properties of the skin through the skin material parameters it contains.
Blockchain: a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. A blockchain (Blockchain) is essentially a decentralized database: a chain of data blocks linked by cryptographic methods, each block containing the information of a batch of network transactions, used to verify the validity of the information (anti-counterfeiting) and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product services layer, and an application services layer.
The blockchain underlying platform can comprise processing modules such as user management, basic services, smart contracts, and operation monitoring. The user management module is responsible for identity management of all blockchain participants, including maintenance of public/private key generation (account management), key management, and maintenance of the correspondence between users' real identities and blockchain addresses (authority management); with authorization, it supervises and audits the transactions of certain real identities and provides risk-control rule configuration (risk-control audit). The basic service module is deployed on all blockchain node devices to verify the validity of service requests and, after consensus on a valid request, record it to storage; for a new service request, the basic service first performs interface adaptation, analysis, and authentication (interface adaptation), then encrypts the service information via a consensus algorithm (consensus management), transmits it completely and consistently to the shared ledger (network communication), and records and stores it. The smart contract module is responsible for contract registration and issuance, contract triggering, and contract execution; developers can define contract logic in a programming language and publish it to the blockchain (contract registration), where execution is triggered by keys or other events according to the logic of the contract terms to complete the contract logic; the module also supports upgrading and canceling contracts. The operation monitoring module is mainly responsible for deployment, configuration modification, contract settings, and cloud adaptation during product release, as well as visual output of real-time status during product operation, such as alarms, monitoring of network conditions, and monitoring of node device health.
The platform product service layer provides basic capability and an implementation framework of typical application, and developers can complete block chain implementation of business logic based on the basic capability and the characteristics of the superposed business. The application service layer provides the application service based on the block chain scheme for the business participants to use.
Graphics Processing Unit (GPU): also called display core, visual processor, or display chip; a microprocessor dedicated to image computation on personal computers, workstations, game consoles, and some mobile devices (such as tablet computers and smartphones).
Pixel Shader (Shader): a small program that runs on the GPU. Such programs operate on one specific stage of the graphics rendering pipeline; pixel shaders replace the fixed-function rendering pipeline with an editable program, so the relevant computations in 3D graphics can be implemented and images rendered. Because they are editable, a wide variety of image effects can be achieved without being limited by the graphics card's fixed rendering pipeline.
Helix: a twisted curve like a spiral or a screw thread, a shape common in biology. Helices are divided into left-handed and right-handed. Looking along the axis from the center of the helix, if the helix runs counterclockwise from near to far, it is left-handed; otherwise it is right-handed. Most screws have right-handed threads, but both left-handed and right-handed helices are common in biological structures. Handedness can be judged by comparison: point the thumb of a clenched fist along the axis and imagine the helix winding around the axis in the direction of the four fingers; if the advancing direction of the helix matches the left thumb, it is a left-handed helix, and if it matches the right thumb, it is right-handed.
Number sequence: a function whose domain is the set of positive integers (or a finite subset thereof); that is, an ordered list of numbers. Each number in the sequence is called a term of the sequence.
Fibonacci spiral sequence, also known as the golden-section sequence. Mathematically, it is defined recursively: F0 = 1; F1 = 1; Fn = F(n-1) + F(n-2), where n is the index and Fn is the term with index n. A spiral curve drawn according to the Fibonacci sequence, the Fibonacci spiral, is also called the "golden spiral". Many Fibonacci-spiral patterns exist in nature; it is nature's classic golden ratio. The drawing rule: in a rectangle tiled with squares whose side lengths are Fibonacci numbers, draw a 90-degree arc in each square; the connected arcs form the Fibonacci spiral.
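As a quick check of the recurrence above (using the text's convention F0 = 1, F1 = 1), a short Python sketch; the function name is illustrative:

```python
def fibonacci(n):
    """Return the first n terms of the Fibonacci sequence with
    F0 = 1, F1 = 1, Fn = F(n-1) + F(n-2), as defined in the text."""
    seq = []
    a, b = 1, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b  # advance the recurrence
    return seq
```

The ratio of consecutive terms approaches the golden ratio, which is why the Fibonacci spiral is called the golden spiral.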
Archimedean spiral sequence: can be obtained from the Archimedean spiral (also known as the constant-speed spiral). The Archimedean spiral, named after the third-century BC Greek mathematician Archimedes, is the trajectory generated by a point that moves away from a fixed point at a constant speed while simultaneously rotating around that fixed point at a fixed angular velocity.
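The constant-speed definition can be sketched with the polar form r = a·θ; this is a minimal illustration, and the parameter values and function name are assumptions, not the patent's:

```python
import math

def archimedean_spiral(n_points, a=0.5, step=0.4):
    """Points on an Archimedean spiral r = a * theta: the radius grows
    at a constant rate per unit angle, matching the constant-speed,
    fixed-angular-velocity definition."""
    pts = []
    for i in range(n_points):
        theta = i * step          # angle advances uniformly
        r = a * theta             # radius grows uniformly with angle
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts
```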
The idea of an embodiment of the invention is presented below.
In the field of computer graphics, skin rendering is an important topic, with important applications in fields such as medicine, film, and games.
Taking the virtual characters in a game as an example, different skin materials are set for different game characters. When light strikes one point of the skin, it is scattered and refracted inside the skin and finally exits at another point. Therefore, when light illuminates the skin, the illumination distribution information at a point p on the skin can be determined from the external illumination information of the area around p and the physical scattering model corresponding to the skin material, and the skin color can then be determined from the illumination distribution information and a Gaussian function.
Here, the external illumination information is the light energy of the light striking the skin, and the illumination distribution information is the distribution of that light energy as the light is absorbed and scattered within the skin. The physical scattering model is determined by the skin material parameters of the skin material, so different skin materials correspond to different physical scattering models.
In the conventional art, the skin rendering is generally performed in the following manner:
the skin of the virtual character is sampled in the TwoPass mode to obtain sampling points along the x direction and sampling points along the y direction; according to the acquired illumination distribution information of the sampling points, n computations are performed independently for the x-direction points and then another n for the y-direction points, 2n computations in total, to determine the skin color.
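The conventional tap layout just described can be sketched as follows (the function name and integer offsets are illustrative assumptions); it makes visible why the 2n taps fall in a "+" shape:

```python
def two_pass_offsets(n):
    """Sample offsets produced by the conventional TwoPass scheme:
    n taps along the x axis, then n taps along the y axis, giving
    2n taps that form a '+' (cross) around the target pixel."""
    half = n // 2
    x_pass = [(i - half, 0) for i in range(n)]  # first pass: x direction
    y_pass = [(0, i - half) for i in range(n)]  # second pass: y direction
    return x_pass + y_pass
```

Every tap lies on one of the two axes, so any structure the filter misses between the arms shows up as the "+"-shaped aliasing the patent targets.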
However, in this way the obtained sampling points are distributed in a "+" shape, which may cause the rendered skin of the virtual character to exhibit "+"-shaped aliasing.
Obviously, the conventional technology offers no scheme that renders the skin of a virtual character with high fidelity while avoiding "+"-shaped artifacts. A technical solution for rendering the skin of a virtual character is therefore urgently needed, so that the "+"-shaped aliasing problem is avoided when the skin of the virtual character is rendered.
In the technical scheme of the invention, according to a preset spiral-sequence sampling mode, a designated area around each target pixel point of the skin of the two-dimensional virtual portrait of the virtual character in two-dimensional space is sampled to obtain the illumination distribution information of each sampling point of each target pixel point. For each target pixel point, a skin rendering algorithm determined by a Gaussian function converts the illumination distribution information of its sampling points into the skin color of that pixel point. Finally, the skin of the virtual character is rendered according to the skin color of each target pixel point of the two-dimensional virtual portrait.
To further illustrate the technical solutions provided by the embodiments of the present invention, a detailed description follows with reference to the accompanying drawings and specific embodiments. Although the embodiments provide method steps as shown below or in the figures, a method may include more or fewer steps based on conventional or non-inventive effort. For steps with no necessary logical causal relationship, the execution order is not limited to that provided by the embodiments; in an actual process or device, the steps may be executed sequentially or in parallel as shown in the embodiments or figures.
Referring to fig. 1, an application scenario of skin rendering of a virtual character is shown. The application scenario includes a plurality of control devices 110 and a server 130, and fig. 1 illustrates three control devices 110, and the number of control devices 110 is not limited in practice. The control device 110 has a virtual application 120 installed therein for skin rendering. In the embodiment of the present invention, the virtual application 120 is only used as an example of a game application. Virtual application 120 and server 130 may communicate over a communication network. The control device 110 is, for example, a mobile phone, a tablet computer, a personal computer, or the like. The server 130 may be implemented by a single server or may be implemented by a plurality of servers. The server 130 may be implemented by a physical server or may be implemented by a virtual server.
In one possible application scenario, servers 130 may be deployed in various regions to reduce communication latency for gaming, or different servers 130 may serve users separately for load balancing. Multiple servers 130 can share the data of each virtual character through a blockchain; these servers 130 constitute a data sharing system. For example, the control device 110 associated with user A is located at location a and communicatively coupled to one server 130, while the control devices 110 associated with users B and C are located at location b and communicatively coupled to other servers 130.
Each server 130 in the data sharing system has a node identifier corresponding to the server 130, and each server 130 in the data sharing system may store node identifiers of other servers in the data sharing system, so that the generated block is broadcast to other servers 130 in the data sharing system according to the node identifiers of other servers 130. Each server 130 may maintain a node identifier list as shown in the following table, and store the name of the server 130 and the node identifier in the node identifier list. The node identifier may be an IP (Internet Protocol) address and any other information that can be used to identify the node, and table 1 only illustrates the IP address as an example.
Table 1.
Server name    Node identification
Node 1         119.115.151.174
Node 2         118.116.189.145
Node N         119.123.789.258
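A toy sketch of how such a node-identifier list could be used to broadcast a generated block to the other servers in the data sharing system; the function, its signature, and the data are illustrative assumptions, as the patent specifies no API:

```python
def broadcast_block(node_table, sender, block):
    """Given a node-identifier list (as in Table 1), return the
    (name, ip, block) deliveries to every node except the sender."""
    return [(name, ip, block)
            for name, ip in node_table.items()
            if name != sender]
```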
In an embodiment, an administrator writes rendering data in a process of rendering the skin of a virtual character into a data sharing system through a certain server 130, each rendering data is stored in the data sharing system, and any server 130 can obtain rendering data information through a block of a block chain and send the determined rendering data to the virtual application 120.
Optionally, the rendering data includes any one or a combination of the following parameters: the illumination distribution information of each sampling point and the skin color of each sampling point.
The embodiment of the present invention is described taking the control device 110 as a user's game terminal and the virtual application 120 as a game application, with the skin rendering of a virtual character in a 3D game as the example.
The game application in the game terminal calls the SDK interface provided by the 3D engine and, by means of a pixel shader in the GPU, maps a specified three-dimensional virtual portrait from three-dimensional space to two-dimensional space to obtain a two-dimensional virtual portrait. A designated area around each target pixel point of the skin of the two-dimensional virtual portrait is sampled in the spiral-sequence sampling mode, and the skin color of each target pixel point is determined from the illumination distribution information of its sampling points using the preset skin rendering algorithm. Finally, the skin color of each corresponding pixel point of the skin of the three-dimensional virtual portrait is determined from the skin colors of the target pixel points of the two-dimensional virtual portrait and the mapping relation between the three-dimensional and two-dimensional virtual portraits, and the skin of the three-dimensional virtual portrait is rendered accordingly.
It should be noted that the 3D engine is a module that encapsulates various 3D graphics algorithms and provides an SDK interface so that users (e.g., developers) can develop games on this basis. The SDK interface provided by the 3D engine can call the graphics algorithms encapsulated by the engine to meet various complex 3D game function requirements. The GPU is a microprocessor in the control device dedicated to image operations, and the pixel shader is a small program that runs on the GPU and performs the relevant calculations in 3D graphics computation, thereby implementing image rendering.
Therefore, when the game application runs, it calls the SDK interface provided by the 3D engine to invoke the skin-rendering graphics algorithms encapsulated by the engine, and implements skin rendering in combination with a pixel shader in the GPU.
The method for rendering the skin of a virtual character can be applied to fields such as medical treatment, movies, and games, and can be executed by a terminal suited to those fields or scenes. The virtual character is a highly realistic virtual portrait drawn by modeling and rendering technology. The virtual character is not limited to a human character and may also be, for example, an animal character. The virtual character may be a three-dimensional virtual representation or a two-dimensional virtual representation. For example, a virtual character may be a virtual human representation drawn by a 3D engine via 3D graphical modeling and rendering techniques.
Referring to fig. 2, which is a flowchart of an implementation of a method for rendering the skin of a virtual character according to the present invention, applied to a terminal, the specific process of the method is as follows:
step 200: the virtual application obtains a two-dimensional virtual representation of a virtual character of the skin to be rendered in a two-dimensional space.
Specifically, if the virtual character whose skin is to be rendered is a three-dimensional virtual portrait in three-dimensional space, the three-dimensional virtual portrait is mapped from three-dimensional space to two-dimensional space to obtain the corresponding two-dimensional virtual portrait. For example, the three-dimensional virtual representation of a 3D game character is mapped from three-dimensional space to two-dimensional space to obtain a corresponding two-dimensional virtual representation.
This is because, when rendering the skin of a virtual character, the skin color is usually determined according to the skin material of the virtual character and the lighting of the application scene in which the virtual character is located. The skin material is a description model that describes the illumination characteristics of the skin of the virtual character and can be read, processed, and calculated by an engine. The description model describes the illumination characteristics of the skin through the skin material parameters it contains. In this description model, the skin is defined as a semi-transparent homogeneous medium in a two-dimensional plane. Therefore, it is necessary to acquire a two-dimensional virtual representation of the virtual character in two-dimensional space.
The three-dimensional virtual portrait is a three-dimensional stereoscopic portrait of a virtual character of the skin to be rendered in a three-dimensional space, and the two-dimensional virtual portrait is a two-dimensional plane portrait of the virtual character in a two-dimensional space.
The virtual character is a virtual portrait with high reality drawn by a modeling rendering technology. The virtual character is not limited to a human character, and may be a character such as an animal.
The two-dimensional space is a planar space consisting of only the two elements of length and width (in geometry, the X axis and the Y axis) and extends only within the plane. Two-dimensional space is also a term used in art; for example, painting represents three-dimensional space by means of a two-dimensional surface.

Three-dimensional space refers to the space formed by the three axes X, Y, and Z. Relative to a plane (2D), which has only length and width, and a line (1D), which has only length, there are also higher dimensions (4D+). The three dimensions correspond to the three coordinates of length, width, and height; in other words, three-dimensional space is stereoscopic.

The three-dimensional virtual representation displays 3D graphics in the computer, that is, three-dimensional figures shown on a plane. Unlike the real world, which has true spatial depth, the computer merely makes the display look like the real world, so that the 3D graphics shown on screen appear real to the human eye. The human eye perceives near objects as large and far objects as small, which produces the sense of depth. A computer screen is flat and two-dimensional, yet people can perceive a displayed image as a three-dimensional object, because differences in color and gray level in the displayed image create a visual illusion of depth. According to colorimetry, the raised edges of a three-dimensional object generally appear in bright colors, while recessed parts appear dark because they are shielded from light. This knowledge is widely applied to the drawing of buttons and 3D lines in web pages and other applications. For example, 3D text can be drawn by displaying the text in a bright color at its original position and outlining it in a dark color at the lower-left or upper-right position, which visually produces a 3D effect. In a concrete implementation, two 2D texts of different colors can be drawn at different positions using the same font; as long as the coordinates of the two texts are suitable, a convincing 3D text effect can be produced visually.
Therefore, the two-dimensional virtual image of the virtual character of the skin to be rendered can be obtained, so that the skin color of the two-dimensional virtual image can be determined according to the skin material of the virtual character in the subsequent steps.
Step 201: the virtual application samples in a designated area around each target pixel point of the skin of the two-dimensional virtual representation according to a preset spiral array sampling mode.
Specifically, the virtual application executes the following steps for each target pixel point of the skin of the two-dimensional virtual representation:
and sampling the designated area around the target pixel point by adopting a preset spiral array sampling mode to obtain each sampling point corresponding to the target pixel point.
That is, each target pixel point in the skin of the two-dimensional virtual portrait is obtained, and sampling is performed according to spiral array distribution on the periphery of each target pixel point, so as to obtain a sampling point corresponding to each target pixel point.
In the embodiment of the invention, the spiral number sequence adopted by the spiral number sequence sampling mode is stored in advance, which avoids dynamic calculation during sampling and reduces the time cost. The number of sampling points corresponding to one target pixel point is a designated number. In practical application, the designated area and the designated number may be set according to the actual application scenario and are not limited here. The designated number and the designated area can be configured at the program development stage, according to the developer's requirements, by means of a configuration file. For example, the designated area may be a circular area centered on the target pixel point with a radius of 1 cm, and the designated number may be 20.
Alternatively, the spiral sequence may be stored in a program code or a database in the form of a function or a data set. Thus, when the program runs, the spiral number sequence can be directly obtained, or the spiral number sequence can be obtained through a database.
A number sequence is a function whose domain is the set of positive integers (or a finite subset thereof), that is, an ordered list of numbers. Each number in the sequence is called a term of the sequence. A spiral is a curve that twists like a helix or a screw thread. The spiral number sequence sampling mode is a sampling mode in which all acquired sampling points are distributed according to a specified spiral number sequence.

The spiral number sequence is a function whose domain is the set of positive integers, an ordered list of numbers in which each number is called a term of the spiral sequence; the curve formed by connecting the terms (i.e., the sampling points) is a spiral curve. Different spiral sequences yield different sampling-point distributions, and the function of the spiral sequence can be set according to the practical application scenario without limitation here. Developers can flexibly configure the function of the spiral sequence according to actual requirements and customize the distribution of sampling points.
Optionally, the spiral sequence may be any one or combination of the following: fibonacci helix number series, and archimedean helix number series.
When the spiral number is a combination of a fibonacci spiral number series and an archimedean spiral number series, the following method can be adopted:
and acquiring m sampling points of the target pixel point through the Fibonacci spiral number sequence, acquiring n sampling points of the target pixel point through the Archimedes spiral number sequence, and taking the m sampling points and the n sampling points as each sampling point corresponding to the target pixel point.
Wherein m and n are numerical values, and both m and n are positive integers.
For example, referring to fig. 3a, an exemplary graph of a fibonacci spiral curve and to fig. 3b, an exemplary graph of an archimedes spiral curve. The virtual application obtains each sampling point of a target pixel point through a Fibonacci spiral number series sampling mode, and a spiral curve shown in fig. 3a can be drawn according to each obtained sampling point. The virtual application obtains each sampling point of a target pixel by an archimedes spiral array sampling mode, and a spiral curve shown in fig. 3b can be drawn according to each obtained sampling point.
It should be noted that the Fibonacci spiral number sequence is mathematically defined in a recursive manner: F0 = 1; F1 = 1; Fn = F(n-1) + F(n-2). Wherein n is the designated number of the sampling points, and Fn is the term of the number sequence corresponding to n.
The archimedean spiral series is obtained by an archimedean spiral curve. An archimedean spiral is a trajectory of a point rotating around a fixed point at a constant angular velocity while leaving the fixed point at a constant velocity.
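As a rough illustration of how such spiral sampling might be precomputed, the sketch below generates sample offsets on a golden-angle (Fibonacci) spiral and on an Archimedean spiral; the function names, radii, and the m + n split are assumptions for illustration only, not the patented implementation.

```python
import math

def fibonacci_spiral_samples(n, radius):
    """Sample offsets on a Fibonacci (golden-angle) spiral disc."""
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))  # ~2.39996 rad
    points = []
    for i in range(n):
        r = radius * math.sqrt((i + 0.5) / n)  # sqrt spacing covers area evenly
        theta = i * golden_angle
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

def archimedean_spiral_samples(n, radius, turns=3.0):
    """Sample offsets along an Archimedean spiral r = a * theta."""
    points = []
    for i in range(n):
        theta = turns * 2.0 * math.pi * (i + 1) / n
        r = radius * theta / (turns * 2.0 * math.pi)  # radius grows linearly
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Precompute once and store (as the embodiment suggests), then reuse at render
# time; here m = 13 Fibonacci samples combined with n = 7 Archimedean samples.
offsets = fibonacci_spiral_samples(13, 1.0) + archimedean_spiral_samples(7, 1.0)
```

Because the offsets are fixed for a given designated number and designated area, they can be baked into a configuration file or shader constant table instead of being computed per pixel.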
Therefore, when each target pixel point is sampled, the obtained sampling points are distributed spirally, and the problem that the sampling points are distributed in a shape like a plus sign in the traditional technology is solved.
Step 202: and the virtual application obtains the skin color of the corresponding target pixel point by adopting a preset skin rendering algorithm according to the obtained illumination distribution information of each sampling point of each target pixel point.
Specifically, when step 202 is executed, the following steps are executed for each target pixel point:
and adopting a preset skin rendering algorithm to convert the illumination distribution information of each sampling point of the target pixel point into the skin color of the target pixel point.
The skin rendering algorithm converts the illumination distribution information of each sampling point into a skin color by means of a Gaussian function. The illumination distribution information is the energy distribution inside the skin after light enters the object through the skin surface and is internally scattered. The Gaussian function is widely used: in statistics to express the normal distribution, in signal processing to define Gaussian filters, and in image processing for Gaussian blur. The illumination distribution information may be the illumination intensity, which is the energy of visible light received per unit area; its unit is lux (lx). It is a physical term used to indicate the intensity of illumination, i.e., the amount of light to which the surface of an object is exposed.
Therefore, the process of reflecting and refracting light rays between different layers of the skin can be simulated, so that the skin of the virtual character has visual characteristics close to the real human skin.
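One possible sketch of such a Gaussian-based conversion is shown below; it blends each sampling point's illumination with a Gaussian weight that falls off with distance from the target pixel point. The function names and the variance value are illustrative assumptions, not the patented algorithm.

```python
import math

def gaussian_weight(dist, variance):
    """2D Gaussian kernel value at the given distance."""
    return math.exp(-dist * dist / (2.0 * variance)) / (2.0 * math.pi * variance)

def shade_pixel(sample_offsets, sample_irradiance, variance=0.25):
    """Blend per-sample irradiance (RGB) into one normalized skin color."""
    total_w = 0.0
    color = [0.0, 0.0, 0.0]
    for (dx, dy), irr in zip(sample_offsets, sample_irradiance):
        w = gaussian_weight(math.hypot(dx, dy), variance)
        total_w += w
        for c in range(3):
            color[c] += w * irr[c]
    # Normalizing by the total weight keeps overall brightness unchanged.
    return [c / total_w for c in color]
```

Because the weights are normalized, a uniformly lit neighborhood reproduces its own color, and the Gaussian falloff softens color transitions in the way subsurface scattering softens real skin.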
Before step 202 is executed, the virtual application obtains the illumination distribution information of each sampling point of each target pixel point in advance.
Wherein, aiming at each sampling point, the following steps are executed:
s2021: and acquiring the external illumination information received in a preset sampling area around the sampling point.
The external illumination information is determined according to the application scene where the virtual role is located, and corresponding external illumination information is configured in different application scenes. In practical applications, the preset sampling area may be set according to practical application scenarios, for example, 1 × 1 cm, and is not limited herein.
For example, if the virtual character is a game character, the game character moves in various game scenes, and in order to improve the reality of the game scenes, corresponding external illumination information is configured in advance in different game scenes.
It should be noted that the external illumination information is set for an application scene in the virtual application by simulating an illumination phenomenon in a natural sun-illuminated area by the virtual application. For example, the ambient light intensity during the day may be stronger than the ambient light intensity at night.
S2022: and acquiring a scattering physical model correspondingly set by the skin identification information according to the acquired skin identification information of the virtual character.
Before S2022 is executed, each skin material is preset, corresponding skin identification information is set for each skin material, and a corresponding scattering physics model is set for each skin material (or skin identification information).
The skin material is a description model for describing the skin illumination characteristics of the virtual character, and the description model describes the skin illumination characteristics through the contained skin material parameters. With this descriptive model, the skin is defined as a semi-transparent homogeneous medium in a two-dimensional plane. The skin texture parameter is used to represent the skin characteristics of the virtual character.
The scattering physical model is determined according to the skin material parameters corresponding to the skin identification information (or the skin material) and is used for determining the illumination distribution information of the skin after the skin is irradiated by the external light.
In one embodiment, the skin material may be a subsurface reflective material. The sub-surface reflection material is a description model of the skin of the high-fidelity virtual character.
S2023: and acquiring the illumination distribution information of the sampling point by adopting a scattering physical model according to the external illumination information in a preset sampling area around the sampling point.
That is to say, the external illumination information in the preset sampling region around the sampling point is converted into the illumination distribution information of the sampling point through the scattering physical model.
Therefore, the corresponding skin color can be determined according to the skin material of the virtual character and the illumination distribution information.
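A minimal sketch of such a scattering physical model is given below as a per-material sum-of-Gaussians profile looked up by skin identification information. The registry layout, the identification key, and all numeric weights are purely illustrative assumptions; real profiles are typically fitted per skin material.

```python
import math

# Hypothetical registry: skin identification information -> scattering profile,
# stored as (variance, per-channel weight) pairs. Values are illustrative only.
SCATTER_MODELS = {
    "C": [
        (0.0064, (0.233, 0.455, 0.649)),
        (0.0484, (0.100, 0.336, 0.344)),
        (0.1870, (0.118, 0.198, 0.000)),
    ],
}

def scattered_irradiance(skin_id, external_light, dist):
    """Convert external illumination into an illumination distribution value
    at the given distance, using the material's scattering profile."""
    profile = SCATTER_MODELS[skin_id]
    out = [0.0, 0.0, 0.0]
    for variance, weights in profile:
        g = math.exp(-dist * dist / (2.0 * variance))  # Gaussian falloff
        for c in range(3):
            out[c] += weights[c] * g * external_light[c]
    return out
```

The red channel's weight persisting at wider variances mimics the way red light scatters farthest in real skin, which is what produces the soft reddish falloff near shadow edges.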
Step 203: and the virtual application performs skin rendering on the virtual character according to the skin color of each target pixel point of the two-dimensional virtual image and outputs the virtual character after the skin rendering.
Specifically, when step 203 is executed, the following two ways may be adopted:
the first mode is as follows: and if the virtual character is a two-dimensional virtual image in a two-dimensional space, performing skin rendering on the skin of the two-dimensional virtual image according to the skin color of each target pixel point of the two-dimensional virtual image.
That is, the skin of the two-dimensional virtual image is rendered to the determined skin color.
The second way is: and if the virtual character is a three-dimensional virtual portrait in a three-dimensional space, performing skin rendering on the three-dimensional virtual portrait according to the skin color of each target pixel point of the two-dimensional virtual portrait of the virtual character and the mapping relation between the three-dimensional virtual portrait and the two-dimensional virtual portrait.
That is, the skin color of the two-dimensional virtual image is converted into the skin color of the three-dimensional virtual image.
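Conceptually, this conversion is a lookup through the mesh's UV mapping: each 3D vertex takes the skin color computed at its 2D coordinate. The sketch below is an illustrative assumption about the data layout (dict-based maps), not the engine's actual representation.

```python
def colors_2d_to_3d(uv_map, skin_colors_2d):
    """Map per-(u, v) skin colors back onto 3D vertices via the UV map.

    uv_map:          vertex id -> (u, v) coordinate in the 2D portrait
    skin_colors_2d:  (u, v) -> RGB skin color computed in the 2D pass
    """
    return {vid: skin_colors_2d[uv] for vid, uv in uv_map.items()}
```

In a real engine this lookup is simply a texture fetch in the pixel shader, with the UV coordinates interpolated across each triangle.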
Further, rendering data in the process of rendering the skin of the virtual character is stored and acquired through the blocks in the block chain. Optionally, the rendering data includes any one or a combination of the following parameters: the illumination distribution information of each sampling point, the skin color of each sampling point and the like.
Referring to fig. 3c, which is a schematic diagram of a skin rendering architecture comprising diagram 1 and diagram 2: the virtual character shown in diagram 1 is sampled in a spiral number sequence sampling manner, skin rendering is performed according to the illumination distribution information of each sampling point, and the skin-rendered virtual character shown in diagram 2 is output.
FIG. 4a is a diagram of the effect of skin rendering of a virtual character in the traditional manner; its left image shows the virtual character and its right image shows a local area of the character's skin. Fig. 4b shows the effect of skin rendering of a virtual character according to the present invention; the left image of fig. 4b is the virtual character and the right image is the local skin. Comparing the effect diagrams of fig. 4a and fig. 4b shows that the skin rendering scheme provided by the present invention gives the skin of the virtual character visual characteristics close to real human skin.
In the embodiment of the invention, the sampling points corresponding to each target pixel point of the skin of the virtual character are obtained in a spiral number sequence sampling manner, so that the acquired sampling points are distributed spirally and the '+'-shaped artifact of the traditional manner is avoided. At the same time, while the skin of the virtual character acquires the visual characteristics of real human skin, fewer sampling points are required than in the traditional manner, at a lower time cost.
The above embodiments are illustrated below using a specific application scenario. Referring to fig. 5, a detailed implementation flow diagram of the skin rendering of a virtual character is shown.
Suppose the virtual character is a game character A in a 3D game application, the game player has selected a skin material B for the game character, the skin identification information of skin material B is identification information C, the virtual application is the player's game application, and game character A is currently in a game scene H. When the 3D game application renders the skin of game character A, the game terminal performs the following steps:
500: a three-dimensional virtual representation of a game character A is acquired, and the three-dimensional virtual representation is converted into a two-dimensional virtual representation.
501: and sampling in a designated area around each target pixel point of the skin of the two-dimensional virtual image according to a Fibonacci spiral number sequence sampling mode.
502: and according to the game scene H, acquiring the external illumination information received in a preset sampling area around each sampling point.
503: and acquiring a corresponding scattering physical model according to the identification information C of the skin material B.
504: and determining corresponding illumination distribution information according to the external illumination information and the scattering physical model corresponding to each sampling point.
505: and respectively adopting a preset skin rendering algorithm according to the acquired illumination distribution information of each sampling point of each target pixel point to acquire the skin color of the corresponding target pixel point.
506: and performing skin rendering on the three-dimensional virtual portrait according to the skin color of each target pixel point of the two-dimensional virtual portrait of the virtual character and the mapping relation between the three-dimensional virtual portrait and the two-dimensional virtual portrait.
507: and outputting the game character A after the skin rendering.
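The steps 500-507 above can be strung together in a compact sketch. Every stage here is stubbed with illustrative values — the pixel grid, scattering variance, sample count, and scene light are assumptions for demonstration, not values from the embodiment.

```python
import math

def pipeline(scene_light, skin_id, n_samples=8, radius=1.0):
    """Illustrative end-to-end sketch mirroring steps 500-507."""
    # 500: assume the 3D portrait has already been unwrapped to a 2D grid.
    target_pixels = [(x, y) for x in range(2) for y in range(2)]
    # 501: Fibonacci-spiral offsets around each target pixel point.
    ga = math.pi * (3.0 - math.sqrt(5.0))
    offsets = [(radius * math.sqrt((i + 0.5) / n_samples) * math.cos(i * ga),
                radius * math.sqrt((i + 0.5) / n_samples) * math.sin(i * ga))
               for i in range(n_samples)]
    # 503: look up the scattering model for skin identification info (stubbed).
    variance = {"C": 0.05}[skin_id]
    colors = {}
    for px in target_pixels:
        num, den = [0.0, 0.0, 0.0], 0.0
        for dx, dy in offsets:
            # 502/504: external light -> per-sample illumination distribution.
            w = math.exp(-(dx * dx + dy * dy) / (2.0 * variance))
            den += w
            for c in range(3):
                num[c] += w * scene_light[c]
        # 505: Gaussian-weighted blend gives the pixel's skin color.
        colors[px] = tuple(n / den for n in num)
    # 506/507: map colors back to the 3D portrait via the UV mapping and output.
    return colors
```

With a uniform scene light the normalized blend returns that light unchanged; spatial variation in the light is what the Gaussian weighting actually smooths.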
Based on the same inventive concept, an embodiment of the invention further provides an apparatus for rendering the skin of a virtual character. Because the principle by which the apparatus solves the problem is similar to that of the method for rendering the skin of a virtual character, the implementation of the apparatus can refer to the implementation of the method, and repeated parts are not described again.
Fig. 6 is a schematic structural diagram of an apparatus for rendering skin of a virtual character according to an embodiment of the present invention. An apparatus for skin rendering of a virtual character comprising:
an obtaining unit 601, configured to obtain a two-dimensional virtual representation of a virtual character of a skin to be rendered in a two-dimensional space;
the sampling unit 602 is configured to sample in a specified area around each target pixel of the skin of the two-dimensional virtual representation according to a preset spiral array sampling manner, where the spiral array sampling manner is that acquired sampling points are distributed according to a specified spiral array;
an obtaining unit 603, configured to obtain a skin color of each target pixel point by using a preset skin rendering algorithm according to the obtained illumination distribution information of each sampling point of each target pixel point, where the skin rendering algorithm is configured to convert the illumination distribution information of each sampling point into the skin color by using a gaussian function;
and a rendering unit 604, configured to perform skin rendering on the virtual character according to the skin color of each target pixel point of the two-dimensional virtual representation.
Preferably, the obtaining unit 601 is further configured to:
and if the virtual character is a three-dimensional virtual portrait in a three-dimensional space, mapping the three-dimensional virtual portrait from the three-dimensional space to a two-dimensional space to obtain a two-dimensional virtual portrait.
Preferably, the rendering unit 604 is specifically configured to:
and performing skin rendering on the three-dimensional virtual portrait according to the skin color of each target pixel point of the two-dimensional virtual portrait and the mapping relation between the three-dimensional virtual portrait and the two-dimensional virtual portrait.
Preferably, the number of spirals is any one or combination of the following:
a two-dimensional Fibonacci spiral number series, and a two-dimensional Archimedes spiral number series;
the number of each sampling point corresponding to one target pixel point is a specified number.
Preferably, rendering data in the process of rendering the skin of the virtual character is stored and acquired through blocks in a block chain;
the rendering data includes any one or a combination of the following parameters: the illumination distribution information of each sampling point and the skin color of each sampling point.
Preferably, the illumination distribution information of each sampling point is determined according to the following steps:
acquiring external illumination information received in a preset sampling area around the sampling point, wherein the external illumination information is determined according to an application scene where the virtual character is located;
acquiring a scattering physical model correspondingly arranged to the skin identification information according to the acquired skin identification information of the virtual character, wherein the scattering physical model is used for determining illumination distribution information of the skin after the skin is irradiated by external light;
and acquiring the illumination distribution information of the sampling point by adopting a scattering physical model according to the external illumination information received in a preset sampling area around the sampling point.
Preferably, the scattering physical model is determined according to skin material parameters correspondingly set by the skin identification information, and the skin material parameters are used for representing the skin characteristics of the virtual character.
In the method, apparatus, device, and medium for skin rendering of a virtual character provided by the embodiments of the invention, a two-dimensional virtual representation of the virtual character whose skin is to be rendered is obtained in two-dimensional space; sampling is performed in a specified area around each target pixel point of the skin of the two-dimensional virtual representation according to a preset spiral number sequence sampling mode; the skin color of each target pixel point is obtained by a preset skin rendering algorithm from the acquired illumination distribution information of its sampling points; and the skin of the virtual character is rendered according to the skin color of each target pixel point of the two-dimensional virtual representation. In this way, the '+'-shaped artifact of the traditional manner is avoided, and while the skin of the virtual character acquires the visual characteristics of real human skin, fewer sampling points and less time are required than in the traditional manner.
Fig. 7 shows a schematic configuration of a control device 7000. Referring to fig. 7, the control apparatus 7000 includes: a processor 7010, a memory 7020, a power supply 7030, a display unit 7040, and an input unit 7050.
The processor 7010 is a control center of the control apparatus 7000, connects the respective components by various interfaces and lines, and executes various functions of the control apparatus 7000 by running or executing software programs and/or data stored in the memory 7020, thereby monitoring the control apparatus 7000 as a whole.
In an embodiment of the present invention, the processor 7010, when invoking the computer program stored in the memory 7020, performs the method of skin rendering of a virtual character as provided by the embodiment shown in fig. 2.
Optionally, the processor 7010 may include one or more processing units. Preferably, the processor 7010 may integrate an application processor, which mainly handles the operating system, user interfaces, applications, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor need not be integrated into the processor 7010. In some embodiments, the processor and the memory may be implemented on a single chip, or, in other embodiments, they may be implemented on separate chips.
The memory 7020 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, various applications, and the like; the stored data area may store data created from the use of the control device 7000 and the like. In addition, the memory 7020 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The control device 7000 also includes a power supply 7030 (e.g., a battery) for powering the various components, which may be logically coupled to the processor 7010 via a power management system that may be used to manage charging, discharging, and power consumption.
Display unit 7040 may be configured to display information input by a user or information provided to the user, and various menus of control apparatus 7000, and the like, and in the embodiment of the present invention, is mainly configured to display a display interface of each application in control apparatus 7000, and objects such as texts and pictures displayed in the display interface. The display unit 7040 may include a display panel 7041. The Display panel 7041 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The input unit 7050 may be used to receive information such as numbers or characters input by a user. The input unit 7050 may include a touch panel 7051 and other input devices 7052. Among other things, the touch panel 7051, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 7051 (e.g., operations by a user on or near the touch panel 7051 using any suitable object or attachment such as a finger, a stylus, etc.).
Specifically, the touch panel 7051 may detect a touch operation of a user, detect signals generated by the touch operation, convert the signals into touch point coordinates, transmit the touch point coordinates to the processor 7010, receive a command transmitted from the processor 7010, and execute the command. In addition, the touch panel 7051 can be implemented by various types such as resistive, capacitive, infrared, and surface acoustic wave. Other input devices 7052 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, power on and off keys, etc.), a trackball, a mouse, a joystick, and the like.
Of course, the touch panel 7051 may cover the display panel 7041, and when the touch panel 7051 detects a touch operation on or near the touch panel 7051, the touch operation is transmitted to the processor 7010 to determine the type of the touch event, and then the processor 7010 provides a corresponding visual output on the display panel 7041 according to the type of the touch event. Although in fig. 7, the touch panel 7051 and the display panel 7041 are two separate components to implement the input and output functions of the control device 7000, in some embodiments, the touch panel 7051 and the display panel 7041 may be integrated to implement the input and output functions of the control device 7000.
The control device 7000 may also comprise one or more sensors, such as pressure sensors, gravitational acceleration sensors, proximity light sensors, etc. Of course, the control device 7000 may also comprise other components such as a camera, which are not shown in fig. 7 and will not be described in detail, since they are not components used in the embodiment of the present invention.
Those skilled in the art will appreciate that fig. 7 is merely an example of a control device and is not intended to be limiting and may include more or less components than those shown, or some components in combination, or different components.
Embodiments of the present invention further provide a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements a method for skin rendering of a virtual character in any of the above-mentioned method embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software running on a general hardware platform, or alternatively by dedicated hardware. Based on this understanding, the technical solutions above, in essence or in the part contributing to the related art, may be embodied in the form of a software product stored in a computer-readable storage medium (such as ROM/RAM, a magnetic disk, or an optical disk) and containing several instructions that enable a control device (which may be a personal computer, a server, a network device, or the like) to execute the methods of the various embodiments, or of some parts thereof.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (15)

1. A method of skin rendering of a virtual character, comprising:
acquiring a two-dimensional virtual portrait, in a two-dimensional space, of a virtual character whose skin is to be rendered;
sampling in a designated area around each target pixel point of the skin of the two-dimensional virtual portrait according to a preset spiral-series sampling mode, wherein in the spiral-series sampling mode the acquired sampling points are distributed according to a designated spiral number series;
obtaining the skin color of each target pixel point by applying a preset skin rendering algorithm to the acquired illumination distribution information of the sampling points of that target pixel point, wherein the skin rendering algorithm converts the illumination distribution information of each sampling point into a skin color by means of a Gaussian function; and
performing skin rendering on the virtual character according to the skin color of each target pixel point of the two-dimensional virtual portrait.
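The claim above describes the Gaussian conversion step only abstractly. As an illustrative sketch (not the patented implementation; all names and parameter values below are assumptions), converting sampled illumination around a target pixel into a skin color with a Gaussian falloff could look like:

```python
import math

def pixel_skin_color(irradiance, px, py, offsets, variance=2.0):
    """Gaussian-weighted average of illumination sampled around (px, py).

    irradiance : callable (x, y) -> float, a stand-in for the patent's
                 'illumination distribution information' per sample point
    offsets    : list of (dx, dy) sample offsets, e.g. spiral-distributed
    """
    total, wsum = 0.0, 0.0
    for dx, dy in offsets:
        d2 = dx * dx + dy * dy
        w = math.exp(-d2 / (2.0 * variance))  # Gaussian weight by distance
        total += w * irradiance(px + dx, py + dy)
        wsum += w
    return total / wsum  # normalized so weights sum to one
```

With a constant irradiance field the weighted average reproduces that constant, which is a quick sanity check for the normalization.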
2. The method of claim 1, further comprising, before acquiring the two-dimensional virtual portrait of the virtual character whose skin is to be rendered in the two-dimensional space:
if the virtual character is a three-dimensional virtual portrait in a three-dimensional space, mapping the three-dimensional virtual portrait from the three-dimensional space to a two-dimensional space to obtain the two-dimensional virtual portrait.
3. The method of claim 2, wherein performing skin rendering on the virtual character according to the skin colors of the target pixel points of the two-dimensional virtual portrait comprises:
and performing skin rendering on the three-dimensional virtual portrait according to the skin color of each target pixel point of the two-dimensional virtual portrait and the mapping relation between the three-dimensional virtual portrait and the two-dimensional virtual portrait.
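Claim 3 relies on the mapping relation between the three-dimensional portrait and its two-dimensional counterpart. A minimal sketch of the final lookup step, assuming a per-vertex UV parameterization (the function and data layout below are illustrative assumptions, not from the patent):

```python
def map_colors_to_mesh(uv_coords, skin_texture):
    """Look up each 3D vertex's skin color in the rendered 2D skin texture.

    uv_coords    : list of (u, v) pairs in [0, 1], one per 3D vertex
    skin_texture : 2D list of colors indexed as [row][col]
    """
    h, w = len(skin_texture), len(skin_texture[0])
    colors = []
    for u, v in uv_coords:
        col = min(int(u * (w - 1) + 0.5), w - 1)  # nearest-texel lookup
        row = min(int(v * (h - 1) + 0.5), h - 1)
        colors.append(skin_texture[row][col])
    return colors
```

In a real renderer this lookup would use bilinear filtering on the GPU; nearest-texel sampling keeps the sketch self-contained.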
4. The method of claim 1, wherein the spiral number series is any one or a combination of the following:
a two-dimensional Fibonacci spiral series and a two-dimensional Archimedean spiral series; and
the number of sampling points corresponding to one target pixel point is a specified number.
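Claim 4 names two candidate spiral series. As a hedged sketch, sampling offsets distributed along each could be generated as follows; the golden-angle construction for the Fibonacci case and the linear-radius construction for the Archimedean case are common choices, not taken from the patent:

```python
import math

def spiral_samples(n, radius, kind="fibonacci"):
    """Return n 2D sample offsets distributed along the chosen spiral."""
    pts = []
    if kind == "fibonacci":
        golden_angle = math.pi * (3.0 - math.sqrt(5.0))  # ~137.5 degrees
        for i in range(n):
            r = radius * math.sqrt((i + 0.5) / n)  # sqrt gives even area coverage
            t = i * golden_angle
            pts.append((r * math.cos(t), r * math.sin(t)))
    elif kind == "archimedes":
        turns = 3.0
        for i in range(n):
            t = turns * 2.0 * math.pi * i / n
            r = radius * i / n  # radius grows linearly with angle
            pts.append((r * math.cos(t), r * math.sin(t)))
    return pts
```

The Fibonacci variant avoids the ring-like clustering of the Archimedean spiral, which is why golden-angle point sets are popular for low-sample-count blurs.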
5. The method according to any one of claims 1-4, wherein rendering data in the skin rendering process of the virtual character is stored and acquired through blocks in a blockchain;
the rendering data comprises either or both of the following parameters: the illumination distribution information of each sampling point, and the skin color of each sampling point.
6. The method according to any one of claims 1-4, wherein the illumination distribution information of each sampling point is determined according to the following steps:
acquiring the external illumination information received within a preset sampling area around the sampling point, wherein the external illumination information is determined according to the application scene in which the virtual character is located;
acquiring, according to the acquired skin identification information of the virtual character, a scattering physical model set in correspondence with the skin identification information, wherein the scattering physical model is used to determine the illumination distribution in the skin after external light irradiates the skin; and
obtaining the illumination distribution information of the sampling point by applying the scattering physical model to the external illumination information received within the preset sampling area around the sampling point.
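The patent does not disclose a concrete scattering physical model. One widely used stand-in for skin is a sum-of-Gaussians diffusion profile in the style of GPU Gems 3 skin shading; the weights and variances below come from that public source, not from this patent, and serve only to illustrate the idea:

```python
import math

# Weight/variance pairs of a sum-of-Gaussians diffusion profile
# (illustrative published values; the patent's per-skin-type
# parameters are not disclosed)
PROFILE = [(0.233, 0.0064), (0.100, 0.0484), (0.118, 0.187),
           (0.113, 0.567), (0.358, 1.99), (0.078, 7.41)]

def diffusion(dist_mm):
    """Relative light re-emitted at distance dist_mm from the entry point."""
    total = 0.0
    for weight, variance in PROFILE:
        total += weight * math.exp(-dist_mm * dist_mm / (2.0 * variance))
    return total
```

The profile peaks at the entry point and decays with distance, matching the intuition that subsurface scattering spreads incoming light into a soft falloff across neighboring sample points.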
7. The method of claim 6, wherein the scattering physical model is determined according to skin texture parameters set in correspondence with the skin identification information, the skin texture parameters being used to represent the skin characteristics of the virtual character.
8. An apparatus for skin rendering of a virtual character, comprising:
an acquisition unit, configured to acquire a two-dimensional virtual portrait, in a two-dimensional space, of a virtual character whose skin is to be rendered;
a sampling unit, configured to sample in a designated area around each target pixel point of the skin of the two-dimensional virtual portrait according to a preset spiral-series sampling mode, wherein in the spiral-series sampling mode the acquired sampling points are distributed according to a designated spiral number series;
an obtaining unit, configured to obtain the skin color of each target pixel point by applying a preset skin rendering algorithm to the acquired illumination distribution information of the sampling points of that target pixel point, wherein the skin rendering algorithm converts the illumination distribution information of each sampling point into a skin color by means of a Gaussian function; and
a rendering unit, configured to perform skin rendering on the virtual character according to the skin color of each target pixel point of the two-dimensional virtual portrait.
9. The apparatus of claim 8, wherein the obtaining unit is further configured to:
if the virtual character is a three-dimensional virtual portrait in a three-dimensional space, map the three-dimensional virtual portrait from the three-dimensional space to a two-dimensional space to obtain the two-dimensional virtual portrait.
10. The apparatus of claim 9, wherein the rendering unit is specifically configured to:
and performing skin rendering on the three-dimensional virtual portrait according to the skin color of each target pixel point of the two-dimensional virtual portrait and the mapping relation between the three-dimensional virtual portrait and the two-dimensional virtual portrait.
11. The apparatus of claim 8, wherein the spiral number series is any one or a combination of the following:
a two-dimensional Fibonacci spiral series and a two-dimensional Archimedean spiral series; and
the number of sampling points corresponding to one target pixel point is a specified number.
12. The apparatus according to any one of claims 8-11, wherein rendering data in the skin rendering process of the virtual character is stored and acquired through blocks in a blockchain;
the rendering data comprises either or both of the following parameters: the illumination distribution information of each sampling point, and the skin color of each sampling point.
13. The apparatus according to any one of claims 8-11, wherein the illumination distribution information of each sampling point is determined according to the following steps:
acquiring the external illumination information received within a preset sampling area around the sampling point, wherein the external illumination information is determined according to the application scene in which the virtual character is located;
acquiring, according to the acquired skin identification information of the virtual character, a scattering physical model set in correspondence with the skin identification information, wherein the scattering physical model is used to determine the illumination distribution in the skin after external light irradiates the skin; and
obtaining the illumination distribution information of the sampling point by applying the scattering physical model to the external illumination information received within the preset sampling area around the sampling point.
14. A control device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the method according to any one of claims 1-7.
15. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN201910900572.8A 2019-09-23 2019-09-23 Method, device, equipment and medium for rendering skin of virtual character Active CN110599577B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910900572.8A CN110599577B (en) 2019-09-23 2019-09-23 Method, device, equipment and medium for rendering skin of virtual character


Publications (2)

Publication Number Publication Date
CN110599577A (en) 2019-12-20
CN110599577B (en) 2020-11-24

Family

ID=68862531

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910900572.8A Active CN110599577B (en) 2019-09-23 2019-09-23 Method, device, equipment and medium for rendering skin of virtual character

Country Status (1)

Country Link
CN (1) CN110599577B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111744183B (en) * 2020-07-02 2024-02-09 网易(杭州)网络有限公司 Illumination sampling method and device in game and computer equipment
WO2022032452A1 (en) * 2020-08-10 2022-02-17 厦门雅基软件有限公司 Game engine-based shading data processing method and apparatus, and electronic device
CN113244613B (en) * 2021-06-01 2024-02-23 网易(杭州)网络有限公司 Method, device, equipment and medium for adjusting virtual tool display in game picture
WO2023230878A1 (en) * 2022-05-31 2023-12-07 华为技术有限公司 Coloring method and image processor

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7073130B2 (en) * 2001-01-31 2006-07-04 Microsoft Corporation Methods and systems for creating skins
US7072908B2 (en) * 2001-03-26 2006-07-04 Microsoft Corporation Methods and systems for synchronizing visualizations with audio streams


Similar Documents

Publication Publication Date Title
CN110599577B (en) Method, device, equipment and medium for rendering skin of virtual character
Chatzopoulos et al. Mobile augmented reality survey: From where we are to where we go
Huang et al. Mobile augmented reality survey: a bottom-up approach
Zhou et al. Virtual reality: A state-of-the-art survey
CN110827390A (en) Method for handling unordered opacities and α ray/primitive intersections
CN106887183A (en) A kind of interactive demonstration method and system of BIM augmented realities in sand table is built
CN107132912A (en) A kind of interactive demonstration method and system of GIS and BIM augmented realities in building plans
CN107251098A (en) The true three-dimensional virtual for promoting real object using dynamic 3 D shape is represented
CN106797458A (en) The virtual change of real object
Hibbard Top ten visualization problems
CN110706300A (en) Virtual image generation method and device
US9905045B1 (en) Statistical hair scattering model
Demir et al. Detecting visual design principles in art and architecture through deep convolutional neural networks
Mousavi et al. Ai playground: Unreal engine-based data ablation tool for deep learning
Hong et al. Design and analysis of clothing catwalks taking into account unity's immersive virtual reality in an artificial intelligence environment
Jiang et al. AIDM: artificial intelligent for digital museum autonomous system with mixed reality and software-driven data collection and analysis
Zhang et al. Illumination estimation for augmented reality based on a global illumination model
CN1979508A (en) Simulated humanbody channel collateral cartoon presenting system capable of excuting by computer and method therefor
CN109065001A (en) A kind of down-sampled method, apparatus, terminal device and the medium of image
Yan et al. A non-photorealistic rendering method based on Chinese ink and wash painting style for 3D mountain models
CN115953524A (en) Data processing method and device, computer equipment and storage medium
Thorne Origin-centric techniques for optimising scalability and the fidelity of motion, interaction and rendering
CN114820968A (en) Three-dimensional visualization method and device, robot, electronic device and storage medium
Soliman et al. Artificial intelligence powered Metaverse: analysis, challenges and future perspectives
Wang et al. Research on 3D Terminal Rendering Technology Based on Power Equipment Business Features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant