CN111738967B - Model generation method and apparatus, storage medium, and electronic apparatus - Google Patents


Info

Publication number
CN111738967B
CN111738967B · Application CN202010438055.6A
Authority
CN
China
Prior art keywords
attribute
map
target
model
iris
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010438055.6A
Other languages
Chinese (zh)
Other versions
CN111738967A (en)
Inventor
项维康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Perfect World Beijing Software Technology Development Co Ltd
Original Assignee
Perfect World Beijing Software Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Perfect World Beijing Software Technology Development Co Ltd filed Critical Perfect World Beijing Software Technology Development Co Ltd
Priority to CN202010438055.6A priority Critical patent/CN111738967B/en
Publication of CN111738967A publication Critical patent/CN111738967A/en
Application granted granted Critical
Publication of CN111738967B publication Critical patent/CN111738967B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00: Animation
    • G06T13/20: 3D [Three Dimensional] animation
    • G06T13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/50: Lighting effects
    • G06T15/55: Radiosity
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a model generation method and apparatus, a storage medium, and an electronic apparatus. The method includes: acquiring a first eyeball model, where the first eyeball model is an unpatterned (texture-free) eyeball model; mapping the first eyeball model with a target map to obtain a second eyeball model, where the target map includes a first map corresponding to the iris of the first eyeball model, and the first map is used to simulate the refraction of incident light by the crystalline lens; and displaying the generated second eyeball model through a target client. The method and apparatus solve the problem in the related art that overly simple eyeball rendering produces a rendered eyeball with low fidelity.

Description

Model generation method and apparatus, storage medium, and electronic apparatus
Technical Field
The present application relates to the field of internet, and in particular, to a model generation method and apparatus, a storage medium, and an electronic apparatus.
Background
A user may select a virtual character to control while playing a game or using another service. The fidelity of the virtual character's morphological features and the aesthetics of its visual presentation directly affect the user's experience of the service.
At present, the eyeballs of virtual characters are rendered coarsely, and the resulting effect is not lifelike; for example, the reflective bright spot on the eyeball is usually controlled by a single map, which falls short of current art requirements in terms of visual effect.
The eyeball rendering approaches in the related art therefore suffer from low fidelity of the rendered eyeball because the rendering is too simple.
Disclosure of Invention
The embodiments of the present application provide a model generation method and apparatus, a storage medium, and an electronic apparatus, to at least solve the problem in the related art that overly simple eyeball rendering produces a rendered eyeball with low fidelity.
According to an aspect of an embodiment of the present application, there is provided a model generation method including: acquiring a first eyeball model, wherein the first eyeball model is an unpatterned eyeball model; mapping the first eyeball model by using a target mapping to obtain a second eyeball model, wherein the target mapping comprises a first mapping corresponding to the iris of the first eyeball model, and the first mapping is used for simulating the refraction of the crystalline lens to incident light; and displaying the generated second eyeball model through the target client.
According to another aspect of an embodiment of the present application, there is provided a model generation apparatus including: an acquisition unit configured to acquire a first eyeball model, where the first eyeball model is an unpatterned eyeball model; a mapping unit configured to map the first eyeball model with a target map to obtain a second eyeball model, where the target map includes a first map corresponding to the iris of the first eyeball model, and the first map is used to simulate the refraction of incident light by the crystalline lens; and a first display unit configured to display the generated second eyeball model through the target client.
According to a further aspect of an embodiment of the present application, there is also provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program is configured to perform the steps of any of the above method embodiments when executed.
According to a further aspect of an embodiment of the present application, there is also provided an electronic apparatus, including a memory and a processor, the memory storing a computer program therein, the processor being configured to execute the computer program to perform the steps in any of the above method embodiments.
In the embodiments of the present application, a lens refraction effect is added to the eyeball model: a first eyeball model is acquired, where the first eyeball model is an unpatterned eyeball model; the first eyeball model is mapped (i.e., rendered with maps) using a target map to obtain a second eyeball model, where the target map includes a first map corresponding to the iris of the first eyeball model and the first map simulates the refraction of incident light by the crystalline lens; and the generated second eyeball model is displayed through the target client. Because a refraction effect is added at the iris of the eyeball model to simulate how the crystalline lens refracts light, the realism of the reflective bright spot on the eyeball is improved, achieving the technical effect of raising the fidelity of the rendered eyeball and thereby solving the problem in the related art that overly simple eyeball rendering produces a rendered eyeball with low fidelity.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic diagram of a hardware environment for a model generation method according to an embodiment of the present application;
FIG. 2 is a schematic flow diagram of an alternative model generation method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an alternative eyeball grid according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an alternative eye rendering effect according to an embodiment of the present application;
FIG. 5 is a schematic view of an alternative eyeball structure according to an embodiment of the present application;
FIG. 6 is a schematic view of an alternative eye model according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an alternative material editing interface according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an alternative material editing interface according to an embodiment of the present application;
FIG. 9 is a schematic diagram of yet another alternative material editing interface according to an embodiment of the present application;
FIG. 10 is a schematic diagram of yet another alternative material editing interface according to an embodiment of the present application;
FIG. 11 is a schematic diagram of yet another alternative material editing interface according to an embodiment of the present application;
FIG. 12 is a schematic diagram of yet another alternative material editing interface in accordance with an embodiment of the present application;
FIG. 13 is a schematic diagram of yet another alternative material editing interface in accordance with an embodiment of the present application;
FIG. 14 is a schematic diagram of yet another alternative material editing interface according to an embodiment of the present application;
FIG. 15 is a schematic diagram of yet another alternative material editing interface according to an embodiment of the present application;
FIG. 16 is a schematic diagram of yet another alternative material editing interface according to an embodiment of the present application;
FIG. 17 is a schematic view of another alternative eye model according to an embodiment of the present application;
FIG. 18 is a schematic view of yet another alternative eyeball model in accordance with an embodiment of the present application;
FIG. 19 is a schematic diagram of an alternative model generation apparatus according to an embodiment of the present application;
fig. 20 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an aspect of the embodiments of the present application, method embodiments of a model generation method are provided. Optionally, in this embodiment, the model generation method may be applied to a hardware environment formed by the terminal 101 and the server 103 shown in fig. 1. As shown in fig. 1, the server 103 is connected to the terminal 101 through a network and may be used to provide services (such as game services and application services) for the terminal or for a client installed on the terminal; a database 105 may be provided on the server or separately from the server to provide data storage services for the server 103. The terminal 101 includes, but is not limited to, a PC, a mobile phone, a tablet computer, and the like. The model generation method of the embodiments of the present application may be executed by the server 103, by the terminal 101, or by both jointly; when executed by the terminal 101, it may also be executed by a client installed on the terminal.
Optionally, an embodiment of the present application provides a model generation method, fig. 2 is a schematic flow chart of an optional model generation method according to an embodiment of the present application, and as shown in fig. 2, the method may include the following steps:
step S202, a first eyeball model is obtained, wherein the first eyeball model is an unpatterned eyeball model;
step S204, using a target map to map the first eyeball model to obtain a second eyeball model, wherein the target map comprises a first map corresponding to the iris of the first eyeball model, and the first map is used for simulating the refraction of the lens to incident light;
and step S206, displaying the generated second eyeball model through the target client.
Through steps S202 to S206, a first eyeball model is obtained, where the first eyeball model is an unpatterned eyeball model; the first eyeball model is mapped with a target map to obtain a second eyeball model, where the target map includes a first map corresponding to the iris of the first eyeball model and the first map simulates the refraction of incident light by the crystalline lens; and the generated second eyeball model is displayed through the target client. This solves the problem in the related art that overly simple eyeball rendering produces a rendered eyeball with low fidelity, and improves the fidelity of the rendered eyeball.
In the technical solution provided in step S202, a first eyeball model is obtained, wherein the first eyeball model is an unpatterned eyeball model.
The eyeball model generated in the present embodiment may be an eyeball model of a virtual object, which may be a human-shaped object, an animal object, or other object having eyes in a virtual scene, for example, a player character, NPC, monster, or the like in a game scene. When creating the virtual object or modifying the eyeball model of the virtual object, the model generation method in the present embodiment may be used to generate the eyeball model.
For example, for the eyeball model of a virtual object in a target game, a developer may configure the material of the eyeball model to set how the eyeball material is presented. A tester may test the configured eyeball material to determine whether its presentation meets expectations. A user may set the material parameters of the eyeball model of the virtual character under the user's control according to personal preference, thereby customizing the eyeball's appearance. A material is a data set whose main function is to supply data and a lighting algorithm to a renderer (the core of a rendering engine, which may be a hardware renderer, a software renderer, or a combination of the two); a map is one part of that data.
This embodiment describes the model generation method using the example of a target object (e.g., a staff member or a game player) configuring the eyeball model of a virtual character in a game scene; other eyeball-creation scenarios, such as a tester running an eyeball material test, can be handled in a similar manner.
The model generation method may be applied to the network architecture shown in fig. 1, and the model generation method may be executed by a terminal device alone, or may be executed by both the terminal device and a server, and in this embodiment, the model generation method executed by the terminal device alone is taken as an example, but it is not excluded that the eyeball model generation step is executed by the terminal device partially, and is executed by the server partially, which is not specifically limited in this embodiment.
The target object may create the model using a client running on the terminal device (e.g., animation software or the client of a game application). Triggered by a control instruction or another instruction, the client may obtain an initial eyeball model of the virtual object, that is, the first eyeball model, which carries no maps. The initial eyeball model may be pre-configured by the client and stored on the terminal device in some data format, so the first eyeball model may be obtained by calling up the first eyeball model pre-configured on the terminal device that runs the client.
For example, a player may enter a target game through a client running the target game. The generation of the eye model may be performed at the time of starting a game, creating a virtual character, or at another timing (event trigger) when the model needs to be generated.
As an alternative embodiment, the first eye model is egg-shaped.
In this embodiment, the refraction of incident light by the lens of the eye is simulated by a map to improve the eyeball rendering effect. To achieve a better refraction effect, the Mesh (grid) of the eyeball is not a regular sphere; it should instead be shaped as shown in fig. 3, i.e., egg-like, which yields a refraction effect closer to reality. An egg-like shape is one with one end larger and the other end smaller.
Every cross section of the first eyeball model perpendicular to its axis is circular; along the axis, the cross-sectional area increases gradually from each of the two end points toward a target position between them, and the distance from the first end point to the target position is larger than the distance from the second end point to the target position.
With this embodiment, configuring the eyeball model as egg-shaped (approximately egg-like) improves the refraction behavior of the eye model and thus the accuracy of the eye simulation.
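The egg-like profile described above can be sketched as follows. This is an illustrative construction, not the patent's actual mesh data: the half-sine radius profile, the axis length, and the off-center target position (0.65 of the axis, so the first end is farther from the maximum than the second) are all assumptions chosen to satisfy the stated geometric constraints.

```python
import math

def egg_radius(z, length=1.0, target=0.65, max_radius=0.5):
    """Radius of the circular cross-section at axial position z in [0, length].

    The radius rises from zero at both end points to `max_radius` at the
    off-centre `target` position, so one half of the profile is longer
    than the other (one end "bigger" than the other, as in fig. 3).
    """
    if not 0.0 <= z <= length:
        raise ValueError("z must lie on the axis segment [0, length]")
    if z <= target:
        t = z / target                        # rises 0 -> 1 over the long half
    else:
        t = (length - z) / (length - target)  # rises 0 -> 1 over the short half
    return max_radius * math.sin(t * math.pi / 2.0)

def cross_section_area(z, **kw):
    """Area of the circular cross-section at axial position z."""
    r = egg_radius(z, **kw)
    return math.pi * r * r
```

Sampling `cross_section_area` along the axis confirms the constraint from the text: the area increases monotonically from each end point toward the target position, and the first end point (z = 0) is farther from the target than the second (z = length).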
In the technical solution provided in step S204, the first eyeball model is mapped by using a target map to obtain a second eyeball model, where the target map includes a first map corresponding to an iris of the first eyeball model, and the first map is used to simulate refraction of a crystalline lens to incident light.
For the initial eyeball model, the client may render the initial eyeball model using the target map, thereby obtaining a second eyeball model.
The target map may comprise one or more maps, and different maps may be used to simulate different parts of the eye. The attributes of the maps of the eyeball model may correspond to the attributes of the eyeball model's material. A material is an abstraction inside the engine; engine programmers upgrade materials in the engine code, which mainly involves changes to the rendering algorithm. A user can define the eyeball's appearance according to personal preference, expressed in the form of material parameters (attribute parameters); user-defined material parameters may be supported.
It should be noted that one attribute of the map may correspond to one attribute of one part of the eyeball model, and by setting each attribute of the material of the eyeball model, the corresponding map may be obtained, so that the model may be rendered using the generated map, and the second eyeball model may be obtained.
The target map may include a first map corresponding to an iris of the eyeball model (or corresponding to the iris and sclera, or corresponding to other parts of the eyeball) for rendering the iris of the first eyeball model. Furthermore, the first map is also used to simulate the refraction of the lens to the incident light, so that the refraction effect of the lens can be increased.
For example, as shown in fig. 4, when the iris of the eyeball model carries no refraction effect and the reflective bright spot on the eyeball is controlled only by a map, the realism of the eyeball is poor. Adding the refraction effect of the crystalline lens (the iris refraction effect; the structure of the eyeball is shown in fig. 5) increases the realism of the eyeball and produces a more lifelike eye, as shown in fig. 6.
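The physical effect the first map is said to simulate is Snell's-law refraction of an incident view ray at the cornea/lens surface; in a typical eye shader the refracted direction is then used to offset the iris-map UV lookup so the iris appears displaced behind the cornea. The following is an illustrative sketch of that refraction, not the patent's shader code; the corneal refractive index 1.376 is a commonly quoted physiological value, assumed here.

```python
import math

def refract(incident, normal, n1=1.0, n2=1.376):
    """Vector form of Snell's law.

    `incident` points toward the surface and `normal` away from it; both
    must be unit vectors. n1 and n2 are the refractive indices on the
    incident and transmitted sides (air and cornea by default). Returns
    the refracted unit direction, or None on total internal reflection.
    """
    eta = n1 / n2
    cos_i = -sum(i * n for i, n in zip(incident, normal))
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection: no transmitted ray
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta * i + (eta * cos_i - cos_t) * n
                 for i, n in zip(incident, normal))
```

A ray hitting the surface head-on passes through unchanged, while an oblique ray bends toward the normal inside the denser medium, which is exactly the displacement an eye shader exploits when sampling the iris map.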
In the technical solution provided in step S206, the generated second eyeball model is displayed by the target client.
After the second eyeball model is obtained, it may be output and displayed. The target client that displays the second eyeball model may be, but is not limited to, one of the following: animation software, or the client of a game application. The second eyeball model may be output in various ways: the model itself may be displayed on its own, or as part of a virtual object, which may be the complete virtual object or a specific part of it (for example, the head).
For example, in animation software, the second eye model may be generated based on a default material property configuration and displayed on an interface of the animation software. In the target area of the interface, a material property setting area may be displayed to adjust the material of the second eyeball model.
For another example, the background server of the game application may generate the second eye model based on a default configuration or a user configuration, configure the eye model into the corresponding virtual character, set the virtual character into a game scene, and send an image of the virtual character or the virtual character and an image in the game scene to the client of the game application for display.
As an alternative embodiment, after the generated second eyeball model is displayed by the target client, an adjustment interface may be displayed on the target client, where the adjustment interface includes an adjustment area for adjusting a target attribute of the target map; a target operation performed on the adjustment area is detected, where the target operation adjusts the attribute value of the target attribute; in response to the detected target operation, the target attribute of the target map is adjusted and the second eyeball model is updated; and the updated second eyeball model is displayed through the target client.
The user can configure the attribute information of the eyeball material in animation software or an editing interface of a game so as to adjust the display effect of the eyeball model.
After the second eyeball model is displayed on the target client, an adjustment interface may be displayed on the target client, where the adjustment interface may be a material editing interface of the eyeball model, and the material editing interface may be used to adjust the material (attribute value of the material) of the eyeball model and adjust the display effect of the eyeball model. The different attributes of the material may correspond to attributes of one or more of the target maps.
The material editing interface may include a plurality of areas, for example, a model display area for displaying an eyeball model, and an adjustment area for adjusting attributes. The user can adjust the attribute values of one or more attributes (also referred to as parameters) of the eyeball material, and display the eyeball model corresponding to the adjusted attribute values through the model display area.
Adjusting a parameter value and updating the displayed eyeball model may proceed synchronously: the displayed model is refreshed in real time as the attribute value of the target attribute changes. The adjustment may also be continuous; for example, clicking, repeatedly clicking, or otherwise operating a particular button in the adjustment area can drive the attribute value of the target attribute through a continuous change, with the displayed eyeball model changing correspondingly, so that the influence of the target attribute on the eyeball's appearance is shown visually.
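The synchronous update loop described above can be sketched as follows. All names here (`MaterialPanel`, `set_attribute`, the renderer callable) are invented for illustration and are not the patent's or any engine's actual API; the point is only that every attribute change immediately triggers a re-render of the displayed model.

```python
class MaterialPanel:
    """Toy model of an adjustment area that re-renders on every change."""

    def __init__(self, renderer):
        # renderer: a callable mapping the current attribute dict to a
        # rendered model (here any opaque value standing in for it)
        self.values = {}
        self.renderer = renderer
        self.displayed_model = None

    def set_attribute(self, name, value):
        """Record one attribute change and refresh the display at once."""
        self.values[name] = value
        # synchronous update: the displayed model tracks every change
        self.displayed_model = self.renderer(dict(self.values))
        return self.displayed_model
```

Dragging a slider would call `set_attribute` once per intermediate value, so a continuous operation yields a continuous sequence of re-renders, matching the behavior described in the text.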
The user may perform a target operation on the adjustment region, such as clicking, double-clicking, continuous clicking, dragging, or otherwise operating the adjustment region to adjust the attribute value of the target attribute. The target client (or the terminal device where the target client is located) may detect the target operation, and obtain the change information of the attribute value of the target attribute.
In response to the target operation, the target attribute of the target map is adjusted according to the attribute value configured by the user, and the eyeball model is regenerated with the adjusted attribute value so as to update the second eyeball model (yielding a third eyeball model). The updated second eyeball model can then be displayed through the target client.
For example, as shown in fig. 7, the material editing interface of the animation software includes a 3D model display area and a material attribute adjustment area (second area), and the eyeball model can be arranged by manipulating each attribute in the material attribute adjustment area.
Through the embodiment, the attribute of the map is adjusted by executing the target operation on the adjusting interface so as to update the eyeball model, and the flexibility of the eyeball model setting can be improved.
As an alternative embodiment, adjusting the target attribute of the target map and updating the second eyeball model includes: in the case that the target attribute comprises a first attribute of the first map, adjusting the first attribute of the first map to update the iris, where the first attribute is a display attribute of the iris.
The attribute of the target map may include a plurality of attributes, and the adjusted target attribute may be at least one of the plurality of attributes, and may include, for example, the first attribute of the first map. The first attribute is a display attribute of the iris in the eyeball model, and can control the state of the iris displayed in the eyeball model.
If the attribute value of the first attribute is adjusted, the first map may be updated according to the adjusted value, thereby updating the iris displayed in the eyeball model. The first map may be updated either by selecting, from a set of candidate maps, the map corresponding to the current attribute value, or by updating the data of the existing first map according to the adjusted value.
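The first update strategy mentioned above, selecting among pre-authored candidate maps, can be sketched as a nearest-value lookup. The candidate map names and attribute values below are illustrative, not from the patent.

```python
def select_map(candidates, value):
    """Pick the candidate map whose nominal attribute value is nearest.

    candidates: dict mapping an attribute value to a map identifier,
    e.g. {0.0: "iris_small.png", 0.5: "iris_medium.png"}.
    """
    nearest = min(candidates, key=lambda v: abs(v - value))
    return candidates[nearest]
```

As the user drags an attribute through its range, the displayed first map snaps to whichever candidate best matches the current value; the alternative strategy (regenerating the map's data from the value) avoids the snapping at the cost of more work per change.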
By the embodiment, the iris in the eyeball model can be updated by adjusting the attribute (corresponding to the attribute of the eyeball material) of the map corresponding to the iris, so that partial adjustment of the eyeball model can be realized, and the flexibility of adjusting the eyeball model is improved.
In order to improve the reality of iris representation and the diversity of iris display in the eyeball model, a plurality of display characteristics of the iris may be corresponded by a plurality of attributes.
As an alternative embodiment, the first attribute of the first map is adjusted to update the iris of the second eye model, including but not limited to at least one of:
(1) and adjusting a first sub-attribute of the first map to update the size of the iris, wherein the first sub-attribute is used for controlling the size of the iris display.
The first sub-attribute of the first map may correspond to a size of the iris. The sub-attributes of the first map may correspond to attributes of the material of the eyeball, for example, the material editing interface may include a plurality of material attributes, and different material attributes may be used to control display attributes of the iris, sclera, and the like of the eyeball. For the first sub-attribute, it can be used to control the size of the iris in the eyeball model.
For example, as shown in fig. 8, the size of the iris corresponds to "Iris UV Radius" in the attribute column, i.e., the iris UV radius; adjusting this attribute value adjusts the iris size, and the larger the radius, the larger the displayed iris.
(2) And adjusting a second sub-attribute of the first map to update the roughness of the iris, wherein the second sub-attribute is used for controlling the roughness of the iris display.
The second sub-attribute of the first map may correspond to a roughness of the iris. For example, in a material editing interface, the second sub-attribute "Iris Roughness" may be used to control the Roughness of the Iris in the eye model.
For example, as shown in fig. 9, the roughness of the iris corresponds to "Iris Roughness" in the attribute column; adjusting this attribute value adjusts the iris roughness, and the larger the value, the rougher the displayed iris.
(3) And adjusting a third sub-attribute of the first map to update the highlight intensity of the iris, wherein the third sub-attribute is used for controlling the highlight intensity displayed by the iris.
The third sub-attribute of the first map may correspond to a high light intensity of the iris. For example, in the material editing interface, the third sub-attribute "Iris Specular" may be used to control the high light intensity of the Iris in the eyeball model.
For example, as shown in fig. 10, the highlight intensity of the iris corresponds to "Iris Specular" in the attribute column; adjusting this attribute value adjusts the iris highlight intensity, and the larger the value, the stronger the displayed highlight.
(4) And adjusting a fourth sub-attribute of the first map to update the brightness of the iris, wherein the fourth sub-attribute is used for controlling the brightness of the iris display.
The fourth sub-attribute of the first map may correspond to a luminance of the iris. For example, in the material editing interface, the fourth sub-attribute "Iris Brightness" may be used to control the Brightness of the Iris in the eyeball model.
For example, as shown in fig. 11, the brightness of the iris corresponds to "Iris Brightness" in the attribute column; by adjusting this attribute value, the brightness of the iris can be adjusted: the larger the attribute value, the brighter the displayed iris.
(5) And adjusting a fifth sub-attribute of the first map to update the color of the iris, wherein the fifth sub-attribute is used for controlling the color displayed by the iris.
The fifth sub-attribute of the first map may correspond to a color of the iris. For example, in the material editing interface, the fifth sub-attribute "Iris Color" may be used to control the Color of the Iris in the eyeball model.
For example, as shown in fig. 12, the color of the iris corresponds to "Iris Color" in the attribute column; by adjusting this attribute value, the color of the iris can be adjusted.
In this embodiment, the display characteristics of the iris are controlled through multiple sub-attributes of the first map, which improves the flexibility of iris control, thereby increasing the diversity of iris displays and meeting different user requirements.
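To make the role of the five sub-attributes concrete, the following is a minimal, illustrative sketch of how such attributes might feed a per-pixel iris shading computation. It is not the patented shader: the Blinn-Phong stand-in, the UV-disc test, and all default values are assumptions introduced for illustration; only the attribute names come from the attribute column described above.

```python
import math

def _normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def shade_iris(uv, light_dir, view_dir, normal,
               iris_uv_radius=0.15,          # (1) iris size (UV-space radius)
               iris_roughness=0.4,           # (2) roughness
               iris_specular=0.5,            # (3) highlight intensity
               iris_brightness=1.0,          # (4) brightness multiplier
               iris_color=(0.3, 0.5, 0.7)):  # (5) tint color
    """Return an RGB tuple for one iris pixel, or None outside the iris."""
    # (1) Treat the iris as a disc of radius iris_uv_radius around the UV center.
    du, dv = uv[0] - 0.5, uv[1] - 0.5
    if math.hypot(du, dv) > iris_uv_radius:
        return None  # outside the iris; a sclera map would be sampled instead

    # (2)+(3) Blinn-Phong stand-in: lower roughness tightens the highlight,
    # iris_specular scales its intensity.
    half = _normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))
    n_dot_h = max(0.0, sum(n * h for n, h in zip(_normalize(normal), half)))
    shininess = max(1.0, (1.0 - iris_roughness) * 128.0)
    highlight = iris_specular * (n_dot_h ** shininess)

    # (4)+(5) Brightness and color tint the diffuse base.
    return tuple(min(1.0, c * iris_brightness + highlight) for c in iris_color)
```

In this sketch, raising `iris_uv_radius` enlarges the iris disc, while the other four attributes change only how pixels inside the disc are shaded — mirroring how each sub-attribute above adjusts one display characteristic independently.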
As an alternative embodiment, the target map further comprises: a second map corresponding to a sclera of the second eyeball model, and adjusting the target attribute of the target map to update the second eyeball model comprises: in the case that the target attribute comprises a second attribute of the second map, adjusting the second attribute of the second map to update the sclera, wherein the second attribute is a display attribute of the sclera.
In addition to the first map, the target map may contain a second map corresponding to a sclera of the eyeball model. The attribute of the target map may include a plurality of attributes, and the adjusted target attribute may be at least one of the plurality of attributes, and may include, for example, a second attribute of the second map. The second attribute is a display attribute of the sclera in the eyeball model, and can control the state of the sclera displayed in the eyeball model.
If the attribute value of the second attribute is adjusted, the second map may be updated according to the adjusted attribute value of the second attribute, thereby updating the sclera displayed in the eyeball model. The method for updating the second map may be to select a map corresponding to the current attribute value from the plurality of maps as the second map, or to update data of the second map according to the adjusted attribute value.
Through the embodiment, the sclera in the eyeball model can be updated by adjusting the attribute of the map corresponding to the sclera, so that partial adjustment of the eyeball model can be realized, and the flexibility of adjusting the eyeball model is improved.
As an alternative embodiment, adjusting the second attribute of the second map to update the sclera includes at least one of:
(1) adjusting a sixth sub-attribute of the second map to update a size of the pupil in the sclera, wherein the sixth sub-attribute is used to control a size of the pupil display.
The sixth sub-attribute of the second map may correspond to a size of a pupil. For example, in the material editing interface, a sixth sub-attribute "Pupil Scale" may be used to control the size of the Pupil in the eyeball model.
For example, as shown in fig. 13, the size of the pupil corresponds to "Pupil Scale" in the attribute column; by adjusting this attribute value, the size of the pupil can be adjusted: the larger the attribute value, the larger the displayed pupil.
(2) And adjusting a seventh sub-attribute of the second map to update the roughness of the sclera, wherein the seventh sub-attribute is used for controlling the roughness of the sclera display.
The seventh sub-attribute of the second map may correspond to the roughness of the sclera. For example, in the material editing interface, the seventh sub-attribute "Sclera Roughness" may be used to control the roughness of the sclera in the eyeball model.
For example, as shown in fig. 14, the roughness of the sclera corresponds to "Sclera Roughness" in the attribute column; by adjusting this attribute value, the roughness of the sclera can be adjusted: the larger the attribute value, the rougher the displayed sclera.
(3) And adjusting an eighth sub-attribute of the second map to update the highlight intensity of the sclera, wherein the eighth sub-attribute is used for controlling the highlight intensity displayed by the sclera.
The eighth sub-attribute of the second map may correspond to the highlight intensity of the sclera. For example, in the material editing interface, the eighth sub-attribute "Sclera Specular" may be used to control the highlight intensity of the sclera in the eyeball model.
For example, as shown in fig. 15, the highlight intensity of the sclera corresponds to "Sclera Specular" in the attribute column; by adjusting this attribute value, the highlight intensity of the sclera can be adjusted: the larger the attribute value, the stronger the highlight on the displayed sclera.
In this embodiment, the display characteristics of the sclera are controlled through multiple sub-attributes of the second map, which improves the flexibility of sclera control, thereby increasing the diversity of sclera displays and meeting different user requirements.
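As a sketch of one way a "Pupil Scale"-style attribute could resize the pupil without a separate texture, the following remaps UV coordinates inside the iris disc before the texture lookup, magnifying (or shrinking) the central pupil region. The remapping formula and the radius values are assumptions for illustration, not the patent's algorithm.

```python
def scale_pupil_uv(uv, pupil_scale=1.0, iris_uv_radius=0.15):
    """Remap a UV inside the iris disc so the pupil region covers more
    of the disc when pupil_scale > 1, and less when pupil_scale < 1."""
    du, dv = uv[0] - 0.5, uv[1] - 0.5
    r = (du * du + dv * dv) ** 0.5
    if r == 0.0 or r > iris_uv_radius:
        return uv  # center point or outside the iris: unchanged

    # Pull samples toward the center near the pupil, blending back to the
    # original radius at the iris rim so the boundary stays continuous.
    t = r / iris_uv_radius                       # 0 at center, 1 at rim
    r_new = r * (t + (1.0 - t) / pupil_scale)    # pupil_scale = 1 is identity
    k = r_new / r
    return (0.5 + du * k, 0.5 + dv * k)
```

With `pupil_scale = 2.0`, samples near the center are fetched from smaller radii, so the pupil texture at the center is magnified on screen; points outside the iris disc (the sclera) are untouched, consistent with the iris and sclera being controlled separately.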
As an alternative embodiment, the target map further comprises: a third map for simulating eye wettability, and adjusting the target attribute of the target map to update the second eyeball model comprises:
in the case that the target attribute comprises a third attribute of the third map, adjusting the third attribute of the third map to update the wettability of the second eyeball model, wherein the third attribute is used for controlling the wettability displayed by the second eyeball model.
In addition to the first map, the target map may contain a third map for simulating eye wettability. The third map may be an eyeball normal map. The attribute of the target map may include a plurality of attributes, and the adjusted target attribute may be at least one of them, for example, a third attribute of the third map. This third attribute controls the degree of wetness exhibited by the eyeball model, giving the sclera (the white of the eye) a subtle bumpy appearance.
For example, as shown in fig. 16, the eyeball normal map is used to indicate the degree of wetness of the eyeball and corresponds to "Sclera Normal" in the attribute column; by adjusting this attribute value, the degree of wetness of the eyeball can be adjusted: the larger the attribute value, the wetter the eyeball appears.
Through the embodiment, the wettability of the displayed eyeball model can be updated by adjusting the attribute of the normal map for controlling the wettability displayed by the eyeball, so that partial adjustment of the eyeball model can be realized, and the flexibility of eyeball model adjustment is improved.
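One plausible sketch of such a wettability control: treat the attribute as a strength factor on the tangent-space normal map, so larger values retain more of the bumpy normal detail that suggests a wet, uneven tear film. The scaling-and-renormalizing formula is an assumption for illustration; the patent only states that a normal map indicates the degree of wetness.

```python
import math

def apply_wetness(bump_normal, wetness=0.5):
    """Scale the tangential (x, y) components of a tangent-space bump
    normal by `wetness`, then renormalize.

    wetness = 0 yields a perfectly flat normal (no bump detail);
    larger values strengthen the normal-map detail on the sclera.
    """
    x, y, z = bump_normal
    scaled = (x * wetness, y * wetness, z)
    length = math.sqrt(sum(c * c for c in scaled))
    return tuple(c / length for c in scaled)
```

A lighting pass would call this per pixel on the sampled normal-map value before shading, so a single scalar attribute smoothly dials the wet look up or down without swapping textures.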
The following describes the model generation method of this embodiment with reference to an alternative example. The model generation method in this example can be applied to a scene in which eye modeling is performed using animation software. An engine programmer upgrades the material in the engine code (mainly by changing the rendering algorithm), so that the upgraded eyeballs exhibit more realistic reflections and highlights, the refraction effect of the iris, the moist appearance of the eyes, and so on.
The animation software provides a material editing interface through which the eyeball material can be adjusted. The eyeball material can be upgraded by modifying the shader algorithm of the eyeball material.
By upgrading the material of the eyeball, the expression effect of the eyeball can be improved. The improvements can include, but are not limited to, at least one of the following:
(1) adding a lens refraction effect (the refraction effect of the iris);
(2) separating the maps of the iris and sclera so that the iris and pupil can be sized individually;
(3) adjusting the roughness and highlight intensity of the iris and sclera separately;
(4) adjusting the brightness and color of the iris;
(5) adding an eyeball normal map to indicate the degree of eye wetness.
Using the new eyeball material significantly enhances the expressive effect of the eyes, making the virtual object more attractive (as shown in fig. 17 and fig. 18) and improving the realism of the virtual object.
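The effects listed above can be tied together in a single, illustrative material-sampling sketch: separate iris and sclera maps are selected per pixel, and iris lookups are offset along the view direction as a crude stand-in for lens refraction. The map-as-callable interface, the disc test, and the offset formula are all assumptions; a real shader would refract through the cornea geometry.

```python
def sample_eye_color(uv, view_dir, iris_map, sclera_map,
                     iris_uv_radius=0.15, refraction_strength=0.02):
    """Pick the iris or sclera texture for a UV, refracting iris samples.

    iris_map / sclera_map are texture lookups modeled as callables uv -> color.
    """
    du, dv = uv[0] - 0.5, uv[1] - 0.5
    inside_iris = (du * du + dv * dv) ** 0.5 <= iris_uv_radius
    if inside_iris:
        # Shift the lookup by the view direction's tangential part,
        # mimicking how the cornea/lens bends light onto the iris.
        uv = (uv[0] + view_dir[0] * refraction_strength,
              uv[1] + view_dir[1] * refraction_strength)
        return iris_map(uv)
    return sclera_map(uv)
```

Because the two maps are sampled independently, the iris and sclera attributes described earlier (size, roughness, highlight intensity, and so on) can each be adjusted without affecting the other region.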
In addition to the above-mentioned upgrade of the eyeball material, the following processing can also be performed to further improve the eye rendering effect:
(1) to achieve a better refraction effect, the Mesh of the eyeball is not a regular sphere but is shaped as shown in fig. 3, so that the refraction effect is closer to reality;
(2) improving the rendering effect of the lacrimal glands around the eyes, making the eyes more attractive;
(3) improving the shadow expression around the eyeball;
(4) upgrading the Mesh production and material quality of the eyelashes.
In this example, upgrading the material of the character's eyeballs improves the expressive effect of the character's eyes and the realism of the virtual object.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
According to another aspect of the embodiments of the present application, there is also provided a model generation apparatus for implementing the above model generation method. The apparatus can be applied to the animation creating device described above. Fig. 19 is a schematic diagram of an alternative model generation apparatus according to an embodiment of the present application, as shown in fig. 19, the apparatus may include:
(1) an obtaining unit 192, configured to obtain a first eyeball model, where the first eyeball model is an unpatterned eyeball model;
(2) the mapping unit 194 is connected to the obtaining unit 192, and configured to map the first eyeball model with a target map to obtain a second eyeball model, where the target map includes a first map corresponding to an iris of the first eyeball model, and the first map is used to simulate refraction of a lens on incident light;
(3) and a first display unit 196, connected to the mapping unit 194, configured to display the generated second eyeball model through the target client.
It should be noted that the obtaining unit 192 in this embodiment may be configured to execute step S202 in this embodiment, the mapping unit 194 in this embodiment may be configured to execute step S204 in this embodiment, and the first display unit 196 in this embodiment may be configured to execute step S206 in this embodiment.
Through the above modules, a first eyeball model is obtained, wherein the first eyeball model is an unpatterned eyeball model; the first eyeball model is mapped with a target map to obtain a second eyeball model, wherein the target map comprises a first map corresponding to the iris of the first eyeball model, and the first map is used for simulating the refraction of the crystalline lens to incident light; and the generated second eyeball model is displayed through the target client. This solves the problem in the prior art that overly simple rendering results in low fidelity of the rendered eyeballs, thereby improving the fidelity of the rendered eyeballs.
As an alternative embodiment, the apparatus further comprises:
(1) a second display unit, configured to display an adjustment interface on the target client after the generated second eyeball model is displayed by the target client, where the adjustment interface includes an adjustment area for adjusting a target attribute of the target map;
(2) a detection unit, configured to detect a target operation performed on an adjustment region, where the target operation is used to adjust an attribute value of a target attribute;
(3) an adjusting unit, configured to adjust the target attribute of the target map in response to the detected target operation, so as to update the second eyeball model;
(4) and the third display unit is used for displaying the updated second eyeball model through the target client.
As an alternative embodiment, the adjusting unit comprises:
(1) and the first adjusting module is used for adjusting the first attribute of the first map to update the iris under the condition that the target attribute comprises the first attribute of the first map, wherein the first attribute is the display attribute of the iris.
As an alternative embodiment, the first adjusting module includes at least one of:
(1) the first adjusting sub-module is used for adjusting a first sub-attribute of the first map so as to update the size of the iris, wherein the first sub-attribute is used for controlling the size of the iris display;
(2) the second adjusting sub-module is used for adjusting a second sub-attribute of the first map so as to update the roughness of the iris, wherein the second sub-attribute is used for controlling the roughness of the iris display;
(3) the third adjusting sub-module is used for adjusting a third sub-attribute of the first map so as to update the high light intensity of the iris, wherein the third sub-attribute is used for controlling the high light intensity displayed by the iris;
(4) the fourth adjusting sub-module is used for adjusting a fourth sub-attribute of the first map so as to update the brightness of the iris, wherein the fourth sub-attribute is used for controlling the brightness displayed by the iris;
(5) and the fifth adjusting sub-module is used for adjusting the fifth sub-attribute of the first map so as to update the color of the iris, wherein the fifth sub-attribute is used for controlling the color displayed by the iris.
As an alternative embodiment, the target map further comprises: the second map corresponding to the sclera of the second eyeball model, and the adjusting unit comprises:
(1) and the second adjusting module is used for adjusting the second attribute of the second map to update the sclera under the condition that the target attribute comprises the second attribute of the second map, wherein the second attribute is the display attribute of the sclera.
As an alternative embodiment, the second adjusting module comprises at least one of:
(1) a sixth adjusting sub-module, configured to adjust a sixth sub-attribute of the second map to update a size of the pupil in the sclera, where the sixth sub-attribute is used to control a size of the pupil display;
(2) the seventh adjusting sub-module is used for adjusting a seventh sub-attribute of the second map so as to update the roughness of the sclera, wherein the seventh sub-attribute is used for controlling the roughness displayed by the sclera;
(3) and the eighth adjusting submodule is used for adjusting the eighth sub-attribute of the second map so as to update the high light intensity of the sclera, wherein the eighth sub-attribute is used for controlling the high light intensity displayed by the sclera.
As an alternative embodiment, the target map further comprises: a third map for simulating eye wettability, the adjusting unit comprising:
(1) and the third adjusting module is used for adjusting the third attribute of the third map to update the wettability of the second eyeball model under the condition that the target attribute comprises the third attribute of the third map, wherein the third attribute is used for controlling the wettability displayed by the second eyeball model.
As an alternative embodiment, the first eye model is egg-shaped.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of the above embodiments. It should be noted that the modules described above as a part of the apparatus may be operated in a hardware environment as shown in fig. 1, and may be implemented by software, or may be implemented by hardware, where the hardware environment includes a network environment.
According to still another aspect of the embodiments of the present application, there is also provided an electronic device for implementing the above model generation method, which may be a server, a terminal, or a combination thereof.
Fig. 20 is a block diagram of an electronic device according to an embodiment of the present application, and as shown in fig. 20, the electronic device may include: one or more processors 2001 (only one of which is shown), a memory 2003, and a transmission device 2005, which may further include an input/output device 2007 as shown in fig. 20.
The memory 2003 may be used to store software programs and modules, such as program instructions/modules corresponding to the model generation method and apparatus in the embodiment of the present application, and the processor 2001 executes various functional applications and data processing by running the software programs and modules stored in the memory 2003, so as to implement the model generation method described above. The memory 2003 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory. In some examples, the memory 2003 may further include memory located remotely from the processor 2001, which may be connected to electronic devices via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 2005 is used for receiving or transmitting data via a network, and can also be used for data transmission between the processor and the memory. Examples of the network may include wired and wireless networks. In one example, the transmission device 2005 includes a Network Interface Controller (NIC), which can be connected to a router via a network cable and other network devices so as to communicate with the internet or a local area network. In one example, the transmission device 2005 is a Radio Frequency (RF) module, which is used to communicate with the internet wirelessly. Specifically, the memory 2003 is used to store an application program.
The processor 2001 may call an application stored in the memory 2003 via the transmission means 2005 to perform the following steps:
s1, acquiring a first eyeball model, wherein the first eyeball model is an unpatterned eyeball model;
s2, mapping the first eyeball model by using a target mapping to obtain a second eyeball model, wherein the target mapping comprises a first mapping corresponding to the iris of the first eyeball model, and the first mapping is used for simulating the refraction of the lens to incident light;
and S3, displaying the generated second eyeball model through the target client.
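Steps S1 to S3 can be sketched as a minimal data flow. The class and function names below are illustrative stand-ins, not the patent's actual API; `display` stands in for whatever mechanism shows the model through the target client.

```python
class EyeballModel:
    """Toy container: an eyeball model plus the maps applied to it."""
    def __init__(self, maps=None):
        # e.g. {"iris": ..., "sclera": ..., "normal": ...}
        self.maps = maps or {}

def generate_eye_model(target_maps, display):
    base = EyeballModel()                      # S1: unpatterned (unmapped) model
    mapped = EyeballModel(dict(target_maps))   # S2: map it with the target maps
    display(mapped)                            # S3: show it via the target client
    return mapped
```

The point of the flow is that the displayed model is derived from the unpatterned base purely by applying maps, which is what makes the later per-attribute adjustments (and re-display) possible.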
This embodiment of the application provides a scheme for generating the model: a first eyeball model is obtained, wherein the first eyeball model is an unpatterned eyeball model; the first eyeball model is mapped with a target map to obtain a second eyeball model, wherein the target map comprises a first map corresponding to the iris of the first eyeball model, and the first map is used for simulating the refraction of the crystalline lens to incident light; and the generated second eyeball model is displayed through the target client. This solves the problem in the prior art that overly simple rendering results in low fidelity of the rendered eyeballs, thereby improving the fidelity of the rendered eyeballs.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
It will be understood by those skilled in the art that the structure shown in fig. 20 is only an illustration and is not intended to limit the structure of the electronic device. For example, the electronic device may also include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in FIG. 20, or have a different configuration than shown in FIG. 20. The terminal interacting with the electronic device through the network may be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palm computer, and a Mobile Internet Device (MID), a PAD, etc.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
According to still another aspect of an embodiment of the present application, there is also provided a storage medium. Alternatively, in this embodiment, the storage medium may be a program code for executing the model generation method.
Optionally, in this embodiment, the storage medium may be located on at least one of a plurality of network devices in a network shown in the above embodiment.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
s1, acquiring a first eyeball model, wherein the first eyeball model is an unpatterned eyeball model;
s2, mapping the first eyeball model by using a target mapping to obtain a second eyeball model, wherein the target mapping comprises a first mapping corresponding to the iris of the first eyeball model, and the first mapping is used for simulating the refraction of the lens to incident light;
and S3, displaying the generated second eyeball model through the target client.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing program codes, such as a U disk, a ROM, a RAM, a removable hard disk, a magnetic disk, or an optical disk.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or a part of or all or part of the technical solution contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including instructions for causing one or more computer devices (which may be personal computers, servers, network devices, or the like) to execute all or part of the steps of the method described in the embodiments of the present application.
In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is merely an alternative embodiment of the present application and it should be noted that modifications and embellishments could be made by those skilled in the art without departing from the principle of the present application and should be considered as the scope of the present application.
The scope of the subject matter sought to be protected herein is defined in the appended claims. These and other aspects of the invention are also encompassed by the embodiments of the present invention as set forth in the following numbered clauses:
1. a model generation method, comprising:
acquiring a first eyeball model, wherein the first eyeball model is an unpatterned eyeball model;
mapping the first eyeball model by using a target mapping to obtain a second eyeball model, wherein the target mapping comprises a first mapping corresponding to the iris of the first eyeball model, and the first mapping is used for simulating the refraction of the crystalline lens to incident light;
displaying the generated second eyeball model through the target client.
2. The method of clause 1, wherein after displaying the generated second eye model by the target client, the method further comprises:
displaying an adjustment interface on the target client, wherein the adjustment interface comprises an adjustment area used for adjusting the target attribute of the target map;
detecting a target operation executed on the adjustment area, wherein the target operation is used for adjusting the attribute value of the target attribute;
adjusting the target attribute of the target map in response to the detected target operation, so as to update the second eyeball model;
displaying the updated second eyeball model through the target client.
3. The method of clause 2, wherein adjusting the target attribute of the target map to update the second eyeball model comprises:
adjusting the first attribute of the first map to update the iris if the target attribute comprises the first attribute of the first map, wherein the first attribute is a display attribute of the iris.
4. The method of clause 3, wherein adjusting the first attribute of the first map to update the iris of the second eye model comprises at least one of:
adjusting a first sub-attribute of the first map to update the size of the iris, wherein the first sub-attribute is used to control the size of the iris display;
adjusting a second sub-attribute of the first map to update the roughness of the iris, wherein the second sub-attribute is used for controlling the roughness of the iris display;
adjusting a third sub-attribute of the first map to update the highlight intensity of the iris, wherein the third sub-attribute is used for controlling the highlight intensity of the iris display;
adjusting a fourth sub-attribute of the first map to update the brightness of the iris, wherein the fourth sub-attribute is used for controlling the brightness of the iris display;
adjusting a fifth sub-attribute of the first map to update the color of the iris, wherein the fifth sub-attribute is used to control the color of the iris display.
5. The method of clause 2, wherein the target map further comprises: a second map corresponding to a sclera of the second eyeball model, and adjusting the target attribute of the target map to update the second eyeball model comprises:
adjusting a second attribute of the second map to update the sclera if the target attribute comprises the second attribute of the second map, wherein the second attribute is a display attribute of the sclera.
6. The method of clause 5, wherein adjusting the second attribute of the second map to update the sclera comprises at least one of:
adjusting a sixth sub-attribute of the second map to update a size of a pupil in the sclera, wherein the sixth sub-attribute is used to control a size of the pupil display;
adjusting a seventh sub-attribute of the second map to update the roughness of the sclera, wherein the seventh sub-attribute is used for controlling the roughness of the sclera display;
adjusting an eighth sub-attribute of the second map to update the highlight intensity of the sclera, wherein the eighth sub-attribute is used to control the highlight intensity displayed by the sclera.
7. The method of clause 2, wherein the target map further comprises: a third map for simulating eye wettability, and adjusting the target attribute of the target map to update the second eyeball model comprises:
and in the case that the target attribute comprises a third attribute of the third map, adjusting the third attribute of the third map to update the wettability of the second eyeball model, wherein the third attribute is used for controlling the wettability displayed by the second eyeball model.
8. The method of any of clauses 1-7, wherein the first eye model is egg-shaped.
9. A model generation apparatus comprising:
an acquisition unit configured to acquire a first eyeball model, wherein the first eyeball model is an eyeball model to which no map has been applied;
a mapping unit configured to map the first eyeball model by using a target map to obtain a second eyeball model, wherein the target map comprises a first map corresponding to the iris of the first eyeball model, and the first map is used for simulating the refraction of incident light by the crystalline lens;
and a first display unit configured to display the generated second eyeball model through a target client.
10. The apparatus of clause 9, wherein the apparatus further comprises:
a second display unit, configured to display an adjustment interface on the target client after the generated second eyeball model is displayed by the target client, where the adjustment interface includes an adjustment area for adjusting a target attribute of the target map;
a detection unit configured to detect a target operation performed on the adjustment region, wherein the target operation is used to adjust an attribute value of the target attribute;
an adjusting unit, configured to adjust the target attribute of the target map in response to the detected target operation, so that the second eyeball model is updated;
and the third display unit is used for displaying the updated second eyeball model through the target client.
11. The apparatus of clause 10, wherein the adjusting unit comprises:
a first adjusting module, configured to adjust a first attribute of the first map to update the iris if the target attribute includes the first attribute of the first map, where the first attribute is a display attribute of the iris.
12. The apparatus of clause 11, wherein the first adjustment module comprises at least one of:
a first adjusting sub-module configured to adjust a first sub-attribute of the first map to update the size of the iris, wherein the first sub-attribute is used to control the displayed size of the iris;
a second adjusting sub-module configured to adjust a second sub-attribute of the first map to update the roughness of the iris, wherein the second sub-attribute is used to control the displayed roughness of the iris;
a third adjusting sub-module configured to adjust a third sub-attribute of the first map to update the highlight intensity of the iris, wherein the third sub-attribute is used to control the displayed highlight intensity of the iris;
a fourth adjusting sub-module configured to adjust a fourth sub-attribute of the first map to update the brightness of the iris, wherein the fourth sub-attribute is used to control the displayed brightness of the iris;
and a fifth adjusting sub-module configured to adjust a fifth sub-attribute of the first map to update the color of the iris, wherein the fifth sub-attribute is used to control the displayed color of the iris.
13. The apparatus of clause 10, wherein the target map further comprises: a second map corresponding to a sclera of the second eyeball model, the adjustment unit comprising:
a second adjusting module, configured to adjust a second attribute of the second map to update the sclera if the target attribute includes the second attribute of the second map, where the second attribute is a display attribute of the sclera.
14. The apparatus of clause 13, wherein the second adjustment module comprises at least one of:
a sixth adjusting sub-module configured to adjust a sixth sub-attribute of the second map to update the size of a pupil in the sclera, wherein the sixth sub-attribute is used to control the displayed size of the pupil;
a seventh adjusting sub-module configured to adjust a seventh sub-attribute of the second map to update the roughness of the sclera, wherein the seventh sub-attribute is used to control the displayed roughness of the sclera;
and an eighth adjusting sub-module configured to adjust an eighth sub-attribute of the second map to update the highlight intensity of the sclera, wherein the eighth sub-attribute is used to control the displayed highlight intensity of the sclera.
15. The apparatus of clause 10, wherein the target map further comprises: a third map for simulating eye wettability, the adjusting unit comprising:
a third adjusting module, configured to, if the target attribute includes a third attribute of the third map, adjust the third attribute of the third map to update the wettability of the second eyeball model, where the third attribute is used to control the wettability displayed by the second eyeball model.
16. The apparatus of any of clauses 9-15, wherein the first eye model is egg-shaped.
17. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of clauses 1 to 8 when executed.
18. An electronic device comprising a memory having a computer program stored therein and a processor arranged to perform the method of any of clauses 1 to 8 by means of the computer program.

Claims (9)

1. A method of model generation, comprising:
acquiring a first eyeball model, wherein the first eyeball model is an eyeball model to which no map has been applied;
mapping the first eyeball model by using a target map to obtain a second eyeball model, wherein the target map comprises a first map corresponding to the iris of the first eyeball model, and the first map is used for simulating the refraction of incident light by the crystalline lens; the target map further comprises: a second map corresponding to the sclera of the second eyeball model and a third map for simulating eyeball wettability;
displaying the generated second eyeball model through a target client;
the method further comprises updating the second eyeball model as follows: adjusting a third sub-attribute of the first map to update the highlight intensity of the iris, wherein the third sub-attribute is used to control the displayed highlight intensity of the iris; adjusting an eighth sub-attribute of the second map to update the highlight intensity of the sclera, wherein the eighth sub-attribute is used to control the displayed highlight intensity of the sclera; and adjusting a third attribute of the third map to update the wettability of the second eyeball model so that the sclera portion appears uneven, wherein the third attribute is used to control the wettability displayed by the second eyeball model.
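Read as a pipeline, claim 1 acquires an unmapped model, applies the three maps, and then adjusts two highlight sub-attributes plus the wettability attribute. The sketch below is a minimal illustration of that flow only; the dict-based map representation and every key name are assumptions (a real engine would bind texture resources and shader parameters instead).

```python
def build_second_eyeball_model(first_model: dict) -> dict:
    """Apply the target map (iris, sclera, and wettability maps) to an
    eyeball model that has no maps yet (claim 1's mapping step)."""
    target_map = {
        "iris":    {"simulates": "lens refraction", "highlight_intensity": 1.0},
        "sclera":  {"highlight_intensity": 1.0},
        "wetness": {"wettability": 0.0},
    }
    return {**first_model, "maps": target_map}

def update_second_eyeball_model(model: dict, iris_highlight: float,
                                sclera_highlight: float,
                                wettability: float) -> dict:
    """Claim 1's update step: the iris and sclera highlight sub-attributes
    and the third map's wettability attribute."""
    model["maps"]["iris"]["highlight_intensity"] = iris_highlight
    model["maps"]["sclera"]["highlight_intensity"] = sclera_highlight
    model["maps"]["wetness"]["wettability"] = wettability
    return model

eye = build_second_eyeball_model({"shape": "egg"})
eye = update_second_eyeball_model(eye, 0.8, 0.6, 0.5)
```

The two steps are kept separate because the claim distinguishes the one-time mapping from the later attribute updates.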
2. The method of claim 1, wherein after displaying the generated second eyeball model through the target client, the method further comprises:
displaying an adjustment interface on the target client, wherein the adjustment interface comprises an adjustment area used for adjusting the target attribute of the target map;
detecting a target operation executed on the adjustment area, wherein the target operation is used for adjusting the attribute value of the target attribute;
adjusting the target attribute of the target map to update the second eyeball model in response to the detected target operation;
displaying the updated second eyeball model through the target client.
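Claim 2's interface flow (display controls, detect an operation, adjust the attribute, re-display) resembles a small event loop over slider-style operations. The sketch below illustrates that reading only; the operation record format and all names are assumptions, not part of the claim.

```python
def apply_target_operations(model: dict, operations: list) -> dict:
    """For each detected target operation, adjust the named attribute of
    the named map so the second eyeball model is updated; the caller then
    re-displays the updated model through the target client."""
    for op in operations:                    # "detecting a target operation"
        map_name, attribute = op["target"]   # which map attribute to adjust
        model["maps"][map_name][attribute] = op["value"]  # new attribute value
    return model

model = {"maps": {"iris": {"size": 1.0}, "sclera": {"roughness": 0.5}}}
model = apply_target_operations(
    model,
    [{"target": ("iris", "size"), "value": 1.2},
     {"target": ("sclera", "roughness"), "value": 0.4}],
)
```

Batching the operations into a list mirrors the fact that the adjustment area can drive several target attributes before the model is re-displayed.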
3. The method of claim 2, wherein adjusting the target attribute of the target map to update the second eyeball model comprises:
adjusting the first attribute of the first map to update the iris if the target attribute comprises the first attribute of the first map, wherein the first attribute is a display attribute of the iris.
4. The method of claim 3, wherein adjusting the first attribute of the first map to update the iris of the second eyeball model comprises at least one of:
adjusting a first sub-attribute of the first map to update the size of the iris, wherein the first sub-attribute is used to control the displayed size of the iris;
adjusting a second sub-attribute of the first map to update the roughness of the iris, wherein the second sub-attribute is used to control the displayed roughness of the iris;
adjusting a fourth sub-attribute of the first map to update the brightness of the iris, wherein the fourth sub-attribute is used to control the displayed brightness of the iris;
adjusting a fifth sub-attribute of the first map to update the color of the iris, wherein the fifth sub-attribute is used to control the displayed color of the iris.
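Across claims 1 and 4, the first (iris) map carries five sub-attributes: size, roughness, highlight intensity, brightness, and color. A hypothetical table of those sub-attributes with a guarded setter might look as follows; the default values and the RGB encoding of color are assumptions for illustration only.

```python
# First through fifth sub-attributes of the first (iris) map.
IRIS_DEFAULTS = {
    "size": 1.0,                  # first: displayed size
    "roughness": 0.5,             # second: displayed roughness
    "highlight_intensity": 1.0,   # third: displayed highlight intensity (claim 1)
    "brightness": 1.0,            # fourth: displayed brightness
    "color": (0.35, 0.25, 0.15),  # fifth: displayed color (RGB assumed)
}

def adjust_iris(iris: dict, sub_attribute: str, value) -> dict:
    """Adjust one iris sub-attribute, rejecting names outside claim 4."""
    if sub_attribute not in IRIS_DEFAULTS:
        raise KeyError(f"unknown iris sub-attribute: {sub_attribute}")
    iris[sub_attribute] = value
    return iris

iris = adjust_iris(dict(IRIS_DEFAULTS), "brightness", 1.3)
```

The guard reflects that the claim enumerates a closed set of iris sub-attributes rather than arbitrary map parameters.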
5. The method of claim 2, wherein adjusting the target attribute of the target map to update the second eyeball model comprises:
adjusting a second attribute of the second map to update the sclera if the target attribute comprises the second attribute of the second map, wherein the second attribute is a display attribute of the sclera.
6. The method of claim 5, wherein adjusting the second attribute of the second map to update the sclera comprises at least one of:
adjusting a sixth sub-attribute of the second map to update the size of a pupil in the sclera, wherein the sixth sub-attribute is used to control the displayed size of the pupil;
and adjusting a seventh sub-attribute of the second map to update the roughness of the sclera, wherein the seventh sub-attribute is used to control the displayed roughness of the sclera.
7. A model generation apparatus, comprising:
an acquisition unit configured to acquire a first eyeball model, wherein the first eyeball model is an eyeball model to which no map has been applied;
a mapping unit configured to map the first eyeball model by using a target map to obtain a second eyeball model, wherein the target map comprises a first map corresponding to the iris of the first eyeball model, and the first map is used for simulating the refraction of incident light by the crystalline lens; the target map further comprises: a second map corresponding to the sclera of the second eyeball model and a third map for simulating eyeball wettability;
a first display unit configured to display the generated second eyeball model through a target client;
and a second eyeball model updating unit configured to update the second eyeball model as follows: adjusting a third sub-attribute of the first map to update the highlight intensity of the iris, wherein the third sub-attribute is used to control the displayed highlight intensity of the iris; adjusting an eighth sub-attribute of the second map to update the highlight intensity of the sclera, wherein the eighth sub-attribute is used to control the displayed highlight intensity of the sclera; and adjusting a third attribute of the third map to update the wettability of the second eyeball model so that the sclera portion appears uneven, wherein the third attribute is used to control the wettability displayed by the second eyeball model.
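The unit decomposition of apparatus claim 7 (acquisition unit, mapping unit, display unit, updating unit) can be sketched as one method per unit; the class name, method names, and the string returned by the display stub are all illustrative assumptions, not the disclosed implementation.

```python
class ModelGenerationApparatus:
    """Mirrors claim 7: one method per claimed unit."""

    def acquire_first_model(self) -> dict:
        """Acquisition unit: an eyeball model with no map applied yet."""
        return {"maps": None}

    def map_model(self, first_model: dict) -> dict:
        """Mapping unit: apply the iris, sclera, and wettability maps."""
        first_model["maps"] = {"iris": {}, "sclera": {}, "wetness": {}}
        return first_model

    def display(self, model: dict) -> str:
        """Display unit: stand-in for rendering through the target client."""
        return f"displaying model with {len(model['maps'])} maps"

    def update(self, model: dict, iris_highlight: float,
               sclera_highlight: float, wettability: float) -> dict:
        """Updating unit: highlight and wettability adjustments of claim 7."""
        model["maps"]["iris"]["highlight_intensity"] = iris_highlight
        model["maps"]["sclera"]["highlight_intensity"] = sclera_highlight
        model["maps"]["wetness"]["wettability"] = wettability
        return model

apparatus = ModelGenerationApparatus()
model = apparatus.update(
    apparatus.map_model(apparatus.acquire_first_model()), 0.8, 0.6, 0.5)
```

Keeping the units as separate methods preserves the claim's separation of acquisition, mapping, display, and update responsibilities.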
8. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to carry out the method of any one of claims 1 to 6 when executed.
9. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method of any of claims 1 to 6 by means of the computer program.
CN202010438055.6A 2020-05-21 2020-05-21 Model generation method and apparatus, storage medium, and electronic apparatus Active CN111738967B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010438055.6A CN111738967B (en) 2020-05-21 2020-05-21 Model generation method and apparatus, storage medium, and electronic apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010438055.6A CN111738967B (en) 2020-05-21 2020-05-21 Model generation method and apparatus, storage medium, and electronic apparatus

Publications (2)

Publication Number Publication Date
CN111738967A CN111738967A (en) 2020-10-02
CN111738967B true CN111738967B (en) 2022-01-04

Family

ID=72647551

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010438055.6A Active CN111738967B (en) 2020-05-21 2020-05-21 Model generation method and apparatus, storage medium, and electronic apparatus

Country Status (1)

Country Link
CN (1) CN111738967B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114201046B (en) * 2021-12-10 2023-12-01 北京字跳网络技术有限公司 Gaze direction optimization method and device, electronic equipment and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN103632389A (en) * 2012-07-17 2014-03-12 索尼公司 System and method to achieve better eyelines in cg characters
CN106537219A (en) * 2014-05-30 2017-03-22 奇跃公司 Methods and system for creating focal planes in virtual and augmented reality
CN108875499A (en) * 2017-11-06 2018-11-23 北京旷视科技有限公司 Face shape point and status attribute detection and augmented reality method and apparatus
CN109903374A (en) * 2019-02-20 2019-06-18 网易(杭州)网络有限公司 Eyeball analogy method, device and the storage medium of virtual objects
CN111179396A (en) * 2019-12-12 2020-05-19 腾讯科技(深圳)有限公司 Image generation method, image generation device, storage medium, and electronic device

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
CN1259641C * 2002-07-19 2006-06-14 Silicon Integrated Systems Corp. Method and system for improving the contour of an embossed texture pattern
US7388580B2 (en) * 2004-05-07 2008-06-17 Valve Corporation Generating eyes for a character in a virtual environment
CN107180441B (en) * 2016-03-10 2019-04-09 腾讯科技(深圳)有限公司 The method and apparatus for generating eye image
CN108780223B (en) * 2016-03-11 2019-12-20 脸谱科技有限责任公司 Corneal sphere tracking for generating eye models
US10217275B2 (en) * 2016-07-07 2019-02-26 Disney Enterprises, Inc. Methods and systems of performing eye reconstruction using a parametric model
US10217265B2 (en) * 2016-07-07 2019-02-26 Disney Enterprises, Inc. Methods and systems of generating a parametric eye model
CN107145224B (en) * 2017-04-07 2019-10-29 清华大学 Human eye sight tracking and device based on three-dimensional sphere Taylor expansion
CN109712226A (en) * 2018-12-10 2019-05-03 网易(杭州)网络有限公司 The see-through model rendering method and device of virtual reality


Non-Patent Citations (2)

Title
Real-Time 3D Eye Performance Reconstruction for RGBD Cameras; Quan Wen et al.; IEEE Transactions on Visualization and Computer Graphics; 1 December 2017; Vol. 23, No. 12; pp. 2586-2598 *
Research on Real-Time Rendering of Realistic Character Skin Materials; Wang Shuyi; China Master's Theses Full-text Database, Information Science and Technology; 15 January 2012; No. 01; pp. I138-534 *


Similar Documents

Publication Publication Date Title
CN108564646A Rendering method and apparatus for an object, storage medium, and electronic apparatus
EP3882870A1 (en) Method and device for image display, storage medium and electronic device
KR102090891B1 (en) Methods and devices for generating eye images
CN112215934A (en) Rendering method and device of game model, storage medium and electronic device
CN110634175A (en) Neurobehavioral animation system
CN108447043A Image synthesis method, device, and computer-readable medium
CN111179396B (en) Image generation method, image generation device, storage medium, and electronic device
CN110333924B (en) Image gradual change adjustment method, device, equipment and storage medium
CN108837510B (en) Information display method and device, storage medium and electronic device
CN106898040A Virtual resource object rendering method and apparatus
EP4394713A1 (en) Image rendering method and apparatus, electronic device, computer-readable storage medium, and computer program product
CN112669194B (en) Animation processing method, device, equipment and storage medium in virtual scene
CN106683193A (en) Three-dimensional model design method and design device
US20230405452A1 (en) Method for controlling game display, non-transitory computer-readable storage medium and electronic device
CN112132936A (en) Picture rendering method and device, computer equipment and storage medium
CN110930484B (en) Animation configuration method and device, storage medium and electronic device
CN111738967B (en) Model generation method and apparatus, storage medium, and electronic apparatus
CN112231020B (en) Model switching method and device, electronic equipment and storage medium
CN114359458A (en) Image rendering method, device, equipment, storage medium and program product
CN112843704A (en) Animation model processing method, device, equipment and storage medium
CN116958344A (en) Animation generation method and device for virtual image, computer equipment and storage medium
WO2023089537A1 (en) Alternating perceived realities in a virtual world based on first person preferences and a relative coordinate system
CN113313798B (en) Cloud picture manufacturing method and device, storage medium and computer equipment
CN111714889B (en) Sound source control method, device, computer equipment and medium
CN115063330A (en) Hair rendering method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant