CN106910240B - Real-time shadow generation method and device - Google Patents

Real-time shadow generation method and device

Info

Publication number
CN106910240B
Authority
CN
China
Prior art keywords
glasses
shadow
model
point
human head
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710054120.3A
Other languages
Chinese (zh)
Other versions
CN106910240A (en)
Inventor
曾强
罗杨
汪洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Topplusvision Science & Technology Co ltd
Original Assignee
Chengdu Topplusvision Science & Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Topplusvision Science & Technology Co ltd filed Critical Chengdu Topplusvision Science & Technology Co ltd
Priority to CN201710054120.3A priority Critical patent/CN106910240B/en
Publication of CN106910240A publication Critical patent/CN106910240A/en
Application granted granted Critical
Publication of CN106910240B publication Critical patent/CN106910240B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/50 Lighting effects
    • G06T 15/60 Shadow generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a real-time shadow generation method and device. A shadow texture map of a glasses model is constructed according to the glasses model of a wearable device and a human head model; the shadow texture map is baked into the human head model; the motion state of the shadow corresponding to the glasses is adjusted in real time according to the motion state of the glasses, so that the shadow stays consistent with the motion of the glasses; and the glasses shadow is rendered on the human head model to generate the shadow of the glasses under the corresponding lighting model. The real-time shadow generation method and device offer good real-time performance, high robustness and high precision, keep the shadow moving synchronously with the virtual object in real time, and achieve high fidelity.

Description

Real-time shadow generation method and device
Technical Field
The invention relates to the technical field of computer graphics, and in particular to a real-time shadow generation method and a real-time shadow generation device.
Background
Real-time shadow is a technique for drawing the shadow of an object under a light source in a three-dimensional space in real time by using an algorithm. Real-time shadow technology has very important applications in the fields of virtual reality, augmented reality, simulation and games.
The indexes for measuring the quality of real-time shadow technology mainly comprise two points: real-time performance and fidelity. Known real-time shadow techniques have room for improvement on both criteria. With the wide use of mobile platforms such as iOS and Android, real-time shadow technology implemented on mobile platforms has broad application prospects in the field of virtual wearing. Traditional real-time shadow technology is based on shadow mapping and shadow projection; the resulting shadows are not vivid enough, and their real-time performance is insufficient.
Therefore, a real-time, high-fidelity shadow technology is highly desirable and would bring great application and commercial value.
Disclosure of Invention
The invention aims to provide a method and a device for generating real-time shadow, which are used for solving the problems of poor real-time performance and low fidelity of the existing real-time shadow technology.
In order to solve the above technical problem, the present invention provides a method for generating a real-time shadow, comprising:
according to the glasses model and the human head model of the wearable device, constructing a shadow texture mapping of the glasses model;
baking the shadow texture map in the head model;
adjusting the motion state of the shadow corresponding to the glasses in real time according to the motion state of the glasses so as to keep the shadow consistent with the motion state of the glasses;
and rendering the glasses shadow on the human head model to generate the shadow of the glasses under the corresponding light model.
Optionally, the constructing a shadow texture map of the glasses model according to the glasses model and the head model of the wearable device includes:
placing the glasses model and the human head model in a 3D scene, and placing lights in the 3D scene to simulate a real illumination scene;
rendering the glasses model and the human head model, and constructing a shadow texture map of the glasses model.
Optionally, after the constructing the shadow texture map of the glasses model according to the glasses model and the head model of the wearable device, the method further includes:
and removing impurities from the shadow texture map.
Optionally, said baking the shadow texture map in the head model comprises:
traversing the image pixel coordinates of the human head model and the pixel coordinates of the shadow texture map;
and if the image pixel coordinates of the human head model are consistent with the pixel coordinates of the shadow texture mapping, baking the pixel values of the pixel points of the shadow texture mapping into the pixel points of the human head model until all the pixel points are traversed.
Optionally, the adjusting, in real time, the motion state of the shadow corresponding to the glasses according to the motion state of the glasses includes:
dividing the glasses into a glasses frame, a front half part of glasses legs and a rear half part of the glasses legs, and respectively performing motion matching processing on the motion states of the three parts;
as for the frame part, according to
[Formula image BDA0001218408910000021 — not reproduced in text]
adjusting the motion state of the shadow by vOffset; wherein (u, v) represents the coordinates before the shadow moves, (u', v') represents the coordinates after the shadow moves, vOffset is the influence factor of the glasses movement on the shadow movement, V' is the ordinate of the glasses, and w is the conversion coefficient between the local coordinate system of the human head and the shadow texture map coordinate system;
for the front half of the temple, according to
[Formula image BDA0001218408910000022 — not reproduced in text]
y = vOffset - [|u - u_O| - |u_A - u_O|] * tan n,
[Formula image BDA0001218408910000031 — not reproduced in text]
adjusting the motion state of the shadow; wherein u_O, u_A, u_B and u_C respectively represent the abscissae of point O, point A, point B and point C; point O is the point connecting the middles of the left and right lenses, point A is the point connecting the temple and the frame, point B is the point connecting the front and rear halves of the temple, and point C is the point at the end of the temple;
for the rear half of the temple, according to
[Formula image BDA0001218408910000032 — not reproduced in text]
the motion state of the shadow is adjusted.
The invention also provides a device for generating the real-time shadow, which comprises:
the construction module is used for constructing a shadow texture mapping of the glasses model according to the glasses model and the human head model of the wearable device;
a baking module for baking the shadow texture map in the human head model;
the adjusting module is used for adjusting the motion state of the shadow corresponding to the glasses in real time according to the motion state of the glasses so as to keep the shadow consistent with the motion state of the glasses;
and the rendering module is used for rendering the glasses shadow on the head model to generate the shadow of the glasses under the corresponding light model.
Optionally, the construction module comprises:
the simulation unit is used for placing the glasses model and the human head model in a 3D scene, and placing lamplight in the 3D scene to simulate a real illumination scene;
and the construction unit is used for rendering the glasses model and the human head model and constructing the shadow texture mapping of the glasses model.
Optionally, the device further comprises:
and the processing module is used for removing impurities from the shadow texture mapping after the shadow texture mapping of the glasses model is constructed according to the glasses model and the head model of the wearable device.
Optionally, the baking module comprises:
the traversing unit is used for traversing the image pixel coordinates of the human head model and the pixel coordinates of the shadow texture mapping;
and the baking unit is used for baking the pixel values of the pixel points of the shadow texture mapping to the pixel points of the human head model if the image pixel coordinates of the human head model are consistent with the pixel coordinates of the shadow texture mapping until all the pixel points are traversed.
Optionally, the adjusting module includes:
the dividing unit is used for dividing the glasses into a glasses frame, a front half part of glasses legs and a rear half part of the glasses legs and respectively performing motion matching processing on the motion states of the three parts;
a first adjusting unit, configured to, for the frame part, according to
[Formula image BDA0001218408910000041 — not reproduced in text]
adjust the motion state of the shadow; wherein (u, v) represents the coordinates before the shadow moves, (u', v') represents the coordinates after the shadow moves, vOffset is the influence factor of the glasses movement on the shadow movement, V' is the ordinate of the glasses, and w is the conversion coefficient between the local coordinate system of the human head and the shadow texture map coordinate system;
a second adjusting unit, configured to, for the front half of the temple, according to
[Formula image BDA0001218408910000042 — not reproduced in text]
y = vOffset - [|u - u_O| - |u_A - u_O|] * tan n,
[Formula image BDA0001218408910000043 — not reproduced in text]
adjust the motion state of the shadow; wherein u_O, u_A, u_B and u_C respectively represent the abscissae of point O, point A, point B and point C; point O is the point connecting the middles of the left and right lenses, point A is the point connecting the temple and the frame, point B is the point connecting the front and rear halves of the temple, and point C is the point at the end of the temple;
a third adjusting unit, configured to, for the rear half of the temple, according to
[Formula image BDA0001218408910000044 — not reproduced in text]
adjust the motion state of the shadow.
According to the real-time shadow generation method and device provided by the invention, a shadow texture map of a glasses model is constructed according to the glasses model and the human head model of the wearable device; baking the shadow texture map in the human head model; adjusting the motion state of the shadow corresponding to the glasses in real time according to the motion state of the glasses so as to keep the shadow consistent with the motion state of the glasses; and rendering the glasses shadow on the human head model to generate the shadow of the glasses under the corresponding light model. The method and the device for generating the real-time shadow have the advantages of good real-time performance, high robustness, high precision, capability of realizing synchronous real-time movement of the shadow and the virtual object, and high fidelity.
Drawings
In order to more clearly illustrate the embodiments or technical solutions of the present invention, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without creative efforts.
FIG. 1 is a flow chart of one embodiment of a method for generating real-time shadows according to the present invention;
FIG. 2 is a flow chart of another embodiment of a real-time shadow generation method provided by the present invention;
FIG. 3 is a flow chart of another embodiment of a real-time shadow generation method provided by the present invention;
FIG. 4 is a schematic view of a simulated real-world lighting scene and the placement of lights;
FIG. 5 is a schematic illustration of a shadow map constructed using 3dmax;
FIG. 6 is a schematic illustration of baking shadows on a model of a human head;
FIG. 7 is a schematic view of uv coordinates of a human head model;
FIG. 8 is a detailed diagram of the three parts of the shadow;
FIG. 9 is a detailed diagram of the three parts of the glasses;
fig. 10 is a block diagram of a real-time shadow generation apparatus according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the disclosure, the invention will be described in further detail with reference to the accompanying drawings and specific embodiments. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 shows a flowchart of a specific embodiment of a real-time shadow generation method provided by the present invention, where the method includes:
step S101: according to the glasses model and the human head model of the wearable device, constructing a shadow texture mapping of the glasses model;
specifically, the glasses model and the head model may be placed in a 3D scene, and a light simulation real illumination scene is placed in the 3D scene; rendering the glasses model and the human head model, and constructing a shadow texture map of the glasses model.
Step S102: baking the shadow texture map in the head model;
as a specific embodiment, the step may specifically be: traversing the image pixel coordinates of the human head model and the pixel coordinates of the shadow texture map; and if the image pixel coordinates of the human head model are consistent with the pixel coordinates of the shadow texture mapping, baking the pixel values of the pixel points of the shadow texture mapping into the pixel points of the human head model until all the pixel points are traversed.
Step S103: adjusting the motion state of the shadow corresponding to the glasses in real time according to the motion state of the glasses so as to keep the shadow consistent with the motion state of the glasses;
step S104: and rendering the glasses shadow on the human head model to generate the shadow of the glasses under the corresponding light model.
According to the real-time shadow generation method provided by the invention, a shadow texture map of a glasses model is constructed according to the glasses model and the human head model of the wearable device; baking the shadow texture map in the human head model; adjusting the motion state of the shadow corresponding to the glasses in real time according to the motion state of the glasses so as to keep the shadow consistent with the motion state of the glasses; and rendering the glasses shadow on the human head model to generate the shadow of the glasses under the corresponding light model. The real-time shadow generation method provided by the invention has the advantages of good real-time performance and high robustness, and can realize synchronous real-time movement of the shadow and the virtual object with high precision and high fidelity.
Fig. 2 shows a flow chart of another embodiment of the real-time shadow generation method provided by the present invention, where the method includes:
step S201: according to the glasses model and the human head model of the wearable device, constructing a shadow texture mapping of the glasses model;
step S202: removing impurities from the shadow texture map;
step S203: baking the shadow texture map in the head model;
step S204: adjusting the motion state of the shadow corresponding to the glasses in real time according to the motion state of the glasses so as to keep the shadow consistent with the motion state of the glasses;
specifically, the glasses are divided into a glasses frame, a front half part of each glasses leg and a rear half part of each glasses leg, and the motion states of the three parts are subjected to motion matching processing respectively;
as for the frame part, according to
[Formula image BDA0001218408910000071 — not reproduced in text]
adjusting the motion state of the shadow by vOffset; wherein (u, v) represents the coordinates before the shadow moves, (u', v') represents the coordinates after the shadow moves, vOffset is the influence factor of the glasses movement on the shadow movement, V' is the ordinate of the glasses, and w is the conversion coefficient between the local coordinate system of the human head and the shadow texture map coordinate system;
for the front half of the temple, according to
[Formula image BDA0001218408910000072 — not reproduced in text]
y = vOffset - [|u - u_O| - |u_A - u_O|] * tan n,
[Formula image BDA0001218408910000073 — not reproduced in text]
adjusting the motion state of the shadow; wherein u_O, u_A, u_B and u_C respectively represent the abscissae of point O, point A, point B and point C; point O is the point connecting the middles of the left and right lenses, point A is the point connecting the temple and the frame, point B is the point connecting the front and rear halves of the temple, and point C is the point at the end of the temple;
for the rear half of the temple, according to
[Formula image BDA0001218408910000074 — not reproduced in text]
the motion state of the shadow is adjusted.
Step S205: and rendering the glasses shadow on the human head model to generate the shadow of the glasses under the corresponding light model.
In the embodiment, the shadow map is obtained by simulating the lighting scene through 3dmax, and then the impurity removal processing is performed on the shadow map, so that the rendered shadow is natural and vivid, and the shadow map is more perfect in appearance.
Referring to fig. 3, the method for generating real-time shadow provided by the present invention is further elaborated in detail with reference to specific scenarios.
Step S301: a shadow texture map of the eyewear model was constructed using 3dmax depending on the size of the eyewear model and the head model.
The concrete mode is as follows: and putting the glasses 3D model and the human head 3D model into a 3D scene. Simulating a real illumination scene: lights are placed in the 3D scene to simulate a real lighting scene, the positions of which are shown in fig. 4. It should be noted that when lighting the model, the light source can be used for lighting several more light sources, and the light sources are uniformly distributed in space, so as to simulate the brightness of the real light source distributed on the human face, adjust the lighting parameters, and form the contrast between light and dark. And (4) rendering by adjusting the parameters in the 3dmax to obtain a simulated shadow texture mapping projected on the human head model by the glasses.
Step S302: and (3) storing the shadow map obtained in the step (S301), and then removing impurities from the shadow map by utilizing a PS (packet switching) technology, so that the shadow map is more perfect in sense of appearance. As shown in fig. 5.
Step S303: calling the shadow map after the PS processing, and baking the shadow map in the human head model.
The concrete mode is as follows: and traversing the pixel coordinates (U, V) of the human head model image and the pixel coordinates (U, V) of the shadow map. If the pixel coordinates (U, V) of the human head model image are consistent with the pixel coordinates (U, V) of the shadow map, baking the pixel values of the pixel points (U, V) of the shadow map into the pixel points (U, V) of the human head model until all the pixel points are traversed, as shown in FIG. 6. In this manner, the shadow can be re-baked onto the model of the human head, as shown in FIG. 7.
Step S304: and adjusting the motion state of the shadow in real time according to the motion state of the glasses, so that the shadow is consistent with the motion state of the glasses.
The glasses mainly move up and down along the bridge of the nose, with the bend of the temple tips acting as a fixed point. As the glasses move down along the nose bridge, the shadow also moves down; as the glasses move up along the nose bridge, the shadow also moves up, achieving the effect of a simulated real-time shadow. The movement of the shadow is controlled by the movement parameters of the glasses, which makes the simulated real-time shadow vivid.
The specific shadow movement adjustment mode is as follows:
the shadow and the glasses are divided into three parts, as shown in fig. 8 and 9, and the motion matching processing is respectively carried out on the motion states of the three parts.
The adjustment to the shadow movement of the first portion is described using the following mathematical formula:
[Formula image BDA0001218408910000091 — not reproduced in text]
wherein (u, v) represents the coordinates before the shadow moves and (u', v') represents the coordinates after the shadow moves. vOffset is the influence factor of the glasses movement on the shadow movement; it represents the shadow's coordinate change obtained by converting the vertical movement of the glasses in the local coordinate system, and is determined by the movement posture of the glasses. vOffset is expressed as follows: vOffset = w * V'.
V' is the vertical coordinate of the glasses and can be acquired in real time through the flat screen; w is the conversion coefficient between the local coordinate system of the human head and the shadow map coordinate system; it is a constant, and generally w = 0.016.
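As a minimal sketch of this relation (the formula image itself is not reproduced in the text), the snippet below converts the glasses' vertical coordinate V' into vOffset with the constant w = 0.016 and shifts a frame-part shadow texel by that amount; the direction of the shift is an assumption.

```python
W = 0.016  # conversion coefficient between the head's local coordinate system and shadow-map UV space

def frame_shadow_uv(u, v, glasses_v):
    """Shift a frame-part shadow texel according to the glasses' vertical position V'.

    vOffset = w * V' converts the vertical movement of the glasses in the head's
    local coordinate system into a shadow-map coordinate change. The sign of the
    shift is an assumption, since the original formula image is not reproduced.
    """
    v_offset = W * glasses_v
    return u, v + v_offset
```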
The adjustment to the shadow movement of the second portion is described by the following mathematical formula:
[Formula image BDA0001218408910000092 — not reproduced in text]
y = vOffset - [|u - u_O| - |u_A - u_O|] * tan n
[Formula image BDA0001218408910000093 — not reproduced in text]
wherein u_O, u_A, u_B and u_C are respectively the abscissae of point O, point A, point B and point C.
The adjustment to the shadow movement of the third portion is described by the following mathematical formula:
[Formula image BDA0001218408910000094 — not reproduced in text]
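Putting the three parts together, the sketch below is a hedged illustration of the piecewise adjustment: only the front-temple expression y = vOffset - [|u - u_O| - |u_A - u_O|] * tan n survives explicitly in the text, so the frame and rear-temple branches, the way y enters the new ordinate, and the angle n are labeled as assumptions.

```python
import math

W = 0.016  # local-coordinate to shadow-UV conversion coefficient given in the text

def adjust_shadow_uv(u, v, glasses_v, u_O, u_A, u_B, n_rad):
    """Piecewise shadow-UV adjustment sketch for the three parts of the glasses shadow.

    Only the front-temple term y = vOffset - [|u - u_O| - |u_A - u_O|] * tan(n)
    is given explicitly in the text; the frame and rear-temple branches and the
    way y enters the new ordinate are illustrative assumptions (the corresponding
    formula images are not reproduced).
    """
    v_offset = W * glasses_v

    if abs(u - u_O) <= abs(u_A - u_O):
        # Frame part: assumed to translate vertically by vOffset.
        return u, v + v_offset
    elif abs(u - u_O) <= abs(u_B - u_O):
        # Front half of the temple: the offset is attenuated with distance
        # from the frame, using the tan(n) term given in the text.
        y = v_offset - (abs(u - u_O) - abs(u_A - u_O)) * math.tan(n_rad)
        return u, v + y
    else:
        # Rear half of the temple: assumed to stay put (no visible shadow
        # displacement); the actual formula image is not reproduced.
        return u, v
```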
step S305: and rendering the glasses shadow on the human head model to obtain the glasses and the shadows of the glasses under the corresponding light model.
The method and the device can simulate the shadow in real time according to the movement of the glasses, so the shadow has good real-time performance and matches human visual perception. In addition, in this embodiment the shadow map is obtained by simulating the lighting scene in 3dmax and is then subjected to impurity-removal processing, so that the rendered shadow is natural and more vivid.
The following describes a real-time shadow generating apparatus according to an embodiment of the present invention, and the real-time shadow generating apparatus described below and the real-time shadow generating method described above may be referred to in correspondence.
Fig. 10 is a block diagram of a real-time shadow generating apparatus according to an embodiment of the present invention, where the real-time shadow generating apparatus according to fig. 10 may include:
the construction module 100 is configured to construct a shadow texture map of a glasses model according to a glasses model and a head model of a wearable device;
a baking module 200 for baking the shadow texture map in the human head model;
the adjusting module 300 is configured to adjust the motion state of the shadow corresponding to the glasses in real time according to the motion state of the glasses, so that the shadow is consistent with the motion state of the glasses;
and the rendering module 400 is configured to render the glasses shadows on the head model, and generate shadows of the glasses under the corresponding light model.
As a specific implementation manner, in the real-time shadow generation apparatus provided by the present invention, the construction module includes:
the simulation unit is used for placing the glasses model and the human head model in a 3D scene, and placing lamplight in the 3D scene to simulate a real illumination scene;
and the construction unit is used for rendering the glasses model and the human head model and constructing the shadow texture mapping of the glasses model.
Further, the embodiment of the present invention may further include:
and the processing module is used for removing impurities from the shadow texture mapping after the shadow texture mapping of the glasses model is constructed according to the glasses model and the head model of the wearable device.
On the basis of any of the above embodiments, the baking module may specifically include:
the traversing unit is used for traversing the image pixel coordinates of the human head model and the pixel coordinates of the shadow texture mapping;
and the baking unit is used for baking the pixel values of the pixel points of the shadow texture mapping to the pixel points of the human head model if the image pixel coordinates of the human head model are consistent with the pixel coordinates of the shadow texture mapping until all the pixel points are traversed.
Optionally, the adjusting module includes:
the dividing unit is used for dividing the glasses into a glasses frame, a front half part of glasses legs and a rear half part of the glasses legs and respectively performing motion matching processing on the motion states of the three parts;
a first adjusting unit, configured to, for the frame part, according to
[Formula image BDA0001218408910000111 — not reproduced in text]
and vOffset = w * V', adjust the motion state of the shadow; wherein (u, v) represents the coordinates before the shadow moves, (u', v') represents the coordinates after the shadow moves, vOffset is the influence factor of the glasses movement on the shadow movement, V' is the ordinate of the glasses, and w is the conversion coefficient between the local coordinate system of the human head and the shadow texture map coordinate system;
a second adjusting unit, configured to, for the front half of the temple, according to
[Formula image BDA0001218408910000112 — not reproduced in text]
y = vOffset - [|u - u_O| - |u_A - u_O|] * tan n,
[Formula image BDA0001218408910000113 — not reproduced in text]
adjust the motion state of the shadow; wherein u_O, u_A, u_B and u_C respectively represent the abscissae of point O, point A, point B and point C; point O is the point connecting the middles of the left and right lenses, point A is the point connecting the temple and the frame, point B is the point connecting the front and rear halves of the temple, and point C is the point at the end of the temple;
a third adjusting unit, configured to, for the rear half of the temple, according to
[Formula image BDA0001218408910000114 — not reproduced in text]
adjust the motion state of the shadow.
The real-time shadow generating device of this embodiment is used to implement the foregoing real-time shadow generating method; for its specific implementation, reference may be made to the corresponding parts of the method embodiments above. For example, the construction module 100, the baking module 200, the adjusting module 300 and the rendering module 400 are respectively used to implement steps S101, S102, S103 and S104 of the real-time shadow generation method, so their specific implementations can refer to the descriptions of the corresponding embodiments and are not repeated here.
The real-time shadow generation device provided by the invention constructs a shadow texture mapping of a glasses model according to the glasses model and the human head model of the wearable device; baking the shadow texture map in the human head model; adjusting the motion state of the shadow corresponding to the glasses in real time according to the motion state of the glasses so as to keep the shadow consistent with the motion state of the glasses; and rendering the glasses shadow on the human head model to generate the shadow of the glasses under the corresponding light model. The real-time shadow generating device provided by the invention has the advantages of good real-time performance and high robustness, and can realize synchronous real-time movement of the shadow and the virtual object with high precision and high fidelity.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The method and the device for generating the real-time shadow provided by the invention are described in detail above. The principles and embodiments of the present invention are explained herein using specific examples, which are presented only to assist in understanding the method and its core concepts. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.

Claims (8)

1. A method for generating real-time shadows, comprising:
according to the glasses model and the human head model of the wearable device, constructing a shadow texture mapping of the glasses model;
baking the shadow texture map in the head model;
adjusting the motion state of the shadow corresponding to the glasses in real time according to the motion state of the glasses so as to keep the shadow consistent with the motion state of the glasses;
rendering the glasses shadow on the human head model to generate the shadow of the glasses under the corresponding light model;
the adjusting the motion state of the shadow corresponding to the glasses in real time according to the motion state of the glasses comprises:
dividing the glasses into a glasses frame, a front half part of glasses legs and a rear half part of the glasses legs, and respectively performing motion matching processing on the motion states of the three parts;
as for the frame part, according to
[Formula image FDA0002379296040000011 — not reproduced in text]
adjusting the motion state of the shadow; wherein (u, v) represents the coordinates before the shadow moves, (u', v') represents the coordinates after the shadow moves, vOffset is the influence factor of the glasses movement on the shadow movement, V' is the ordinate of the glasses, and w is the conversion coefficient between the local coordinate system of the human head and the shadow texture map coordinate system;
for the front half of the temple, according to
[Formula image FDA0002379296040000012 — not reproduced in text]
y = vOffset - [|u - u_O| - |u_A - u_O|] * tan n,
[Formula image FDA0002379296040000013 — not reproduced in text]
adjusting the motion state of the shadow; wherein u_O, u_A, u_B and u_C respectively represent the abscissae of point O, point A, point B and point C; point O is the point connecting the middles of the left and right lenses, point A is the point connecting the temple and the frame, point B is the point connecting the front and rear halves of the temple, and point C is the point at the end of the temple;
for the rear half of the temple, according to
[Formula image FDA0002379296040000014 — not reproduced in text]
the motion state of the shadow is adjusted.
2. The method for generating real-time shadows according to claim 1, wherein the constructing the shadow texture map of the glasses model according to the glasses model and the human head model of the wearable device comprises:
placing the glasses model and the human head model in a 3D scene, and placing lights in the 3D scene to simulate a real illumination scene;
rendering the glasses model and the human head model, and constructing a shadow texture map of the glasses model.
3. The method for generating real-time shadows according to claim 2, wherein after the constructing the shadow texture map of the glasses model according to the glasses model and the human head model of the wearable device, the method further comprises:
and removing impurities from the shadow texture map.
4. The method of generating real-time shadows according to any one of claims 1 to 3, wherein baking the shadow texture map into the head model comprises:
traversing the image pixel coordinates of the human head model and the pixel coordinates of the shadow texture map;
and if the image pixel coordinates of the human head model are consistent with the pixel coordinates of the shadow texture mapping, baking the pixel values of the pixel points of the shadow texture mapping into the pixel points of the human head model until all the pixel points are traversed.
5. An apparatus for generating real-time shadows, comprising:
the construction module is used for constructing a shadow texture mapping of the glasses model according to the glasses model and the human head model of the wearable device;
a baking module for baking the shadow texture map in the human head model;
the adjusting module is used for adjusting the motion state of the shadow corresponding to the glasses in real time according to the motion state of the glasses so as to keep the shadow consistent with the motion state of the glasses;
the rendering module is used for rendering the glasses shadow on the head model to generate the shadow of the glasses under the corresponding light model;
the adjustment module includes:
the dividing unit is used for dividing the glasses into a glasses frame, a front half part of glasses legs and a rear half part of the glasses legs and respectively performing motion matching processing on the motion states of the three parts;
a first adjusting unit, configured to, for the frame part, according to
[Formula image FDA0002379296040000031 — not reproduced in text]
adjust the motion state of the shadow by vOffset; wherein (u, v) represents the coordinates before the shadow moves, (u', v') represents the coordinates after the shadow moves, vOffset is the influence factor of the glasses movement on the shadow movement, V' is the ordinate of the glasses, and w is the conversion coefficient between the local coordinate system of the human head and the shadow texture map coordinate system;
a second adjusting unit, configured to, for the front half of the temple, according to
[Formula image FDA0002379296040000032 — not reproduced in text]
y = vOffset - [|u - u_O| - |u_A - u_O|] * tan n,
[Formula image FDA0002379296040000033 — not reproduced in text]
adjust the motion state of the shadow; wherein u_O, u_A, u_B and u_C respectively represent the abscissae of point O, point A, point B and point C; point O is the point connecting the middles of the left and right lenses, point A is the point connecting the temple and the frame, point B is the point connecting the front and rear halves of the temple, and point C is the point at the end of the temple;
a third adjusting unit, configured to, for the rear half of the temple, according to
[Formula image FDA0002379296040000034 — not reproduced in text]
adjust the motion state of the shadow.
6. The real-time shadow generation apparatus of claim 5, wherein the construction module comprises:
the simulation unit is used for placing the glasses model and the human head model in a 3D scene, and placing lamplight in the 3D scene to simulate a real illumination scene;
and the construction unit is used for rendering the glasses model and the human head model and constructing the shadow texture mapping of the glasses model.
7. The real-time shadow generation apparatus of claim 6, further comprising:
and the processing module is used for removing impurities from the shadow texture mapping after the shadow texture mapping of the glasses model is constructed according to the glasses model and the head model of the wearable device.
8. The real-time shadow generation apparatus according to any one of claims 5 to 7, wherein the baking module comprises:
the traversing unit is used for traversing the image pixel coordinates of the human head model and the pixel coordinates of the shadow texture mapping;
and the baking unit is used for baking the pixel values of the pixel points of the shadow texture mapping to the pixel points of the human head model if the image pixel coordinates of the human head model are consistent with the pixel coordinates of the shadow texture mapping until all the pixel points are traversed.
CN201710054120.3A 2017-01-24 2017-01-24 Real-time shadow generation method and device Active CN106910240B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710054120.3A CN106910240B (en) 2017-01-24 2017-01-24 Real-time shadow generation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710054120.3A CN106910240B (en) 2017-01-24 2017-01-24 Real-time shadow generation method and device

Publications (2)

Publication Number Publication Date
CN106910240A CN106910240A (en) 2017-06-30
CN106910240B true CN106910240B (en) 2020-04-28

Family

ID=59207139

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710054120.3A Active CN106910240B (en) 2017-01-24 2017-01-24 Real-time shadow generation method and device

Country Status (1)

Country Link
CN (1) CN106910240B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107492001B (en) * 2017-05-17 2020-03-27 成都通甲优博科技有限责任公司 Virtual glasses try-on method and device and service terminal
CN116188668B (en) * 2023-04-25 2023-07-07 北京渲光科技有限公司 Shadow rendering method, medium and electronic device based on IOS platform

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9262869B2 (en) * 2012-07-12 2016-02-16 UL See Inc. Method of 3D model morphing driven by facial tracking and electronic device using the method the same
KR102212030B1 (en) * 2014-05-26 2021-02-04 엘지전자 주식회사 Glass type terminal and control method thereof
CN104077802B (en) * 2014-07-16 2018-01-05 四川蜜蜂科技有限公司 The method that enhancing virtual scene real-time simulation image is shown
CN104123748A (en) * 2014-07-18 2014-10-29 无锡梵天信息技术股份有限公司 Screen space point light source based method for achieving real-time dynamic shadows
CN106056663B (en) * 2016-05-19 2019-05-24 京东方科技集团股份有限公司 Rendering method, processing module and augmented reality glasses in augmented reality scene

Also Published As

Publication number Publication date
CN106910240A (en) 2017-06-30

Similar Documents

Publication Publication Date Title
CN109427088B (en) Rendering method for simulating illumination and terminal
CN112316420B (en) Model rendering method, device, equipment and storage medium
CN111127624A (en) Illumination rendering method and device based on AR scene
US9183654B2 (en) Live editing and integrated control of image-based lighting of 3D models
KR102637901B1 (en) A method of providing a dolly zoom effect by an electronic device and the electronic device utilized in the method
CN110838084B (en) Method and device for transferring style of image, electronic equipment and storage medium
US11600040B2 (en) Illumination rendering method and apparatus, storage medium, and electronic device
CN109985384B (en) Method and device for dynamically adjusting map
CN101968891A (en) System for automatically generating three-dimensional figure of picture for game
CN102819855B (en) The generation method of two dimensional image and device
CN104392479A (en) Method of carrying out illumination coloring on pixel by using light index number
KR102353556B1 (en) Apparatus for Generating Facial expressions and Poses Reappearance Avatar based in User Face
CN108986232A (en) A method of it is shown in VR and AR environment picture is presented in equipment
CN111047709A (en) Binocular vision naked eye 3D image generation method
CN112419472A (en) Augmented reality real-time shadow generation method based on virtual shadow map
CN106910240B (en) Real-time shadow generation method and device
CN106204703A (en) Three-dimensional scene models rendering intent and device
CN109461197A (en) A kind of cloud real-time rendering optimization algorithm based on spherical surface UV and re-projection
CN109447931B (en) Image processing method and device
CN107293162A (en) Move teaching auxiliary and device, terminal device
CN113648655A (en) Rendering method and device of virtual model, storage medium and electronic equipment
CN111402385B (en) Model processing method and device, electronic equipment and storage medium
CN106780702B (en) Method and system based on physical coloring
CN109446945A (en) Threedimensional model treating method and apparatus, electronic equipment, computer readable storage medium
CN110838167A (en) Model rendering method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant