CN111862338A - Display method and device for simulating glasses wearing image - Google Patents

Display method and device for simulating glasses wearing image

Info

Publication number
CN111862338A
CN111862338A
Authority
CN
China
Prior art keywords
glasses
image
texture
map
wearing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010577638.7A
Other languages
Chinese (zh)
Other versions
CN111862338B (en)
Inventor
肖华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen New Mirror Media Network Co ltd
Original Assignee
Shenzhen New Mirror Media Network Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen New Mirror Media Network Co ltd filed Critical Shenzhen New Mirror Media Network Co ltd
Priority to CN202010577638.7A
Publication of CN111862338A
Application granted
Publication of CN111862338B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application is applicable to the technical field of image processing, and provides a display method, a display device, terminal equipment and a storage medium for simulating glasses wearing images. The method comprises the following steps: carrying out element mapping on a glasses image based on a target scene map to obtain a new glasses image; generating a simulated glasses wearing image according to the new glasses image; and displaying the simulated glasses wearing image. The target scene map thus simulates the surrounding environment of the user wearing the glasses, so that the user can choose to simulate wearing the glasses in various scenes. Through element mapping, elements to be mapped in the target scene map are mapped onto the glasses image, simulating the way objects in the real environment are reflected on the frame, temples and lenses of the glasses, and the texture details of the glasses image are enriched in combination with the target scene. The new glasses image therefore looks more real in the target scene, the glasses wearing effect is more realistic, and the glasses try-on and shopping experience of the user is improved.

Description

Display method and device for simulating glasses wearing image
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a display method and device for simulating glasses wearing images, terminal equipment and a storage medium.
Background
Today, shopping online via the Internet has become commonplace. To improve the user's online shopping experience, many commodities sold online can be presented in their specific use state, for example by simulating the try-on effect of clothes or the wearing effect of glasses.
In the prior art, an online glasses try-on function can be used to simulate the glasses wearing effect for a user. Specifically, a glasses wearing model image is added into a shader, and the shader renders the glasses model in the glasses wearing model image according to the glasses material of the glasses model, so as to obtain a rendered glasses wearing model image. However, the rendered glasses wearing model image can only show glasses models corresponding to different glasses materials, so the existing rendering scheme suffers from a low degree of realism in the glasses wearing simulation.
Disclosure of Invention
The embodiment of the application provides a display method and device for simulating glasses wearing images, terminal equipment and a storage medium, and can solve the problem that the reality degree of the glasses wearing model image currently displayed to a user is low.
In a first aspect, an embodiment of the present application provides a display method for simulating glasses wearing images, including:
Carrying out element mapping on the glasses image based on the target scene map to obtain a new glasses image;
generating a simulated glasses wearing image according to the new glasses image;
and displaying the simulated glasses wearing image.
In the embodiment of the application, element mapping is carried out on the glasses image based on the target scene map to obtain a new glasses image, so that the surrounding environment of the user wearing the glasses is simulated according to the target scene map and the user can choose to simulate wearing the glasses in various scenes. Elements to be mapped in the target scene map are mapped onto the glasses image through the element mapping, which simulates the way objects in the real environment are reflected on the frame, temples and lenses of the glasses; the texture details of the glasses image are thereby enriched in combination with the target scene, so that the new glasses image looks more real in the target scene. Because the glasses image is updated to the new glasses image, the simulated glasses wearing image is generated according to the new glasses image and then displayed, so that the glasses wearing image in the target scene is shown to the user and the user can view the texture details mapped onto the glasses, which makes the glasses wearing effect more realistic and improves the glasses try-on and shopping experience of the user.
In a second aspect, an embodiment of the present application provides a display device for simulating glasses wearing images, including:
the mapping module is used for carrying out element mapping on the glasses image based on the target scene map to obtain a new glasses image;
the generating module is used for generating a simulated glasses wearing image according to the new glasses image;
and the display module is used for displaying the simulated glasses wearing image.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor, when executing the computer program, implements the method for displaying a simulated glasses-worn image according to any one of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the method for displaying a simulated eyeglass wearing image according to any one of the first aspect.
In a fifth aspect, the present application provides a computer program product, which when run on a terminal device, causes the terminal device to execute the display method for simulating glasses wearing images according to any one of the above first aspects.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art based on these drawings without inventive effort.
Fig. 1 is a schematic flowchart of a display method for simulating glasses wearing images according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a mapping process for element mapping of a glasses image based on a target scene map according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a process for generating an element to be mapped according to a texture map according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a display method for simulating glasses wearing images according to another embodiment of the present application;
fig. 5 is a schematic flowchart of a display method for simulating glasses wearing images according to another embodiment of the present application;
FIG. 6 is a schematic diagram of ambient reflection provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of ambient refraction provided by an embodiment of the present application;
FIG. 8 is a schematic diagram illustrating a front effect of a simulated glasses-wearing image provided by an embodiment of the present application;
FIG. 9 is a schematic side effect diagram of a simulated eyeglass wearing image according to an embodiment of the present application;
FIG. 10 is a schematic diagram illustrating an effect of the simulated glasses-worn image after being partially enlarged according to an embodiment of the present application;
Fig. 11 is a schematic structural diagram of a display device for simulating glasses wearing images according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
As described in the related background art, a simulated glasses wearing image in which the glasses model is worn on the head model is added to a shader, and the shader renders the glasses model in the simulated glasses wearing image according to the glasses material of the glasses model, so as to obtain a rendered simulated glasses wearing image. However, the rendered simulated glasses wearing image can only show the color of the glasses model, so its degree of realism is low and it cannot provide the user with a realistic glasses wearing effect.
Therefore, the embodiment of the application provides a display method for simulating glasses wearing images, in which a three-dimensional glasses model is placed in a preset scene and rendered according to the objects or ambient light in that preset scene, thereby improving the degree of realism of the three-dimensional glasses model.
The display method for simulating glasses wearing images provided in the present application is described in detail below. Fig. 1 shows a schematic flowchart of a display method of a simulated glasses-worn image provided in the present application. By way of example and not limitation, the method may be applied to a terminal device, which includes but is not limited to a mobile phone, a tablet computer, a wearable device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like.
S101, element mapping is carried out on the glasses image based on the target scene map, and a new glasses image is obtained.
In S101, the glasses image is a three-dimensional glasses image, that is, the three-dimensional relationship among the horizontal direction, the vertical direction and the depth of field of the glasses can be displayed in the glasses image. Illustratively, the glasses image may be constructed with three-dimensional drawing software, or may be obtained by scanning feature points of real glasses with a scanning device and then performing three-dimensional modeling according to the feature points. The target scene map is used to simulate the background environment in which the user wears the glasses, and may be a panoramic map of that background environment. Illustratively, the target scene map can be generated with drawing software, or obtained by shooting a panoramic image of a real scene with a panoramic camera. For example, the panorama texture is created by calling a cube texture loader (e.g., CubeTextureLoader) and is passed to the scene object (scene).
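By way of illustration only, the following is a minimal sketch of loading such a panorama texture in a WebGL environment. It assumes a Three.js-style API; the file names and scene folder are hypothetical and are not prescribed by this application:

```typescript
import * as THREE from 'three';

// Six face images of the target scene map (hypothetical file names).
const faceUrls = ['px.jpg', 'nx.jpg', 'py.jpg', 'ny.jpg', 'pz.jpg', 'nz.jpg'];

const scene = new THREE.Scene();

// Create the panorama texture of the target scene with a cube texture loader
// and pass it to the scene as its background.
const sceneMap: THREE.CubeTexture = new THREE.CubeTextureLoader()
  .setPath('/maps/beach/') // hypothetical folder of the selected target scene
  .load(faceUrls);
scene.background = sceneMap;
```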
It can be understood that the target scene is a wearing scene simulating the wearing of the glasses by the user, and may be a preset indoor scene or an outdoor scene, the indoor scene may be various indoor environments such as a home, a library or a bar, and the outdoor scene may be various outdoor environments such as a sunny beach, a tree shade, and a road.
In this embodiment, when element mapping is performed on a glasses image, the elements in the target scene map that can be mapped onto the glasses image are determined first, and then the determined elements are mapped onto the glasses image with an Environment Mapping technique. The elements may be objects or ambient light in the target scene map, such as buildings, trees, mountains, clouds in the sky, sunlight and lighting. Element mapping is the process of reflecting or refracting the elements in the target scene map onto the temples, lenses and frame of the glasses in the glasses image by means of the environment mapping technique. The environment mapping technique is a reflection mapping technique, for example the cube (sky box) environment mapping technique or the sphere (sky sphere) environment mapping technique. In this technique, a reflective object (such as the glasses in the glasses image) is treated as a virtual eye; according to a virtual texture map describing a real scene, the virtual texture map is mapped onto the reflective object, the image obtained on the reflective object is an image of the real scene, and this image of the real scene is displayed on the surface of the object (such as the temples, lenses and frame of the glasses in the glasses image). In this way the glasses image presents a corresponding image effect for each target scene, the degree of realism of the glasses image is improved, the user can view the glasses wearing effect in various scenes, and the shopping experience of the user is improved.
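As an illustration of this environment mapping step only, the sketch below applies the loaded cube texture to the materials of a glasses model so that the frame and temples reflect the target scene while the lenses refract it. The material types and parameter values are assumptions made for the example, not values prescribed by this application:

```typescript
import * as THREE from 'three';

// `sceneMap` is the cube texture of the target scene loaded earlier.
declare const sceneMap: THREE.CubeTexture;

// Frame and temples: reflective environment mapping, so scene elements
// appear reflected on the metal parts of the glasses.
const frameMaterial = new THREE.MeshStandardMaterial({
  color: 0x222222,
  metalness: 0.9,   // assumed metallic frame
  roughness: 0.15,
  envMap: sceneMap,
});

// Lenses: refractive environment mapping, so the scene is seen through the lens.
const lensMap = sceneMap.clone();
lensMap.mapping = THREE.CubeRefractionMapping;
const lensMaterial = new THREE.MeshPhongMaterial({
  color: 0xffffff,
  transparent: true,
  opacity: 0.35,          // assumed lens transparency
  envMap: lensMap,
  refractionRatio: 0.95,  // assumed ratio of indices of refraction
});
```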
The glasses image may be an image of glasses worn on the head or an image of glasses not worn on the head. Namely, the embodiment of the application can perform environment mapping on the glasses image combined with the three-dimensional head image, and can also perform environment mapping on the glasses image and then combine the glasses image with the three-dimensional head image.
In the embodiment of the present application, by way of example and not limitation, the glasses image and the three-dimensional head image are combined in advance to obtain an initial simulated glasses wearing image. The specific steps are as follows (an illustrative sketch of these steps is given after step S4):
s1, initializing a preset 3D renderer based on a 3D drawing protocol WebGL (Web Graphics Library).
S2, loading and rendering a three-dimensional head image of the user, comprising: modeling the head portrait of the user through 3D scanner equipment or 3D modeling software (such as Maya and 3D Studio Max), and outputting a three-dimensional head model file corresponding to the user; and reading the three-dimensional head model file through the 3D renderer, and rendering the three-dimensional head model to obtain the three-dimensional head image.
S3, loading and rendering the glasses image, comprising: modeling real glasses with a 3D scanner device or 3D modeling software (such as Maya or 3D Studio Max), and outputting a three-dimensional glasses model file; and importing the three-dimensional glasses model file into the 3D renderer, and rendering the three-dimensional glasses model through the 3D renderer to obtain the glasses image.
And S4, wearing the three-dimensional glasses model on the three-dimensional head model through the three-dimensional characteristic point set of the three-dimensional head model and the three-dimensional characteristic point set of the three-dimensional glasses model to obtain an initial simulated glasses wearing image. The three-dimensional feature point set of the three-dimensional head model comprises ear feature points, nose feature points and pupil feature points; the three-dimensional feature point set of the three-dimensional glasses model comprises the feature points of the glasses legs, the feature points of the nose pads and the feature points of the centers of the lenses.
It should be understood that "wearing" described in all embodiments of the present application is to move the glasses (or the glasses model) to a preset position of the head (or the head model).
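For reference, S1 to S4 might be sketched as follows. The glTF model format, the loader and the feature-node names are assumptions made for illustration; the application itself does not prescribe a specific model format or engine:

```typescript
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

// S1: initialize a WebGL-based 3D renderer.
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.set(0, 0, 3);

const loader = new GLTFLoader();

async function buildInitialWearingImage(): Promise<void> {
  // S2: load and render the three-dimensional head model of the user (hypothetical file).
  const head = (await loader.loadAsync('/models/user_head.glb')).scene;
  // S3: load and render the three-dimensional glasses model (hypothetical file).
  const glasses = (await loader.loadAsync('/models/glasses.glb')).scene;

  // S4: "wear" the glasses by moving the glasses model to a preset position of the head,
  // aligning the two three-dimensional feature point sets (only the nose pad / nose bridge
  // pair is shown here; the node names are hypothetical).
  const nosePad = glasses.getObjectByName('nose_pad_center');
  const noseBridge = head.getObjectByName('nose_bridge');
  if (nosePad && noseBridge) {
    const offset = new THREE.Vector3().subVectors(
      noseBridge.getWorldPosition(new THREE.Vector3()),
      nosePad.getWorldPosition(new THREE.Vector3()),
    );
    glasses.position.add(offset);
  }

  scene.add(head, glasses);
  renderer.render(scene, camera); // initial simulated glasses wearing image
}

buildInitialWearingImage();
```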
And S102, generating a simulated glasses wearing image according to the new glasses image.
In step S102, the simulated glasses wearing image is a three-dimensional image obtained by combining the glasses image and the head image in the target scene, or a three-dimensional image obtained by wearing the three-dimensional glasses model on the three-dimensional head model in the target scene. Since the glasses image is updated to a new glasses image, the simulated glasses wearing image needs to be updated according to the new glasses image, so that the simulated glasses wearing image has a more real glasses wearing effect. Optionally, for the glasses image that is an image of glasses worn on the head, performing light rendering on the new glasses image and the head image according to the virtual light source in the preset direction to obtain a simulated glasses wearing image. And if the glasses image is the image of glasses which are not worn on the head, combining the new glasses image with the head image, and performing light rendering on the new glasses image and the head image according to the virtual light source in the preset direction to obtain a simulated glasses wearing image.
Illustratively, the light rendering process includes: moving the glasses image and the head image into a preset virtual space, arranging a virtual light source in a preset direction of the virtual space, reflecting the light of the virtual light source from the head and from the frame, lenses and temples of the glasses according to the glasses material, and refracting the light of the virtual light source through the lenses of the glasses, so as to obtain the simulated glasses wearing image.
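A minimal sketch of such light rendering is given below. It assumes a directional virtual light source placed in a preset direction of the virtual space; the direction, color and intensity values are illustrative assumptions:

```typescript
import * as THREE from 'three';

declare const scene: THREE.Scene;
declare const camera: THREE.PerspectiveCamera;
declare const renderer: THREE.WebGLRenderer;

// Virtual light source in a preset direction: when the scene is rendered, its light is
// reflected by the head, frame, lenses and temples according to their materials.
const keyLight = new THREE.DirectionalLight(0xffffff, 1.0);
keyLight.position.set(1, 2, 2); // assumed preset direction
scene.add(keyLight);

// A weak ambient term so that surfaces facing away from the light are not completely black.
scene.add(new THREE.AmbientLight(0xffffff, 0.3));

renderer.render(scene, camera); // produces the simulated glasses wearing image
```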
And S103, displaying the simulated glasses wearing image.
In S103 above, the simulated glasses wearing image is displayed on a device with a display function, so that the simulated glasses wearing image is presented in the target scene, the glasses wearing effect in various scenes can be shown, and the mapping elements mapped onto the glasses image are displayed.
Therefore, element mapping is carried out on the glasses image based on the target scene map to obtain a new glasses image, so that the surrounding environment of the user wearing the glasses is simulated according to the target scene map and the user can choose to simulate wearing the glasses in various scenes. Elements to be mapped in the target scene map are mapped onto the glasses image through the element mapping, which simulates the way objects in the real environment are reflected on the frame, temples and lenses, so the texture details of the glasses image are enriched in combination with the target scene and the new glasses image looks more real in the target scene. Because the glasses image is updated to the new glasses image, the simulated glasses wearing image is generated according to the new glasses image and then displayed, so that the glasses wearing image in the target scene is shown to the user, the user can view the texture details mapped onto the glasses, the glasses wearing effect is more realistic, and the glasses try-on and shopping experience of the user is improved.
In an embodiment of the application, a sky box environment mapping technique is used for element mapping of glasses images. For example, the specific process of obtaining a new glasses image by performing element mapping on the glasses image based on the target scene map may be as shown in fig. 2, and includes steps S201 to S203:
s201, acquiring a sky box background map corresponding to a target scene chartlet, wherein the sky box background map comprises six texture maps of six faces forming a cube;
In S201, the sky box background map is a set of six continuous texture maps that wrap the entire target scene into a cube. The six texture maps correspond respectively to the front, back, left, right, top and bottom faces of the cube, and these six directions correspond to the six half-axes of a coordinate system: the front corresponds to the positive Z half-axis, the back to the negative Z half-axis, the left to the negative X half-axis, the right to the positive X half-axis, the top to the positive Y half-axis, and the bottom to the negative Y half-axis.
The texture map is a map of a texture formed by objects or light rays in the target scene. The sky box background map can be generated in advance according to the target scene map and stored in a preset pre-stored space, and can also be generated in real time according to the target scene map. For example, the sky box background map is generated by calling Unity3D software or 3Dmax software to map the target scene.
Optionally, the glasses image and the head image are placed in the center of a cube formed by the texture map to simulate the real environment when the user wears the glasses.
S202, generating an element to be mapped according to the texture map;
in S202, the element to be mapped is a two-dimensional texture on a texture map, for example, a texture composed of buildings, trees, mountains, sky clouds, sunlight or light on the texture map. In this embodiment, for a three-dimensional coordinate corresponding to any pixel point on the glasses image in a three-dimensional space, a texture map corresponding to the three-dimensional coordinate and a texture coordinate corresponding to the three-dimensional coordinate on the texture map are determined, and a two-dimensional texture corresponding to the texture coordinate is obtained, so as to obtain an element to be mapped.
And S203, mapping the element to be mapped to the glasses image to obtain a new glasses image.
In S203, determining a texture coordinate corresponding to the texture map according to a three-dimensional coordinate corresponding to any pixel point on the glasses image, mapping a two-dimensional texture corresponding to the texture coordinate to the three-dimensional coordinate according to a correspondence between the texture coordinate and the three-dimensional coordinate, and rendering the two-dimensional texture mapped to the glasses image by using the 3D renderer.
In the embodiment of the application, the sky box background map comprises six texture maps, and the texture maps have elements to be mapped to the glasses image, so that the elements to be mapped are generated according to the texture maps in the sky box background map. For example, the specific process of generating the element to be mapped according to the texture map may be as shown in fig. 3, and includes steps S301 to S302:
s301, aiming at the three-dimensional coordinates of each feature point on the glasses image, determining texture coordinates corresponding to the three-dimensional coordinates on the texture map;
In S301, the feature points are feature points at positions such as the frame, temples and lenses in the glasses image, and the three-dimensional coordinates are the coordinate values of the feature points in a predetermined three-dimensional space. By way of example and not limitation, if the glasses image is an image of glasses worn on the head, a Cartesian rectangular coordinate system may be constructed with the center of the three-dimensional head image as the origin, the line connecting the pupils of the two eyes as the X-axis, the line connecting the midpoint of the pupil line and the philtrum as the Y-axis, and the line perpendicular to the XY plane as the Z-axis. If the glasses image is an image of glasses not worn on the head, a Cartesian rectangular coordinate system can be constructed with the midpoint between the centers of the two lenses in the glasses image as the origin, the line connecting the centers of the two lenses as the X-axis, the straight line perpendicular to the X-axis in the temple direction as the Z-axis, and the straight line perpendicular to the XZ plane as the Y-axis. The texture coordinates corresponding to the three-dimensional coordinates are then determined according to the position of the glasses image within the cubic space formed by the texture maps.
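For the case of glasses not worn on the head, such a coordinate frame could be derived from the two lens centers and the temple direction roughly as in the sketch below (the function and its inputs are illustrative assumptions, not part of the application):

```typescript
import * as THREE from 'three';

// Build a Cartesian frame for an unworn glasses model: origin at the midpoint of the
// two lens centers, X along the lens-center line, Z perpendicular to X in the temple
// direction, Y perpendicular to the XZ plane.
function glassesFrame(leftLens: THREE.Vector3, rightLens: THREE.Vector3, templeDir: THREE.Vector3) {
  const origin = new THREE.Vector3().addVectors(leftLens, rightLens).multiplyScalar(0.5);
  const xAxis = new THREE.Vector3().subVectors(rightLens, leftLens).normalize();
  // Remove the component of the temple direction along X so that Z is perpendicular to X.
  const zAxis = templeDir
    .clone()
    .sub(xAxis.clone().multiplyScalar(templeDir.dot(xAxis)))
    .normalize();
  const yAxis = new THREE.Vector3().crossVectors(zAxis, xAxis); // perpendicular to the XZ plane
  return { origin, xAxis, yAxis, zAxis };
}
```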
In one possible implementation manner, for each three-dimensional coordinate, the determining, on the texture map, a texture coordinate corresponding to the three-dimensional coordinate may include: aiming at each three-dimensional coordinate, acquiring a normal vector corresponding to the three-dimensional coordinate; acquiring a direction vector from a preset virtual camera to a three-dimensional coordinate; determining a reflection vector or a refraction vector passing through the three-dimensional coordinate according to the normal vector and the direction vector; and determining the intersection point of the reflection vector or the refraction vector and the texture map, and taking the intersection point as the texture coordinate corresponding to the three-dimensional coordinate.
In this embodiment, fig. 6 shows a schematic diagram of ambient reflection and fig. 7 shows a schematic diagram of ambient refraction. A, B, C and D are four of the texture maps that constitute the sky box, O is a three-dimensional coordinate on the three-dimensional glasses model, W is the viewpoint displayed to the user (i.e. the preset virtual camera), G is the direction vector from the preset virtual camera to the three-dimensional coordinate, F is the normal vector corresponding to the three-dimensional coordinate, K is the reflection vector, and J is the refraction vector.
As shown in fig. 6, the reflection vector K may be determined from the normal vector F and the direction vector G, such that the normal vector F bisects the angle between the direction vector G and the reflection vector K, and the texture coordinate corresponding to the three-dimensional coordinate is obtained from the intersection point of the extension line of the reflection vector K with the texture map D. As shown in fig. 7, the refraction vector J may be determined according to one half of the angle between the reverse extension line of the normal vector F and the forward extension line of the direction vector G, and the texture coordinate corresponding to the three-dimensional coordinate is obtained from the intersection point of the extension line of the refraction vector J with the texture map B. Illustratively, the reflection vector may be obtained with the reflect() function of the OpenGL Shading Language (GLSL), and the refraction vector with the refract() function.
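These two vectors can also be written out explicitly. The helper below is a sketch that reproduces the same mathematics as the GLSL reflect() and refract() built-ins, expressed in TypeScript for illustration; the vector type and function names are not taken from the application:

```typescript
type Vec3 = [number, number, number];

const dot = (a: Vec3, b: Vec3): number => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const scale = (a: Vec3, s: number): Vec3 => [a[0] * s, a[1] * s, a[2] * s];
const sub = (a: Vec3, b: Vec3): Vec3 => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];

// Reflection vector K of the incident direction G about the unit normal F (as in GLSL reflect()).
function reflect(G: Vec3, F: Vec3): Vec3 {
  return sub(G, scale(F, 2 * dot(F, G)));
}

// Refraction vector J of the incident direction G through the unit normal F, where eta is the
// ratio of indices of refraction (as in GLSL refract()); returns the zero vector on total reflection.
function refract(G: Vec3, F: Vec3, eta: number): Vec3 {
  const nDotI = dot(F, G);
  const k = 1 - eta * eta * (1 - nDotI * nDotI);
  if (k < 0) return [0, 0, 0];
  return sub(scale(G, eta), scale(F, eta * nDotI + Math.sqrt(k)));
}

// The resulting vector, extended from the three-dimensional coordinate O, is intersected
// with a sky box face to obtain the corresponding texture coordinate.
```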
In another possible implementation manner, for each three-dimensional coordinate, the determining, on the texture map, a texture coordinate corresponding to the three-dimensional coordinate may include: determining a first dimension with the largest absolute value in three dimensions of the three-dimensional coordinates aiming at each three-dimensional coordinate; determining a texture map corresponding to the first dimension according to the first dimension; converting the numerical values of other two second dimensions except the first dimension into a preset range according to a preset numerical value conversion formula; and forming two second dimensions after the numerical value conversion into a two-dimensional coordinate, and taking the two-dimensional coordinate as a texture coordinate corresponding to the three-dimensional coordinate on the texture map of the first dimension.
In this embodiment, the dimensions are the X, Y and Z axes of the coordinate system. The preset range may be [0, 1], and the numerical conversion formula may be (a/b + 1)/2, where a is the value of a second dimension and b is the value of the first dimension. By way of example and not limitation, for the three-dimensional coordinates (-1, 5, -8), -8 has the largest absolute value, i.e. the dimension with the largest absolute value is Z, and -8 lies on the negative Z half-axis, so according to the sky box principle the texture map corresponding to the first dimension is the texture map corresponding to the back face of the sky box. According to the numerical conversion formula, the values of the two second dimensions converted into the preset range are (-1/-8 + 1)/2 = 0.5625 and (5/-8 + 1)/2 = 0.1875. Accordingly, the two-dimensional coordinates (0.5625, 0.1875) are obtained. It should be understood that in this embodiment the size of the texture map is 1 × 1, i.e. the texture coordinates are at most (1, 1).
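This lookup can be sketched directly from the formula (a/b + 1)/2 given above; the face names and the return shape below are illustrative assumptions:

```typescript
type Vec3 = [number, number, number];

type CubeFace = '+X' | '-X' | '+Y' | '-Y' | '+Z' | '-Z';

// For a three-dimensional coordinate, pick the face whose axis has the largest absolute
// value (the first dimension b, sign kept) and convert the two remaining dimensions a
// into the preset range [0, 1] with (a/b + 1) / 2.
function cubeTextureCoord(p: Vec3): { face: CubeFace; uv: [number, number] } {
  const axes: Array<[number, CubeFace, CubeFace]> = [
    [p[0], '+X', '-X'],
    [p[1], '+Y', '-Y'],
    [p[2], '+Z', '-Z'],
  ];
  const [b, posFace, negFace] = axes.reduce((m, c) => (Math.abs(c[0]) > Math.abs(m[0]) ? c : m));
  const face = b >= 0 ? posFace : negFace;
  const rest = [p[0], p[1], p[2]].filter((v) => v !== b); // the two second dimensions
  const uv: [number, number] = [(rest[0] / b + 1) / 2, (rest[1] / b + 1) / 2];
  return { face, uv };
}

// Example from the description: (-1, 5, -8) maps onto the negative-Z (back) face
// with texture coordinates (0.5625, 0.1875).
console.log(cubeTextureCoord([-1, 5, -8]));
```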
S302, taking the two-dimensional texture corresponding to the texture coordinate as the element to be mapped.
In S302, the two-dimensional texture is the data obtained by reading the texture picture according to the texture coordinates. The data are stored in a two-dimensional array whose elements are called texels; each texel contains a color value and an alpha value, so the two-dimensional texture can be understood as being composed of texels. The width and the height of the two-dimensional texture should each be an integer power of 2, e.g. 16, 32, 64, 128 or 256.
On the basis of any one of the above embodiments in fig. 1 to fig. 3, fig. 4 is a schematic flowchart illustrating another display method for simulating glasses-worn images according to an embodiment of the present application. As shown in fig. 4, the method further includes S401 and S402 before the above step S101. It should be noted that the steps that are the same as those in the embodiments of fig. 1 to 3 are not shown or explained again here.
S401, determining a target scene selected by a user from a virtual scene list;
s402, calling the target scene map corresponding to the target scene.
In S401 and S402, the virtual scene list is a list displayed on the display unit and provided to the user for selecting a target scene, and the list includes a plurality of virtual scenes, and each virtual scene corresponds to a scene map prestored in the preset storage space. In this embodiment, the target scene is determined according to a selection instruction for selecting the target scene from the virtual scene list by the user, and then the target scene map corresponding to the target scene is called from the preset storage space. According to the embodiment, various target scenes are switched according to the selection of the user, so that the glasses wearing effect of the user in various scenes can be simulated.
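As a simple illustration only, the virtual scene list can be thought of as a mapping from scene names to pre-stored target scene maps that is looked up when the user makes a selection. The scene names and map paths below are hypothetical and are not part of the application:

```typescript
// Hypothetical mapping from virtual scene names to pre-stored scene map folders.
const virtualSceneList: Record<string, string> = {
  'Sunny beach': '/maps/beach/',
  'Library': '/maps/library/',
  'City road': '/maps/road/',
};

// S401/S402: determine the target scene from the user's selection and call (load)
// the corresponding target scene map.
function onSceneSelected(sceneName: string): string {
  const mapPath = virtualSceneList[sceneName];
  if (!mapPath) throw new Error(`Unknown scene: ${sceneName}`);
  return mapPath; // passed to the cube texture loader shown earlier
}
```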
On the basis of any one of the above embodiments in fig. 1 to fig. 3, fig. 5 is a schematic flowchart illustrating another display method for simulating glasses-worn images according to an embodiment of the present application. As shown in fig. 5, the above step S103 is followed by S501 and S502. It should be noted that the same steps as those in the embodiment of fig. 1 to 3 are not shown and explained here.
S501, acquiring an image zooming instruction input by a user;
in the above S501, the image scaling instruction may be an instruction for controlling the three-dimensional head image and the glasses image in the simulated glasses-worn image to rotate to the horizontal direction and/or the vertical direction, and to enlarge or reduce through a touch screen or a mouse trigger.
For example, when a finger touches the screen at screen coordinates p1(x1, y1) at timestamp t1 and moves until it leaves the screen at screen coordinates p2(x2, y2) at timestamp t2, the intermediate states form the finger movement trajectory. Assume that a moving distance d, i.e. a Euclidean distance of d between p1 and p2, corresponds to the three-dimensional head model rotating 360 degrees around the center of the three-dimensional head. When the moving distance is d1, the model rotates d1/d × 360 degrees.
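The rotation rule in this example can be sketched as follows; the full-turn distance and the sample values are assumptions used only to make the formula d1/d × 360 degrees concrete:

```typescript
// Distance d (in screen pixels) that corresponds to one full 360-degree turn of the
// head model; the value is a hypothetical calibration constant.
const fullTurnDistance = 300;

function rotationFromDrag(p1: [number, number], p2: [number, number]): number {
  // Euclidean distance d1 between the touch-down point p1 and the lift-off point p2.
  const d1 = Math.hypot(p2[0] - p1[0], p2[1] - p1[1]);
  // The head (and glasses) model rotates d1/d * 360 degrees about the head center.
  return (d1 / fullTurnDistance) * 360;
}

// Example: a 150-pixel drag rotates the model by 180 degrees.
console.log(rotationFromDrag([100, 200], [250, 200]));
```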
And S502, displaying the mapping elements in the enlarged or reduced simulated glasses wearing image based on the zooming instruction.
In S501 and S502 above, the mapping elements are the two-dimensional textures obtained by mapping the elements to be mapped onto the glasses image. As shown in fig. 8 to 10, based on the zooming instruction, the simulated glasses wearing image, in which the environmental objects and the ambient light are mapped onto the glasses model, can be displayed enlarged or reduced, so that the user can view a more realistic glasses wearing effect.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 11 shows a block diagram of a display device for simulating glasses-worn images according to an embodiment of the present application, which corresponds to the display method for simulating glasses-worn images according to the foregoing embodiment.
Referring to fig. 11, the apparatus includes:
a mapping module 1101, configured to perform element mapping on the glasses image based on the target scene map to obtain a new glasses image;
a generating module 1102, configured to generate a simulated glasses wearing image according to the new glasses image;
and the display module 1103 is configured to display the simulated glasses wearing image.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Fig. 12 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 12, the terminal device 12 of this embodiment includes: at least one processor 120 (only one shown in fig. 12), a memory 121, and a computer program 122 stored in the memory 121 and executable on the at least one processor 120, the processor 120 implementing the steps of any of the above-described method embodiments when executing the computer program 122.
The terminal device 12 may be a mobile phone, a desktop computer, a notebook, a palm computer, a cloud server or other computing device. The terminal device may include, but is not limited to, the processor 120 and the memory 121. Those skilled in the art will appreciate that fig. 12 is merely an example of the terminal device 12 and does not constitute a limitation on the terminal device 12, which may include more or fewer components than those shown, or a combination of some components, or different components, such as input/output devices, network access devices, etc.
The Processor 120 may be a Central Processing Unit (CPU), and the Processor 120 may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 121 may be an internal storage unit of the terminal device 12 in some embodiments, for example, a hard disk or a memory of the terminal device 12. The memory 121 may also be an external storage device of the terminal device 12 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 12. Further, the memory 121 may also include both an internal storage unit and an external storage device of the terminal device 12. The memory 121 is used for storing an operating system, an application program, a BootLoader (BootLoader), data, and other programs, such as program codes of the computer programs. The memory 121 may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application provide a computer program product, which when running on a mobile terminal, enables the mobile terminal to implement the steps in the above method embodiments when executed.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and can implement the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing apparatus/terminal apparatus, a recording medium, computer Memory, Read-Only Memory (ROM), random-access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium. Such as a usb-disk, a removable hard disk, a magnetic or optical disk, etc. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A display method for simulating glasses-worn images is characterized by comprising the following steps:
carrying out element mapping on the glasses image based on the target scene map to obtain a new glasses image;
generating a simulated glasses wearing image according to the new glasses image;
and displaying the simulated glasses wearing image.
2. The display method of claim 1, wherein the element mapping the glasses image based on the target scene map to obtain a new glasses image, comprises:
acquiring a sky box background map corresponding to the target scene map, wherein the sky box background map comprises six texture maps of six faces forming a cube;
generating an element to be mapped according to the texture map;
and mapping the element to be mapped to the glasses image to obtain the new glasses image.
3. The display method according to claim 2, wherein the generating the element to be mapped according to the texture map comprises:
determining texture coordinates corresponding to the three-dimensional coordinates on the texture map aiming at the three-dimensional coordinates of each feature point on the glasses image;
and taking the two-dimensional texture corresponding to the texture coordinate as the element to be mapped.
4. The display method according to claim 3, wherein the determining, for the three-dimensional coordinates of each feature point on the eyeglass image, texture coordinates corresponding to the three-dimensional coordinates on the texture map includes:
aiming at each three-dimensional coordinate, acquiring a normal vector corresponding to the three-dimensional coordinate;
acquiring a direction vector from a preset virtual camera to the three-dimensional coordinate;
determining a reflection vector or a refraction vector passing through the three-dimensional coordinate according to the normal vector and the direction vector;
and determining the intersection point of the reflection vector or the refraction vector and the texture map, and taking the intersection point as the texture coordinate corresponding to the three-dimensional coordinate.
5. The display method according to claim 3, wherein the determining, for the three-dimensional coordinates of each feature point on the eyeglass image, texture coordinates corresponding to the three-dimensional coordinates on the texture map includes:
for each three-dimensional coordinate, determining a first dimension with the largest absolute value in three dimensions of the three-dimensional coordinate;
determining the texture map corresponding to the first dimension according to the first dimension;
converting the numerical values of the other two second dimensions except the first dimension into a preset range according to a preset numerical value conversion formula;
and forming two second dimensions after numerical value conversion into a two-dimensional coordinate, and taking the two-dimensional coordinate as a texture coordinate corresponding to the three-dimensional coordinate on the texture map of the first dimension.
6. The display method according to any one of claims 1 to 5, wherein before the element mapping of the glasses image based on the target scene map to obtain a new glasses image, further comprising:
determining a target scene selected by a user from the virtual scene list;
and calling the target scene map corresponding to the target scene.
7. The display method according to any one of claims 1 to 5, further comprising, after the displaying the simulated eyeglass wearing image:
acquiring an image zooming instruction input by a user;
and displaying the mapping elements in the simulated glasses wearing image after the simulation glasses wearing image is enlarged or reduced based on the zooming instruction.
8. A display device for simulating an image worn by eyeglasses, comprising:
the mapping module is used for carrying out element mapping on the glasses image based on the target scene map to obtain a new glasses image;
the generating module is used for generating a simulated glasses wearing image according to the new glasses image;
and the display module is used for displaying the simulated glasses wearing image.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202010577638.7A 2020-06-23 2020-06-23 Display method and device for simulated eyeglass wearing image Active CN111862338B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010577638.7A CN111862338B (en) 2020-06-23 2020-06-23 Display method and device for simulated eyeglass wearing image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010577638.7A CN111862338B (en) 2020-06-23 2020-06-23 Display method and device for simulated eyeglass wearing image

Publications (2)

Publication Number Publication Date
CN111862338A true CN111862338A (en) 2020-10-30
CN111862338B CN111862338B (en) 2024-06-18

Family

ID=72988372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010577638.7A Active CN111862338B (en) 2020-06-23 2020-06-23 Display method and device for simulated eyeglass wearing image

Country Status (1)

Country Link
CN (1) CN111862338B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180024355A1 (en) * 2016-07-19 2018-01-25 The Board Of Trustees Of The University Of Illinois Method and system for near-eye three dimensional display
CN110945405A (en) * 2017-05-31 2020-03-31 奇跃公司 Eye tracking calibration techniques
WO2019034142A1 (en) * 2017-08-17 2019-02-21 腾讯科技(深圳)有限公司 Three-dimensional image display method and device, terminal, and storage medium
CN109727097A (en) * 2018-12-29 2019-05-07 上海堃承信息科技有限公司 One kind matching mirror method, apparatus and system
CN110349269A (en) * 2019-05-21 2019-10-18 珠海随变科技有限公司 A kind of target wear try-in method and system
CN111009031A (en) * 2019-11-29 2020-04-14 腾讯科技(深圳)有限公司 Face model generation method, model generation method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HANWU HE: "Interactive projection images generation for swept-based 3D display", 《COMPUTING AND VISUALIZATION IN SCIENCE》, 4 July 2015 (2015-07-04), pages 33, XP035513985, DOI: 10.1007/s00791-015-0242-2 *
YANG WENCHAO (杨文超): "Research and Application of Human-Computer Interaction Technology Oriented to Personal Experience", China Masters' Theses Full-text Database, Information Science and Technology, no. 11, 15 November 2017 (2017-11-15), pages 138 - 361 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117541463A (en) * 2023-11-29 2024-02-09 沐曦集成电路(上海)有限公司 Sky box angular grain element loss processing system

Also Published As

Publication number Publication date
CN111862338B (en) 2024-06-18

Similar Documents

Publication Publication Date Title
EP3057066B1 (en) Generation of three-dimensional imagery from a two-dimensional image using a depth map
EP3959688B1 (en) Generative latent textured proxies for object category modeling
Fender et al. Optispace: Automated placement of interactive 3d projection mapping content
CN112184873B (en) Fractal graph creation method, fractal graph creation device, electronic equipment and storage medium
TW202019167A (en) Generating and modifying representations of objects in an augmented-reality or virtual-reality scene
US9183654B2 (en) Live editing and integrated control of image-based lighting of 3D models
US9754398B1 (en) Animation curve reduction for mobile application user interface objects
EP4213102A1 (en) Rendering method and apparatus, and device
JP2012190428A (en) Stereoscopic image visual effect processing method
JP6553184B2 (en) Digital video rendering
AU6875500A (en) Method and apparatus for rendering images with refractions
CN110889384A (en) Scene switching method and device, electronic equipment and storage medium
CN111862338B (en) Display method and device for simulated eyeglass wearing image
US20120098833A1 (en) Image Processing Program and Image Processing Apparatus
JP3629243B2 (en) Image processing apparatus and method for rendering shading process using distance component in modeling
JPH113432A (en) Image processor, game machine, its method and recording medium
KR950025512A (en) Hardware-based graphical workstation solution for refraction
CN111710044A (en) Image processing method, apparatus and computer-readable storage medium
CN109949396A (en) A kind of rendering method, device, equipment and medium
US10713836B2 (en) Simulating lenses
EP4386682A1 (en) Image rendering method and related device thereof
Chen et al. Generating stereoscopic videos of realistic 3D scenes with ray tracing
US20230368432A1 (en) Synthesized Camera Arrays for Rendering Novel Viewpoints
Tredinnick et al. A tablet based immersive architectural design tool
JP2007299080A (en) Image generation method and image generation device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant