CN107492001B - Virtual glasses try-on method and device and service terminal - Google Patents


Info

Publication number
CN107492001B
CN107492001B (application CN201710349233.6A)
Authority
CN
China
Prior art keywords
model
glasses
glasses model
shadow
preset
Prior art date
Legal status
Active
Application number
CN201710349233.6A
Other languages
Chinese (zh)
Other versions
CN107492001A (en
Inventor
曾强
罗杨
周剑
Current Assignee
Chengdu Tongjia Youbo Technology Co Ltd
Original Assignee
Chengdu Tongjia Youbo Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Tongjia Youbo Technology Co Ltd filed Critical Chengdu Tongjia Youbo Technology Co Ltd
Priority to CN201710349233.6A priority Critical patent/CN107492001B/en
Publication of CN107492001A publication Critical patent/CN107492001A/en
Priority to PCT/CN2017/120315 priority patent/WO2018209967A1/en
Application granted granted Critical
Publication of CN107492001B publication Critical patent/CN107492001B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0641Shopping interfaces
    • G06Q30/0643Graphical representation of items or shoppers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/08Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2004Aligning objects, relative positioning of parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2008Assembling, disassembling

Abstract

The invention relates to the technical field of virtual reality, and in particular to a virtual glasses try-on method, a virtual glasses try-on device, and a service terminal, the method and device being applied to the service terminal. The virtual glasses try-on method comprises the following steps: collecting head images of a user from different orientations, establishing a head model from the collected head images, and adjusting parameters of a selected preset glasses model so that the preset glasses model matches the user's head model to complete the try-on. With this scheme, people can select suitable glasses anytime and anywhere without going to a physical store to try them on, improving the user experience.

Description

Virtual glasses try-on method and device and service terminal
Technical Field
The invention relates to the technical field of virtual reality, in particular to a virtual glasses try-on method, a virtual glasses try-on device and a service terminal.
Background
With economic development and the spread of education, people pay more and more attention to eye health while pursuing a better quality of life. Glasses not only compensate for poor vision but also decorate the "windows of the soul" and express fashion sense and aesthetic taste, so they have become an accessory closely tied to people's daily lives and receive growing attention.
At the same time, with China's rapid economic development, the continuous rise in national income and consumption levels, the aging of society, and the growing health awareness of domestic consumers, China's eyewear market keeps expanding and has huge growth potential.
At present, more than 65% of people wear glasses, and the growing demand makes competition in the eyewear market increasingly fierce. Ordinary brand glasses shops have small floor areas, simple decoration, and relatively few product varieties and choices, though their prices are more affordable; well-known brand stores have large, bright interiors and abundant product styles, but relatively high prices. Choosing glasses is therefore often inconvenient, so a method for trying on various styles of glasses anytime and anywhere is needed.
Disclosure of Invention
The invention aims to provide a virtual glasses try-on method, so that people can select proper glasses at any time and any place without trying on the glasses in a physical store, and the user experience is improved.
Another object of the present invention is to provide a virtual glasses try-on device, so as to enable people to select glasses suitable for themselves anytime and anywhere without going to a brick-and-mortar store for try-on, thereby improving the user experience.
Another object of the present invention is to provide a service terminal, so that people can select glasses suitable for themselves anytime and anywhere without going to a brick-and-mortar store for trying on, thereby improving the user experience.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
in a first aspect, an embodiment of the present invention provides a virtual glasses try-on method, which is applied to a service terminal, and the method includes:
collecting head images of users in different directions;
establishing a human head model according to the collected head image of the user;
adjusting parameters of the selected preset glasses model;
and matching the preset glasses model with the human head model to finish the try-on.
In a second aspect, an embodiment of the present invention further provides a virtual glasses try-on device, which is applied to a service terminal, where the device includes:
the first image acquisition module is used for acquiring head images of users in different directions;
the first model establishing module is used for establishing a human head model according to the collected head image of the user;
the parameter adjusting module is used for adjusting the parameters of the selected preset glasses model;
and the matching module is used for matching the preset glasses model with the human head model to complete the try-on.
In a third aspect, an embodiment of the present invention further provides a service terminal, where the service terminal includes: a memory;
a processor; and
a virtual eyewear try-on device installed in the memory and including one or more software functional modules executed by the processor, the virtual eyewear try-on device comprising:
the first image acquisition module is used for acquiring head images of users in different directions;
the first model establishing module is used for establishing a human head model according to the collected head image of the user;
the parameter adjusting module is used for adjusting the parameters of the selected preset glasses model;
and the matching module is used for matching the preset glasses model with the human head model to complete the try-on.
The embodiments of the invention provide a virtual glasses try-on method, a virtual glasses try-on device, and a service terminal, the method and device both being applied to the service terminal. The virtual glasses try-on method comprises the following steps: collecting head images of a user from different orientations, establishing a head model from the collected head images, and adjusting parameters of a selected preset glasses model so that the preset glasses model matches the user's head model to complete the try-on. With this scheme, people can select suitable glasses anytime and anywhere without going to a physical store to try them on, improving the user experience.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 shows a schematic structural diagram of a service terminal according to an embodiment of the present invention.
Fig. 2 shows a flowchart of a virtual glasses try-on method according to an embodiment of the present invention.
Fig. 3 is a flowchart illustrating another virtual glasses try-on method according to an embodiment of the present invention.
Fig. 4 is a flowchart illustrating sub-steps of another virtual glasses try-on method provided by an embodiment of the present invention.
Fig. 5 is a flowchart illustrating another virtual glasses try-on method according to an embodiment of the present invention.
Fig. 6 is a schematic diagram illustrating functional modules of a virtual glasses try-on device according to an embodiment of the present invention.
Fig. 7 is a schematic diagram illustrating functional modules of another virtual glasses try-on device according to an embodiment of the present invention.
Fig. 8 is a functional module diagram of a sub-module of another virtual glasses try-on device according to an embodiment of the present invention.
Fig. 9 is a schematic functional block diagram of another virtual glasses try-on device according to an embodiment of the present invention.
The figure is as follows: 100-a service terminal; 110-virtual eyewear try-on devices; 120-a memory; 130-a processor; 111-a first image acquisition module; 112-a first model building module; 113-parameter adjustment module; 114-a matching module; 121-a first image acquisition module; 122-a first model building module; 123-parameter adjusting module; 124-shadow creation module; 125-motion processing module; 126-matching module; 1241-a light rendering module; 1242-color setting module; 131-a second image acquisition module; 132-a second model building module; 133-naming module; 134-a selection module; 135-an adjustment module; 136-a first image acquisition module; 137-a first model building module; 138-parameter adjustment module; 139-shadow creation module; 141-a motion processing module; 142-matching module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Fig. 1 is a schematic structural diagram of a service terminal 100 according to an embodiment of the present invention. The service terminal 100 includes a virtual glasses try-on device 110, a memory 120, and a processor 130.
The elements of the memory 120 and the processor 130 are electrically connected to each other directly or indirectly to achieve data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The virtual glasses try-on device 110 includes at least one software function module which may be stored in the memory 120 in the form of software or firmware (firmware) or solidified in an Operating System (OS) of the service terminal 100. The processor 130 is configured to execute executable modules stored in the memory 120, such as software functional modules and computer programs included in the virtual glasses try-on device 110.
The Memory 120 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 120 is used for storing a program, and the processor 130 executes the program after receiving an execution instruction.
Fig. 2 is a flowchart illustrating a virtual glasses try-on method according to an embodiment of the present invention, where the virtual glasses try-on method is applied to the service terminal 100, and the virtual glasses try-on method includes:
step S110, acquiring head images of users in different directions.
The user images are captured by a camera or another mobile electronic device. Each captured image contains not only the user's head but also the background, so the head region is extracted from the image; in other words, the background portion of the user image is removed and only the head image is retained. Head images from different orientations are needed to provide enough image data for the subsequent processing.
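A minimal sketch of this step is given below, assuming OpenCV is used for capture and face detection; the patent does not name a specific capture device or detection algorithm, so the Haar-cascade detector and the 25% margin are illustrative assumptions.

```python
# Sketch of step S110: capture user images from several orientations and keep
# only the head region, discarding the background. The use of OpenCV and a
# Haar-cascade face detector is an assumption, not the patent's stated method.
import cv2

def capture_head_images(camera_index=0, num_orientations=5):
    """Grab one frame per orientation and crop it to the detected head region."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(camera_index)
    head_images = []
    for _ in range(num_orientations):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            continue  # no head found in this frame; skip it
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest detection
        # Enlarge the box slightly so the whole head, not just the face, is kept.
        margin = int(0.25 * h)
        x0, y0 = max(x - margin, 0), max(y - margin, 0)
        head_images.append(frame[y0:y + h + margin, x0:x + w + margin])
    cap.release()
    return head_images
```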
And step S120, establishing a human head model according to the collected head image of the user.
According to the collected multiple user head images with different directions, a human head model is established by utilizing modeling software stored on the service terminal 100.
And step S130, adjusting parameters of the selected preset glasses model.
The service terminal 100 stores a plurality of glasses models in advance, and when trying on virtual glasses the user can select one of them according to preference. The service terminal 100 adjusts the parameters of the selected glasses model so that it fits the head model built from the user's head images. Parameters of the glasses model include, but are not limited to, the outer width of the frame front, the inner width of the frame front, the frame size, the frame height, the nose-bridge height, and the distance from the start of the temple to its bend.
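A simple container for these parameters is sketched below; the field names and millimetre units are assumptions made for the sketch, since the patent only enumerates the quantities and not a concrete data format.

```python
# Illustrative container for the eyewear parameters listed above.
from dataclasses import dataclass

@dataclass
class GlassesParameters:
    frame_outer_width_mm: float   # outer width of the frame front
    frame_inner_width_mm: float   # inner width of the frame front
    frame_size_mm: float          # frame (lens) size
    frame_height_mm: float        # frame height
    nose_bridge_height_mm: float  # nose-bridge height
    temple_to_bend_mm: float      # distance from the temple start to its bend

    def scaled(self, factor: float) -> "GlassesParameters":
        """Return a uniformly scaled copy, e.g. to fit a wider or narrower head model."""
        return GlassesParameters(*(v * factor for v in (
            self.frame_outer_width_mm, self.frame_inner_width_mm,
            self.frame_size_mm, self.frame_height_mm,
            self.nose_bridge_height_mm, self.temple_to_bend_mm)))
```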
Step S140, matching the preset glasses model with the human head model to complete the fitting.
The adjusted glasses model is matched with the head model so that the user can see the actual wearing effect. Specifically, the glasses model is placed at the appropriate position on the head model, using the fit of the nose pads against the bridge of the nose of the head model as the reference, and the center point of the frame is aligned with the central axis of the head model, so that the glasses model and the head model fit precisely. The user can of course also select several different glasses models to try on according to preference.
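The fitting rule just described can be sketched as a simple geometric alignment; the anchor-point names (nose-pad point, frame centre, nose-bridge point, central axis) are assumptions standing in for landmarks that real models would expose.

```python
# Sketch of step S140: translate the glasses model so its nose-pad anchor rests
# on the nose-bridge anchor of the head model, then centre the frame on the
# head's central (vertical symmetry) axis. A translation-only fit is assumed.
import numpy as np

def fit_glasses_to_head(glasses_vertices, nose_pad_point, frame_center,
                        head_nose_bridge, head_axis_x):
    """Return translated glasses vertices (N x 3 array)."""
    verts = np.asarray(glasses_vertices, dtype=float)
    # 1) Rest the nose pads on the nose bridge of the head model.
    offset = np.asarray(head_nose_bridge, dtype=float) - np.asarray(nose_pad_point, dtype=float)
    verts = verts + offset
    frame_center = np.asarray(frame_center, dtype=float) + offset
    # 2) Align the frame centre with the head's central axis (x = head_axis_x).
    verts[:, 0] += head_axis_x - frame_center[0]
    return verts
```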
Referring to fig. 3, a flowchart of another virtual glasses try-on method according to an embodiment of the present invention is shown, where the virtual glasses try-on method is applied to the service terminal 100, and the virtual glasses try-on method includes: step S210, acquiring head images of users in different directions.
The user images are captured by a camera or another mobile electronic device. Each captured image contains not only the user's head but also the background, so the head region is extracted from the image; in other words, the background portion of the user image is removed and only the head image is retained. Head images from different orientations are needed to provide enough image data for the subsequent processing.
And step S220, establishing a human head model according to the collected head image of the user.
According to the collected multiple user head images with different directions, a human head model is established by utilizing modeling software stored on the service terminal 100.
And step S230, adjusting the parameters of the selected preset glasses model.
The service terminal 100 stores a plurality of glasses models in advance, and when trying on virtual glasses the user can select one of them according to preference. The service terminal 100 adjusts the parameters of the selected glasses model so that it fits the head model built from the user's head images. Parameters of the glasses model include, but are not limited to, the outer width of the frame front, the inner width of the frame front, the frame size, the frame height, the nose-bridge height, and the distance from the start of the temple to its bend.
Step S240, establishing a shadow of the preset glasses model.
The shadow is established for the glasses model selected by the user, so that the effect is more vivid when the glasses model is worn, and the user experience is improved. The method specifically comprises the following steps:
fig. 4 is a flowchart illustrating a sub-step of step S240 of another virtual glasses try-on method according to an embodiment of the present invention.
Step S241, simulating an application scene to perform light rendering on the preset glasses model so as to enable the preset glasses model to conform to the application scene where the user is located.
A simulated indoor or outdoor environment is selected according to the user's surroundings, and the glasses model is then rendered with lighting. Specifically, the lighting is divided into main light sources and auxiliary light sources: the main light sources are placed directly in front of and to the left and right of the glasses model, the auxiliary light sources are distributed around it, and the intensities of the main and auxiliary light sources are adjusted according to the user's actual environment, whether indoor or outdoor, to achieve the lighting rendering effect. This simulates the user's application environment, improves adaptability, and improves the user experience.
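The lighting rig can be sketched as below; the concrete positions and intensity values are illustrative assumptions, since the patent only states where the main and auxiliary lights sit and that their intensities depend on the indoor or outdoor environment.

```python
# Sketch of the lighting setup in step S241: main lights directly in front of
# and to the left/right of the glasses model, auxiliary lights spread around it,
# with intensities chosen per environment. All numeric values are assumptions.
import math

def build_light_rig(environment: str, radius: float = 1.0):
    main_intensity, aux_intensity = (
        (1.0, 0.3) if environment == "outdoor" else (0.6, 0.2))
    lights = [
        {"type": "main", "position": (0.0, 0.0, radius),  "intensity": main_intensity},
        {"type": "main", "position": (-radius, 0.0, 0.0), "intensity": main_intensity},
        {"type": "main", "position": (radius, 0.0, 0.0),  "intensity": main_intensity},
    ]
    # Auxiliary sources spread evenly on a circle around the model.
    for k in range(8):
        angle = 2 * math.pi * k / 8
        lights.append({"type": "auxiliary",
                       "position": (radius * math.cos(angle), 0.5,
                                    radius * math.sin(angle)),
                       "intensity": aux_intensity})
    return lights
```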
Step S242, performing color setting on the preset glasses model, so that the preset glasses model shows a stereoscopic effect.
The glasses model selected by the user is colored, such as red, so that the glasses model is more attractive, and meanwhile, the stereoscopic effect of the glasses model is increased, and the user experience is improved.
Step S250, performing motion synchronization processing on the preset glasses model and the shadow of the preset glasses model to control the preset glasses model and the shadow of the preset glasses model to perform motion synchronization.
Motion synchronization is performed between the glasses model selected by the user and the shadow it casts. Specifically, the glasses model and its corresponding shadow are each divided into three equal parts, and motion synchronization is applied to each part separately.
The synchronized movement of the glasses model of the first part and its corresponding shadow is adjusted with an algorithm (shown as an equation image in the original) that maps (u, v), the coordinates before the shadow moves, to (u', v'), the coordinates after the shadow moves. Set is the influence factor that the movement of the glasses model exerts on the movement of its shadow; it converts the amount of movement of the glasses model in the coordinate system into the amount of movement of the shadow of the glasses model in the coordinate system, thereby coupling the motion of the glasses model to the motion of its shadow so that the two move synchronously. The influence factor Set is obtained through the algorithm
vSet = w * s * V'
where w and s are both constants, typically w = 0.04 and s = 0.4.
The synchronized movement of the glasses model of the second part and its corresponding shadow is adjusted with another algorithm (shown as an equation image in the original), where u_O, u_A, u_B, and u_C are the abscissas of randomly selected points O, A, B, and C on the glasses model; this algorithm keeps the glasses model of the second part and its corresponding shadow moving synchronously.
The synchronized movement of the glasses model of the third part and its corresponding shadow is processed with a further algorithm (shown as an equation image in the original) so that the glasses model of the third part and its corresponding shadow move synchronously, improving the user experience.
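Only fragments of these algorithms are stated in the text; the per-part coordinate transforms appear solely as equation images in the original. The sketch below therefore implements just the documented pieces, namely the influence factor vSet = w * s * V' (with w around 0.04 and s around 0.4) and the mapping of a shadow point (u, v) to (u', v'); the simple additive update is an assumption.

```python
# Sketch of the shadow synchronisation idea in step S250, under the stated
# assumptions: vSet = w * s * V' converts the glasses model's movement V' into a
# shadow movement, which is added to the shadow coordinates. The exact per-part
# transforms in the patent are only available as images and are not reproduced.
def shadow_influence(v_prime: float, w: float = 0.04, s: float = 0.4) -> float:
    """vSet = w * s * V': shadow movement induced by a glasses movement V'."""
    return w * s * v_prime

def move_shadow_point(u: float, v: float, glasses_du: float, glasses_dv: float):
    """Map a shadow point (u, v) to (u', v') when the glasses move by (du, dv)."""
    u_prime = u + shadow_influence(glasses_du)  # assumed symmetric treatment of u
    v_prime = v + shadow_influence(glasses_dv)
    return u_prime, v_prime
```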
And step S260, matching the preset glasses model with the human head model to finish the try-on.
The adjusted glasses model is matched with the head model so that the user can see the actual wearing effect. Specifically, the glasses model is placed at the appropriate position on the head model, using the fit of the nose pads against the bridge of the nose of the head model as the reference, and the center point of the frame is aligned with the central axis of the head model, so that the glasses model and the head model fit precisely. The user can of course also select several different glasses models to try on according to preference.
Referring to fig. 5, a virtual glasses try-on method according to another embodiment of the present invention is applied to the service terminal 100, and the virtual glasses try-on method includes:
step S310, glasses images of different directions of the glasses are collected.
Three views are taken of different types of physical glasses by an electronic device such as a camera to acquire a large number of glasses images for subsequent data analysis.
Step S320, establishing the preset glasses model according to the glasses image.
The service terminal 100 stores modeling software, and the modeling software in the service terminal 100 establishes a glasses model according to a large number of collected glasses images. It is easy to understand that the service terminal 100 can establish a plurality of different types of glasses models according to actual needs, so that the user can have a plurality of choices, and the user experience is improved.
Step S330, naming each component of each of the preset eyewear models.
Each component of each glasses model, such as the frame, the lenses, the temples, and the nose pads, is given a name. On the one hand this makes the models easier to manage; on the other hand, the service terminal 100 can recombine components from different glasses models to generate new glasses models, enriching the styles of glasses models stored on the service terminal 100.
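The naming and recombination idea can be sketched as follows; the component categories and the naming scheme are assumptions, since the patent only lists the frame, lenses, temples, and nose pads as examples.

```python
# Sketch of step S330: give every component of every glasses model a unique name,
# then recombine components across models to generate new styles.
from itertools import product

COMPONENTS = ("frame", "lenses", "temples", "nose_pads")

def name_components(model_id):
    """Give each component of one glasses model a unique, manageable name."""
    return {part: f"{model_id}_{part}" for part in COMPONENTS}

def recombine(models):
    """Enumerate new glasses models by mixing components from existing ones."""
    named = {m: name_components(m) for m in models}
    combos = []
    for choice in product(models, repeat=len(COMPONENTS)):
        combos.append({part: named[src][part]
                       for part, src in zip(COMPONENTS, choice)})
    return combos

# Example: two stored models yield 2**4 = 16 component combinations.
new_styles = recombine(["model_A", "model_B"])
```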
Step S340, selecting different materials and textures for each of the preset glasses models.
The glasses model created with the modeling software on the service terminal 100 is initially only a plain geometric model, so it needs to be refined to look attractive and improve the user experience. Specifically, a suitable material and texture are selected for each glasses model; the material may be, but is not limited to, plastic or metal, and the texture may be, but is not limited to, a ripple or straight-line pattern. Selecting a suitable material and texture for each glasses model makes each model more attractive.
Step S350, adjusting the preset glasses model in one or more of contrast, metalness, transparency, roughness, and the three-primary-color (RGB) ratio.
The glasses models are adjusted along dimensions such as contrast, metalness, transparency, roughness, and RGB ratio so that they look better and are more appealing to users.
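A sketch of such an adjustment on a material record is shown below; the [0, 1] value ranges and the clamping behaviour are assumptions made for the sketch, as the patent does not specify concrete ranges.

```python
# Sketch of step S350: adjust a glasses model's appearance along one or more of
# contrast, metalness, transparency, roughness and the RGB ratio.
from dataclasses import dataclass, replace

def _clamp(x: float) -> float:
    return max(0.0, min(1.0, x))

@dataclass
class GlassesMaterial:
    contrast: float = 0.5
    metalness: float = 0.0
    transparency: float = 0.0
    roughness: float = 0.5
    rgb_ratio: tuple = (1.0, 1.0, 1.0)

    def adjusted(self, **changes) -> "GlassesMaterial":
        """Return a copy with one or more properties changed, clamped to [0, 1]."""
        cleaned = {}
        for key, value in changes.items():
            if key == "rgb_ratio":
                cleaned[key] = tuple(_clamp(c) for c in value)
            else:
                cleaned[key] = _clamp(value)
        return replace(self, **cleaned)

# Example: make the frame look like dark, slightly transparent polished metal.
material = GlassesMaterial().adjusted(metalness=0.8, transparency=0.3, roughness=0.2)
```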
And step S360, acquiring head images of the user in different directions.
The user images are captured by a camera or another mobile electronic device. Each captured image contains not only the user's head but also the background, so the head region is extracted from the image; in other words, the background portion of the user image is removed and only the head image is retained. Head images from different orientations are needed to provide enough image data for the subsequent processing.
And step S370, establishing a human head model according to the collected head image of the user.
According to the collected multiple user head images with different directions, a human head model is established by utilizing modeling software stored on the service terminal 100.
And step S380, adjusting the parameters of the selected preset glasses model.
The service terminal 100 stores a plurality of glasses models in advance, and when trying on virtual glasses the user can select one of them according to preference. The service terminal 100 adjusts the parameters of the selected glasses model so that it fits the head model built from the user's head images. Parameters of the glasses model include, but are not limited to, the outer width of the frame front, the inner width of the frame front, the frame size, the frame height, the nose-bridge height, and the distance from the start of the temple to its bend.
And step S390, establishing the shadow of the preset glasses model.
A shadow is established for the glasses model selected by the user so that the try-on effect is more realistic and the user experience is improved. Specifically:
A simulated indoor or outdoor environment is selected according to the user's surroundings, and the glasses model is then rendered with lighting. Specifically, the lighting is divided into main light sources and auxiliary light sources: the main light sources are placed directly in front of and to the left and right of the glasses model, the auxiliary light sources are distributed around it, and the intensities of the main and auxiliary light sources are adjusted according to the user's actual environment, whether indoor or outdoor, to achieve the lighting rendering effect. This simulates the user's application environment, improves adaptability, and improves the user experience.
Further, the glasses model selected by the user is colored, such as red, so that the glasses model is more attractive, and meanwhile, the stereoscopic effect of the glasses model is increased, and the user experience is improved.
It is easily understood that the shadow of the glasses model can also be established when the glasses model is established.
Step S400, performing motion synchronization processing on the preset glasses model and the shadow of the preset glasses model to control the preset glasses model and the shadow of the preset glasses model to move synchronously.
Motion synchronization is performed between the glasses model selected by the user and the shadow it casts. Specifically, the glasses model and its corresponding shadow are each divided into three equal parts, and motion synchronization is applied to each part separately.
The synchronized movement of the glasses model of the first part and its corresponding shadow is adjusted with an algorithm (shown as an equation image in the original) that maps (u, v), the coordinates before the shadow moves, to (u', v'), the coordinates after the shadow moves. Set is the influence factor that the movement of the glasses model exerts on the movement of its shadow; it converts the amount of movement of the glasses model in the coordinate system into the amount of movement of the shadow of the glasses model in the coordinate system, thereby coupling the motion of the glasses model to the motion of its shadow so that the two move synchronously. The influence factor Set is obtained through the algorithm
vSet = w * s * V'
where w and s are both constants, typically w = 0.04 and s = 0.4.
The synchronized movement of the glasses model of the second part and its corresponding shadow is adjusted with another algorithm (shown as an equation image in the original), where u_O, u_A, u_B, and u_C are the abscissas of randomly selected points O, A, B, and C on the glasses model; this algorithm keeps the glasses model of the second part and its corresponding shadow moving synchronously.
The synchronized movement of the glasses model of the third part and its corresponding shadow is processed with a further algorithm (shown as an equation image in the original) so that the glasses model of the third part and its corresponding shadow move synchronously, improving the user experience.
And step S410, matching the preset glasses model with the human head model to finish the try-on.
The adjusted glasses model is matched with the head model so that the user can see the actual wearing effect. Specifically, the glasses model is placed at the appropriate position on the head model, using the fit of the nose pads against the bridge of the nose of the head model as the reference, and the center point of the frame is aligned with the central axis of the head model, so that the glasses model and the head model fit precisely. The user can of course also select several different glasses models to try on according to preference.
Fig. 6 is a schematic functional module diagram of a virtual glasses try-on device 110 according to an embodiment of the present invention. The virtual glasses try-on device 110 is applied to the service terminal 100, and the virtual glasses try-on device 110 includes:
the first image acquisition module 111 is used for acquiring head images of users in different orientations.
In an embodiment of the present invention, step S110 may be performed by the first image capturing module 111.
And a first model building module 112, configured to build a human head model according to the acquired head image of the user.
In an embodiment of the present invention, step S120 may be performed by the first model building module 112.
And the parameter adjusting module 113 is configured to adjust parameters of the selected preset glasses model.
In the embodiment of the present invention, step S130 may be performed by the parameter adjusting module 113.
A matching module 114, configured to match the preset glasses model with the human head model to complete the fitting.
In an embodiment of the present invention, step S140 may be performed by the matching module 114.
Fig. 7 is a schematic functional module diagram of another virtual glasses try-on device 110 according to an embodiment of the present invention. The virtual glasses try-on device 110 is applied to the service terminal 100, and the virtual glasses try-on device 110 includes:
the first image acquisition module 121 is configured to acquire head images of users in different orientations.
In the embodiment of the present invention, step S210 may be performed by the first image capturing module 121.
And the first model establishing module 122 is configured to establish a human head model according to the acquired head image of the user.
In the embodiment of the present invention, step S220 may be performed by the first model building module 122.
And the parameter adjusting module 123 is configured to adjust parameters of the selected preset glasses model.
In the embodiment of the present invention, step S230 may be performed by the parameter adjusting module 123.
And a shadow establishing module 124 for establishing the shadow of the preset glasses model.
In an embodiment of the present invention, step S240 may be performed by the shadow establishing module 124.
Please refer to fig. 8, which is a diagram illustrating a sub-module of the shadow creating module 124 according to an embodiment of the present invention.
The shadow creation module 124 further includes:
and a light rendering module 1241, configured to simulate an application scene to perform light rendering on the glasses model, so that the glasses model conforms to the application scene where the user is located.
In this embodiment of the present invention, step S241 may be performed by the light rendering module 1241.
A color setting module 1242, configured to perform color setting on the preset glasses model, so that the preset glasses model exhibits a stereoscopic effect.
In the embodiment of the present invention, step S242 may be performed by the color setting module 1242.
A motion processing module 125, configured to perform motion synchronization processing on the preset glasses model and the shadow of the preset glasses model, so as to control the preset glasses model and the shadow of the preset glasses model to perform synchronous motion.
In an embodiment of the present invention, step S250 may be performed by the motion processing module 125.
A matching module 126, configured to match the preset glasses model with the human head model to complete fitting.
In an embodiment of the present invention, step S260 may be performed by the matching module 126.
Fig. 9 is a schematic functional module diagram of another virtual glasses try-on device 110 according to an embodiment of the present invention. The virtual glasses try-on device 110 is applied to the service terminal 100, and the virtual glasses try-on device 110 includes:
and the second image collecting module 131 is used for collecting glasses images of different orientations of glasses.
In the embodiment of the present invention, step S310 may be performed by the second image capturing module 131.
A second model building module 132, configured to build the preset glasses model according to the glasses image.
In the embodiment of the present invention, step S320 may be performed by the second model building module 132.
A naming module 133 for naming each component of each of the preset eyewear models.
In the embodiment of the present invention, step S330 may be performed by the naming module 133.
A selecting module 134, configured to select different materials and textures for each of the preset glasses models.
In an embodiment of the present invention, step S340 may be performed by the selection module 134.
And an adjusting module 135, configured to adjust the preset glasses model according to one or more combinations of contrast, metal degree, transparency, roughness, and three primary color ratio.
In the embodiment of the present invention, step S350 may be performed by the adjusting module 135.
A first image acquisition module 136 for acquiring head images of users in different orientations.
In an embodiment of the present invention, step S360 may be performed by the first image capturing module 136.
And the first model establishing module 137 is configured to establish a human head model according to the acquired head image of the user.
In an embodiment of the present invention, step S370 may be performed by the first model building module 137.
And a parameter adjusting module 138, configured to adjust parameters of the selected preset eyewear model.
In this embodiment of the present invention, step S380 may be performed by the parameter adjusting module 138.
A shadow establishing module 139, configured to establish the shadow of the preset glasses model.
In an embodiment of the present invention, step S390 may be performed by the shadow creation module 139.
The motion processing module 141 is configured to perform motion synchronization processing on the preset glasses model and the shadow of the preset glasses model to control the preset glasses model and the shadow of the preset glasses model to perform synchronous motion.
In the embodiment of the present invention, step S400 may be performed by the motion processing module 141.
A matching module 142, configured to match the preset glasses model with the human head model to complete fitting.
In an embodiment of the present invention, step S410 may be performed by the matching module 142.
The virtual glasses try-on device 110 has already been described in detail in the virtual glasses try-on method section, and is not described herein again.
In summary, the virtual glasses try-on method, device, and service terminal provided in the embodiments of the present invention are applied to the service terminal, and the virtual glasses try-on method includes: collecting head images of a user from different orientations, establishing a head model from the collected head images, and adjusting parameters of a selected preset glasses model so that the preset glasses model matches the user's head model to complete the try-on. With this scheme, people can select suitable glasses anytime and anywhere without going to a physical store to try them on, improving the user experience.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes. It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (6)

1. A virtual glasses try-on method is applied to a service terminal, and is characterized by comprising the following steps:
collecting head images of users in different directions;
establishing a human head model according to the collected head image of the user;
adjusting parameters of the selected preset glasses model; the parameters of the eyewear model include, but are not limited to: the outer width of the frame surface of the glasses, the inner width of the frame surface, the size of the glasses frame, the height of the nose bridge and the length of the starting point of the glasses legs from the bending part of the glasses;
matching the preset glasses model with the human head model to complete the try-on;
the matching is that the nose support of the glasses model is attached to the nose bridge of the head model, and the center point of the mirror frame is aligned with the central axis of the head model;
establishing the shadow of the preset glasses model, wherein the shadow comprises the following steps: simulating an application scene to perform lamplight rendering on the preset glasses model so as to enable the preset glasses model to conform to the application scene where a user is located;
the light is divided into a main light source and an auxiliary light source, the main light source is respectively arranged right in front of and on the left side and the right side of the glasses model, the auxiliary light source is distributed around the glasses model, and the intensity of the main light source and the intensity of the auxiliary light source are adjusted according to the actual environment of a user, namely, an indoor environment or an outdoor environment, so that the light rendering is realized;
setting the color of the preset glasses model to enable the preset glasses model to show a three-dimensional effect;
performing motion synchronization processing on the preset glasses model and the shadow of the preset glasses model to control the preset glasses model and the shadow of the preset glasses model to move synchronously;
the method comprises the following steps of carrying out motion synchronization processing on a glasses model selected by a user and a shadow formed by the glasses model, specifically: equally dividing the glasses model and the shadow corresponding to the glasses model into three parts, and respectively carrying out motion synchronization processing on each part;
the synchronized movement of the glasses model of the first part and the corresponding shadow of the glasses model is adjusted with an algorithm [equation shown as an image in the original], wherein (u, v) denotes the coordinates before the shadow movement and (u', v') denotes the coordinates after the shadow movement; Set is an influence factor of the movement of the glasses model on the movement of the shadow of the glasses model, and represents converting the amount of movement of the glasses model in the coordinate system into the amount of movement of the shadow of the glasses model in the coordinate system; the influence factor Set is obtained through the algorithm vSet = w * s * V', wherein w and s are both constant values;
the synchronous motion of the glasses model of the second part and the shadow corresponding to the glasses model is adjusted with the algorithm:
[equation shown as an image in the original],
y = vSet - [|u - u_O| - |u_A - u_O|] * tan n,
[equation shown as an image in the original];
wherein u_O, u_A, u_B, and u_C are the abscissas of randomly selected points O, A, B, and C on the glasses model;
the synchronous motion of the glasses model of the third part and the corresponding shadow of the glasses model is processed with an algorithm [equation shown as an image in the original] to achieve synchronized movement of the glasses model of the third part and the corresponding shadow of the glasses model.
2. A virtual glasses try-on device applied to a service terminal is characterized by comprising:
the first image acquisition module is used for acquiring head images of users in different directions;
the first model establishing module is used for establishing a human head model according to the collected head image of the user;
the parameter adjusting module is used for adjusting the parameters of the selected preset glasses model; the parameters of the eyewear model include, but are not limited to: the outer width of the frame surface of the glasses, the inner width of the frame surface, the size of the glasses frame, the height of the nose bridge and the length of the starting point of the glasses legs from the bending part of the glasses;
the matching module is used for matching the preset glasses model with the human head model to complete the try-on; the matching is that the nose support of the glasses model is attached to the nose bridge of the head model, and the center point of the mirror frame is aligned with the central axis of the head model;
the shadow establishing module is used for establishing a shadow of the preset glasses model, and specifically comprises the following steps: the light rendering module is used for simulating an application scene to perform light rendering on the glasses model so as to enable the glasses model to conform to the application scene where a user is located;
the light is divided into a main light source and an auxiliary light source, the main light source is respectively arranged right in front of and on the left side and the right side of the glasses model, the auxiliary light source is distributed around the glasses model, and the intensity of the main light source and the intensity of the auxiliary light source are adjusted according to the actual environment of a user, namely, an indoor environment or an outdoor environment, so that the light rendering is realized; setting the color of the preset glasses model to enable the preset glasses model to show a three-dimensional effect;
performing motion synchronization processing on the preset glasses model and the shadow of the preset glasses model to control the preset glasses model and the shadow of the preset glasses model to move synchronously;
the method comprises the following steps of carrying out motion synchronization processing on a glasses model selected by a user and a shadow formed by the glasses model, specifically: equally dividing the glasses model and the shadow corresponding to the glasses model into three parts, and respectively carrying out motion synchronization processing on each part;
the synchronized movement of the glasses model of the first part and the corresponding shadow of the glasses model is adjusted with an algorithm [equation shown as an image in the original], wherein (u, v) denotes the coordinates before the shadow movement and (u', v') denotes the coordinates after the shadow movement; Set is an influence factor of the movement of the glasses model on the movement of the shadow of the glasses model, and represents converting the amount of movement of the glasses model in the coordinate system into the amount of movement of the shadow of the glasses model in the coordinate system; the influence factor Set is obtained through the algorithm vSet = w * s * V', wherein w and s are both constant values;
the synchronous motion of the glasses model of the second part and the shadow corresponding to the glasses model is adjusted with the algorithm:
[equation shown as an image in the original],
y = vSet - [|u - u_O| - |u_A - u_O|] * tan n,
[equation shown as an image in the original];
wherein u_O, u_A, u_B, and u_C are the abscissas of randomly selected points O, A, B, and C on the glasses model;
the synchronous motion of the glasses model of the third part and the corresponding shadow of the glasses model is processed with an algorithm [equation shown as an image in the original] to achieve synchronized movement of the glasses model of the third part and the corresponding shadow of the glasses model.
3. The virtual eyeglass try-on apparatus of claim 2, wherein the apparatus further comprises:
the second image acquisition module is used for acquiring the glasses images of different directions of the glasses;
and the second model establishing module is used for establishing the preset glasses model according to the glasses image.
4. The virtual eyeglass try-on apparatus of claim 2, wherein the apparatus further comprises:
a naming module for naming each component of each of the preset eyewear models;
and the selection module is used for selecting different materials and textures for each of the preset glasses models.
5. The virtual eyeglass try-on apparatus of claim 3, wherein the apparatus comprises:
and the adjusting module is used for adjusting one or more combinations of contrast, metal degree, transparency, roughness and three primary color proportion values of the preset glasses model.
6. A service terminal, characterized in that the service terminal comprises:
a memory;
a processor; and
a virtual eyewear try-on device installed in the memory and including one or more software functional modules executed by the processor, the virtual eyewear try-on device comprising:
the first image acquisition module is used for acquiring head images of users in different directions;
the first model establishing module is used for establishing a human head model according to the collected head image of the user;
the parameter adjusting module is used for adjusting the parameters of the selected preset glasses model; the parameters of the eyewear model include, but are not limited to: the outer width of the frame surface of the glasses, the inner width of the frame surface, the size of the glasses frame, the height of the nose bridge and the length of the starting point of the glasses legs from the bending part of the glasses;
the matching module is used for matching the preset glasses model with the human head model to complete the try-on;
the shadow establishing module is used for establishing a shadow of the preset glasses model, and specifically comprises the following steps: the light rendering module is used for simulating an application scene to perform light rendering on the glasses model so as to enable the glasses model to conform to the application scene where a user is located;
the light is divided into a main light source and an auxiliary light source, the main light source is respectively arranged right in front of and on the left side and the right side of the glasses model, the auxiliary light source is distributed around the glasses model, and the intensity of the main light source and the intensity of the auxiliary light source are adjusted according to the actual environment of a user, namely, an indoor environment or an outdoor environment, so that the light rendering is realized; setting the color of the preset glasses model to enable the preset glasses model to show a three-dimensional effect;
performing motion synchronization processing on the preset glasses model and the shadow of the preset glasses model to control the preset glasses model and the shadow of the preset glasses model to move synchronously;
the method comprises the following steps of carrying out motion synchronization processing on a glasses model selected by a user and a shadow formed by the glasses model, specifically: equally dividing the glasses model and the shadow corresponding to the glasses model into three parts, and respectively carrying out motion synchronization processing on each part;
the synchronized movement of the glasses model of the first part and the corresponding shadow of the glasses model is adjusted with an algorithm [equation shown as an image in the original], wherein (u, v) denotes the coordinates before the shadow movement and (u', v') denotes the coordinates after the shadow movement; Set is an influence factor of the movement of the glasses model on the movement of the shadow of the glasses model, and represents converting the amount of movement of the glasses model in the coordinate system into the amount of movement of the shadow of the glasses model in the coordinate system; the influence factor Set is obtained through the algorithm vSet = w * s * V', wherein w and s are both constant values;
the synchronous motion of the glasses model of the second part and the shadow corresponding to the glasses model is adjusted with the algorithm:
[equation shown as an image in the original],
y = vSet - [|u - u_O| - |u_A - u_O|] * tan n,
[equation shown as an image in the original];
wherein u_O, u_A, u_B, and u_C are the abscissas of randomly selected points O, A, B, and C on the glasses model;
the synchronous motion of the glasses model of the third part and the corresponding shadow of the glasses model is processed with an algorithm [equation shown as an image in the original] to achieve synchronized movement of the glasses model of the third part and the corresponding shadow of the glasses model.
CN201710349233.6A 2017-05-17 2017-05-17 Virtual glasses try-on method and device and service terminal Active CN107492001B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710349233.6A CN107492001B (en) 2017-05-17 2017-05-17 Virtual glasses try-on method and device and service terminal
PCT/CN2017/120315 WO2018209967A1 (en) 2017-05-17 2017-12-29 Virtual try-on method and device and service terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710349233.6A CN107492001B (en) 2017-05-17 2017-05-17 Virtual glasses try-on method and device and service terminal

Publications (2)

Publication Number Publication Date
CN107492001A CN107492001A (en) 2017-12-19
CN107492001B true CN107492001B (en) 2020-03-27

Family

ID=60642463

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710349233.6A Active CN107492001B (en) 2017-05-17 2017-05-17 Virtual glasses try-on method and device and service terminal

Country Status (2)

Country Link
CN (1) CN107492001B (en)
WO (1) WO2018209967A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107492001B (en) * 2017-05-17 2020-03-27 成都通甲优博科技有限责任公司 Virtual glasses try-on method and device and service terminal
CN109035375B (en) * 2018-06-22 2023-11-10 广州久邦世纪科技有限公司 OpenGL-based 3D glasses rendering method and system
US10685457B2 (en) 2018-11-15 2020-06-16 Vision Service Plan Systems and methods for visualizing eyewear on a user
CN113892902A (en) * 2021-10-12 2022-01-07 河北超视立科技股份有限公司 Lens matching method and lens matching device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101344971A (en) * 2008-08-26 2009-01-14 陈玮 Internet three-dimensional human body head portrait spectacles try-in method
CN103456008A (en) * 2013-08-26 2013-12-18 刘晓英 Method for matching face and glasses
US9665984B2 (en) * 2014-07-31 2017-05-30 Ulsee Inc. 2D image-based 3D glasses virtual try-on system
CN106910240B (en) * 2017-01-24 2020-04-28 成都通甲优博科技有限责任公司 Real-time shadow generation method and device
CN107492001B (en) * 2017-05-17 2020-03-27 成都通甲优博科技有限责任公司 Virtual glasses try-on method and device and service terminal

Also Published As

Publication number Publication date
CN107492001A (en) 2017-12-19
WO2018209967A1 (en) 2018-11-22

Similar Documents

Publication Publication Date Title
CN107492001B (en) Virtual glasses try-on method and device and service terminal
KR101821284B1 (en) Method and system to create custom products
US10825260B2 (en) Virtual try-on systems and methods for spectacles
CN105842875B (en) A kind of spectacle frame design method based on face three-dimensional measurement
JP3250184B2 (en) Eyewear simulation system
AU2018214005A1 (en) Systems and methods for generating a 3-D model of a virtual try-on product
CN104408764A (en) Method, device and system for trying on glasses in virtual mode
EP1495447A1 (en) System and method for 3-dimension simulation of glasses
JP2021168157A (en) System, platform and method for personalized shopping using automated shopping assistant
CN115803750B (en) Virtual try-on system for glasses using a frame of reference
KR101805056B1 (en) Eyeglasses try-on simulation method using augumented reality
CN112509040A (en) Image-based detection of surfaces providing specular reflection and reflection modification
CN104750933A (en) Eyeglass trying on method and system based on Internet
US20140043332A1 (en) Method, device and system for generating a textured representation of a real object
CN110866966A (en) Rendering virtual objects with realistic surface properties matching the environment
CN109685911B (en) AR glasses capable of realizing virtual fitting and realization method thereof
Tang et al. Making 3D eyeglasses try-on practical
CN110349269A (en) A kind of target wear try-in method and system
CN113744411A (en) Image processing method and device, equipment and storage medium
CN108881892B (en) Anti-dizziness method and system for desktop virtual reality system
US20130282344A1 (en) Systems and methods for simulating accessory display on a subject
CN107784537A (en) Spectacles matching system based on virtual technology
Cheng et al. Mixed Reality for Virtual Jewellery Fitting
WO2024037722A1 (en) Devices, methods and computer programs for virtual eyeglasses try-on
KR20220079274A (en) Method of glasses wearing simulation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant