CN113592592A - Method for generating trial wearing effect picture of spectacle frame and virtual trial wearing system of spectacle frame


Info

Publication number
CN113592592A
Authority
CN
China
Prior art keywords
image
pixel size
user
face image
size ratio
Prior art date
Legal status
Granted
Application number
CN202110859462.9A
Other languages
Chinese (zh)
Other versions
CN113592592B (en)
Inventor
严沛熙
苏乐欣
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202110859462.9A priority Critical patent/CN113592592B/en
Publication of CN113592592A publication Critical patent/CN113592592A/en
Application granted granted Critical
Publication of CN113592592B publication Critical patent/CN113592592B/en
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0641 Shopping interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map

Landscapes

  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Theoretical Computer Science (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a method for generating an eyeglass frame try-on effect picture and a virtual eyeglass frame try-on system. The method for generating the try-on effect picture comprises the following steps: determining a first pixel size ratio based on the user face image information, the first pixel size ratio representing the relationship between the pixels of the user's face image and the actual sizes of the facial features shown in that image; acquiring a second pixel size ratio, which represents the relationship between the pixels of the eyeglass frame image and the actual size of the eyeglass frame; resizing the face image and/or the eyeglass frame image so that the first pixel size ratio and the second pixel size ratio are consistent; and superimposing the eyeglass frame image at the appropriate position in the face image, thereby generating an effect picture of the user wearing the eyeglass frame. By adjusting the pixel size ratios of the eyeglass frame image and the face image to be consistent before overlaying them, the application provides the user with an accurate virtual picture of the eyeglass frame being worn.

Description

Method for generating trial wearing effect picture of spectacle frame and virtual trial wearing system of spectacle frame
Technical Field
The present application relates to the field of virtual try-on technology, and in particular to a method for generating an eyeglass frame try-on effect picture, a virtual eyeglass frame try-on system, and a method for establishing a database.
Background
Purchasing conventional glasses is cumbersome. The user first browses a local eyeglass shop and tries on multiple eyeglass frames. In deciding which frame to purchase, the user may weigh many factors, such as cost, style, fit, comfort, brand, quality, color, material, and weight. Weighing so many factors is physically tiring. Typically, by the time a user puts down one frame and picks up the next, the user has forgotten how the previous frame looked and felt, forcing the user to try on each frame multiple times. Furthermore, sample frames typically do not carry suitable lenses. This is a challenge for nearsighted users: without corrective lenses they cannot see their reflection clearly unless the mirror is very close to the face, so they cannot judge the overall appearance of the frame from a normal viewing distance. Finally, each store stocks only a limited number of eyeglass frames. If the user cannot find a suitable frame, the whole comparison process must start over in another store.
Online glasses stores attempt to address these pain points by letting users compare different eyeglass frames virtually on a smartphone or computer. In an online store, a user can easily browse, filter, sort, and compare frames across many factors, such as price, style, color, and brand. This convenience lets the user quickly compare frames side by side, even across multiple online stores. Some online stores also offer a virtual try-on function that overlays an image of an eyeglass frame on a photo or live video of the buyer, providing a virtual preview of how the frame looks on the buyer's face. However, the biggest drawback of the virtual try-on functions available in online glasses stores today is that the frame image and the user's photo or live video are not at the same scale, so they do not give a realistic rendering of how the frame would look on the user. In practice, the user is easily given a wrong impression of the fit. This defeats the purpose of offering a virtual try-on function at all. Without an accurate preview, the user cannot correctly judge whether a particular frame is suitable unless the user visits a local store and tries the frame on in person. This major drawback of buying eyeglass frames online drives many buyers to purchase from local stores rather than completing the purchase online.
Disclosure of Invention
In order to solve at least one of the above problems, the present application provides a method for generating an eyeglass frame try-on effect picture, a virtual eyeglass frame try-on system, and a method for establishing a database, in which the eyeglass frame image is accurately superimposed at the position in the face image where a frame would be worn, so that the user can accurately preview the effect of wearing the eyeglass frame.
One embodiment of the present application provides a method for generating a trial-wearing effect diagram of a spectacle frame, including: acquiring a face image of a user; acquiring face image information associated with the face image; determining a first pixel size ratio based on the face image information, the first pixel size ratio representing a relationship between pixel points of the face image and actual sizes of facial features displayed in the face image; acquiring a glasses frame image; acquiring a second pixel size ratio representing a relationship between a pixel point of the eyeglass frame image and an actual size of the eyeglass frame; adjusting the size of the face image and/or the eyeglass frame image such that the first pixel size ratio and the second pixel size ratio are consistent; and superposing the glasses frame image to a proper position in the face image so as to generate an effect picture of wearing the glasses frame by the user.
Optionally, the first pixel size ratio is defined as the number of pixels occupied per 1 mm of actual length of a facial feature in the face image, and the second pixel size ratio is defined as the number of pixels occupied per 1 mm of actual length of an eyeglass frame feature in the eyeglass frame image.
Optionally, the face image information includes: the model of the device used to capture the face image, and the settings used when the face image was captured. The step of determining a first pixel size ratio based on the face image information includes: obtaining from a database a pixel size ratio matching the model and the settings.
Optionally, the model number comprises at least one of: camera body model, mobile phone model, lens model; the setting includes at least one of: shooting distance, image resolution, focal length and zoom multiple.
Optionally, for the same model, the database includes pixel size ratios corresponding to different shooting distances, the shooting distances ranging from 500 mm to 700 mm.
Optionally, the step of obtaining face image information associated with the face image comprises:
-identifying specific facial features in the facial image;
-determining the number of pixels occupied by said specific facial feature;
the step of determining a first pixel size ratio based on the face image information comprises:
-determining the actual size of the specific facial feature from the user's height;
-determining a first pixel size ratio based on the actual size of the specific facial feature and the number of pixels occupied by the specific facial feature.
Optionally, the step of obtaining face image information associated with the face image comprises:
-identifying a reference object of known size in the face image;
-determining the number of pixels occupied by the known size of the reference object.
Optionally, the step of superimposing the eyeglass frame image into a suitable position in the face image comprises:
-determining a lens center position in the eyeglass frame image;
-determining an eye center position in the face image;
-aligning the lens center position with the eye center position.
One embodiment of the present application provides an electronic device, including: one or more processors; storage means for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement the methods as described above.
One embodiment of the present application provides a computer-readable storage medium having stored thereon a processor program for performing the above-described method.
One embodiment of the present application provides a virtual try-on system for eyeglass frames, comprising: an eyeglass frame database in which information on a plurality of eyeglass frames is stored, the information for each eyeglass frame including a frame image and the pixel size ratio of that frame image; a user device database in which pixel size ratios of images captured by a plurality of photographing devices at different shooting distances are stored; and a request processing module that, based on a user selection, provides the user with the frame image of the selected eyeglass frame and its pixel size ratio, and provides the user with the pixel size ratio matching the user's photographing device and shooting distance, wherein a pixel size ratio represents the relationship between the pixels in an image and the actual size of the photographed subject.
One embodiment of the present application provides a method of establishing an eyeglass frame database for an eyeglass frame virtual try-on system, comprising:
i) aligning the shooting equipment to the human head model;
ii) wearing a first of a plurality of eyeglass frames on the mannequin head,
iii) obtaining a first spectacle frame image using the capture device;
iv) identifying a particular eyeglass frame feature in the first eyeglass frame image and determining the number of pixel points occupied by the eyeglass frame feature;
v) determining the actual size of the particular eyeglass frame feature;
vi) determining the pixel size ratio of the first spectacle frame image according to the number of the pixel points and the actual size of the specific spectacle frame feature;
vii) performing steps ii) to vi) on remaining eyeglass frames of the plurality of eyeglass frames to obtain eyeglass frame images and pixel size ratios for the remaining eyeglass frames.
Optionally, the method further comprises:
viii) removing the background and the human head model in all the glasses frame images;
ix) determining the position of the lens center point in all the spectacle frame images.
One embodiment of the present application provides a method of building a user device database for a virtual fitting system for eyeglass frames, comprising:
i) selecting a first shooting device from a plurality of shooting devices;
ii) placing the first photographing apparatus on a movable platform such that the first photographing apparatus is located at a first photographing distance from a photographic subject;
iii) obtaining a front image of a photographic subject using the first photographing apparatus;
iv) identifying a specific feature on the photographic subject and determining the number of pixel points occupied by the specific feature;
v) determining the actual size of the particular feature;
vi) determining a pixel size ratio corresponding to the first shooting distance according to the number of the pixel points and the actual size of the specific feature;
vii) moving the movable platform to position the first photographing apparatus at a second photographing distance from the photographic subject, and performing steps iii) to vi) to obtain a pixel size ratio corresponding to the second photographing distance;
viii) repeatedly performing step vii) to obtain pixel size ratios corresponding to a plurality of photographing distances;
ix) performing steps ii) to viii) for the remaining photographing apparatuses of the plurality of photographing apparatuses to obtain pixel size ratios of the remaining photographing apparatuses at a plurality of photographing distances.
Optionally, the plurality of shooting distances each lie in a range of 500mm to 700 mm.
This application obtains a first pixel size ratio for the face image and a second pixel size ratio for the eyeglass frame image, makes the two ratios consistent, and superimposes the frame image at the appropriate position in the face image to generate a picture of the user wearing the frame. This lets the user see accurately how the eyeglass frame will look when worn, making it easier for the user to select and purchase a frame.
Drawings
In order to illustrate the technical solutions of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; a person skilled in the art can derive other drawings from them without departing from the scope of protection of the present application.
FIG. 1 is a flowchart of a method for generating an eyeglass frame try-on effect picture according to the present application;
FIG. 2 is a schematic view of a first apparatus of the present application;
FIG. 3 is a schematic diagram of a second apparatus of the present application;
FIG. 4 is a schematic diagram of a virtual fitting system for a spectacle frame of the present application;
fig. 5 is a schematic diagram of an electronic device of the present application.
Detailed Description
The technical solutions of the present application will be described clearly and completely below with reference to the accompanying drawings of the embodiments. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments that a person skilled in the art can derive from these embodiments without creative effort fall within the scope of protection of the present application.
As shown in fig. 1, the present embodiment provides a method for generating an eyeglass frame try-on effect picture, so that a user can select and try on eyeglass frames online. The method comprises the following steps:
and S1, acquiring a face image of the user. For example, the user can store the shot face image in the mobile phone, and when the glasses frame try-on effect picture needs to be generated, the face image selected by the user is obtained from the mobile phone.
S2, face image information associated with the face image is acquired.
S3, a first pixel size ratio is determined based on the face image information, the first pixel size ratio being a pixel size ratio of the face image and representing a relationship between a pixel point of the face image and an actual size of a face feature displayed in the face image.
Optionally, the first pixel size ratio is defined as the number of pixels occupied by a facial feature with an actual length of every 1mm in the face image.
S4, acquire the eyeglass frame image. For example, a dedicated application on the phone obtains an eyeglass frame image uploaded by a vendor.
S5, acquire a second pixel size ratio, which is the pixel size ratio of the eyeglass frame image and represents the relationship between the pixels of the eyeglass frame image and the actual size of the eyeglass frame.
Optionally, the second pixel size ratio is defined as the number of pixels occupied per 1 mm of actual length of an eyeglass frame feature in the eyeglass frame image.
For example, the second pixel size ratio of the frame image is determined as the ratio of the number of pixels occupied by the frame width in the frame image to the actual width of the frame (measured in millimeters). Other eyeglass frame features may also be chosen as the specific feature as needed.
S6, adjust the size of the face image and/or the eyeglass frame image so that the first pixel size ratio and the second pixel size ratio are consistent.
Specifically, the frame image is scaled without changing its aspect ratio until its second pixel size ratio equals the first pixel size ratio of the user's face image. Alternatively, the face image is scaled without changing its aspect ratio until its first pixel size ratio equals the second pixel size ratio of the frame image.
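As an illustration only, the following is a minimal sketch of this scaling step in Python, assuming the Pillow library is available and the two ratios have already been determined (the function and variable names are hypothetical, not part of the application):

```python
from PIL import Image


def match_pixel_size_ratio(frame_img: Image.Image,
                           frame_ratio: float,
                           face_ratio: float) -> Image.Image:
    """Rescale the frame image so that its pixel size ratio (px per mm)
    equals that of the face image; the aspect ratio is preserved."""
    scale = face_ratio / frame_ratio  # e.g. 3.2 px/mm / 4.0 px/mm = 0.8
    new_size = (round(frame_img.width * scale), round(frame_img.height * scale))
    return frame_img.resize(new_size, Image.LANCZOS)
```

Equivalently, the face image could be rescaled by the inverse factor instead.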
S7, superimpose the eyeglass frame image at the appropriate position in the face image to generate a picture of the user wearing the eyeglass frame.
Using a software algorithm, the eyeglass frame image, now at the same pixel size ratio as the face image, is superimposed at the position in the face image where a frame would be worn, producing a picture of the user wearing the frame.
With this picture, the user can accurately see how the eyeglass frame looks when worn, which makes it easier to select and purchase a frame online.
Alternatively, the eyeglass frame image and its pixel size ratio are obtained from an eyeglass frame database.
In one alternative, after the eyeglass frame image is taken, the eyeglass frame database is established as follows:
- determine the number of pixels occupied by a specific feature in the frame image. For example, software counts the number of pixels spanning the frame from one end to the other in the frame image.
- determine the actual size of that specific frame feature. For example, the actual width of the frame, in millimeters, is measured with a measuring tool.
- divide the number of pixels of the specific feature in the frame image by the actual size of that feature to obtain the pixel size ratio of the frame image.
This scheme can determine the pixel size ratio of the frame images for a small number of frames. For large batches of eyeglass frames, however, it is too cumbersome. Alternatively, the eyeglass frame database can be established using the first device 100.
Specifically, as shown in fig. 2, the first device 100 includes a camera box 110 and a camera 120. The front end of the camera box 110 is open, a background cloth 111 is provided inside, and a human head model 112 is placed on the bottom surface of the camera box 110. Multiple eyeglass frames 2 are to be photographed, and the flow is as follows:
i) the camera 120 is aimed at the human head model 112 in the camera box and then fixed.
After the camera 120 is aligned with the head model 112, it is fixed so that the distance between the camera 120 and the front face of the camera box 110 does not change and the camera 120 cannot rotate. In this embodiment the camera 120 is fixed on a camera bracket, with its lens approximately level with the position where the head model 112 wears the eyeglass frame, so that the camera 120 captures a straight-on front image of the frame.
ii) placing the first frame on the mannequin head 112.
iii) taking a picture by a camera to obtain a first spectacle frame image;
iv) identify the specific eyeglass frame feature in the first frame image and determine the number of pixels that feature occupies in the image. The width of the eyeglass frame may be chosen as the specific frame feature.
v) determining the actual size of a particular eyeglass frame feature of the first eyeglass frame.
vi) determining the pixel size ratio of the first spectacle frame image according to the number of pixel points of the specific spectacle frame feature in the first spectacle frame image and the actual size of the specific spectacle frame feature of the first spectacle frame.
vii) performing the above steps on the remaining eyeglass frames respectively to obtain eyeglass frame images and pixel size ratios of the remaining eyeglass frames.
The above method further comprises the steps of:
viii) removing the background and the human head model in all the glasses frame images.
The camera 120 may be connected to the computer 1, and the images taken by the camera 120 are transmitted to the computer 1. The captured images all show the background and the human head model, which is inconvenient for subsequent use. Software can automatically detect and remove the background and the head model from each frame image, leaving only the eyeglass frame and yielding a usable frame image.
ix) determining the position of the lens center point in all the spectacle frame images.
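For illustration, here is a minimal sketch of what one record produced by steps i) to ix) might look like, together with the calculation of step vi). The record layout and names are assumptions, not the actual implementation:

```python
from dataclasses import dataclass
from typing import Tuple


@dataclass
class FrameRecord:
    frame_id: str
    image_path: str                    # background and head model removed (step viii)
    pixel_size_ratio: float            # px per mm, from step vi)
    left_lens_center: Tuple[int, int]  # (x, y) in image pixels, from step ix)
    right_lens_center: Tuple[int, int]


def frame_pixel_size_ratio(feature_pixels: int, feature_mm: float) -> float:
    """Step vi): pixels spanned by the chosen frame feature (e.g. the frame
    width) divided by its measured actual size in millimeters."""
    return feature_pixels / feature_mm


# Example: a frame whose 140 mm width spans 700 px has a ratio of 5.0 px/mm.
```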
According to an optional aspect of the present application, the face image information includes: the model of the device used to capture the face image, and the settings used when the face image was captured. The step of determining the first pixel size ratio based on the face image information includes: obtaining from the database the pixel size ratio matching the model and the settings.
Further, the model number includes at least one of: camera body model, mobile phone model, lens model; the setting includes at least one of: shooting distance, image resolution, focal length and zoom multiple.
Specifically, it is not easy for a user to determine the first pixel size ratio of a pre-captured image. In general, the pixel size ratio of the user's face image depends on several factors, such as the make and model of the camera, the camera lens, the focal length used, and the physical distance between the camera and the user when the image was captured. These factors are sometimes recorded as metadata in the exchangeable image file format (Exif) tags of the image file, but they are not always available, and even when available they are not always accurate. For example, a smartphone camera typically reports not the actual focal length but only an equivalent focal length, which is usually not an accurate representation of the actual one. Exif tags can also easily be edited or deleted during photo editing. Given the number of possible combinations of the above factors, it is therefore difficult to calculate the pixel size ratio of an image the user has taken arbitrarily.
However, a useful subset of the problem can still be solved by constraining some of the factors mentioned above. The camera market used to be fragmented, offering users many choices of brand, model, and interchangeable lenses. In recent years, however, conventional cameras have been largely replaced by camera-equipped smartphones. The brand and model factors matter less and less, since only a few smartphone brands account for the majority of the global smartphone market. Because most smartphones have fixed, non-interchangeable lenses, there are also fewer lens options. By requiring the user to take the face image on a supported smartphone using dedicated software with a fixed zoom level and other fixed options, the number of factors affecting the pixel size ratio of the image can be greatly reduced.
As shown in fig. 3, to facilitate calculating the first pixel size ratio of the user's face image, a database mapping shooting distance to image pixel size ratio, i.e., a user device database, is built with the second device 200 shown in fig. 3 for the smartphone models currently most common worldwide. The more photographing device models the user device database covers, the larger the subset of the problem that can be solved.
The second device 200 includes a movable platform 210 and a controller 220. The upper surface of the movable platform 210 has a mounting slot, and the platform can move in a straight line toward or away from a known object 3. The controller 220 is wirelessly connected to the movable platform 210 to control its movement. Optionally, the distance between the movable platform 210 and the known object 3 is measured with the scale 230 to determine the shooting distance of the photographing device. Taking a mobile phone as the photographing device, the user device database is established as follows:
and selecting a smart phone. A smartphone 4 of known model (model including brand information) with a camera is mounted on a mobile card slot of the movable platform 210, and the movable platform 210 can move the smartphone 4. The original position of the smartphone 4 is located at the first shooting distance, e.g., 500mm, from the known object 3, so that the camera on the smartphone 4 can perfectly capture the front image of the known object 3.
Software on the smartphone then captures an image. An image recognition algorithm on the smartphone detects which pixels in the image make up the known object. The number of horizontal pixels spanning the known object is then divided by the physical width of the known object to determine the pixel size ratio at the first shooting distance of 500 mm.
The movable platform 210 is then instructed to move 10 mm farther from the object, another image is captured, and the pixel size ratio at a shooting distance of 510 mm is calculated. The second device 200 continues in this way, capturing an image every 10 mm from a shooting distance of 500 mm to 700 mm and determining the pixel size ratio of each image.
The resulting relationship between pixel size ratio and shooting distance for that specific smartphone model is stored as a two-dimensional table on a remote server, forming the database for later use. This process is carried out for as many common camera-equipped smartphone models as possible, and the smartphones used by end users run a dedicated application for merging, storing, and retrieving the data. Later, when a user takes a face image with a smartphone on which the dedicated application is installed (at a shooting distance within the range of 500 mm to 700 mm), the software can use the previously built database to obtain the pixel size ratio of the picture.
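A minimal sketch of such a two-dimensional table and its lookup is given below, with purely illustrative (not measured) values and a hypothetical model name:

```python
# (model, shooting distance in mm) -> pixel size ratio in px per mm
DEVICE_DB = {
    ("PhoneModelX", 500): 3.20,  # illustrative placeholder values, not measured data
    ("PhoneModelX", 510): 3.14,
    ("PhoneModelX", 520): 3.08,
}


def lookup_pixel_size_ratio(model: str, distance_mm: float) -> float:
    """Return the ratio stored for the nearest calibrated distance on the
    500-700 mm, 10 mm-step grid described above."""
    candidates = [(d, r) for (m, d), r in DEVICE_DB.items() if m == model]
    if not candidates:
        raise KeyError(f"no calibration data for {model}")
    _, ratio = min(candidates, key=lambda item: abs(item[0] - distance_mm))
    return ratio
```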
Taking the setting of the shooting distance when the face image is shot as an example:
some smartphones include sensors for determining the distance from the smartphone camera to an object, but not all smartphones have such sensors.
Alternatively, medical statistics show that the adult body closely follows certain proportions. Using these general body proportions, the actual shooting distance at which the user captured the face image can be estimated in several ways. For example, one calculation is:
actual shooting distance = height/2 - shoulder width/2 - palm length/2 = height/2 - height/8 - height/20 = 0.325 x height
Other methods of estimating the actual shooting distance from the person's height may also be used, as shown in Table 1.
When the above formula is used, the user needs to take the self-portrait image with the arm fully extended.
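A minimal sketch of this estimate, assuming the height is supplied in millimeters (the function name is hypothetical):

```python
def estimated_shooting_distance_mm(height_mm: float) -> float:
    """Selfie distance with the arm fully extended, per the formula above:
    height/2 - shoulder width/2 - palm length/2
    = height/2 - height/8 - height/20 = 0.325 * height."""
    return 0.325 * height_mm


# Example: a 1700 mm tall user gives about 552.5 mm, within the 500-700 mm range.
```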
The pixel size ratio of the face image is then obtained from the database according to the model of the device the user used to take the image and the actual shooting distance.
Typical human body proportions are as follows:
TABLE 1
The length of the open arms of a person is approximately equal to their height.
The shoulder width of a person is one quarter of his height.
The length of the palm is one tenth of the height of a human body.
The height of the head (from chin to top of head) is one eighth of the height of a person.
The length of the face is one-eighth of the height of a person.
The height of the head (from the chin to the hairline) is one tenth of the height of a person.
The distance from the elbow to the tip of the hand is one fifth of the height of a person.
The distance from the elbow to the armpit is one-eighth of the height of a person.
The length of the ears is one third of the length of the face.
The distance from the nose to the base of the chin is one third of the face length.
The distance from the hairline to the eyebrows is one third of the face length.
In another alternative, the step of obtaining face image information associated with the face image comprises:
-identifying a specific facial feature in the facial image;
-determining the number of pixels occupied by a particular facial feature;
the step of determining the first pixel size ratio based on the face image information includes:
-determining the actual size of the specific facial feature from the height of the user;
-determining a first pixel size ratio based on the actual size of the specific facial feature and the number of pixels occupied by the specific facial feature.
In particular, machine learning can be used to train a computer program to recognize, in an image, a specific facial feature that follows normal human body proportions, such as the length of the face or the distance from the nose to the base of the chin. The number of pixels making up the specific facial feature is recorded.
The actual length of the specific facial feature can be estimated from the user's height according to statistical human body proportions. For example, the length of the face can be estimated from the person's height.
Dividing the number of pixels occupied by the specific facial feature by the actual length of that feature gives the pixel size ratio of the face image.
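As a sketch only, assuming a facial landmark detector (not shown here) reports the pixel length of the face, and using the face-length rule from Table 1:

```python
def first_pixel_size_ratio_from_face(face_length_px: int, height_mm: float) -> float:
    """Estimate px per mm of the face image from a detected facial feature.

    face_length_px: pixels spanning the face in the image, as reported by a
                    hypothetical facial landmark detector.
    height_mm:      the user's height.
    """
    face_length_mm = height_mm / 8.0  # Table 1: face length is one eighth of height
    return face_length_px / face_length_mm
```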
In another alternative, the step of obtaining face image information associated with the face image comprises:
-identifying a reference object of known size in the face image;
-determining the number of pixels occupied by the known size of the reference.
Specifically, when the user takes the self-portrait image, the image is required to include a reference object of known size, such as a coin.
A computer program is trained to recognize the reference object in the image and count the number of pixels spanning a known dimension (e.g., its height or length).
Dividing the number of pixels spanning the known dimension of the reference object by that known size gives the pixel size ratio of the face image.
Several of the methods described above are algorithms for estimating the pixel size ratio of an image. The ratio can be calculated with more than one of these methods, and the resulting estimates can be averaged to obtain a more accurate result.
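A minimal sketch of the reference-object calculation and of averaging several independent estimates (the 25 mm coin diameter is only an illustrative example):

```python
def ratio_from_reference(reference_px: int, reference_mm: float) -> float:
    """Pixels spanning the known dimension of the reference object,
    divided by that dimension in millimeters."""
    return reference_px / reference_mm


def averaged_ratio(estimates: list) -> float:
    """Combine several independent estimates of the first pixel size ratio."""
    return sum(estimates) / len(estimates)


# Example: a 25 mm coin spanning 80 px gives 3.2 px/mm; this value can be
# averaged with the height-based and database-based estimates for robustness.
```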
According to an alternative of the present application, the step of superimposing the eyeglass frame image into the face image in a suitable position comprises:
-determining a lens center position in the eyeglass frame image;
-determining an eye center position in the face image;
-aligning the lens center position with the eye center position.
In particular, the center position of each lens in the frame image can be determined with software.
The center position of each eye in the face image can likewise be identified with software. The center of the left lens in the frame image is aligned with the center of the left eye in the face image, and the center of the right lens is aligned with the center of the right eye. This ensures accurate alignment of the eyeglass frame image with the face image.
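A minimal sketch of the alignment and overlay, assuming Pillow, a frame image that already has an alpha channel and has already been rescaled to the matching pixel size ratio, and eye/lens centers obtained elsewhere (all names are hypothetical):

```python
from PIL import Image


def overlay_frame(face_img: Image.Image, frame_img: Image.Image,
                  left_eye, right_eye, left_lens, right_lens) -> Image.Image:
    """Paste the frame image onto the face image so that the midpoint of the
    two lens centers coincides with the midpoint of the two eye centers."""
    eye_mid = ((left_eye[0] + right_eye[0]) // 2, (left_eye[1] + right_eye[1]) // 2)
    lens_mid = ((left_lens[0] + right_lens[0]) // 2, (left_lens[1] + right_lens[1]) // 2)
    offset = (eye_mid[0] - lens_mid[0], eye_mid[1] - lens_mid[1])
    result = face_img.copy()
    result.paste(frame_img, offset, frame_img)  # third argument uses the alpha mask
    return result
```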
With the accurate virtual try-on picture of how the eyeglass frame looks on the user obtained by the above method, the user can conveniently compare frames from many local eyeglass stores on the web and see how each pair looks when worn.
With this method, an online platform can be created that cooperates with eyeglass stores and helps them sell eyeglass frames online. The online platform is free for users to use. Revenue for the platform may come from a variety of sources, such as advertising fees, eyeglass frame listing fees, and sales commissions.
To access the online platform, a user must first register an account. The platform can then be logged into and accessed through the dedicated smartphone application mentioned above. The user can use the application to take one or more images of his or her face and store them in the user profile. Different face images can be used for different scenarios; for example, a user might store one image for a casual look and another for more formal occasions. By creating multiple personas from multiple images, the user can virtually try on eyeglass frames and see how they would look on different occasions.
The online platform has a catalog-level page that displays the eyeglass frames matching user-defined search criteria together with a short summary of each frame. Each summary shows a thumbnail of the frame along with other information such as (but not limited to) reviews, ratings, price, and store location. On the catalog-level page, the user can bookmark and compare frames based on (but not limited to) fit, color, shape, material, reviews, and style, as well as store location and store reviews. Besides showing a thumbnail in each summary, the page can also switch to a "virtual try-on mode" in which the thumbnail of each frame is overlaid on a face image (selected from the saved images) to create a virtual try-on picture for every frame. In this way, the user can easily compare side by side how the frames look on the face. The display order of the frames can be sorted by user-defined criteria, or by a proprietary software algorithm that ranks each frame by the likelihood that the particular user will be interested in it. Machine learning can be used to train this algorithm on many signals, such as how the user previously interacted with different frames on the platform.
The online platform also has a comparison page. The user can build a side-by-side comparison table from several eyeglass frames, e.g., the bookmarked frames. The table shows, side by side, information such as the appearance of each frame, its virtual try-on picture, and its specifications, with each row of the table comparing one specification or feature.
The online platform also lets users share virtual try-on pictures and comparison tables online with friends and family, and gives the recipients a way to vote and suggest which frames they find more suitable for the user. The recipients' statistics, comments, and opinions are then automatically compiled by software into a visually appealing infographic for the user to view.
The online platform also has an item-level page that displays detailed information about a particular eyeglass frame. The item-level page is reached by selecting one of the frames on the catalog-level page. It may contain, for example, at least one of the following: multiple images of the frame, multiple virtual try-on pictures with different personas, frame specifications, videos, pricing, reviews, store locations, store reviews, and store hours. If a store carrying a particular frame allows online ordering, the user can click a button on the item-level page to order that frame online. Alternatively, the user can be directed to a local store carrying the selected frame, where the user can confirm the fit and appearance in person, have an eye exam, choose suitable lenses, and finally purchase the glasses offline.
Optionally, the online platform also collects statistics about user behavior. The large volume of behavioral data is then analyzed and refined into business intelligence, which includes information and statistics such as user locations, spending habits, the popularity of individual frames, and sales conversion rates. This business intelligence can then be provided to eyeglass stores as a reference for business decisions about pricing, inventory, marketing, and advertising.
As shown in fig. 4, one embodiment of the present application provides a virtual try-on system for spectacle frames, comprising: a glasses frame database 310, a user device database 320, and a request processing module 330.
The eyeglass frame database 310 stores information on a plurality of eyeglass frames; the information for each frame includes the frame image and the pixel size ratio of that frame image.
The user device database 320 stores pixel size ratios of images captured by a plurality of capturing devices at different capturing distances.
The request processing module 330 provides the user, based on the user's selection, with the frame image of the selected eyeglass frame and its pixel size ratio, and provides the user with the pixel size ratio matching the user's photographing device and shooting distance.
Here, a pixel size ratio represents the relationship between the pixels in an image and the actual size of the photographed object.
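A minimal sketch of what the request processing module 330 might return, with assumed dictionary layouts for the two databases (not the actual implementation):

```python
def handle_try_on_request(frame_db: dict, device_db: dict,
                          frame_id: str, device_model: str,
                          distance_mm: int) -> dict:
    """frame_db:  {frame_id: (frame_image_path, frame_px_per_mm)}
    device_db:    {(device_model, distance_mm): face_px_per_mm}
    Returns everything the client needs to rescale and overlay the frame."""
    image_path, frame_ratio = frame_db[frame_id]
    face_ratio = device_db[(device_model, distance_mm)]
    return {"frame_image": image_path,
            "frame_ratio": frame_ratio,
            "face_ratio": face_ratio}
```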
The method and system of this embodiment provide the user with a convenient one-stop platform for easily comparing and browsing eyeglass frames. The virtual try-on function accurately matches the scale of the eyeglass frame to the user, so that the user can compare the fit and appearance of frames side by side on the catalog-level page, the item-level page, and the comparison page. By storing "selfie"-type images in the user profile, frames can be placed virtually on the user's face, so that the user can browse, sort, filter, and compare the fit and appearance of frames, share virtual try-on pictures with friends and family, and solicit opinions and votes. In addition, the user can read frame reviews and store reviews in order to make an informed purchasing decision.
For eyeglass stores, the online platform of this embodiment provides an additional channel for attracting new customers, by showing frames that may suit their faces to potential customers who may be located in other geographical areas. Using the online platform, an eyeglass store can offer buyers more frames that fit their faces without needing the physical floor space a store would require to display them. When users view frames online, service can be provided to each buyer without a physical shop, saving time and cost.
Fig. 5 shows a block diagram of an electronic device according to an example embodiment of the present application.
Referring to fig. 5, an electronic device 400 according to this embodiment of the present application is shown. The electronic device 400 shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 5, electronic device 400 is embodied in the form of a general purpose computing device. The components of electronic device 400 may include, but are not limited to: at least one processing unit 410, at least one memory unit 420, a bus 430 that couples various system components including the memory unit 420 and the processing unit 410, and the like.
The storage unit 420 stores program code, which can be executed by the processing unit 410, so that the processing unit 410 performs the methods according to the embodiments of the present application described herein.
The storage unit 420 may include readable media in the form of volatile storage units, such as a random access memory unit (RAM)4201 and/or a cache memory unit 4202, and may further include a read only memory unit (ROM) 4203.
The storage unit 420 may also include a program/utility 4204 having a set (at least one) of program modules 4205, such program modules 4205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 430 may be any bus representing one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 400 may also communicate with one or more external devices 4001 (e.g., touch screens, keyboards, pointing devices, bluetooth devices, etc.), with one or more devices that enable a user to interact with the electronic device 400, and/or with any devices (e.g., routers, modems, etc.) that enable the electronic device 400 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 450. Also, the electronic device 400 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 460. The network adapter 460 may communicate with other modules of the electronic device 400 via the bus 430. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 400, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The present embodiment provides a computer-readable storage medium having stored thereon a processor program for executing the above-mentioned method.
It is clear to a person skilled in the art that the solution of the present application can be implemented by means of software and/or hardware. The term "module" in this specification refers to software and/or hardware that can perform a specific function independently or in cooperation with other components, where the hardware may be, for example, a Field-Programmable Gate Array (FPGA), an Integrated Circuit (IC), or the like.
The embodiments of the present application are described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the above description of the embodiments is only intended to help understand its technical solutions and core ideas. A person skilled in the art may, following the ideas of the present application, make changes to the specific embodiments and their scope of application. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (15)

1. A method of generating a spectacle frame try-on effect diagram, comprising:
acquiring a face image of a user;
acquiring face image information associated with the face image;
determining a first pixel size ratio based on the face image information, the first pixel size ratio representing a relationship between pixel points of the face image and actual sizes of facial features displayed in the face image;
acquiring a glasses frame image;
acquiring a second pixel size ratio representing a relationship between a pixel point of the eyeglass frame image and an actual size of the eyeglass frame;
adjusting the size of the face image and/or the eyeglass frame image such that the first pixel size ratio and the second pixel size ratio are consistent;
and superposing the glasses frame image to a proper position in the face image so as to generate an effect picture of wearing the glasses frame by the user.
2. The method of claim 1, wherein the first pixel size ratio is defined as the number of pixels occupied per 1 mm of actual length of a facial feature in the face image, and the second pixel size ratio is defined as the number of pixels occupied per 1 mm of actual length of an eyeglass frame feature in the eyeglass frame image.
3. The method of claim 1, wherein the facial image information comprises: the model of the device used to capture the facial image, and the settings at the time of capturing the facial image,
the step of determining a first pixel size ratio based on the face image information comprises: a pixel size ratio matching the model number and the setting is obtained from a database.
4. The method of claim 3, wherein the model comprises at least one of: camera body model, mobile phone model, lens model;
the setting includes at least one of: shooting distance, image resolution, focal length and zoom multiple.
5. The method of claim 3, wherein, for the same model, the database includes pixel size ratios corresponding to different shooting distances, the shooting distances ranging from 500 mm to 700 mm.
6. The method of claim 1, wherein the step of obtaining face image information associated with the face image comprises:
-identifying specific facial features in the facial image;
-determining the number of pixels occupied by said specific facial feature;
the step of determining a first pixel size ratio based on the face image information comprises:
-determining the actual size of the specific facial feature from the user's height;
-determining a first pixel size ratio based on the actual size of the specific facial feature and the number of pixels occupied by the specific facial feature.
7. The method of claim 1, wherein the step of obtaining face image information associated with the face image comprises:
-identifying a reference object of known size in the face image;
-determining the number of pixels occupied by the known size of the reference object.
8. The method of any one of claims 1 to 7, wherein the step of superimposing the eyeglass frame image into a suitable position in the face image comprises:
-determining a lens center position in the eyeglass frame image;
-determining an eye center position in the face image;
-aligning the lens center position with the eye center position.
9. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, implement the method of any of claims 1-8.
10. A computer-readable storage medium, on which a processor program is stored, wherein the processor program is adapted to perform the method of any of the preceding claims 1 to 8.
11. A virtual try-on system for eyewear frames, comprising:
a spectacle frame database in which information of a plurality of spectacle frames is stored, the information of each spectacle frame including: a frame image and a pixel size ratio of the frame image;
a user device database in which pixel size ratios of images obtained by photographing at different photographing distances by a plurality of photographing devices are stored;
a request processing module providing a user with a glasses frame image of a selected glasses frame and a pixel size ratio thereof based on a user selection, and providing the user with a pixel size ratio matching a photographing apparatus and a photographing distance of the user,
wherein the pixel size ratio represents a relationship between a pixel point in the image and an actual size of the subject.
12. A method of building an eyeglass frame database for an eyeglass frame virtual try-on system, comprising:
i) aligning the shooting equipment to the human head model;
ii) wearing a first of a plurality of eyeglass frames on the mannequin head,
iii) obtaining a first spectacle frame image using the capture device;
iv) identifying a particular eyeglass frame feature in the first eyeglass frame image and determining the number of pixel points occupied by the eyeglass frame feature;
v) determining the actual size of the particular eyeglass frame feature;
vi) determining the pixel size ratio of the first spectacle frame image according to the number of the pixel points and the actual size of the specific spectacle frame feature;
vii) performing steps ii) to vi) on remaining eyeglass frames of the plurality of eyeglass frames to obtain eyeglass frame images and pixel size ratios for the remaining eyeglass frames.
13. The method as in claim 12, further comprising:
viii) removing the background and the human head model in all the glasses frame images;
ix) determining the position of the lens center point in all the spectacle frame images.
14. A method of building a user device database for a glasses frame virtual fitting system, comprising:
i) selecting a first shooting device from a plurality of shooting devices;
ii) placing the first photographing apparatus on a movable platform such that the first photographing apparatus is located at a first photographing distance from a photographic subject;
iii) obtaining a front image of a photographic subject using the first photographing apparatus;
iv) identifying a specific feature on the photographic subject and determining the number of pixel points occupied by the specific feature;
v) determining the actual size of the particular feature;
vi) determining a pixel size ratio corresponding to the first shooting distance according to the number of the pixel points and the actual size of the specific feature;
vii) moving the movable platform to position the first photographing apparatus at a second photographing distance from the photographic subject, and performing steps iii) to vi) to obtain a pixel size ratio corresponding to the second photographing distance;
viii) repeatedly performing step vii) to obtain pixel size ratios corresponding to a plurality of photographing distances;
ix) performing steps ii) to viii) for the remaining photographing apparatuses of the plurality of photographing apparatuses to obtain pixel size ratios of the remaining photographing apparatuses at a plurality of photographing distances.
15. The method of claim 14, wherein the plurality of shot distances are each in a range of 500mm to 700 mm.
CN202110859462.9A 2021-07-28 2021-07-28 Method for generating glasses frame fitting effect diagram and glasses frame virtual fitting system Active CN113592592B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110859462.9A CN113592592B (en) 2021-07-28 2021-07-28 Method for generating glasses frame fitting effect diagram and glasses frame virtual fitting system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110859462.9A CN113592592B (en) 2021-07-28 2021-07-28 Method for generating glasses frame fitting effect diagram and glasses frame virtual fitting system

Publications (2)

Publication Number Publication Date
CN113592592A true CN113592592A (en) 2021-11-02
CN113592592B CN113592592B (en) 2023-11-07

Family

ID=78251425

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110859462.9A Active CN113592592B (en) 2021-07-28 2021-07-28 Method for generating glasses frame fitting effect diagram and glasses frame virtual fitting system

Country Status (1)

Country Link
CN (1) CN113592592B (en)

Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040004633A1 (en) * 2002-07-03 2004-01-08 Perry James N. Web-based system and method for ordering and fitting prescription lens eyewear
JP2004185555A (en) * 2002-12-06 2004-07-02 Fuji Photo Film Co Ltd Facial area extracting method and device
WO2008067754A1 (en) * 2006-12-04 2008-06-12 Yiling Xie Image database of computer assembling eyeglasses process method and device
US20100283844A1 (en) * 2009-05-11 2010-11-11 Acep France Method and system for the on-line selection of a virtual eyeglass frame
WO2012022380A1 (en) * 2010-08-18 2012-02-23 Optiswiss Ag Method and device for determining the spacing between a person's eyes
CN104408764A (en) * 2014-11-07 2015-03-11 成都好视界眼镜有限公司 Method, device and system for trying on glasses in virtual mode
CN104809638A (en) * 2015-05-20 2015-07-29 成都通甲优博科技有限责任公司 Virtual glasses trying method and system based on mobile terminal
CN104809761A (en) * 2014-01-29 2015-07-29 上海天昊信息技术有限公司 Virtual glasses trying system
WO2016011792A1 (en) * 2014-07-25 2016-01-28 杨国煌 Method for proportionally synthesizing image of article
CN105469072A (en) * 2015-12-14 2016-04-06 依视路国际集团(光学总公司) Method and system for evaluating matching degree of glasses wearer and the worn glasses
GB201603665D0 (en) * 2016-03-02 2016-04-13 Holition Ltd Augmenting object features in images
CN105975920A (en) * 2016-04-28 2016-09-28 上海交通大学 Method and system for trying glasses
US20170039775A1 (en) * 2015-08-07 2017-02-09 Ginman Group, Inc. Virtual Apparel Fitting Systems and Methods
CN107408315A (en) * 2015-02-23 2017-11-28 Fittingbox公司 The flow and method of glasses try-in accurate and true to nature for real-time, physics
WO2018072102A1 (en) * 2016-10-18 2018-04-26 华为技术有限公司 Method and apparatus for removing spectacles in human face image
CN109993781A (en) * 2019-03-28 2019-07-09 北京清微智能科技有限公司 Based on the matched anaglyph generation method of binocular stereo vision and system
US20190244407A1 (en) * 2016-08-10 2019-08-08 Zeekit Online Shopping Ltd. System, device, and method of virtual dressing utilizing image processing, machine learning, and computer vision
CN110348936A (en) * 2019-05-23 2019-10-18 珠海随变科技有限公司 A kind of glasses recommended method, device, system and storage medium
WO2020057386A1 (en) * 2018-09-17 2020-03-26 菜鸟智能物流控股有限公司 Data processing method and device, logistics object processing system, and machine-readable medium
KR20200075541A (en) * 2018-12-18 2020-06-26 김재윤 Eyeglasses try-on simulation method
JP3230092U (en) * 2020-10-20 2021-01-07 株式会社東京メガネ Try-on image providing system
JP7095849B1 (en) * 2021-11-26 2022-07-05 アイジャパン株式会社 Eyewear virtual fitting system, eyewear selection system, eyewear fitting system and eyewear classification system


Also Published As

Publication number Publication date
CN113592592B (en) 2023-11-07

Similar Documents

Publication Publication Date Title
US11592691B2 (en) Systems and methods for generating instructions for adjusting stock eyewear frames using a 3D scan of facial features
US20210406987A1 (en) Recommendation system, method and computer program product based on a user's physical features
US20190228448A1 (en) System, Platform and Method for Personalized Shopping Using a Virtual Shopping Assistant
US20180350148A1 (en) Augmented reality display system for overlaying apparel and fitness information
US20240078584A1 (en) System Platform and Method for Personalized Shopping Using an Automated Shopping Assistant
US20190188784A1 (en) System, platform, device and method for personalized shopping
US20220188897A1 (en) Methods and systems for determining body measurements and providing clothing size recommendations
US20230377012A1 (en) System, platform and method for personalized shopping using an automated shopping assistant
WO2018191784A1 (en) Eyeglasses ordering system and digital interface therefor
US11676347B2 (en) Virtual try-on systems for spectacles using reference frames
JP2023515517A (en) Fitting eyeglass frames including live fitting
KR102506352B1 (en) Digital twin avatar provision system based on 3D anthropometric data for e-commerce
CN113592592B (en) Method for generating glasses frame fitting effect diagram and glasses frame virtual fitting system
US20210110161A1 (en) Interactive try-on system and method for eyeglass frame
Anand et al. Glass Virtual Try-On
TWI492174B (en) Cloud body-sensory virtual-reality eyeglasses prescription system
CN114730101B (en) System and method for adjusting inventory eyeglass frames using 3D scanning of facial features
KR20220124053A (en) Kiosk System for Eyeglasses Recommend Guide of Optical Store and Operation Method Thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant