WO2023130057A1 - User-device assisted eyewear recommendation


Info

Publication number: WO2023130057A1
Authority: WO (WIPO/PCT)
Prior art keywords: user, eyewear, face, facial, model
Application number: PCT/US2022/082603
Other languages: English (en)
Inventor: Brian William FARLEY
Original assignee: Farley Technologies LLC
Application filed by Farley Technologies LLC
Publication of WO2023130057A1


Classifications

    • G02C13/003 Spectacles; sunglasses or goggles: measuring during assembly or fitting of spectacles
    • G06Q30/0621 Electronic shopping [e-shopping]: item configuration or customization
    • G06Q30/0631 Electronic shopping [e-shopping]: item recommendations
    • G06Q30/0643 Electronic shopping [e-shopping]: shopping interfaces; graphical representation of items or shoppers
    • G06T19/006 Manipulating 3D models or images for computer graphics: mixed reality
    • G06V10/764 Image or video recognition using pattern recognition or machine learning: classification, e.g., of video objects
    • G06V10/82 Image or video recognition using pattern recognition or machine learning: neural networks
    • G06V20/40 Scenes; scene-specific elements in video content
    • G06V40/168 Human faces: feature extraction; face representation
    • G06V40/172 Human faces: classification, e.g., identification

Definitions

  • This disclosure relates to user-device assisted eyewear recommendation.
  • One aspect of the disclosure provides a method of generating an eyewear recommendation for a user.
  • The method includes receiving, at data processing hardware, sensor data from a sensor of a mobile device, where the sensor data captures a face of a user of the mobile device.
  • The method further includes receiving, at the data processing hardware, user eyewear preferences from the user of the mobile device.
  • The method also includes determining, by the data processing hardware, facial characteristics of the face of the user using the sensor data captured for the face of the user.
  • The method further includes generating, by the data processing hardware, an eyewear recommendation for the user using an eyewear recommendation model.
  • The eyewear recommendation model is configured to receive, as inputs, the user eyewear preferences and the facial characteristics and to generate, as output, the eyewear recommendation indicating an eyewear model that (i) satisfies the user eyewear preferences and (ii) fits the face of the user characterized by the facial characteristics.
  • Another aspect of the disclosure provides a system for generating an eyewear recommendation for a user. The system includes data processing hardware and memory hardware in communication with the data processing hardware.
  • The memory hardware stores instructions that, when executed on the data processing hardware, cause the data processing hardware to perform operations.
  • The operations include receiving sensor data from a sensor of a mobile device, where the sensor data captures a face of a user of the mobile device.
  • The operations further include receiving user eyewear preferences from the user of the mobile device.
  • The operations also include determining facial characteristics of the face of the user using the sensor data captured for the face of the user.
  • The operations further include generating an eyewear recommendation for the user using an eyewear recommendation model.
  • The eyewear recommendation model is configured to receive, as inputs, the user eyewear preferences and the facial characteristics and to generate, as output, the eyewear recommendation indicating an eyewear model that (i) satisfies the user eyewear preferences and (ii) fits the face of the user characterized by the facial characteristics.
  • A further aspect of the disclosure provides a user device. The user device includes a display, data processing hardware in communication with the display, and memory hardware in communication with the data processing hardware that stores instructions that, when executed on the data processing hardware, cause the data processing hardware to perform operations.
  • The operations include receiving, at the data processing hardware, sensor data from a sensor of a mobile device, where the sensor data captures a face of a user of the mobile device.
  • The operations further include receiving, at the data processing hardware, user eyewear preferences from the user of the mobile device.
  • The operations also include determining, by the data processing hardware, facial characteristics of the face of the user using the sensor data captured for the face of the user.
  • The operations further include generating, by the data processing hardware, an eyewear recommendation for the user using an eyewear recommendation model.
  • The eyewear recommendation model is configured to receive, as inputs, the user eyewear preferences and the facial characteristics and to generate, as output, the eyewear recommendation indicating an eyewear model that (i) satisfies the user eyewear preferences and (ii) fits the face of the user characterized by the facial characteristics.
  • An aspect of the disclosure also provides another method for generating an eyewear recommendation.
  • The method includes receiving, at data processing hardware, sensor data from a sensor of a mobile device, the sensor data capturing a face of a user of the mobile device.
  • The method also includes detecting, by the data processing hardware, facial features for the face of the user.
  • The method further includes determining, by the data processing hardware, a respective location of each facial feature detected for the face of the user.
  • The method additionally includes generating, by the data processing hardware, a plurality of facial feature measurements using the respective locations of the facial features, where each facial feature measurement of the plurality of facial feature measurements corresponds to a distance from a first location of a first facial feature to a second location of a second facial feature.
  • The method further includes determining, by the data processing hardware, a facial attribute for the face of the user using the sensor data captured by the sensor of the mobile device.
  • The method also includes generating, by the data processing hardware, a facial shape classification for the face of the user based on a contour map of the plurality of facial feature measurements.
  • The method further includes generating, by the data processing hardware, an eyewear recommendation for the user using an eyewear recommendation model.
  • The eyewear recommendation model is configured to receive, as inputs, the plurality of facial feature measurements, the facial attribute for the face of the user, and the facial shape classification for the face of the user and to generate, as output, the eyewear recommendation indicating an eyewear model that fits the face of the user characterized by the inputs.
  • In some implementations, the eyewear recommendation model is in communication with a database storing a plurality of rated eyewear models, where each rated eyewear model is rated according to marketing intelligence data that characterizes consumer eyewear preference.
  • The eyewear recommendation model may be configured to determine whether the eyewear model of the eyewear recommendation satisfies the user eyewear preferences by (i) determining, from marketing intelligence data accessible to the eyewear recommendation model, consumer eyewear preferences that are similar to the user eyewear preferences and (ii) identifying that the eyewear model has a rating, from consumers holding those similar preferences, that satisfies a rating threshold.
  • The eyewear recommendation model may include a random forest classifier model.
  • The eyewear model indicated by the eyewear recommendation may correspond to an existing retail eyewear model.
  • The existing retail eyewear model may have dimensions that conform to the face of the user characterized by the facial characteristics.
  • The user eyewear preferences may be received at a user interface of the mobile device.
  • The sensor of the mobile device may include an accelerometer.
  • The sensor data may include video data defined by a sequential set of images of the face of the user.
  • In some examples, the face of the user is displayed in real-time using current sensor data being captured by the sensor of the mobile device.
  • The user eyewear preferences may indicate style-based attributes for eyewear preferred by the user.
  • In some implementations, these aspects determine facial characteristics of the user by: detecting facial features for the face of the user; determining a respective location of each facial feature detected for the face of the user; generating a plurality of facial feature measurements using the respective locations of the facial features, where each facial feature measurement of the plurality of facial feature measurements corresponds to a distance from a first location of a first facial feature to a second location of a second facial feature; and generating a facial shape classification for the face of the user based on a contour map of the plurality of facial feature measurements.
  • These aspects may also include determining facial contour lines for the face of the user using the sensor data captured for the face of the user and determining the facial features for the face of the user based on the facial contour lines generated from the sensor data.
  • In some examples, determining the facial characteristics of the face of the user includes determining a facial attribute for the face of the user using a facial attribute assessment model, where the facial attribute assessment model is trained to predict, as output, a respective facial attribute from input sensor data.
  • The facial attribute assessment model may include a convolutional neural network.
  • In some configurations, these aspects also include displaying, at a user interface of the mobile device, the face of the user.
  • These aspects may further include generating a three-dimensional rendering of the eyewear model indicated by the eyewear recommendation on the face of the user displayed at the user interface, where the three-dimensional rendering is generated to represent a realistic scale of the eyewear model to the face of the user.
  • Generating the three-dimensional rendering of the eyewear model may include anchoring one or more portions of the three-dimensional rendering of the eyewear model to facial features of the face of the user defined by the facial characteristics.
  • These aspects may scale the three-dimensional rendering of the eyewear model while the face of the user (i) is displayed in real-time at the user interface using current sensor data being captured by the sensor of the mobile device and (ii) is moving within a field of view of the sensor of the mobile device.
  • FIGS. 1A and 1B are schematic views of example eyewear recommendation environments.
  • FIG. 2 is a schematic view of an example facial detector of the recommendation systems of FIGS. 1A and 1B.
  • FIG. 3 is a schematic view of an example recommendation engine of the recommendation systems of FIGS. 1A and 1B.
  • FIG. 4 is a schematic view of an example virtual engine of the recommendation system of FIG. 1B.
  • FIG. 5 is a flow diagram of an example method of generating an eyewear recommendation.
  • FIG. 6 is a schematic view of an example computing device that may be used to implement the systems and methods described herein.
  • FIGS. 7A-7C are schematic views of example applications executing the recommendation systems of FIGS. 1A and 1B.
  • Shoe manufacturers use a plastic mold to develop the size and shape of a shoe. Even slight differences in the plastic mold cause significant fit differences, such that some manufacturers' shoe sizes tend to be narrower than the standard sizing or tend to be a different size entirely. For example, a size 9 shoe from one manufacturer may fit like a true size 8 from the standard sizing.
  • Uncertainty for online shoppers is not limited to sizing.
  • Characteristics of the purchaser or potential purchaser may also influence the aesthetics of a wearable article. For instance, people tend to have unique features or combinations of features that help determine whether a wearable article is a pleasing design. Consumer preferences may depend on characteristics of the consumer such as body shape, skin complexion, eye color, hair color, etc. For example, someone with blue eyes may enjoy their appearance more in blue articles or a complementary color such as yellow rather than a color such as green. Similarly, someone with long legs may appreciate articles that accentuate their long legs rather than their torso.
  • Cart abandonment, which occurs when a customer saves an item for purchase but then decides to purchase elsewhere or not at all, is a considerable problem for online retailers. Some online retailers experience cart abandonment with nearly 75 percent frequency. A significant proportion of cart abandonment may be due to the customer's inability to try on the clothing item. A recent retail report shows that nearly 80 percent of consumers try on a clothing item in-store before purchase, a try-on that most online retailers are unable to offer. However, when online shoppers have the opportunity to visualize the fit and aesthetics of a product, there may be a greater likelihood that the item is purchased immediately.
  • The following approach is a system that recommends a wearable article by accounting for specific characteristics of the shopper.
  • Although this recommendation system may apply to a wide variety of wearable articles, the system is illustrated with regard to eyewear recommendations.
  • Eyewear suffers from many of the fit and sizing issues previously mentioned. That is, whether a consumer enjoys a pair of eyewear depends on the size, shape, and/or style of the eyewear with respect to the consumer's face and facial characteristics. For example, some styles of eyewear generally do not appear to fit well with particular facial shapes. A square-style frame often does not appear to fit someone with a heart-shaped or triangular-shaped face well.
  • The recommendation system determines facial characteristics of the face of a user (i.e., an online shopper or retail consumer) and generates an eyewear recommendation that indicates an eyewear model that fits the face of the user characterized by the facial characteristics.
  • The eyewear recommendation environment 100 (also referred to as the environment 100) includes a user 10 associated with a user device 110.
  • The user device 110 may be any computing device capable of interfacing with a recommendation system 120 that provides the user 10 with an eyewear recommendation 122.
  • Here, the user 10 refers to a consumer or online shopper using a mobile device to shop for sunglasses.
  • Some examples of user devices 110 include, but are not limited to, desktop computing devices and mobile computing devices, such as laptops, tablets, mobile phones (e.g., smart phones), and wearable computing devices (e.g., headsets and/or watches).
  • The user device 110 includes data processing hardware 112 and memory hardware 114.
  • The data processing hardware 112 is able to execute computing instructions stored in the memory hardware 114 to perform various computing operations such as, for example, the functionality of the recommendation system 120.
  • The user device 110 is configured such that the user 10 may use one or more applications (e.g., application 116) operating on, or accessible to, the user device 110 to interface with the recommendation system 120.
  • In some examples, the recommendation system 120 is part of the application 116 executing on the user device 110.
  • The user device 110 also includes, or is in communication with, a sensor system 130.
  • The sensor system 130 generally refers to one or more sensors 132 that capture sensor data 134 for the environment 100. That is, sensors 132 may be mounted on the user device 110 to capture information about the environment 100 in the form of sensor data 134. In some implementations, one or more sensors 132 capture sensor data 134 about the user 10.
  • For example, the user device 110 includes one or more cameras as sensors 132 that have a field of view Fv configured to capture image data about the user 10 when the user 10 is within the field of view Fv.
  • FIG. 1A depicts a sensor 132 mounted on a front face of a mobile device 110 that has a field of view Fv arranged to capture image data corresponding to a face 12 of the user 10.
  • Although FIGS. 1A and 1B depict the sensor 132 as a camera, the user device 110 may include or be in communication with a variety of sensors 132 including, but not limited to, vision/image sensors (e.g., cameras such as stereo cameras or depth cameras), inertial sensors (e.g., accelerometers and/or gyroscopes), force sensors, and/or kinematic sensors.
  • Examples of sensors 132 include a camera such as a stereo camera, a time-of-flight (TOF) sensor, a scanning light-detection and ranging (LIDAR) sensor, or a scanning laser-detection and ranging (LADAR) sensor.
  • Each sensor 132 may have a field of view Fv that defines a sensing range or region corresponding to the sensor 132.
  • A sensor 132 of the user device 110 may be movable (e.g., pivotable and/or rotatable) such that the sensor 132 may, for example, change the field of view Fv about one or more axes in relation to the user device 110.
  • The sensor 132 may capture the field of view Fv at a particular frequency. For instance, one or more sensors 132 are configured to capture video for the field of view Fv.
  • That is, a sensor 132 is capable of capturing a sequence of images at a particular frame rate to constitute a video stream of image data corresponding to the field of view Fv.
  • With a video stream, the recommendation system 120 may determine facial characteristics 202 by comparing and/or differentiating aspects of the face 12 of the user 10 from different frames of image data 134.
  • Sensor data 134 captured by the one or more sensors 132 of the user device 110 may be communicated to the recommendation system 120. With the sensor data 134, the recommendation system 120 may determine facial characteristics 202 of the face 12 of the user 10 using a facial detector 200. Any facial characteristics 202 determined by the facial detector 200 are then passed to a recommendation engine 300 of the recommendation system 120 in order to generate the eyewear recommendation 122.
  • The eyewear recommendation 122 refers to an indication of one or more articles of eyewear (e.g., articles that satisfy the parameters input into and/or programmed for the recommendation system 120).
  • The eyewear recommendation 122 is a digital representation of an article of eyewear, referred to as an “eyewear model,” that the recommendation system 120 is able to digitally and/or visually convey to the user 10 (e.g., in a window or interface on the user device 110).
  • The recommendation system 120 (e.g., the recommendation engine 300) is configured to receive one or more user preferences 124 from the user 10 in order to influence the eyewear recommendation 122.
  • In this manner, the recommendation system 120 is able to generate an eyewear recommendation 122 that (i) satisfies the user preferences 124 and (ii) fits the face 12 of the user 10 characterized by the facial characteristics 202.
  • Generally speaking, user preferences 124 indicate a liking (or disliking) of some aspect of eyewear.
  • One example of a user preference 124 is a gender of the user 10, because the user's gender informs the recommendation system 120 that the user 10 prefers styles of eyewear specific to a gender or that are gender neutral (i.e., not dedicated to a particular gender).
  • Other examples of user preferences 124 include the actual style of the eyewear, lens type, lens color, lens shape, lens coatings, frame color, frame type, frame material, etc.
  • The user preferences 124 may be manually input by the user 10 at a user interface associated with the recommendation system 120 (e.g., one or more windows of an application 116 with one or more prompts and/or fields to input user preferences 124). Additionally or alternatively, the user preferences 124 may be automatically generated by interpreting user-based actions.
  • For example, the user 10 may permit the recommendation system 120 to capture purchasing data (e.g., purchasing history) or other transactional data about the user 10 (i.e., transactions that the user 10 performs using the user device 110) such that the recommendation system 120 can synthesize the captured data into one or more user preferences 124.
  • If such data indicates that the user 10 favors a particular color, the recommendation system 120 may generate a user preference 124 for that particular color.
  • Likewise, such data may lead the recommendation system 120 to generate a user preference 124 for metal eyewear frames.
  • When the actions of the user 10 indicate a propensity for athletics or sports (e.g., by purchasing sport-related goods or articles of clothing), the recommendation system 120 generates a user preference 124 for sport-style sunglasses.
  • The recommendation system 120 may communicate with a storage system 140.
  • The storage system 140 may be part of the recommendation system 120 or remotely located but in communication with the recommendation system 120.
  • The storage system 140 may refer to one or more databases or libraries of information related to eyewear models.
  • Generally speaking, an eyewear model is a digital representation of a physically existing pair of eyewear (e.g., a pair of glasses or sunglasses). That is, the eyewear model represents attributes and/or dimensions of a pair of eyewear.
  • Some examples of eyewear dimensions (i.e., quantifiable properties) that may be included in the eyewear model are frame height, nose bridge width, frame width, temple length, etc.
  • Eyewear attributes refer to properties or characteristics of the eyewear that are not readily quantifiable, such as frame color, lens shape, lens tint/color, lens reflectivity, lens coatings (e.g., anti-glare), an indication of frame/lens curvature, style of temple tips (e.g., straight temple tips or curved temple tips), etc.
  • In some implementations, the storage system 140 includes a database of marketing intelligence data 142 about one or more eyewear models or eyewear attributes.
  • Marketing intelligence data 142 generally refers to data that is not particularized to solely the user 10, but rather characteristic of more global eyewear trends or preferences (i.e., general consumer eyewear preferences/trends).
  • The recommendation system 120 may use the marketing intelligence data 142, or some form of the marketing intelligence data 142, to supplement or complement the user preference(s) 124 from the user 10.
  • For example, the recommendation system 120 may generate ratings for different eyewear models and then estimate the user's preference for unseen eyewear based on ratings for similar items.
  • Marketing intelligence data 142 may refer to data characterizing fashion recommendations, customer survey data, and/or feedback from the user 10 or a set of users.
  • The recommendation system 120 may be configured to place greater weight on marketing intelligence data 142 than on user preferences 124. This may occur when the recommendation system 120 has scarce or sparse information about a particular user 10 using the recommendation system 120. Due to scarce information (e.g., little to no user preferences 124), the recommendation system 120 may function more like a content-based recommendation system. For instance, as a content-based recommendation system, the recommendation system 120 may rank eyewear using predetermined features of the eyewear, including measurements, color, and style, based on the marketing intelligence data 142.
  • Some mechanism at the storage system 140, or at the recommendation system 120 itself, may convert the raw marketing intelligence data 142 into some form of eyewear ranking/rating system (e.g., rated eyewear models).
  • That is, the storage system 140 may include a database storing a plurality of rated eyewear models that are rated according to marketing intelligence data 142.
  • The recommendation system 120 may ensure the fit of the eyewear model indicated by the eyewear recommendation 122 based on the facial characteristics 202 of the user 10; when there are multiple eyewear models that would satisfy fit metrics, the recommendation system 120 generates one or more eyewear recommendations 122 that prioritize or rank the fitting eyewear models based on the ratings of the fitting eyewear models according to marketing intelligence data (e.g., the rated eyewear models). In other words, if marketing intelligence data indicated that a retro style was currently the most popular style of sunglasses, the recommendation system 120 would recommend retro-style sunglasses that fit the user 10 as the leading eyewear recommendation 122.
  • The recommendation system 120 may generate an eyewear recommendation 122 that includes a set of eyewear models for the user 10 (e.g., to allow the user 10 to further select from the set).
  • The set of eyewear models of the eyewear recommendation 122 may refer to the N best eyewear models.
  • The user 10 may be allowed (e.g., via the application 116 or the recommendation system 120) to designate the number N of best eyewear models to be included in the eyewear recommendation 122. For instance, the user 10 may select to be given three or five eyewear models in the eyewear recommendation 122.
  • Alternatively, the recommendation system 120 may have a customizable, yet default, number of eyewear models to be included in the eyewear recommendation 122.
  • In some configurations, the recommendation system 120 is set up such that each eyewear model included in an eyewear recommendation 122 satisfies a particular score or threshold.
  • For instance, the recommendation system 120 may use some type of optimization function that weighs or values each input parameter or preprogrammed parameter to score some or all of the eyewear models available to the recommendation system 120.
  • The function may have several terms where at least one of the terms represents how well a particular eyewear model satisfies a particular recommendation parameter.
  • For example, when the recommendation parameters are (i) fit to the face 12 of the user 10 and (ii) satisfaction of user preferences 124, the function includes at least one term that represents the degree to which a particular eyewear model fits the face 12 of the user 10 and at least one term that represents the degree to which that particular eyewear model satisfies the user preferences 124. Accordingly, the function may be scaled to include additional terms such as, for example, a term that represents the degree to which the particular eyewear model satisfies global consumer preferences (e.g., based on marketing intelligence data).
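  • As a purely illustrative sketch of such a function (the term names, weights, and [0, 1] normalization below are assumptions, not the disclosure's actual implementation), a weighted sum over normalized per-parameter scores might look like the following:

```python
# Illustrative sketch of a weighted, multi-term scoring function for ranking
# eyewear models. Term names, weights, and the [0, 1] normalization are
# hypothetical; the disclosure only describes a function with at least one
# term per recommendation parameter.

def score_eyewear_model(fit_score: float,
                        preference_score: float,
                        market_score: float,
                        w_fit: float = 0.5,
                        w_pref: float = 0.3,
                        w_market: float = 0.2) -> float:
    """Each input score is assumed normalized to [0, 1]:
    fit_score        -- how well the model fits the user's face
    preference_score -- how well the model satisfies the user preferences
    market_score     -- how well the model matches global consumer trends
    """
    return w_fit * fit_score + w_pref * preference_score + w_market * market_score

def n_best_models(models: list, n: int = 3) -> list:
    """Rank candidate eyewear models and keep the N best for the recommendation."""
    ranked = sorted(models,
                    key=lambda m: score_eyewear_model(m["fit"], m["pref"], m["market"]),
                    reverse=True)
    return ranked[:n]
```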
  • In some implementations, the recommendation system 120 performs some or all of its functionality by leveraging a remote system 160.
  • In these implementations, the user device 110 communicates with some portion of the recommendation system 120 via a network 150 with access to the remote system 160.
  • A remote system 160 may be advantageous when the processing of the recommendation system 120 is computationally expensive. For instance, when the user device 110 is a mobile phone or some other computing device with a finite amount of computing resources (e.g., a finite amount of data processing hardware 112 and memory hardware 114), this type of user device 110 is likely performing operations other than eyewear recommendations 122 and would prefer not to completely tax its finite resources or to allocate some number of resources to the recommendation system 120.
  • Here, the recommendation system 120 may be able to utilize remote computing resources, such as remote data processing hardware 162 and remote memory hardware 164, to support the computing needs of the recommendation system 120.
  • For example, processing with machine learning models can often be computationally expensive and, in these situations, if the user 10 can afford some degree of latency (e.g., compared to the recommendation system 120 operating locally on the user device 110), using a remote system 160 to perform the machine learning or neural network analysis may avoid having the user device 110 throttled by this computationally expensive activity.
  • When the recommendation system 120 leverages the remote system 160, the user device 110 may access the recommendation system 120 with a web-based application 116.
  • The facial detector 200 is generally configured to determine facial characteristics 202 of the face 12 of the user 10 using sensor data 134 captured for the face 12 of the user 10.
  • Some examples of facial characteristics 202 include facial features 212 (e.g., hair 212a, eyes 212b, nose 212c, mouth 212d, nose bridge 212e, cheeks/cheek bones 212f, lips, chin, etc.), locations of facial features, measurements (or ratios) between one or more facial features (e.g., between locations of facial features), facial attributes (i.e., facial properties that are qualitative rather than readily quantitative, such as hair color, skin tone/color, and facial feature shape), and facial classifications (e.g., a facial shape classification).
  • The facial detector 200 may include components, modules, or predictive models (e.g., machine learning models such as neural networks) that determine any number or combination of facial characteristics 202 to assist the recommendation system 120 in generating the eyewear recommendation 122.
  • The facial detector 200 includes an identifier 210, an analyzer 220, and an assessor 230.
  • The identifier 210 is configured to perform facial feature identification.
  • In some implementations, the identifier 210 identifies or detects facial features 212 using some version of a landmark detection algorithm.
  • The identifier 210 receives the sensor data 134 and generates facial contour lines CL for the face 12 of the user 10.
  • To generate the facial contour lines CL, the identifier 210 may use sensor data 134 such as inertial measurement data (e.g., IMU or accelerometer data) in combination with image data. That is, with the combination of inertial data, angles represented in the image data, and other environment feature changes detected between frames of image data (i.e., video frames), this aggregation of motion data enables the identifier 210 to determine facial contour lines CL on the face 12 of the user 10. In other words, motion information from the sensor data 134 enables the identifier 210 to define facial contour lines CL for the face 12 of the user 10 that represent a topography of the face 12 of the user 10.
  • From the facial contour lines CL, the identifier 210 is able to identify facial features 212. For example, the identifier 210 is trained to predict or determine that certain contour variations in the face 12 correspond to specific facial features 212.
  • In some configurations, the identifier 210 converts sensor data 134 to facial features 212 or facial contour lines using predictive modeling. That is, the identifier 210 is a model with a neural network trained to identify patterns in sensor data 134 (e.g., image data).
  • The neural network may include a plurality of network layers in order to identify different complex shapes that correspond to the facial features 212 of a face 12.
  • In some implementations, the identifier 210 includes a convolutional neural network (CNN) with twenty to thirty convolutional layers trained to perform pattern recognition, allowing the identifier 210 to predict the presence and/or location of facial features 212 for a face 12 characterized by sensor data 134.
  • Once trained, the identifier 210 may be implemented to predict facial features 212 and/or facial feature locations from sensor data 134 that is input into the identifier 210.
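  • As a minimal sketch of what such a landmark-regression CNN could look like (written here in PyTorch; the channel widths and 68-landmark output are illustrative assumptions, with only the roughly twenty-to-thirty convolutional layer count coming from the text):

```python
# Minimal sketch of a landmark-regression CNN in PyTorch. Channel widths and
# the 68-landmark output are assumptions; only the ~20-30 convolutional
# layer count comes from the description above.
import torch
import torch.nn as nn

class LandmarkIdentifier(nn.Module):
    """Regresses (x, y) pixel coordinates for a fixed set of facial landmarks."""

    def __init__(self, num_landmarks: int = 68):
        super().__init__()
        self.num_landmarks = num_landmarks
        blocks, in_ch = [], 3
        for i in range(10):  # 10 blocks x 2 convolutions = 20 conv layers
            out_ch = min(32 * 2 ** (i // 2), 256)
            blocks += [nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
                       nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True)]
            if i % 2 == 1:
                blocks.append(nn.MaxPool2d(2))  # halve resolution every other block
            in_ch = out_ch
        self.features = nn.Sequential(*blocks)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(in_ch, 2 * num_landmarks))

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (batch, 3, H, W) -> (batch, num_landmarks, 2) coordinates
        return self.head(self.features(image)).view(-1, self.num_landmarks, 2)
```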
  • When the identifier 210 detects a facial feature 212 at a particular location, the identifier 210 places an anchor A at that facial feature 212 at the particular location detected.
  • The identifier 210, or some other component of the recommendation system 120, can then track the anchor A over a period of time to determine facial movement (e.g., translational and/or rotational movement) of the face 12 of the user 10 from the sensor data 134.
  • The anchor A may also include a time stamp or time indicator to identify either when the identifier 210 placed the anchor A or the time corresponding to when the sensor data 134 was captured. This indication of time may assist the identifier 210 in understanding, tracking, and/or quantifying the facial movement of the face 12 of the user 10 characterized by the sensor data 134.
  • The identifier 210 communicates the facial features 212 to the analyzer 220.
  • The analyzer 220 is configured to determine a facial measurement 222 for a respective facial feature 212 relative to one or more other respective facial features 212.
  • In other words, a facial measurement 222 may refer to a measurement that identifies a relationship between at least two facial features 212.
  • Some examples of facial measurements 222 include: a vertical face distance that refers to a distance from the top of the face 12 of the user 10 (i.e., the top of the head) to the bottom of the face 12 of the user 10 (i.e., the base of the chin); a nose bridge distance that refers to a distance from the inner eye on one side of the face 12 to the inner eye on the other side of the face 12; and a horizontal face distance that refers to a width of the face 12.
  • The width of the face 12 (i.e., the horizontal face distance) may be measured across a particular vertical position along the face 12, such as the midpoint of the vertical face distance or a particular anthropometric proportion line of the face 12 (e.g., a one-third sectional line of the face 12 such as the glabella line or the subnasale line).
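  • As a small sketch of how these measurements could be derived from landmark locations (the landmark names are hypothetical keys, and the coordinates are assumed to already be in a common metric space):

```python
# Sketch: deriving the facial measurements 222 described above from landmark
# locations. Landmark names are hypothetical; coordinates are assumed to be
# in a shared metric space (e.g., millimeters after scale recovery).
import math

def facial_measurements(landmarks: dict) -> dict:
    dist = math.dist  # Euclidean distance between two (x, y[, z]) points
    return {
        # top of the head to the base of the chin
        "vertical_face_distance": dist(landmarks["head_top"], landmarks["chin_base"]),
        # inner eye corner to inner eye corner
        "nose_bridge_distance": dist(landmarks["left_inner_eye"], landmarks["right_inner_eye"]),
        # face width, e.g., measured along the glabella line
        "horizontal_face_distance": dist(landmarks["left_face_edge"], landmarks["right_face_edge"]),
    }
```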
  • In some implementations, the analyzer 220 determines the facial measurements 222 using the facial contour lines CL created by the identifier 210.
  • The sensor data 134 (e.g., motion data) captured by the sensor(s) 132 associated with the user device 110 can be used to determine changes in angles of features in the field of view Fv (e.g., changes in the contours and lines of the face 12 of the user 10).
  • With these angle changes, trigonometric functions, such as trigonometric parallax, can approximate real-world measurements without known-size reference objects.
  • This means that the analyzer 220 does not need to be informed of an object of known reference size to generate the facial measurements 222 between facial features of the face 12 of the user 10. Rather, for example, the analyzer 220 can leverage a particular combination of sensor data 134 to accurately estimate measurements of facial features 212 of the face 12 of the user 10 using trigonometric functions such as trigonometric parallax. In some examples, the analyzer 220 generates a contour mesh of the face 12 of the user 10 by performing a plurality of facial measurements 222 from the facial contour lines CL.
  • That is, the analyzer 220 performs the trigonometric functions repeatedly over the facial contour lines CL to convert a facial topography into a contour mesh of the face 12 of the user 10 that includes facial measurements 222 corresponding to facial features 212 represented by the contour mesh. For instance, with a contour mesh of the face 12 of the user 10, the analyzer 220 is able to isolate or extract specific facial measurements 222 that the recommendation system 120 may use downstream (e.g., at the assessor 230 or the recommendation engine 300).
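  • To make the parallax idea concrete, the following is a hedged sketch (the pinhole-camera assumption, variable names, and numbers are ours, not the disclosure's): if inertial data indicates the camera translated sideways by a known baseline while a facial point shifted by a measurable angle between frames, depth follows from trigonometry, and a feature's metric size follows from its angular extent.

```python
# Sketch of scale recovery via trigonometric parallax. A sideways camera
# translation (baseline, from inertial data) and the resulting angular shift
# of a facial point between two frames yield depth; a feature's metric size
# then follows from its angular extent. Assumes an idealized pinhole camera.
import math

def depth_from_parallax(baseline_m: float, parallax_rad: float) -> float:
    # Camera moved sideways by baseline_m meters; the tracked point appeared
    # to shift by parallax_rad radians: depth = baseline / tan(parallax).
    return baseline_m / math.tan(parallax_rad)

def metric_size(angular_extent_rad: float, depth_m: float) -> float:
    # Real size of a feature subtending angular_extent_rad at distance depth_m.
    return 2.0 * depth_m * math.tan(angular_extent_rad / 2.0)

# Example: a 2 cm sideways motion producing a 0.02 rad shift places the face
# at roughly 1 m; an eye-to-eye span of 0.06 rad then measures roughly 60 mm.
depth = depth_from_parallax(0.02, 0.02)   # ~1.0 m
eye_span = metric_size(0.06, depth)       # ~0.06 m
```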
  • FIGS. 7A-7C illustrate facial contour lines CL converted to a facial mesh that includes the facial measurements 222 of eye distance and facial width.
  • The motion data of the user 10 turning his head from the left (FIG. 7A) to center (FIG. 7B) to the right (FIG. 7C) enables the identifier 210 to determine facial contour lines CL on the face 12 of the user 10. From these facial contour lines CL, the identifier 210 is able to identify facial features 212 that allow the analyzer 220 to create a facial mesh with the facial measurements 222 of eye distance and facial width.
  • The assessor 230 is configured to perform one or more facial assessments to generate facial characteristics 202. These facial characteristics 202 may be more qualitative in nature than quantitative.
  • In some examples, the assessor 230 includes an attribute assessment model 232 that predicts facial characteristics 202, referred to as facial attributes 234, about the face 12 of the user 10 from sensor data 134 capturing the face 12 of the user 10.
  • Some examples of facial attributes 234 include hair color 234a and skin color 234b.
  • The attribute assessment model 232 may be a trained predictive model with a plurality of neural network layers.
  • In some implementations, the attribute assessment model 232 includes a convolutional neural network (CNN) that has been trained on a vast number of training examples to recognize (e.g., visually recognize) facial attributes 234.
  • The training examples may be a supervised set of training examples. This means that each training example may be an image or set of images with a label that informs the model 232 that the image(s) include or correspond to a particular facial attribute 234 (e.g., “this image is a face with red hair”).
  • Through this training process, the model 232 learns how to predict the label.
  • The model 232 may then proceed to inference or implementation, receive sensor data 134 as its input (i.e., an unlabeled input), and generate a facial attribute 234 as its predictive output for the given input.
  • This approach therefore enables the attribute assessment model 232 to predict facial attributes 234 for the face 12 of the user 10 even though a particular face characterized by the input sensor data 134 has never been encountered by the attribute assessment model 232.
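  • A compact sketch of this supervised training loop follows (the dataset, label set, and hyperparameters are illustrative assumptions; the model could be a CNN like the one sketched earlier, here with a classification head):

```python
# Sketch of supervised training for the attribute assessment model 232:
# images paired with attribute labels (here, a hypothetical hair-color label
# set). Dataset, classes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

HAIR_COLORS = ["black", "brown", "blonde", "red", "gray"]  # hypothetical labels

def train_attribute_model(model: nn.Module, labeled_images: DataLoader,
                          epochs: int = 10) -> nn.Module:
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(epochs):
        for images, labels in labeled_images:  # labels index into HAIR_COLORS
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
    return model

# At inference, unlabeled sensor data goes in and a predicted attribute comes out:
#   predicted = HAIR_COLORS[model(image_batch).argmax(dim=1)[0]]
```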
  • The assessor 230 may also determine a facial shape classification 236 as a facial characteristic 202 for the face 12 of the user 10.
  • To generate the facial shape classification 236, the assessor 230 may include a facial shape classification model 238.
  • This facial shape classification model 238 is similar to the attribute assessment model 232, but instead of predicting a facial attribute 234 such as hair color or skin tone, the facial shape classification model 238 predicts a facial shape for the face 12 of the user 10 based on the input sensor data 134.
  • In some implementations, the facial shape classification model 238 determines a facial map FM for the face 12 of the user 10 (based on other facial characteristics 202, such as facial features 212, facial feature locations, and/or facial measurements 222).
  • FIGS. 7A-7C depict a facial map FM for the face 12 of the user 10.
  • The facial shape classification model 238 may have a neural network, such as a CNN, similar to the attribute assessment model 232, but the facial shape classification model 238 is trained to predict the facial shape classification 236 from training examples that have supervised labels corresponding to facial shapes.
  • Although the degree of facial classification can be as simple or as complicated as the supervised set of training data is configured to be, some examples of facial shapes that a face 12 may be classified as are oval, round, square, diamond, heart, pear, and oblong.
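  • For intuition only, a crude heuristic version of such a classification might key off a couple of the facial measurements; the actual model 238 learns this mapping from labeled data, and the ratio cutoffs below are invented for the sketch:

```python
# Purely illustrative heuristic mapping facial measurements 222 to a coarse
# shape label. The actual model 238 is a trained classifier; these ratio
# cutoffs are invented for the sketch.
def facial_shape(vertical: float, horizontal: float, jaw_width: float) -> str:
    aspect = vertical / horizontal        # face length relative to width
    if aspect > 1.5:
        return "oblong"
    if aspect < 1.1:
        return "round" if jaw_width < 0.9 * horizontal else "square"
    return "heart" if jaw_width < 0.8 * horizontal else "oval"
```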
  • In some configurations, the facial shape classification model 238 and the attribute assessment model 232 may be merged into a single model that can predict facial attributes 234 as well as facial shapes (i.e., classify the facial shape) from sensor data 134.
  • FIGS. 7A-7C show the predicted facial shape classification 236 for the user 10 as an oblong face shape (e.g., based on other facial characteristics 202, such as facial features 212, facial feature locations, and/or facial measurements 222).
  • The recommendation engine 300 is configured to generate the eyewear recommendation 122.
  • The eyewear recommendation 122 may refer to a single pair of eyewear (e.g., a single pair of sunglasses) or a set of eyewear (e.g., multiple pairs of sunglasses).
  • The user 10 may prefer to utilize the recommendation system 120 to generate an eyewear recommendation 122 because the recommendation system 120 can account for the face 12 of the user 10 and the user preferences 124 in light of a significantly large number of retail eyewear options (i.e., more options than the user 10 would be capable of analyzing by manual review). It also allows the user 10 to entertain eyewear options that the user 10 may not have reviewed of their own accord without the recommendation system 120.
  • The recommendation engine 300 receives the facial characteristics 202 (e.g., the facial features 212, the facial measurements 222, the facial attributes 234, and/or the facial shape classification 236) from the facial detector 200 along with the user preferences 124. With these inputs, the recommendation engine 300 aims to generate the best matching pair of eyewear (or a set of the best matching pairs of eyewear) for the user 10 as the eyewear recommendation 122. Generally speaking, the recommendation engine 300 is able to use the facial characteristics 202 to determine which eyewear models 144 would fit the face 12 of the user 10 and to use the user preferences 124 to inform the recommendation engine 300 which eyewear models 144 that fit the face 12 of the user 10 would be preferred by the user 10.
  • In some implementations, the recommendation engine 300 is constructed from marketing intelligence data (e.g., from fashion guidance, customer surveys, or other user feedback), analysis of eyewear from stock images, and/or customer purchase behavior.
  • The recommendation engine 300 may improve over time since the recommendation engine 300 may be configured with various feedback loops that account for user feedback and purchasing behavior data (e.g., transactional data and/or transactional data trends).
  • The recommendation engine 300 includes a comparer 310 that determines the eyewear recommendation 122 by determining whether one or more existing retail eyewear models 144, 144a-n (e.g., stored in the storage system 140) have dimensions that conform to the face 12 of the user 10 characterized by the facial characteristics 202. In other words, whether an eyewear model 144 fits the face 12 of the user 10 may be determined by comparing facial characteristics 202 of the face 12 of the user 10 to eyewear model dimensions (e.g., manufacturing specifications). As shown in FIG. 3, the comparer 310 may receive multiple eyewear models 144a-n to compare against the facial characteristics 202 of the user's face 12.
  • Here, the recommendation engine 300 compares the eyewear model dimensions (e.g., shown as frame width, nose bridge width, and frame height) to the facial measurements 222 from the facial detector 200.
  • In some configurations, the comparer 310 includes one or more fit thresholds 312 for one or more facial measurements 222.
  • The fit threshold 312 refers to an indication or value that designates that a particular facial measurement 222 is within a specified tolerance of a dimension of an eyewear model 144.
  • For example, the comparer 310 compares the nose bridge measurement 222c to the nose bridge width of an eyewear model 144 (e.g., each eyewear model 144) and determines that a comparative value between these dimensions (e.g., a difference value) satisfies or does not satisfy the fit threshold 312 for the nose bridge comparison.
  • When the fit threshold 312 is satisfied, the eyewear model 144 subject to the comparison is a model 144 that fits the face 12 of the user 10 and is considered to be a candidate model 314.
  • The comparer 310 may generate one or more candidate models 314, 314a-n that fit the face 12 of the user 10.
  • The comparer 310 may be more complex such that multiple dimensions are compared between the model 144 and the face 12 of the user 10 (e.g., using facial measurements 222) and an aggregate score is compiled that represents the multiple dimensions compared.
  • The aggregate score generated by comparing multiple dimensions may be a function with tunable weights.
  • That is, the scoring function can be configured to weigh one or more particular dimension comparisons with varying degrees of importance. For instance, the nose bridge comparison may be designated as one of the most important comparisons for eyewear fit and may be given a weight that reflects (e.g., proportionally reflects) this importance.
  • In this configuration, the fit threshold 312 may be more akin to a score threshold that designates when a particular model has a score that should be considered a candidate model 314.
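  • The following is a hedged sketch of this comparer logic (dimension names, tolerances, and weights are illustrative assumptions; the facial measurement keys are assumed to mirror the eyewear dimension keys):

```python
# Sketch of the comparer 310: per-dimension differences between an eyewear
# model's dimensions and the corresponding facial measurements, combined into
# a weighted aggregate fit score. Tolerances and weights are assumptions.
FIT_TOLERANCES_MM = {"nose_bridge_width": 2.0, "frame_width": 5.0, "frame_height": 4.0}
DIMENSION_WEIGHTS = {"nose_bridge_width": 0.5, "frame_width": 0.3, "frame_height": 0.2}

def fit_score(eyewear_dims: dict, face_measurements: dict) -> float:
    score = 0.0
    for dim, tolerance in FIT_TOLERANCES_MM.items():
        diff = abs(eyewear_dims[dim] - face_measurements[dim])
        # 1.0 for a perfect match, falling linearly to 0.0 at the tolerance.
        score += DIMENSION_WEIGHTS[dim] * max(0.0, 1.0 - diff / tolerance)
    return score

def candidate_models(models: list, face_measurements: dict,
                     fit_threshold: float = 0.6) -> list:
    # Models whose aggregate score satisfies the fit threshold 312 become
    # candidate models 314.
    return [m for m in models if fit_score(m["dims"], face_measurements) >= fit_threshold]
```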
  • The comparer 310 may then communicate the one or more candidate models 314 to a candidate determiner 320.
  • The candidate determiner 320 of the recommendation engine 300 is configured to perform a process that determines whether a candidate model 314 (i.e., a model 144 determined to fit the face 12 of the user 10) satisfies the user preferences 124 provided by the user 10 (e.g., via the application 116). To make this determination, the candidate determiner 320 compares attributes of each candidate model 314 to the user preferences 124.
  • When the attribute(s) of a candidate model 314 match or share a particular degree of similarity with the user preferences 124 (e.g., designated by a similarity threshold), the recommendation engine 300 outputs that particular candidate model 314 as a recommended model 322 for an eyewear recommendation 122. For example, if the user 10 specifies a user preference 124 for blue eyewear frames, the candidate determiner 320 analyzes the attributes of each candidate model 314 provided by the comparer 310 to determine whether a particular candidate model 314 has blue frames. Although this example is a simple color preference, the user preferences 124 may be more complicated and include multiple preferences 124 (e.g., style, lens type, frame color, etc.).
  • In some implementations, the recommendation engine 300 determines whether a candidate eyewear model 314 satisfies the user preferences 124 by determining consumer eyewear preferences from the marketing intelligence data 142.
  • That is, the user preferences 124 may be compared to consumer eyewear preferences from the marketing intelligence data 142 in order to identify a model 144 to be the recommended model 322.
  • Here, the recommendation system 120 (or some other system) may use the marketing intelligence data 142 to rate or rank each eyewear model 144 (e.g., to build a library or database of rated eyewear models). The rating/ranking system may therefore be akin to a popularity-based rating/ranking system built on the marketing intelligence data 142.
  • For example, a consumer provides marketing intelligence data 142 that identifies that consumer's eyewear preferences and how that consumer ranks or rates one or more eyewear models 144.
  • The user preferences 124 may then be matched for similarity to the consumer eyewear preferences to determine eyewear models that were highly ranked/rated by consumers with similar preferences to the user 10.
  • In this manner, the recommendation engine 300 can leverage marketing intelligence data 142 to understand how likely it is that the user 10 will prefer a particular eyewear model 144 based on the shared preferences of one or more other consumers.
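  • One way this check could be sketched (the similarity measure, record layout, and threshold values are assumptions for illustration):

```python
# Sketch of the marketing-intelligence check: find consumers whose stated
# eyewear preferences resemble the user's, then test whether a candidate
# model's average rating from those consumers satisfies a rating threshold.
def preference_similarity(a: dict, b: dict) -> float:
    shared = [k for k in a if k in b]
    return sum(a[k] == b[k] for k in shared) / len(shared) if shared else 0.0

def satisfies_preferences(candidate: dict, user_prefs: dict, consumer_records: list,
                          sim_threshold: float = 0.7,
                          rating_threshold: float = 4.0) -> bool:
    similar = [c for c in consumer_records
               if preference_similarity(user_prefs, c["prefs"]) >= sim_threshold]
    ratings = [c["ratings"][candidate["id"]] for c in similar
               if candidate["id"] in c["ratings"]]
    return bool(ratings) and sum(ratings) / len(ratings) >= rating_threshold
```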
  • In some implementations, the recommendation engine 300 is an eyewear recommendation model that performs the functionality of the comparer 310 and the candidate determiner 320.
  • That is, the recommendation engine 300 may receive, as inputs, the user preferences 124 and the facial characteristics 202 and generate, as output, the eyewear recommendation 122.
  • With these inputs, the recommendation engine 300 is able to predict an eyewear model 144 for the user 10.
  • In some examples, the eyewear recommendation model is a random forest classifier model.
  • The model may be trained by ensemble training methods to perform the tasks described above with respect to the comparer 310, the candidate determiner 320, and the recommendation engine 300 more generally.
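  • As a rough sketch of the random-forest formulation (the feature layout, placeholder training data, and scikit-learn usage are our assumptions, not the disclosure's actual schema), feature vectors could pair facial characteristics with eyewear attributes, with the target indicating whether that pairing was preferred:

```python
# Sketch of an eyewear recommendation model as a random forest classifier.
# Feature layout and training data are placeholders; in practice the rows
# would come from rated eyewear models and consumer purchase behavior.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [4 face features..., 4 eyewear features...]; 1 = preferred pairing.
X_train = np.random.rand(1000, 8)           # placeholder feature vectors
y_train = np.random.randint(0, 2, 1000)     # placeholder preference labels

forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X_train, y_train)

def rank_models(face_features: np.ndarray, eyewear_rows: np.ndarray) -> np.ndarray:
    """face_features: (4,); eyewear_rows: (n, 4) -> preference probability per model."""
    pairs = np.hstack([np.tile(face_features, (len(eyewear_rows), 1)), eyewear_rows])
    return forest.predict_proba(pairs)[:, 1]
```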
  • The eyewear recommendation model may initially function like a content-based recommendation system that relies more on consumer ratings, but as the recommendation system 120 collects more purchase behavior data (e.g., transactional behavior) from recommendation system users, the recommendation system 120 and, accordingly, the eyewear recommendation model may shift towards a collaborative filtering system.
  • Collaborative filtering systems are able to use linear regression and optimization models to predict features and preferences. This means that the recommendation model can rely on its own learning of consumer preferences as they relate to particular eyewear models 144 instead of having to rely, at least partially, on marketing intelligence data 142 or manually-rated eyewear libraries.
  • The recommendation system 120 may additionally include a virtual engine 400 to display a virtual fit 402 of the recommendation 122.
  • The virtual engine 400 displays, at the user device 110 (e.g., at a user interface of the user device 110), a three-dimensional (3D) rendering 404 of the recommended model 322 from the eyewear recommendation 122 on a 3D representation 406 of the face 12 of the user 10.
  • When generating the 3D rendering 404 of the recommended model 322, the virtual engine 400 generates the 3D rendering 404 to represent a realistic scale of the recommended eyewear model 322.
  • In this sense, the 3D rendering 404 depicts how the eyewear of the recommended model 322 would fit the face 12 of the user 10 in real life (e.g., as though the user 10 were trying on the eyewear in a mirror at a retail store).
  • Some other approaches in the field may perform some aspect of virtual fitting, but generally these approaches are for customizing eyewear, i.e., to understand how to adjust dimensions of the eyewear to provide (or confirm) a better fit to the face 12 of the user 10.
  • In contrast, the recommendation system 120 performs a true measurement fit that does not scale or dimensionally change the eyewear represented by the virtual engine 400, but only ensures that the eyewear is a 1:1 ratio to the face 12 of the user 10 to represent how the eyewear would realistically fit the user 10 (e.g., in its existing retail form). That is, the virtual fit 402 is a true measurement fit that demonstrates the actual fit and look of the eyewear to simulate the user 10 having the opportunity to try on the eyewear of the recommended model 322.
  • The virtual engine 400 is also configured to generate the virtual fit 402 in real-time or near real-time as current sensor data 134 captures the face 12 of the user 10. That is, the user 10 perceives that the virtual fit 402 is moving with the user 10 with minimal latency. In this manner, the user 10 may move or rotate his or her face 12 in the field of view Fv and the virtual engine 400 is able to rotate the virtual fit 402 (i.e., the 3D rendering 404 of the recommended model 322 on the 3D representation 406 of the face 12 of the user 10).
  • The virtual engine 400 is able to realistically track the movement of the user's face 12 by anchoring one or more portions of the 3D rendering 404 of the eyewear model 322 to facial features 212 of the 3D representation 406 of the user's face 12.
  • older techniques attempted to perform some degree of tracking or alignment during virtual modeling, these approaches often had their downfalls. For instance, one approach tracked a basic triangle between a user’ forehead and two eyes, such that glasses would be aligned on a virtual face according to the horizontal line between the eyes. Yet, in this approach, when the user moved to certain positions, rotations, or even closed his/her eyes (e.g., blinking), this type of tracking would lose its reference and cause noticeable virtual misalignment.
  • the virtual engine 400 leverages the entire face 12 of the user 10 such that multiple facial anchors help ensure that the alignment of the 3D rendering 404 for the eyewear to the 3D representation 406 of the face 12 is properly maintained. This also allows the user 10 to have a greater range of motion to move his/her face without concern that the virtual fit 402 will fail or become compromised. Moreover, with the 3D rendering 404 anchored to the 3D representation 406 of the face 12, the virtual engine 400 can modify the 3D rendering 404 with the 3D representation 406 of the face 12 without distorting or misaligning the virtual fit 402. Therefore, even as the 3D representation 406 changes perspective (e.g., due to the user’s face rotating), the 3D rendering 404 proportionally changes perspective between the anchor points at facial features 212.
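  • One plausible way (an assumption here, not a detail recited by the disclosure) to anchor the rendering to several facial features at once is to solve, each frame, for the rigid transform that best maps anchor points on the eyewear mesh onto the corresponding tracked landmarks, e.g., via the standard Kabsch/SVD method:

```python
import numpy as np

def rigid_transform(anchors_src: np.ndarray, anchors_dst: np.ndarray):
    """Best-fit rotation R and translation t with anchors_dst ~ R @ src + t.

    anchors_src: (N, 3) anchor points on the eyewear mesh (e.g., nose
        pads, left/right hinge points).
    anchors_dst: (N, 3) matching facial landmarks in the current frame.
    With N >= 3 non-collinear anchors the fit is over-determined, so a
    momentarily unreliable landmark (a blink, say) cannot break the
    alignment the way a two-eye baseline can.
    """
    c_src, c_dst = anchors_src.mean(axis=0), anchors_dst.mean(axis=0)
    H = (anchors_src - c_src).T @ (anchors_dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Per frame: eyewear_vertices_world = eyewear_vertices @ R.T + t
```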
  • the virtual engine 400 uses three dimensional photogrammetry to generate the virtual fit 402.
  • Photogrammetry is able to use image data (e.g., from the sensor data 134) along with other sensor data 134 or processed sensor data 134 to determine depth and scale for a particular image.
  • with photogrammetry, multiple images can be stitched together into a 3D object by feeding multiple measurements and image analysis into a computational model.
  • photogrammetry may be used to generate the 3D rendering 404 of the recommended model 322 on the user’s face 12 (i.e., the 3D representation 406 of the face 12).
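  • For context, the core photogrammetric operation is recovering a 3D point from its projections in two (or more) images with known camera matrices. Below is a minimal linear (DLT) triangulation sketch using standard computer-vision conventions; it is not an implementation specified by the disclosure.

```python
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray,
                x1: np.ndarray, x2: np.ndarray) -> np.ndarray:
    """Linear (DLT) triangulation of a single 3D point.

    P1, P2: 3x4 camera projection matrices for two views.
    x1, x2: (u, v) pixel coordinates of the same point in each view.
    Solving A @ X = 0 for the homogeneous point X recovers depth; a
    photogrammetry pipeline repeats this over many matched points to
    stitch images into a scaled 3D object.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # de-homogenize to (x, y, z)
```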
  • the user 10 may receive, at a user interface associated with the user device, multiple recommended eyewear models 322.
  • FIG. 4 depicts the application 116 displaying two recommended eyewear models 322a, 322b.
  • the user 10 can select (by a user selection 16) which of the recommended eyewear models 322 to display as a virtual fit 402.
  • the user 10 selects the first recommended eyewear model 322a and the virtual engine 400 generates a virtual fit 402 for the first recommended eyewear model 322a upon receipt of the user selection 16.
  • FIG. 5 is a flow diagram of a method 500 of generating an eyewear recommendation 122 for the user 10.
  • the method 500 receives sensor data 134 from a sensor 132 of a mobile device 110, where the sensor data 134 captures a face 12 of a user 10 of the mobile device 110.
  • the method 500 receives user eyewear preferences 124 from the user 10 of the mobile device 110.
  • the method 500 determines facial characteristics 202 of the face 12 of the user 10 using the sensor data 134 captured for the face 12 of the user 10.
  • the method 500 generates an eyewear recommendation 122 for the user 10 using an eyewear recommendation model 300.
  • the eyewear recommendation model 300 is configured to receive, as inputs, the user eyewear preferences 124 and the facial characteristics 202 and to generate, as output, the eyewear recommendation 122 indicating an eyewear model 144 that satisfies the user eyewear preferences 124 and fits the face 12 of the user 10 characterized by the facial characteristics 202.
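  • A minimal end-to-end sketch of the four operations of the method 500 follows. The data shapes, catalog fields, fit tolerance, and helper names are illustrative assumptions only; the disclosure does not prescribe a concrete data model or scoring rule.

```python
from dataclasses import dataclass

@dataclass
class FacialCharacteristics:      # stands in for facial characteristics 202
    face_width_mm: float

@dataclass
class EyewearPreferences:         # stands in for user eyewear preferences 124
    style: str
    max_price: float

def determine_facial_characteristics(sensor_data: dict) -> FacialCharacteristics:
    # Placeholder for deriving characteristics from the sensor data 134.
    return FacialCharacteristics(face_width_mm=sensor_data["face_width_mm"])

def recommend(prefs: EyewearPreferences, face: FacialCharacteristics,
              catalog: list) -> list:
    # Placeholder for the eyewear recommendation model 300: keep models
    # that satisfy the preferences and fit the measured face.
    return [m for m in catalog
            if m["style"] == prefs.style
            and m["price"] <= prefs.max_price
            and abs(m["front_width_mm"] - face.face_width_mm) <= 6.0]

sensor_data = {"face_width_mm": 136.0}                      # receive sensor data 134
prefs = EyewearPreferences(style="round", max_price=180.0)  # receive preferences 124
face = determine_facial_characteristics(sensor_data)        # determine characteristics 202
catalog = [
    {"id": "322a", "style": "round", "price": 149.0, "front_width_mm": 134.0},
    {"id": "322b", "style": "round", "price": 210.0, "front_width_mm": 138.0},
]
print([m["id"] for m in recommend(prefs, face, catalog)])   # -> ['322a']
```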
  • FIG. 6 is a schematic view of an example computing device 600 that may be used to implement the systems (e.g., the recommendation system 120) and methods (e.g., the method 500) described in this document.
  • the computing device 600 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers.
  • the components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document.
  • the computing device 600 includes a processor 610, memory 620, a storage device 630, a high-speed interface/controller 640 connecting to the memory 620 and high-speed expansion ports 650, and a low-speed interface/controller 660 connecting to a low-speed bus 670 and the storage device 630.
  • Each of the components 610, 620, 630, 640, 650, and 660 is interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate.
  • the processor 610 can process instructions for execution within the computing device 600, including instructions stored in the memory 620 or on the storage device 630 to display graphical information for a graphical user interface (GUI) on an external input/output device, such as display 680 coupled to high speed interface 640.
  • multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory.
  • multiple computing devices 600 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
  • the memory 620 stores information non-transitorily within the computing device 600.
  • the memory 620 may be a computer-readable medium, a volatile memory unit(s), or non-volatile memory unit(s).
  • the non-transitory memory 620 may be physical devices used to store programs (e.g., sequences of instructions) or data (e.g., program state information) on a temporary or permanent basis for use by the computing device 600.
  • non-volatile memory examples include, but are not limited to, flash memory and read-only memory (ROM) / programmable read-only memory (PROM) / erasable programmable read-only memory (EPROM) / electronically erasable programmable read-only memory (EEPROM) (e.g., typically used for firmware, such as boot programs).
  • volatile memory examples include, but are not limited to, random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), phase change memory (PCM) as well as disks or tapes.
  • the storage device 630 is capable of providing mass storage for the computing device 600.
  • the storage device 630 is a computer- readable medium.
  • the storage device 630 may be a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations.
  • a computer program product is tangibly embodied in an information carrier.
  • the computer program product contains instructions that, when executed, perform one or more methods, such as those described above.
  • the information carrier is a computer- or machine-readable medium, such as the memory 620, the storage device 630, or memory on processor 610.
  • the high-speed controller 640 manages bandwidth-intensive operations for the computing device 600, while the low-speed controller 660 manages lower bandwidth-intensive operations. Such allocation of duties is exemplary only.
  • the high-speed controller 640 is coupled to the memory 620, the display 680 (e.g., through a graphics processor or accelerator), and to the high-speed expansion ports 650, which may accept various expansion cards (not shown).
  • the low-speed controller 660 is coupled to the storage device 630 and a low-speed expansion port 690.
  • the low-speed expansion port 690, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.
  • the computing device 600 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 600a or multiple times in a group of such servers 600a, as a laptop computer 600b, or as part of a rack server system 600c.
  • Various implementations of the systems and techniques described herein can be realized in digital electronic and/or optical circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof.
  • These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
  • Implementations of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them.
  • subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus.
  • the computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter affecting a machine-readable propagated signal, or a combination of one or more of them.
  • the term "data processing apparatus" encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.
  • the apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
  • a propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus.
  • a computer program (also known as an application, program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program does not necessarily correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code).
  • a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • the processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output.
  • the processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read only memory or a random access memory or both.
  • the essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
  • mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
  • a computer need not have such devices.
  • a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio player, a Global Positioning System (GPS) receiver, to name just a few.
  • Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
  • one or more aspects of the disclosure can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube), LCD (liquid crystal display) monitor, or touch screen for displaying information to the user and optionally a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer.
  • Other kinds of devices can be used to provide interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
  • One or more aspects of the disclosure can be implemented in a computing system that includes a backend component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a frontend component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such backend, middleware, or frontend components.
  • the components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network.
  • Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), an inter-network (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks).
  • the computing system can include clients and servers.
  • a client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
  • a server transmits data (e.g., an HTML page) to a client device (e.g., for purposes of displaying data to and receiving user input from a user interacting with the client device).
  • Data generated at the client device (e.g., a result of the user interaction) can be received from the client device at the server.

Abstract

A method of generating an eyewear recommendation for a user includes receiving sensor data from a sensor of a mobile device, the sensor data capturing a face of a user of the mobile device, and receiving user eyewear preferences from the user of the mobile device. The method also includes determining facial characteristics of the face of the user using the sensor data captured for the face of the user. The method further includes generating an eyewear recommendation for the user using an eyewear recommendation model. The eyewear recommendation model is configured to receive, as inputs, the user eyewear preferences and the facial characteristics and to generate, as output, the eyewear recommendation indicating an eyewear model that (i) satisfies the user's preferences and (ii) fits the face of the user characterized by the facial characteristics.
PCT/US2022/082603 2021-12-30 2022-12-30 User-device-assisted eyewear recommendation WO2023130057A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163295500P 2021-12-30 2021-12-30
US63/295,500 2021-12-30

Publications (1)

Publication Number Publication Date
WO2023130057A1

Family

ID=87000344

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/082603 WO2023130057A1 (fr) 2021-12-30 2022-12-30 User-device-assisted eyewear recommendation

Country Status (1)

Country Link
WO (1) WO2023130057A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130088490A1 (en) * 2011-04-04 2013-04-11 Aaron Rasmussen Method for eyewear fitting, recommendation, and customization using collision detection
US9086582B1 (en) * 2014-08-20 2015-07-21 David Kind, Inc. System and method of providing custom-fitted and styled eyewear based on user-provided images and preferences
US20180252942A1 (en) * 2015-09-12 2018-09-06 Shamir Optical Industry Ltd. Automatic eyewear measurement and specification
US20200349631A1 (en) * 2017-10-19 2020-11-05 NWO Group Pty Ltd System and Method for Eyewear Recommendation
US20210088811A1 (en) * 2019-09-24 2021-03-25 Bespoke, Inc. d/b/a Topology Eyewear. Systems and methods for adjusting stock eyewear frames using a 3d scan of facial features

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GU, Xiaoling; SHOU, Lidan; PENG, Pai; CHEN, Ke; WU, Sai: "iGlasses: A Novel Recommendation System for Best-fit Glasses", ACM, 2016, pages 1109 - 1112, XP058697030, ISBN: 978-1-4503-4721-1, DOI: 10.1145/2911451.2911453 *

Similar Documents

Publication Publication Date Title
US11914226B2 (en) Method and system to create custom, user-specific eyewear
US20210065285A1 (en) Virtual fitting systems and methods for spectacles
US20180268458A1 (en) Automated recommendation and virtualization systems and methods for e-commerce
CN105512931A (zh) Method and device for online intelligent eyeglass fitting
US10685457B2 (en) Systems and methods for visualizing eyewear on a user
CN115803750B (zh) Virtual try-on system for spectacles using a reference frame
Zhang et al. Fashion evaluation method for clothing recommendation based on weak appearance feature
WO2023130057A1 (fr) User-device-assisted eyewear recommendation
Welivita et al. Virtual product try-on solution for e-commerce using mobile augmented reality
SWPNM et al. Virtual Dressing Room: Smart Approach to Select and Buy Clothes
CN114943572A (zh) Size comparison system and method including online commerce instances using the size comparison system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22917593

Country of ref document: EP

Kind code of ref document: A1