CN116529750A - Method and system for interface for product personalization or recommendation


Info

Publication number
CN116529750A
Authority
CN
China
Prior art keywords
user
data
product
metrics
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202180055866.0A
Other languages
Chinese (zh)
Inventor
S·V·艾伦
T·M·沃勒
P·R·D·桑德
A·S·卡斯加
R·J·盖瑟科尔
J·马科夫斯基
J·J·萨恩特里
E·M·巴克里奇
S·E·戈登
T·J·史密斯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
lululemon athletica canada inc.
Original Assignee
lululemon athletica canada inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by lululemon athletica canada inc.
Publication of CN116529750A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0282 Rating or review of business operators or products
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0002 Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network
    • A61B5/0015 Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network characterised by features of the telemetry system
    • A61B5/0022 Monitoring a patient using a global network, e.g. telephone networks, internet
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1118 Determining activity level
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0631 Item recommendations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/01 Social networking
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2503/00 Evaluating a particular growth phase or type of persons or animals
    • A61B2503/12 Healthy persons not otherwise provided for, e.g. subjects of a marketing survey
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2560/00 Constructional details of operational features of apparatus; Accessories for medical measuring apparatus
    • A61B2560/02 Operational features
    • A61B2560/0242 Operational features adapted to measure environmental factors, e.g. temperature, pollution
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2562/00 Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
    • A61B2562/02 Details of sensors specially adapted for in-vivo measurements
    • A61B2562/029 Humidity sensors
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/021 Measuring pressure in heart or blood vessels
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/024 Detecting, measuring or recording pulse rate or heart rate
    • A61B5/02405 Determining heart rate variability
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/08 Detecting, measuring or recording devices for evaluating the respiratory organs
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1112 Global tracking of patients, e.g. by using GPS
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/145 Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue
    • A61B5/14542 Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue for measuring blood gases
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/163 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state by tracking eye movement, gaze, or pupil change
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/4803 Speech analysis specially adapted for diagnostic purposes
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/68 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6887 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient mounted on external non-worn devices, e.g. non-medical devices
    • A61B5/6898 Portable consumer electronic devices, e.g. music players, telephones, tablet computers

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Molecular Biology (AREA)
  • Economics (AREA)
  • Veterinary Medicine (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Marketing (AREA)
  • Development Economics (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Biophysics (AREA)
  • Human Resources & Organizations (AREA)
  • Game Theory and Decision Science (AREA)
  • Psychology (AREA)
  • Psychiatry (AREA)
  • Hospice & Palliative Care (AREA)
  • Educational Technology (AREA)
  • Developmental Disabilities (AREA)
  • Child & Adolescent Psychology (AREA)
  • Computing Systems (AREA)
  • Social Psychology (AREA)
  • Tourism & Hospitality (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Primary Health Care (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physiology (AREA)

Abstract

A system is described for providing an interface with personalized products or product recommendations by capturing user data using sensors and calculating physical and/or emotional markers of a user. The user data includes at least image data, text input, biometric data, and audio data, and may be captured using one or more sensors on a user device. The user data is processed using at least one of: facial analysis, body analysis, eye tracking, behavioral analysis, social network analysis, location analysis, user activity analysis, speech analysis, and text analysis. Based on the user data, one or more states of one or more cognitive emotional abilities of the user can be determined, and the emotional markers can be calculated from those states. One or more personalized products or product recommendations are generated based on the body markers and/or emotional markers in order to improve those markers.

Description

Method and system for interface for product personalization or recommendation
Technical Field
The present disclosure relates generally to the field of computing, and in particular, to methods and systems for interfaces for products and/or services that involve capturing user attributes and categorizing the user attributes for personalization or recommendation of the products and/or services. The methods and systems may involve an interface for capturing user measurements and activities using sensors and other data sources, determining physical and emotional markers of a user for personalization or recommendation of products and/or services, and generating personalization or recommendation of products and/or services based on the physical and emotional markers of the user.
Background
Embodiments described herein relate to an automated system for personalization or recommendation of products and/or services that may involve detecting a person's physical characteristics, personality type, emotions, and other emotional characteristics through the use of different information capture technologies, including invasive and non-invasive sensors. The system may attempt to establish a person's current physical and emotional state based on data captured by various sensors, such as heart rate, facial expression, or the intonation of their voice. A person exhibiting, or desiring to reach, a particular physical and/or emotional health state may benefit from product or service assistance, and different types of products, activities, coaching sessions, and therapies may be used to help a person enhance their general physical and/or emotional fitness or wellbeing. In one aspect, embodiments described herein relate to an automated system that provides personalization or recommendation of products and/or services tailored to an individual's specific personality and current physical and emotional health state as captured by different information capture devices, such as sensors.
Disclosure of Invention
Embodiments relate to methods and systems with non-transitory memory storing data records for product personalization using body markers and emotional markers of a user.
Embodiments relate to methods and systems with non-transitory memory storing data records for product recommendations using body markers and emotional markers of a user.
In one aspect, a system is provided for providing an interface for product personalization using body markers and emotional markers of a user. The system involves a non-transitory memory storing an attribution database of at least one of product measurement records, movement features, perceptual preference features, body marker features, emotional marker features, user records, product records, and generated design models. The system has a hardware processor programmed with executable instructions for an interface to: obtain user data for a user session over a period of time, transmit a product request for the user session, display a visualization of a product generated for the user session in response to the product request, and receive quantitative and qualitative feedback data regarding the product. The system has a hardware server coupled to the memory to access the attribution database. The hardware server is programmed with executable instructions to: select a product category and product variables in response to receiving the product request from the interface; extract user attributes from the user data of the user session and associated with the product variables, the user attributes including at least one of a measurement metric, a movement metric, a perceptual preference metric, a body marker metric, an emotional marker metric, a purchase history, and an activity intent; calculate target parameters of a target sensory state of the user using the extracted user attributes; generate the product and associated manufacturing instructions by processing the extracted user attributes and the target parameters of the target sensory state using the generated design model and the attribution database; transmit the visualization of the product to the interface; and update the attribution database or the user record based on the feedback data regarding the product. The system involves a user device comprising: one or more sensors for capturing the user data of the user session during the time period; and a transmitter for transmitting the captured user data to the interface of the hardware processor or to the hardware server over a network to generate the product for the user session.
In some embodiments, the interface receives purchase instructions for the product, and wherein the hardware server transmits manufacturing instructions for the product in response to the received purchase instructions.
In some embodiments, the hardware server generates the product and the associated manufacturing instructions by generating a bill of materials file.
In some embodiments, the product comprises video content, and wherein the hardware server generates the product and associated code file by assembling a content file for the video content.
In some embodiments, the interface receives a modification request for the product, and wherein the hardware server updates the product and the associated manufacturing instructions based on the modification request.
In some embodiments, the user device captures the user data from a plurality of channels, wherein the user data includes at least one of image data related to the user, text input related to the user, data defining physical or behavioral characteristics of the user, and audio data related to the user.
In some embodiments, the hardware server is programmed with executable instructions to calculate activity metrics, cognitive emotional ability metrics, and social metrics using the user data and the user attributes of the user session by: for the image data and the data defining the physical or behavioral characteristics of the user, using at least one of facial analysis, body analysis, eye tracking, behavioral analysis, social network or graph analysis, location analysis, and user activity analysis; for the audio data, using speech analysis; and for the text input, using text analysis; calculating one or more states of one or more cognitive emotional abilities of the user based on the cognitive emotional ability metrics and the social metrics; calculating the emotional marker metrics of the user based on the one or more states of the one or more cognitive emotional abilities of the user; and generating the product based on at least one of the emotional markers of the user, the activity metrics, the product record, and/or the user record.
In some embodiments, the hardware server generates the measurement metrics using at least one of 3D scanning, machine learning prediction, and user measurements.
In some embodiments, the product comprises a garment, and the hardware server generates the measurement metrics using garment measurements captured from garment data for the product.
In some embodiments, the hardware server generates the movement metric based on at least one of Inertial Measurement Unit (IMU) data, computer vision, pressure data, and radio frequency data.
In some embodiments, the hardware server extracts the user attributes from user data, the user attributes including at least one of purchase history, activity intent, interaction history, and comment data of the user.
In some embodiments, the hardware server generates the perceptual preference metrics based on at least one of clothing feel, preferred hand feel, thermal preference, and movement feel.
In some embodiments, the hardware server generates the emotional marker metrics based on at least one of personality data, emotional state data, emotional fitness data, personal values data, goal data, and physiological data.
In some embodiments, the hardware processor calculates the preferred sensory state as part of the extracted user attributes.
In some embodiments, the hardware server calculates social marker metrics, connectivity metrics, and/or resonance marker metrics.
In some embodiments, the user device is connected to or integrated with an immersive hardware device that captures audio data, the image data, and data defining physical or behavioral characteristics of the user as part of the user data.
In some embodiments, the non-transitory memory has a content repository and the hardware server has a content syndication engine that generates content as part of the product and transmits the generated content to the interface.
In some embodiments, the hardware processor receives object identification data and calculates a preferred sensory state as part of the object identification data.
In some embodiments, the product includes content for display or play on the hardware processor or the user device.
In some embodiments, the product relates to a garment, wherein the attribution database includes simulated garment records, wherein the hardware server generates simulated product options as part of the product and the associated manufacturing instructions, and wherein the interface displays a visualization of the simulated product options.
In some embodiments, the simulated product options include at least one of software physical simulation, hardware physical simulation, static 3D viewer, and AR/VR experience content.
In some embodiments, the hardware server uses multi-modal feature extraction to extract product variables or attributes and categorize the product variables and attributes.
In some embodiments, the hardware server classifies different types of data streams for the user data for multimodal feature extraction.
In some embodiments, the user data includes image data related to the user, text input related to the user, data defining physical or behavioral characteristics of the user, and audio data related to the user. The hardware server uses multi-modal feature extraction to extract the user attributes: for the image data and the data defining the physical or behavioral characteristics of the user, the multi-modal feature extraction implements at least one of facial analysis, body analysis, eye tracking, behavioral analysis, social network or graph analysis, location analysis, and user activity analysis; for the audio data, the multi-modal feature extraction performs speech analysis; and for the text input, the multi-modal feature extraction performs text analysis. One or more states of one or more cognitive emotional abilities of the user are then calculated based on the cognitive emotional ability metrics and the social metrics.
In some embodiments, the hardware server extracts the user attributes by calculating emotional marker metrics based on one or more states of one or more cognitive emotional abilities of the user and social metrics of the user.
In some embodiments, the non-transitory memory stores a classifier for generating data defining physical or behavioral characteristics of the user, and the hardware server extracts the user attributes by calculating activity metrics, cognitive emotional ability metrics, and social metrics using the classifier.
In some embodiments, the non-transitory memory stores a user model corresponding to the user, and the hardware server uses the user model to calculate the emotional marker metrics for the user.
In some embodiments, a system has one or more modulators in communication with one or more environmental fixtures to change an external sensory environment based on the product, the one or more modulators in communication with a hardware server to automatically modulate the external sensory environment of the user during the user session.
In some embodiments, the one or more environmental fixtures include at least one of: a lighting fixture, an audio system, a fragrance diffuser, and a temperature regulation system.
In some embodiments, the system has a plurality of data channels for a plurality of different types of sensors for capturing different types of user data during the user session, each of the data channels transmitting the captured user data to the hardware server over the network to generate the product.
In some embodiments, the hardware server is configured to: determine emotional markers of one or more additional users; determine users with similar emotional markers; predict connectivity between users with similar emotional markers; and generate the product using data corresponding to the users having similar emotional markers.
In some embodiments, the interface may transmit another product request for the user session and provide additional visualizations of another product for the user session received in response to the other product request.
In some embodiments, the product includes a program for display or playback on a computing device, wherein the program includes two or more phases, each phase having different content, intensity, or duration.
In some embodiments, the user data comprises personality type data, wherein the hardware server calculates the emotional marker metric by determining a personality type of the user, comparing the user's personality type data with stored personality type data indicating correlations between personality types and personality type data.
In some embodiments, the hardware server calculates, as part of the emotional marker metrics, at least one of: one or more emotional states of the user, one or more attention states of the user, one or more sociophilic states of the user, one or more motivational states of the user, one or more reappraisal states of the user, and one or more insight states of the user.
In some embodiments, the interface is a coaching application for improving the health of the user based on the product and at least one of the body marker metric, the emotional marker metric, and a perceptual preference metric.
In one aspect, a method for providing an interface for generating a product is provided. The method involves: storing, in non-transitory memory, an attribution database of product measurement records, movement features, perceptual preference features, body marker and emotional marker features, user records, and a generated design model; capturing user data of a user session over a period of time; in response to receiving a product request from the interface, selecting a product category and product variables; extracting user attributes from the user data and associated with the product variables, the user attributes including at least one of a measurement metric, a movement metric, a perceptual preference metric, and body marker and emotional marker metrics; calculating target parameters of a target sensory state of the user using the extracted user attributes; generating a product and associated manufacturing instructions by processing the extracted user attributes and the target parameters of the target sensory state using the generated design model and the attribution database, wherein generating the product is based on emotional and physical markers of the user; displaying a visualization of the product at the interface with a selectable purchase option; receiving purchase instructions for the product in response to selection of the selectable purchase option at the interface; transmitting manufacturing instructions for the product to a manufacturing queue to trigger production and delivery of the product; receiving feedback data regarding the product; and updating the attribution database or user model based on the feedback data.
In some embodiments, the method involves receiving a modification request for the product at the interface; and updating the product and the associated manufacturing instructions based on the modification request.
In one aspect, a system for providing an interface with product recommendations is provided. The system involves a non-transitory memory storing an attribution database of at least one of product measurement records, movement features, perceptual preference features, body marker and emotional marker features, user records, and a generated design model. The system involves a hardware processor programmed with executable instructions for an interface to: obtain user data for a user session over a period of time, transmit a product request for the user session, provide a product recommendation for the user session in response to the product request, receive a selected product of the product recommendation, and receive feedback data regarding the selected product. The system involves a hardware server coupled to the memory to access the attribution database. The hardware server is programmed with executable instructions to: select a product category and product variables in response to receiving the product request from the interface; extract user attributes from the user data of the user session and associated with the product variables, the user attributes including at least one of a measurement metric, a movement metric, a perceptual preference metric, a body marker metric, and an emotional marker metric; calculate target parameters of a target sensory state of the user using the extracted user attributes; calculate a product recommendation using a recommendation system to process the extracted user attributes and the target parameters of the target sensory state, the product recommendation being calculated using emotional and physical markers; transmit the product recommendation to the interface over a network; receive a notification of the selected product from the interface; receive feedback data regarding the selected product; and update the attribution database based on the feedback data regarding the selected product. The system involves at least one data channel having: one or more sensors for capturing user data during the time period; and a transmitter for transmitting the captured user data to the interface of the hardware processor or to the hardware server over the network to calculate the product recommendation.
In some embodiments, the user attributes include quantitative user attributes, such as body metrics, and qualitative user attributes, including the emotional marker metrics and/or perceived preferences of the user.
In some embodiments, the interface further comprises a voice interface for communicating the product recommendation and the product request.
In some embodiments, the hardware server generates the selected product by processing the extracted user attributes and the target parameters of the target sensory state using a generated design model and the attribution database.
In some embodiments, the hardware server receives personalization data to generate the selected product.
In some embodiments, the hardware server generates the selected product and associated code file by generating a bill of materials file.
In some embodiments, the hardware server generates the selected product and associated code file by compiling content files.
In some embodiments, the interface receives a modification request for the selected product, and wherein the hardware server updates the selected product and the associated code file based on the modification request.
In some embodiments, the hardware server is programmed with executable instructions to calculate activity metrics, cognitive emotional ability metrics, and social metrics using the user data and the user attributes of the user session by: for the image data and the data defining the physical or behavioral characteristics of the user, using at least one of facial analysis, body analysis, eye tracking, behavioral analysis, social network or graph analysis, location analysis, and user activity analysis; for the audio data, using speech analysis; and for the text input, using text analysis; calculating one or more states of one or more cognitive emotional abilities of the user based on the cognitive emotional ability metrics and the social metrics; calculating the emotional marker metrics of the user based on the one or more states of the one or more cognitive emotional abilities of the user; and calculating the product recommendation based on the emotional markers of the user, the activity metrics, the product record, and the user record.
In some embodiments, the hardware server generates the measurement metrics using at least one of 3D scanning, machine learning prediction, user measurements, and garment measurements.
In some embodiments, the hardware server generates the movement metric based on at least one of Inertial Measurement Unit (IMU) data, computer vision, pressure data, and radio frequency data.
In some embodiments, the hardware server extracts the user attributes from user data, the user attributes including at least one of purchase history, activity intent, interaction history, and comment data of the user.
In some embodiments, the hardware server generates the perceptual preference metrics based on at least one of movement, touch, temperature, vision, smell, sound, taste, clothing sensation, preferred hand feel, thermal preference, and movement sensation.
In some embodiments, the hardware server generates the emotional marker metrics based on at least one of personality data, emotional state data, emotional fitness data, personal values data, goal data, and physiological data.
In some embodiments, the hardware processor calculates the preferred sensory state as part of the extracted user attributes.
In some embodiments, the emotional marker metrics include social marker metrics, connectivity metrics, and/or resonance marker metrics.
In some embodiments, the user device is connected to or integrated with an immersive hardware device that captures audio data, the image data, and data defining physical or behavioral characteristics of the user as part of the user data.
In some embodiments, the selected product includes content for display or play on a computing device.
In some embodiments, the non-transitory memory has a content repository and the hardware server has a content syndication engine that generates content as part of the selected product and transmits the generated content to the interface.
In some embodiments, the hardware processor receives object identification data and calculates a preferred sensory state as part of the user data.
In some embodiments, the attribution database includes simulated garment records, wherein the hardware server generates simulated product options as part of the product recommendation, and wherein the interface displays a visualization of the simulated product options.
In some embodiments, the simulated product options include at least one of software physical simulation, hardware physical simulation, static 3D viewer, and AR/VR experience content.
In some embodiments, the user data includes image data related to the user, text input related to the user, data defining physical or behavioral characteristics of the user, and audio data related to the user, and the hardware server uses multi-modal feature extraction to extract the user attributes: for the image data and the data defining the physical or behavioral characteristics of the user, the multi-modal feature extraction implements at least one of facial analysis, body analysis, eye tracking, behavioral analysis, social network or graph analysis, location analysis, and user activity analysis; for the audio data, the multi-modal feature extraction performs speech analysis; and for the text input, the multi-modal feature extraction performs text analysis. One or more states of one or more cognitive emotional abilities of the user are then calculated based on the cognitive emotional ability metrics and the social metrics.
In some embodiments, the hardware server extracts the user attributes by calculating emotional marker metrics based on one or more states of one or more cognitive emotional abilities of the user and social metrics of the user.
In some embodiments, the non-transitory memory stores a classifier for generating data defining physical or behavioral characteristics of the user, and the hardware server extracts the user attributes by calculating activity metrics, cognitive emotional ability metrics, and social metrics using the classifier.
In some embodiments, the non-transitory memory stores a user model corresponding to the user, and the hardware server uses the user model to calculate the emotional marker metrics for the user.
In some embodiments, the system includes a plurality of user devices each having a different type of sensor for capturing different types of user data during the user session, each of the plurality of devices transmitting the captured different types of user data to the hardware server over the network to generate the product recommendation.
In some embodiments, the hardware server is configured to: determine emotional and physical markers of one or more additional users; determine users with similar emotional or physical markers; predict connectivity between users with similar emotional or physical markers; and generate the product recommendation using data corresponding to the users having similar emotional or physical markers.
In some embodiments, the interface may transmit another product request for the user session and provide additional visualizations of other product recommendations for the user session received in response to the other product request.
In some embodiments, the product includes a program for display or playback on the hardware processor or the user device, wherein the program includes two or more phases, each phase having a different content, intensity, or duration.
In some embodiments, the user data includes personality type data, wherein the hardware server calculates the emotional marker metric by determining a personality type of the user, comparing the user's personality type data with stored personality type data indicative of correlations between personality types and personality type data.
In some embodiments, the hardware server calculates, as part of the emotional marker metrics, at least one of: one or more emotional states of the user, one or more attention states of the user, one or more sociophilic states of the user, one or more motivational states of the user, one or more reappraisal states of the user, and one or more insight states of the user.
In some embodiments, the interface is a coaching application for improving the health of the user based on the product recommendation and the emotional marker metric.
This summary does not necessarily describe the full scope of all aspects. Other aspects, features, and advantages will become apparent to those of ordinary skill in the art upon review of the following description of specific embodiments.
Drawings
Embodiments of the present disclosure will now be described with reference to the accompanying drawings, in which:
FIG. 1 illustrates a system for generating a product or product recommendation for a user based on physical and emotional markers of the user, according to embodiments of the disclosure;
FIG. 2 illustrates a user device that may be used by a user of the system of FIG. 1, in accordance with an embodiment of the present disclosure;
FIG. 3 illustrates example emotional marker data relating user data, cognitive emotional state detection types, cognitive emotional abilities, and personality types, in accordance with an embodiment of the present disclosure;
FIG. 4 illustrates an example process for generating a product according to an embodiment of this disclosure;
FIG. 5A illustrates a flowchart of a process for generating a product for a user based on physical and emotional markers of the user, according to embodiments of the disclosure;
FIG. 5B illustrates a flowchart of a process for generating a product for a user based on physical and emotional markers of the user, according to embodiments of the disclosure;
FIG. 6 illustrates a flowchart of a process for generating product recommendations for a user based on physical and emotional markers of the user, according to embodiments of the disclosure;
FIG. 7 illustrates a flowchart of a process for generating product recommendations for a user based on physical and emotional markers of the user, according to embodiments of the disclosure;
FIG. 8 illustrates an information capture process for generating a product or recommendation for a user based on physical and emotional markers of the user, according to embodiments of the disclosure;
FIG. 9 illustrates a process for manufacturing a product for a user according to an embodiment of the present disclosure;
FIG. 10 illustrates a content delivery process for a user according to an embodiment of the present disclosure;
FIG. 11 illustrates a process for capturing feedback data according to an embodiment of the present disclosure;
FIG. 12 illustrates a diagram of an example computing device;
FIG. 13 illustrates a system for generating a product or product recommendation for a user based on physical and emotional markers of the user, according to embodiments of the disclosure;
FIG. 14 illustrates an example interface providing product recommendations according to embodiments of the present disclosure; and
Fig. 15 illustrates an example interface providing personalized products according to an embodiment of this disclosure.
Detailed Description
Embodiments relate to methods and systems for personalizing or recommending products for a user based on calculated body markers and emotional markers derived from user data captured using sensors or other means, such as text input received from an interactive questionnaire at an interface. While various embodiments of the present disclosure are described below, the present disclosure is not limited to these embodiments, and variations of these embodiments fall within the scope of the present disclosure, which is defined only by the appended claims. Embodiments described herein may be used for personalization or recommendation of products and/or services. The term product as used herein may refer to a product and/or a service. Example products are garments and other wearable products such as shirts, pants, jackets, bras, undergarments, hats, gloves, and scarves, or other types of products such as a yoga mat that challenges and supports the user's exercise stage and is built for the user's unique body shape, activity preferences, and physical characteristics such as height. Additional examples include glasses, bags, fitness equipment, meal bags, foods, beverages, footwear, self-care products, nutritional supplements, and the like. An example service is video content, such as video content for a workout or activity. Additional examples include e-learning or coaching services with emotionally compatible content. The content may relate to activities, learning, business, romantic partners, or coaches or instructors, recommending persons with whom the user may resonate based on social markers. The services may relate to diet plans, training plans, and other self-care services, including hydrotherapy care.
Fig. 1 illustrates an embodiment of a system 100 for product personalization or recommendation using body data and emotional data from a user. The system 100 includes a hardware server 10, a database 12 stored on non-transitory memory, a network 14, and user devices 16. The user device 16 may be an immersive hardware device for interacting with a user and capturing user data. The server 10 has a hardware processor communicatively coupled to the database 12 stored on non-transitory memory and is operable to access data stored on the database 12. The server 10 is further communicatively coupled to the user devices 16 via a network 14, such as the internet. Accordingly, data may be transferred between the server 10 and the user devices 16 over the network 14. The server 10 is coupled to one or more hardware processors 18 through an interface 32 and a non-transitory computer-readable storage medium storing instructions for configuring the processors 18. The server 10 is coupled to the user device 16 and the one or more hardware processors 18 for collecting sensor data and exchanging data and commands with other components of the system 100.
The hardware processor 18 has an interface 32 for providing a visualization of products or recommendations generated based on user data and body marker and emotional marker metrics. The server 10 may access user data stored in memory (as database 12) to determine physical and emotional markers of the user, and may use those markers to generate products or recommendations by accessing a non-transitory memory that stores a set of product records and the user's body marker and emotional marker data in the database 12. The interface 32 may display visual elements corresponding to personalized products or recommendations generated based on the user's body markers and emotional markers, or otherwise communicate product recommendations, such as through audio data or video data. The display of visual elements at the interface 32 may be controlled by the hardware processor 18 based on products identified at the server 10 using the user's body markers and emotional markers.
The server 10 may calculate the user's body markers and emotional markers using different components, dimensions, and sub-dimensions extracted from the user data. Example components include physical parameters (e.g., size, volume, weight, height, acceleration data, heart rate), activity (high and low intensity), preference (e.g., the sensation a person desires), movement parameters (leg, arm, chest, body), and sensory parameters (touch, taste, smell, sound, light). These metrics provide components that vary from person to person. There may be validated measures based on these parameters and criteria defined by function, such as peak chest acceleration for a bra product.
The embodiments described herein provide improved methods and systems for the interface 32 for product personalization or product recommendation for a user, with the server 10 classifying user attributes from captured user data, determining physical and emotional markers, determining product variables, and determining a target sensory state of the user. The product variables may be mapped to different body marker variables and emotional marker variables of the user. For example, the product records may contain electronic links to different body markers and emotional markers. The server 10 may calculate body marker variables and emotional marker variables using different components, dimensions, and sub-dimensions extracted from the user data.
The server 10 may associate product variables (weight, stretch, color, temperature, time, environment) with user feel/sensation states. For example, there may be a data model for connecting the variables, such as a look-up table or logic gates. The correlations may be established through experimental study and encoded into a mapping by the server 10. The mapping indicates which product variables are associated with a given sensory state. For example, if the user wants to feel 'integrated with nature', the server 10 may select increased airflow and increased conductive material at the neck.
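To make the mapping concrete, the following is a minimal Python sketch of the kind of look-up table described above. The sensory states, product variables, and values are illustrative assumptions, not data from the actual system; only the 'integrated with nature' example comes from the text.

```python
# Hypothetical sensory-state-to-product-variable mapping, sketched as a
# look-up table. All entries besides "integrated_with_nature" are invented.
SENSORY_STATE_MAP = {
    "integrated_with_nature": {"airflow": "increased", "neck_material": "conductive"},
    "secure_and_held":        {"stretch": "low", "compression": "high"},
    "light_and_unrestricted": {"weight": "minimal", "stretch": "high"},
}

def product_variables_for(target_state: str) -> dict:
    """Return the product variables mapped to a target sensory state."""
    return SENSORY_STATE_MAP.get(target_state, {})

print(product_variables_for("integrated_with_nature"))
# {'airflow': 'increased', 'neck_material': 'conductive'}
```

A logic-gate or learned-model implementation would replace the dictionary lookup, but the interface (target state in, product variables out) would stay the same.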
Generally, in accordance with embodiments of the present disclosure, methods and systems are described for product personalization or product recommendation based in part on determining a user's emotional markers. The emotional markers may be composite measures derived from a combination of measures of the user's personality type (e.g., measures of the user's openness/intellect, conscientiousness, extraversion, agreeableness, and neuroticism/emotional stability) and the level or state of cognitive emotional processes or abilities (e.g., attention, emotion regulation, awareness, empathy, etc.).
Generally, in accordance with embodiments of the present disclosure, methods and systems are described for product personalization or product recommendation based in part on determining a user's body markers. The body markers may be composite metrics derived from a combination of metrics for different components, dimensions, and sub-dimensions extracted from the user data. Example components include physical parameters (e.g., size, volume, weight, height, acceleration data, heart rate), activity (high and low intensity), preference (e.g., the sensation a person desires), movement parameters (leg, arm, chest, body), and sensory parameters (touch, taste, smell, sound, light). These metrics provide components that vary from person to person. There may be criteria defined by parameters and functions mapped to different body marker data points.
The server 10 may automate the personalization or recommendation of products based on determining the user's emotional and physical markers using different types of data and capture methods.
For example, text input may be received from an interactive questionnaire at the interface 32. Many questions may contribute to the composition of the emotional and physical markers. Example components of the emotional marker may be awareness, regulation, and compassion (A, R, C): an interactive questionnaire may be provided, the answers rated on a Likert scale (1-5 or 1-7), and the A, R, and C scores summed and normalized to a percentage out of 100. These components may be subdivided into dimensions, which may be further subdivided into sub-dimensions, wherein the score of each sub-dimension is used to calculate the score of its corresponding dimension and component. For example, regulation may be subdivided into self-management and self-regulation, and self-management may in turn be subdivided into effectiveness and drive. Potential dimensions and sub-dimensions of the awareness, regulation, and compassion components of the emotional marker follow this pattern.
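As an illustration of this scoring, the following Python sketch rolls Likert answers up from sub-dimensions to a normalized component percentage. The sub-dimension names come from the regulation example above; the answer values and equal-weight averaging are assumptions.

```python
# Hedged sketch: average Likert answers (1-5) per sub-dimension, roll up
# to a component score, and normalize to a 0-100 percentage.
from statistics import mean

LIKERT_MAX = 5

def score_component(answers_by_subdimension: dict[str, list[int]]) -> float:
    """Average sub-dimension scores, then normalize the roll-up to a percentage."""
    sub_scores = [mean(vals) for vals in answers_by_subdimension.values()]
    component_raw = mean(sub_scores)            # dimension/component roll-up
    return 100.0 * component_raw / LIKERT_MAX   # normalize to percent of 100

regulation = score_component({
    "self_management": [4, 5, 3],   # e.g. effectiveness and drive items
    "self_regulation": [3, 4],
})
print(f"Regulation: {regulation:.0f}%")  # Regulation: 75%
```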
Various methods of translating these scores into user metrics include clustering into character types based on data trends, presenting overall scores, and presenting dominant components based on raw scores (e.g., A = highest). The metrics may be based on scores in a single dimension or sub-dimension; for example, the metrics may distinguish users having the same awareness score but different intent and attentiveness scores.
In some embodiments, the metrics may be determined by behavioral tasks for automatic assessment of A, R, and C (e.g., breathing exercises), by passive assessment through measurement (heart rate variability, facial recognition, etc.), and by inner-circle assessment, in which the questionnaire is filled out by 1-5 people close to the user and scores are calculated to reflect true awareness versus current self-awareness (e.g., a 360-degree review).
These examples relate to different state capabilities, using questionnaires adapted from research-validated scales. The interactive questionnaire may be 61 questions or more in length, presenting questions for the awareness, regulation, and compassion components. The answers are rated on a scale, and the A, R, and C scores are summed and normalized. Different operations may translate these scores into user metrics; examples include clustering into character types based on data trends, presenting overall scores, and presenting dominant components based on raw scores. A simplified interactive questionnaire presents 14 questions (as an example) and functions the same way, only with fewer questions.
Examples of other interactive questionnaire and data capture operations include: behavioral tasks for automatic assessment of A, R, and C (e.g., breathing exercises), passive assessment through measurement (heart rate variability, facial recognition), and inner-circle assessment, in which the questionnaire is filled out by 1-5 people close to the user and scores are calculated to reflect true awareness compared to current self-awareness, providing a comprehensive assessment of the user.
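A minimal sketch of the inner-circle comparison follows, assuming scores are already normalized percentages. The gap computation and the sample numbers are illustrative assumptions, not the system's actual formula.

```python
# Hypothetical 360-degree comparison: the gap between a user's
# self-reported awareness score and the average score given by the
# 1-5 people in their inner circle.
def awareness_gap(self_score: float, peer_scores: list[float]) -> float:
    """Positive gap: the self-rating exceeds how peers perceive the user."""
    peer_avg = sum(peer_scores) / len(peer_scores)
    return self_score - peer_avg

print(awareness_gap(82.0, [70.0, 75.0, 68.0]))  # 11.0
```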
Another example involves different trait capabilities. The trait data may be combined with the state data, and the operations for collecting trait data and calculating trait metrics may be similar to those for the state metrics, such as including these variables in a clustering process or in determining dominant attributes.
In order for the interface 32 to generate a visualization for personalization or recommendation of a product, the user device 16 may use one or more sensors to capture user data related to a user. The sensors may include, for example, audio sensors (e.g., microphones), optical sensors (e.g., cameras), tactile sensors (e.g., user interfaces), biometric sensors (e.g., heart monitors, blood pressure monitors, skin moisture monitors, electroencephalogram (EEG) electrodes, etc.), position/location sensors (e.g., GPS), and motion detection or motion capture sensors (e.g., accelerometers). The user data may then be transmitted to the server 10, processed (e.g., using any of a variety of facial and body modeling or analysis techniques), and compared to stored reference user data to determine the physical characteristics, personality type, and states of cognitive emotional abilities of the user. For example, the processed user data may be used to determine one or more current emotional states of the user, which in turn may help determine the user's personality type and states of cognitive emotional abilities; or it may determine, for example, a movement profile of the body, which in turn may help determine product attributes for creating a personalized product. The server 10 may use the physical characteristics, personality type, and cognitive emotional states of the user to generate a personalization or recommendation of the product.
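The sketch below illustrates one way the captured channels might be routed to the analyses named in this disclosure. The channel names and analyzer labels are placeholders standing in for the actual models; only the channel-to-analysis pairing follows the text.

```python
# Hypothetical routing of captured data channels to analyses, following
# the pairings described above (image -> facial/body analysis, etc.).
ANALYZERS = {
    "image":     ["facial_analysis", "body_analysis", "eye_tracking"],
    "audio":     ["speech_analysis"],
    "text":      ["text_analysis"],
    "biometric": ["behavioral_analysis", "user_activity_analysis"],
}

def route_user_data(channels: dict[str, object]) -> dict[str, list[str]]:
    """Map each captured data channel to the analyses applied to it."""
    return {ch: ANALYZERS.get(ch, []) for ch in channels}

print(route_user_data({"image": b"...", "audio": b"...", "text": "I feel rushed"}))
# {'image': [...], 'audio': ['speech_analysis'], 'text': ['text_analysis']}
```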
Additionally, by monitoring the body markers and emotional markers of an individual over time, the methods and systems described herein may determine whether those markers are improving or deteriorating. The individual's "baseline" body markers and emotional markers may be calculated over time, for example by averaging the state or level of the individual's cognitive emotional abilities. The level of these abilities may be increased through repeated interventions over time, so the user's baseline physical and emotional markers may improve over time. The baseline body and emotional markers may include a level or state of cognitive emotional ability of the user, averaged over a period of time in conjunction with the user's personality type, or may include a biomechanical profile, averaged over a period of time in conjunction with the user's sensory preferences.
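A minimal sketch of computing such a baseline as a running average over recent sessions, assuming a single numeric per-session marker score; the sliding-window choice and window size are assumptions, not the system's stated method.

```python
# Hypothetical baseline: running average of marker scores over a
# sliding window of recent user sessions.
from collections import deque

class MarkerBaseline:
    """Running average of per-session marker scores."""
    def __init__(self, window: int = 30):
        self.scores = deque(maxlen=window)  # oldest sessions drop out

    def update(self, session_score: float) -> float:
        self.scores.append(session_score)
        return sum(self.scores) / len(self.scores)

baseline = MarkerBaseline(window=3)
for s in [60.0, 64.0, 71.0]:
    print(baseline.update(s))  # 60.0, then 62.0, then 65.0 -> improving
```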
Different interventions may be taken based on the diagnosed physical or emotional state.
After determining the physical or emotional markers of the user, the server 10 may generate personalized products or product recommendations, including products for improving the physical or emotional markers or achieving a target sensory state. For example, a personalized product or recommendation may be based on products that, for other users with similar physical or emotional markers, have been shown to improve those markers. The recommendation may be adjusted depending on how the user's physical or emotional markers evolve over time. For example, a product demonstrated to improve one user's emotional markers may also be generated for different users who exhibit similar emotional markers.
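A hedged sketch of this similar-user logic: match the user's marker values against stored users by cosine similarity and surface products that improved the matched users' markers. The vector representation, the similarity choice, and all data are invented for illustration.

```python
# Hypothetical nearest-neighbor matching on emotional-marker vectors.
import math

def similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two marker vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recommend(user_markers: list[float], other_users: list[dict], k: int = 1) -> list[str]:
    """Products linked to the k stored users whose markers best match this user's."""
    ranked = sorted(other_users,
                    key=lambda u: similarity(user_markers, u["markers"]),
                    reverse=True)
    return [p for u in ranked[:k] for p in u["improving_products"]]

others = [
    {"markers": [0.7, 0.2, 0.5], "improving_products": ["breathwork video"]},
    {"markers": [0.1, 0.9, 0.3], "improving_products": ["high-support bra"]},
]
print(recommend([0.6, 0.3, 0.5], others))  # ['breathwork video']
```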
Multiple users of the system 100 may use the interface 32 and/or user devices 16 to exchange data and commands with the server 10 in a manner described in further detail below. Although two user devices 16 are shown in Fig. 1, the system 100 is suitable for use by any suitable number of user devices 16, even a single user device 16. Further, while the system 100 shows two servers 10 and two databases 12, the system 100 extends to any suitable number of servers 10 and databases 12 (e.g., a single server communicatively coupled to a single database). Likewise, the system 100 extends to any suitable number of interfaces 32 to provide visualizations.
In some embodiments, the functionality of database 12 may be combined with the functionality of server 10 having non-transitory storage or memory. In other words, the server 10 may store user data located on the database 12 in internal memory and may additionally perform any of the data processing described herein. However, in the embodiment of FIG. 1, the server 10 is configured to remotely access the contents of the database 12 when needed.
The server 10 may receive purchase instructions from the interface 32 at the processor 18 and may transmit manufacturing instructions for the product to the manufacturing queue 34 in response to receiving the purchase instructions.
Thus, the server 10 may use the user's physical and emotional markers to provide product personalization. In addition, the server 10 may use the physical and emotional markers of the user to provide product recommendations.
The server 10 communicates with the interface 32 for product personalization using the user's body markers and emotional markers. The server 10 is connected to a non-transitory memory storing an attribute database 12 of at least one of product measurement records, movement characteristics, perceptual preference characteristics, body marker characteristics, emotional marker characteristics, user records, product records, and generated design models.
The processor 18 is programmed with executable instructions for the interface 32 to obtain user data for a user session over a period of time and transmit a product request for the user session. The interface 32 displays a visualization of the product generated for the user session in response to the product request and receives quantitative and qualitative feedback data regarding the product. The server 10 is coupled to the memory to access the attribute database 12. The hardware server 10 is programmed with executable instructions to select a product category and product variables in response to receiving the product request from the interface. The server 10 may extract user attributes from the user data of the user session and associate them with the product variables. The user attributes may be measurement metrics, movement metrics, perceptual preference metrics, body marker metrics, emotional marker metrics, purchase history, and activity intent. The server 10 calculates target parameters of a target sensory state of the user using the extracted user attributes. Server 10 generates the product and associated manufacturing instructions by processing the extracted user attributes and the target parameters of the target sensory state using the generated design model and the attribute database. The server 10 transmits the visualization of the product to the interface and updates the attribute database or the user record based on the feedback data regarding the product. The server 10 is connected to a user device 16 having one or more sensors for capturing user data of the user session during said time period. The user device 16 has a transmitter for transmitting the captured user data over a network to the interface 32 of the hardware processor 18 or to the hardware server 10 to generate the product for the user session.
In some embodiments, interface 32 receives purchase instructions for a product, and hardware server 10 transmits manufacturing instructions for the product in response to the received purchase instructions. In some embodiments, the hardware server 10 generates the product and the associated manufacturing instructions by generating a bill of materials file. In some embodiments, interface 32 receives a modification request for a product and hardware server 10 updates the product and associated manufacturing instructions based on the modification request.
In some embodiments, the product includes video content, and the hardware server 10 generates the product and associated code file by assembling content files for the video content.
In some embodiments, user device 16 captures user data from multiple channels 30. The user data may be image data related to the user, text input related to the user, data defining physical or behavioral characteristics of the user, and audio data related to the user.
In some embodiments, hardware server 10 is programmed with executable instructions to calculate activity metrics, cognitive-emotional ability metrics, and social metrics using the user data and user attributes of a user session. The server 10 may do this by using, for the image data and the data defining the physical or behavioral characteristics of the user, at least one of: facial analysis, body analysis, eye tracking, behavioral analysis, social network or graph analysis, location analysis, and user activity analysis; by using voice analysis for the audio data; and by using text analysis for the text input. Server 10 may calculate one or more states of one or more cognitive-emotional abilities of the user based on the cognitive-emotional ability metrics and the social metrics. Server 10 may calculate an emotional marker metric for the user based on the one or more states of the one or more cognitive-emotional abilities of the user. Server 10 generates the product based on at least one of the emotional markers of the user, the activity metrics, the product record, and the user record.
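As a non-limiting illustration of this multi-modal routing, the following Python sketch dispatches each captured data type to a stand-in analyzer and pools the results into a single emotional marker metric; all analyzer names and scores are hypothetical and do not reflect the actual analysis techniques.

```python
from typing import Callable, Dict

# Hypothetical analyzers standing in for the facial/voice/text analysis the
# server applies per data type; each returns ability scores in [0, 1].
def facial_analysis(frame) -> Dict[str, float]:
    return {"emotion": 0.6, "attention": 0.7}

def voice_analysis(clip) -> Dict[str, float]:
    return {"emotion": 0.5}

def text_analysis(text) -> Dict[str, float]:
    return {"reappraisal": 0.4, "insight": 0.55}

ANALYZERS: Dict[str, Callable] = {
    "image": facial_analysis,
    "audio": voice_analysis,
    "text": text_analysis,
}

def emotional_marker_metric(channels: Dict[str, object]) -> float:
    """Route each captured data type to its analyzer, pool the per-ability
    states, and collapse them into a single emotional marker metric."""
    states: Dict[str, list] = {}
    for kind, payload in channels.items():
        for ability, score in ANALYZERS[kind](payload).items():
            states.setdefault(ability, []).append(score)
    pooled = {a: sum(v) / len(v) for a, v in states.items()}
    return sum(pooled.values()) / len(pooled)  # naive equal-weight rollup

print(emotional_marker_metric({"image": b"...", "audio": b"...", "text": "ok"}))
```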
In some embodiments, the hardware server 10 generates the measurement metrics using at least one of 3D scanning, machine learning prediction, and user measurements.
In some embodiments, the product comprises clothing, and the hardware server 10 uses garment measurements to capture garment data as part of the product data to generate the measurement metrics.
In some embodiments, the hardware server 10 generates the movement metrics based on at least one of Inertial Measurement Unit (IMU) data, computer vision, pressure data, and radio frequency data.
In some embodiments, hardware server 10 extracts user attributes from the user data, such as the user's purchase history, activity intent, interaction history, and comment data.
In some embodiments, the hardware server 10 generates the perceptual preference metrics based on at least one of clothing feel, preferred hand feel, thermal preference, and movement feel. In some embodiments, the hardware server 10 generates the emotional marker metrics based on at least one of personality data, emotional state data, emotional fitness data, personal values data, goal data, and physiological data. In some embodiments, hardware processor 18 calculates a preferred sensory state as part of the extracted user attributes. In some embodiments, the hardware server 10 calculates social marker metrics, connectivity metrics, and/or resonance marker metrics.
In some embodiments, the user device 16 is connected to or integrated with an immersive hardware device that captures audio data, the image data, and data defining physical or behavioral characteristics of the user as part of the user data.
In some embodiments, the database 12 has a content repository and the hardware server 10 has a content syndication engine that generates content as part of the product and transmits the generated content to the interface 32. In some embodiments, the product includes content for display or play on the hardware processor 18 or the user device 16. In some embodiments, the product includes a program for display or playback on a computing device, where the program includes two or more phases, each phase having a different content, intensity, or duration.
In some embodiments, the hardware processor 18 receives object identification data and calculates a preferred sensory state as part of the object identification data.
In some embodiments, the product relates to a garment, wherein the attribute database comprises simulated garment records. Hardware server 10 generates simulated product options as part of the product and associated manufacturing instructions, and interface 32 displays a visualization of the simulated product options.
In some embodiments, the simulated product options include at least one of software physical simulation, hardware physical simulation, static 3D viewer, and AR/VR experience content.
In some embodiments, the hardware server 10 uses multi-modal feature extraction to extract product variables or attributes and categorize the product variables and attributes. In some embodiments, the hardware server 10 classifies different types of data streams for multi-modal feature extraction for the user data.
In some embodiments, the user data includes image data related to the user, text input related to the user, data defining physical or behavioral characteristics of the user, and audio data related to the user. The hardware server 10 uses multi-modal feature extraction to extract the user attributes: for the image data and the data defining the physical or behavioral characteristics of the user, the multi-modal feature extraction implements at least one of facial analysis, body analysis, eye tracking, behavioral analysis, social network or graph analysis, location analysis, and user activity analysis; for the audio data, the multi-modal feature extraction performs speech analysis; and for the text input, the multi-modal feature extraction performs text analysis. The hardware server 10 then calculates one or more states of one or more cognitive-emotional abilities of the user based on the cognitive-emotional ability metrics and the social metrics.
In some embodiments, hardware server 10 extracts the user attributes by calculating emotional marker metrics based on one or more states of the user's one or more cognitive-emotional abilities and the user's social metrics.
In some embodiments, the non-transitory memory stores a classifier for generating data defining physical or behavioral characteristics of the user, and the hardware server 10 extracts the user attributes by calculating activity metrics, cognitive-emotional ability metrics, and social metrics using the classifier.
In some embodiments, the non-transitory memory stores a user model corresponding to the user, and the hardware server 10 uses the user model to calculate the emotional marker metrics for the user.
In some embodiments, system 100 has one or more modulators in communication with one or more environmental fixtures to change an external sensory environment based on the product, the one or more modulators in communication with a hardware server to automatically modulate the external sensory environment of the user during the user session. In some embodiments, the one or more environmental fixtures include at least one of: a lighting fixture, an audio system, a fragrance diffuser, and a temperature regulation system.
In some embodiments, the system 100 has multiple data channels 30. The data channels correspond to a plurality of different types of sensors for capturing different types of user data during a user session. Each of the plurality of data channels 30 transmits its captured user data over a network to the hardware server 10 to generate the product.
In some embodiments, hardware server 10 is configured to determine emotional markers of one or more additional users, determine users having similar emotional markers, and predict a connection between users having similar emotional markers. The server 10 generates a product using data corresponding to users having similar emotional markers.
In some embodiments, the interface 32 may transmit another product request for the user session and provide additional visualizations of another product for the user session received in response to the other product request.
In some embodiments, the user data includes personality type data, and the hardware server 10 calculates the emotional marker metric by determining a personality type of the user based on the user data, comparing the personality type data with stored personality type data indicative of a correlation between personality types and personality type data.
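A minimal sketch of such a comparison, assuming the stored personality type data takes the form of reference feature vectors; the type names and numbers below are invented for illustration.

```python
import math

# Hypothetical stored correlations: reference feature vectors per personality type.
STORED_TYPES = {
    "analytical": [0.8, 0.2, 0.4],
    "expressive": [0.3, 0.9, 0.6],
    "amiable":    [0.5, 0.6, 0.9],
}

def personality_type(observed: list) -> str:
    """Pick the stored personality type whose reference vector is closest
    (Euclidean distance) to the user's observed personality type data."""
    return min(
        STORED_TYPES,
        key=lambda t: math.dist(observed, STORED_TYPES[t]),
    )

print(personality_type([0.45, 0.65, 0.85]))  # -> "amiable"
```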
In some embodiments, hardware server 10 calculates at least one of the following as part of the emotional marker metrics: one or more emotional states of the user, one or more attention states of the user, one or more social affiliation states of the user, one or more motivational states of the user, one or more reappraisal states of the user, and one or more insight states of the user.
In some embodiments, the interface 32 is a coaching application for improving the health of the user based on the product and at least one of the body marker metrics, the emotional marker metrics, and the perceptual preference metrics.
In one aspect, the system 100 provides the interface 32 with product recommendations. The system 100 involves a non-transitory memory storing an attribute database 12 of at least one of product measurement records, movement characteristics, perceptual preference characteristics, body and emotional marker characteristics, user records, and generated design models. The system 100 involves a hardware processor 18 programmed with executable instructions for the interface to: obtain user data for a user session over a period of time, transmit a product request for the user session, provide a product recommendation for the user session in response to the product request, receive a selected product of the product recommendation, and receive feedback data regarding the selected product. The system 100 involves a hardware server 10 coupled to the memory to access the attribute database 12. The hardware server 10 is programmed with executable instructions to: select a product category and product variables in response to receiving the product request from the interface; extract user attributes from the user data of the user session and associate them with the product variables, the user attributes including at least one of measurement metrics, movement metrics, perceptual preference metrics, body marker metrics, and emotional marker metrics; calculate target parameters of a target sensory state of the user using the extracted user attributes; calculate a product recommendation using a recommendation engine to process the extracted user attributes and the target parameters of the target sensory state, the product recommendation being calculated using emotional and body markers; transmit the product recommendation to the interface over a network; receive a notification of the selected product from the interface; receive feedback data regarding the selected product; and update the attribute database 12 based on the feedback data regarding the selected product. System 100 involves a channel 30 having one or more sensors for capturing user data during the time period and a transmitter for transmitting the captured user data to the interface 32 of the hardware processor 18 or to the hardware server 10 through the network to calculate the product recommendation.
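As a hedged illustration of this recommendation flow, the following Python sketch derives target parameters from extracted user attributes and ranks catalog candidates by closeness to those targets; the attribute names, scoring rule, and catalog entries are invented and do not reflect the actual recommendation engine.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    product_id: str
    variables: dict  # quantified product variables, e.g. {"compression": 12.0}

def target_parameters(attrs: dict, target_state: str) -> dict:
    """Derive quantified targets for the desired sensory state from the
    extracted user attributes (placeholder rule, for illustration only)."""
    if target_state == "comfort":
        return {"compression": 10.0 + 5.0 * attrs["body_marker"]}
    return {"compression": 13.0}

def recommend(attrs: dict, catalog: list, target_state: str, k: int = 3):
    """Rank catalog products by closeness of their variables to the target
    parameters; the closest k become the product recommendation."""
    targets = target_parameters(attrs, target_state)
    def misfit(c: Candidate) -> float:
        return sum(abs(c.variables.get(v, 0.0) - t) for v, t in targets.items())
    return sorted(catalog, key=misfit)[:k]

catalog = [Candidate("bra-001", {"compression": 11.5}),
           Candidate("bra-002", {"compression": 14.0}),
           Candidate("bra-003", {"compression": 9.0})]
attrs = {"body_marker": 0.4, "emotional_marker": 0.7}
for c in recommend(attrs, catalog, "comfort"):
    print(c.product_id)  # closest-fit products first
```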
In some embodiments, the user attributes include quantitative user attributes (e.g., body metrics) and qualitative user attributes (e.g., the emotional marker metrics and/or perceived preferences of the user).
In some embodiments, the interface 32 further comprises a voice interface for communicating the product recommendation and the product request. In some embodiments, the hardware server 10 generates the selected product by processing the extracted user attributes and the target parameters of the target sensory state using a generated design model and the attribute database. In some embodiments, the hardware server 10 receives personalization data to generate the selected product. In some embodiments, the hardware server 10 generates the selected product and associated code file by generating a bill of materials file. In some embodiments, the hardware server 10 generates the selected product and associated code file by compiling content files.
In some embodiments, the interface 32 receives a modification request for the selected product, and wherein the hardware server updates the selected product and the associated code file based on the modification request.
In some embodiments, the hardware server 10 is programmed with executable instructions to calculate activity metrics, cognitive-emotional ability metrics, and social metrics using the user data and the user attributes of the user session by: for the image data and the data defining the physical or behavioral characteristics of the user, using at least one of facial analysis, body analysis, eye tracking, behavioral analysis, social network or graph analysis, location analysis, and user activity analysis; for the audio data, using voice analysis; and for the text input, using text analysis; calculating one or more states of one or more cognitive-emotional abilities of the user based on the cognitive-emotional ability metrics and the social metrics; calculating the emotional marker metrics of the user based on the one or more states of the one or more cognitive-emotional abilities of the user; and calculating the product recommendation based on the emotional markers of the user, the activity metrics, the product record, and the user record.
In some embodiments, the hardware server 10 generates the measurement metrics using at least one of 3D scanning, machine learning prediction, user measurements, and garment measurements. In some embodiments, the hardware server 10 generates the movement metrics based on at least one of Inertial Measurement Unit (IMU) data, computer vision, pressure data, and radio frequency data. In some embodiments, the hardware server 10 extracts the user attributes from the user data, the user attributes including at least one of purchase history, activity intent, interaction history, and comment data of the user.
In some embodiments, the hardware server 10 generates the perceptual preference metrics based on at least one of movement, touch, temperature, vision, smell, sound, taste, clothing feel, preferred hand feel, thermal preference, and movement feel. In some embodiments, the hardware server 10 generates the emotional marker metrics based on at least one of personality data, emotional state data, emotional fitness data, personal values data, goal data, and physiological data.
In some embodiments, the hardware processor 18 calculates the preferred sensory state as part of the extracted user attributes. In some embodiments, the emotional marker metrics include social marker metrics, connectivity metrics, and/or resonance marker metrics.
In some embodiments, the user device 16 is connected to or integrated with an immersive hardware device that captures audio data, the image data, and data defining physical or behavioral characteristics of the user as part of the user data.
In some embodiments, the selected product includes content for display or play on a computing device. In some embodiments, the non-transitory memory has a content repository and the hardware server 10 has a content syndication engine that generates content as part of the selected product and transmits the generated content to the interface 32. In some embodiments, the product includes a program for display or playback on the hardware processor or the user device, wherein the program includes two or more phases, each phase having a different content, intensity, or duration.
In some embodiments, the hardware processor 18 receives object identification data and calculates a preferred sensory state as part of the user data.
In some embodiments, the attribute database includes simulated garment records, wherein the hardware server 10 generates simulated product options as part of the product recommendation, and wherein the interface displays a visualization of the simulated product options. In some embodiments, the simulated product options include at least one of software physical simulation, hardware physical simulation, static 3D viewer, and AR/VR experience content.
In some embodiments, the user data includes image data related to the user, text input related to the user, data defining physical or behavioral characteristics of the user, and audio data related to the user, and the hardware server 10 uses multi-modal feature extraction to extract the user attributes: for the image data and the data defining the physical or behavioral characteristics of the user, the multi-modal feature extraction implements at least one of facial analysis, body analysis, eye tracking, behavioral analysis, social network or graph analysis, location analysis, and user activity analysis; for the audio data, the multi-modal feature extraction performs speech analysis; and for the text input, the multi-modal feature extraction performs text analysis. The hardware server 10 then calculates one or more states of one or more cognitive-emotional abilities of the user based on the cognitive-emotional ability metrics and the social metrics.
In some embodiments, hardware server 10 extracts the user attributes by calculating emotional marker metrics based on one or more states of the user's one or more cognitive-emotional abilities and the user's social metrics.
In some embodiments, the non-transitory memory stores a classifier for generating data defining physical or behavioral characteristics of the user, and the hardware server 10 extracts the user attributes by calculating activity metrics, cognitive-emotional ability metrics, and social metrics using the classifier.
In some embodiments, the non-transitory memory stores a user model corresponding to the user, and the hardware server 10 uses the user model to calculate the emotional marker metrics for the user.
In some embodiments, the system 100 includes a plurality of user devices 16 each having a different type of sensor for capturing different types of user data during the user session, each of the plurality of devices transmitting the captured different types of user data to the hardware server over the network to generate the product recommendation.
In some embodiments, the hardware server 10 is configured to: determine emotional and body markers of one or more additional users; determine users with similar emotional or body markers; predict connections between users with similar emotional or body markers; and generate the product recommendation using data corresponding to the users having similar emotional or body markers.
In some embodiments, the interface 32 may transmit another product request for the user session and provide additional visualizations of another product recommendation for the user session received in response to the other product request.
In some embodiments, the user data includes personality type data, wherein the hardware server calculates the emotional marker metric by determining a personality type of the user based on the user data, comparing the personality type data with stored personality type data indicative of a correlation between personality types and personality type data.
In some embodiments, hardware server 10 calculates at least one of the following as part of the emotional marker metrics: one or more emotional states of the user, one or more attention states of the user, one or more social affiliation states of the user, one or more motivational states of the user, one or more reappraisal states of the user, and one or more insight states of the user.
In some embodiments, the interface 32 is a coaching application for improving the health of the user based on the product recommendation and the emotional marker metrics.
Fig. 2 shows an embodiment of the user device 16 in more detail. The user device 16 includes a plurality of sensors, a hardware processor 22, and a computer readable medium 20, such as a suitable computer memory storing computer program code. The user device 16 has a user interface 24 that may implement one or more functions or operations described with respect to interface 32 in some embodiments. The sensors 26, 28 include a camera 26 and a microphone 28, but the present disclosure extends to other suitable sensors, such as biometric sensors (heart monitor, blood pressure monitor, skin moisture monitor, etc.), any position/location sensor, motion detection or motion capture sensor, etc. For example, camera 26 may capture video and image data. The processor 22 is in communication with each of the sensors 26, 28 and is configured to control operation of the sensors 26, 28 and to receive data from the sensors 26, 28 in response to instructions read by the processor 22 from the non-transitory memory 20. According to some embodiments, user device 16 is a mobile device such as a smart phone, but in other embodiments user device 16 may be any other suitable device that may be operated by and interacted with by a user. For example, user device 16 may include a laptop computer, a personal computer, a tablet device, a smart mirror, a smart display, a smart screen, a smart wearable device, or an exercise device.
The sensors 26, 28 of the user device 16 are configured to obtain user data related to the user. For example, the microphone 28 may detect speech from the user, and thus the processor 22 may convert the detected speech into voice data. A user may enter text or other data into the user device 16 through the user interface 24 so that the processor 22 may convert the user input into text data. Further, the camera 26 may capture images of the user, for example, when the user interacts with the user device 16. The camera 26 may convert the image into image data related to the user. The user interface 24 may send data collected from the different components of the user device 16 for transmission to the server 10 and storage in the database 12 as part of a data record stored with the identifier of the user device 16 and/or user. The processor 22 may implement speech-to-text translation and analyze specific word usage/variation using natural language processing.
The system 100 captures user data of one or more users during a user session using one or more sensors 26, 28. In some embodiments, the system 100 provides an interface that displays a visualization of a product generated for a user session in response to a product request. In some embodiments, the system 100 provides an interface that displays product recommendations generated for a user session in response to a product request. The interface may be the user interface 24 of the user device 16 in some embodiments, or may be an interface of a separate hardware device in other embodiments. For example, system 100 has a non-transitory memory that stores an attribute database of product measurement records, movement characteristics, perceptual preference characteristics, body marker characteristics, emotional marker characteristics, product records, generated design models, body marker records, emotional marker records, and user records storing user data received from the multiple channels 30 at the server 10 and the database 12.
The user data may relate to a series of data captured during a period of a user session (which may be combined with data from different user sessions and with data for different users). The user data may be image data related to the user, text input related to the user, data defining physical or behavioral characteristics of the user, and audio data related to the user.
The system 100 has a hardware processor (which may be at the user device 16) programmed with executable instructions for an interface (which for this example may be the user interface 24) to obtain user data for a user session over a period of time. The processor transmits a product request for the user session to the server 10 and updates the interface to provide a visualization of a product generated for the user session, or a product recommendation for the user session, received in response to the product request.
The system 100 has a hardware server 10 coupled to a non-transitory memory (or database 12) to access product records, body marker records, emotional marker records, and user records. The hardware server 10 is programmed with executable instructions to transmit product recommendations to the interface 32 over the network 14 in response to receiving a product request from the interface. The hardware server 10 is programmed with executable instructions to calculate product recommendations by calculating activity metrics, physical metrics, cognitive-emotional ability metrics, and social metrics using the user data and user records of the user session. The hardware server 10 may extract user attributes from the user data to represent physical metrics of the user and cognitive metrics of the user. Hardware server 10 may use both the physical metrics and the cognitive metrics of the user to determine the body markers and emotional markers of the user during the time period of the user session. Hardware server 10 may calculate a plurality of body markers and emotional markers of the user at intervals during the time period of the user session, and calculating updated markers may trigger calculation of updated products or recommendations and an updated visualization at the interface. The body markers and emotional markers, each derived from the physical metrics and cognitive metrics of the user during the period of the user session, are used to generate a product or recommendation.
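One possible reading of this interval-based recalculation, sketched in Python with invented marker functions and an invented change threshold (not the disclosed implementation):

```python
import time

def compute_markers(user_data: dict) -> tuple:
    """Stand-in for the server's physical/cognitive metric extraction."""
    return user_data.get("body", 0.5), user_data.get("mood", 0.5)

def run_session(stream, interval_s: float = 0.0, threshold: float = 0.1):
    """Recompute body/emotional markers at intervals; a large enough change
    triggers an updated product visualization on the interface."""
    last = None
    for sample in stream:  # one aggregated user-data sample per interval
        markers = compute_markers(sample)
        if last is None or max(abs(a - b) for a, b in zip(markers, last)) > threshold:
            print(f"markers={markers} -> push updated product to interface")
            last = markers
        time.sleep(interval_s)

run_session([{"body": 0.5, "mood": 0.5},
             {"body": 0.52, "mood": 0.55},   # below threshold: no update
             {"body": 0.7, "mood": 0.3}])    # above threshold: update
```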
The hardware server 10 may use user data captured during a user session and may also use user data captured during a previous user session or user data of a different user. Hardware server 10 may aggregate data from multiple channels to calculate a product or recommendation to trigger an update to interface 32 on user device 16 or, in some instances, to an interface on a separate hardware device.
The hardware server 10 may process the different types of data by: for the image data and the data defining the physical or behavioral characteristics of the user, using at least one of facial analysis, body analysis, eye tracking, behavioral analysis, social network or graph analysis, location analysis, and user measurement analysis; for the audio data, using voice analysis; and for the text input, using text analysis.
Hardware server 10 may calculate one or more states of one or more physical characteristics of the user based on the physical metrics. The hardware server 10 may calculate a body marker of the user based on the one or more states of the one or more body characteristics of the user and using the body marker record. The body marker records may store data for different components, dimensions, and sub-dimensions of the user's body characteristics.
Hardware server 10 may calculate one or more states of one or more cognitive-emotional abilities of the user based on the cognitive-emotional ability metrics and the social metrics. Hardware server 10 may calculate an emotional marker of the user based on the one or more states of the one or more cognitive-emotional abilities of the user and using the emotional marker record.
The hardware server 10 may calculate product recommendations based on the user's body markers and emotional markers, measurement metrics, product records, and user records. The system has a user device comprising one or more sensors for capturing user data during the time period and a transmitter for transmitting the captured user data to the interface or the hardware server through the network to calculate the product recommendation.
In some embodiments, system 100 has one or more modulators in communication with one or more environmental fixtures to change an external sensory environment based on a generated product or recommendation, the one or more modulators in communication with hardware server 10 to automatically modulate the external sensory environment of the user. The change in sensory environment may be part of the product experience or integrated into the service. As described above, the term product as used herein may also include services. In some embodiments, the one or more environmental fixtures include at least one of: a lighting fixture, an audio system, a fragrance diffuser, and a temperature regulation system.
The system 100 has a plurality of user devices 16 and each user device may have a different type of sensor for capturing different types of user data during a user session. Each of the user devices 16 may be used to transmit the different types of user data captured to the hardware server 10 over the network 14 to calculate a product or recommendation.
In some embodiments, the system 100 has a plurality of user devices 16 for a user group. Each user device of the plurality of user devices 16 has an interface for obtaining user data for a corresponding user of the user group over the period of the user session. The server 10 may provide product recommendations for the user session received in response to product requests from the plurality of user devices 16. The hardware server 10 transmits product recommendations to the corresponding user interface 24 (or interface 32 of FIG. 1) in response to receiving a product request from the corresponding user interface 24 of the user device 16. The server 10 may calculate product recommendations for the user group. The same product recommendation may be suggested for all users of the group, or for a subset of users having similar body and emotional markers as determined by the system 100 using similarity measurements.
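For illustration, the following sketch pairs users whose marker vectors are similar under cosine similarity, one plausible similarity measurement among many; the names, vectors, and threshold are invented.

```python
import math

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two marker vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def similar_users(group: dict, threshold: float = 0.95):
    """Pair users whose [body_marker, emotional_marker] vectors are similar;
    each pair could then receive the same product recommendation."""
    users = list(group)
    return [(u, v) for i, u in enumerate(users) for v in users[i + 1:]
            if cosine(group[u], group[v]) >= threshold]

group = {"alice": [0.8, 0.6], "bob": [0.78, 0.62], "cara": [0.2, 0.9]}
print(similar_users(group))  # -> [('alice', 'bob')]
```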
In some embodiments, hardware server 10 is configured to determine body markers or emotional markers of one or more additional users and to determine users having similar body markers and emotional markers. The server 10 may predict the connectivity between users with similar physical or emotional markers and generate product recommendations for users with similar physical or emotional markers.
In some embodiments, the interface 32 (or the user interface 24 of the user device 16) may receive product feedback for the user session, transmitting the feedback to the hardware server 10 to update the generative model or recommendation engine. The feedback may be positive, indicating approval of the product. The feedback may be negative, indicating that the product is not approved. The server 10 may use the feedback for subsequent calculations of product recommendations. The server 10 may store the feedback in a record in the database 12.
In some embodiments, the interface 32 (or the user interface 24 of the user device 16) may transmit another product request for the user session, and the server 10 may provide additional products or recommendations for the user session in response to the another product request. The server 10 may transmit additional product recommendations for the user session to update the interface 32.
In some embodiments, the interface 32 (or the user interface 24 of the user device 16) obtains additional user data after providing the product recommendation for the user session, the additional user data being captured while the user engages with the product recommendation. The server 10 may recalculate the body or emotional markers of the user for a further product recommendation using the additional user data captured after providing the product recommendation for the user session.
In some embodiments, the interface 32 (or the user interface 24 of the user device 16) transmits another product request for another user session and provides an updated product or updated recommendation for the other user session received from the server 10 in response to the other product request. The updated product recommendation may be different from the initial product recommendation.
In some embodiments, the interface 32 is a coaching application, and the one or more recommended products may be part of a virtual coaching program for the user to improve the user's health.
In some embodiments, the product recommendation relates to a product category and may indicate a category selected from a set of categories stored in the product record. In some embodiments, the recommended product contains video content or a program with various content for the interface to guide the user's interaction or experience over an extended period of time. The content may be customized for the user. In some embodiments, the program includes two or more phases, each phase having different content, intensity, or duration, and each phase may have content customized for the user.
In some embodiments, server 10 may receive user data related to one or more additional users from user devices 16 and determine one or more states of one or more physical or cognitive-emotional abilities of the one or more additional users based on the processed user data. The server 10 may determine the body and emotional markers of each of the one or more additional users and determine users having similar body or emotional markers. Server 10 may use similarity models or metrics stored in non-transitory memory to predict connections between users with similar body or emotional markers. The server may generate one or more products or recommendations for transmission to the interfaces of users having similar body or emotional markers.
In some embodiments, the server 10 may determine a physical characteristic of the user based on the processed user data and determine a body marker of the user based on the physical characteristic of the user. In some embodiments, the processed user data includes body characteristic data, and the server 10 may determine the physical characteristic of the user by comparing the body characteristic data with stored body characteristic data indicative of a correlation between physical characteristics and body characteristic data.
In some embodiments, the processed user data includes physiological parameter data, such as heart rate data, blood pressure data, skin moisture data, or blood oxygen level data, and the server 10 may determine the one or more states of the one or more physiological parameters of the user by comparing the physiological data to stored physiological parameter data indicative of a correlation between physiological parameter states and physiological parameter data.
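A minimal sketch of such a comparison, assuming the stored physiological parameter data takes the form of reference ranges; the ranges below are illustrative placeholders, not clinical values.

```python
# Hypothetical stored reference ranges correlating physiological readings
# with parameter states; real ranges would come from the attribute database.
REFERENCE_RANGES = {
    "heart_rate":    [((40, 60), "low"), ((60, 100), "normal"), ((100, 220), "elevated")],
    "skin_moisture": [((0, 30), "dry"), ((30, 50), "comfortable"), ((50, 100), "wet")],
}

def physiological_states(readings: dict) -> dict:
    """Map each reading onto the state whose stored range contains it."""
    states = {}
    for name, value in readings.items():
        for (lo, hi), state in REFERENCE_RANGES.get(name, []):
            if lo <= value < hi:
                states[name] = state
                break
    return states

print(physiological_states({"heart_rate": 72, "skin_moisture": 41}))
# -> {'heart_rate': 'normal', 'skin_moisture': 'comfortable'}
```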
In some embodiments, server 10 may determine the personality type of the user based on the processed user data and determine the emotional markers of the user based on the personality type of the user. In some embodiments, the processed user data includes personality type data, and server 10 may determine the personality type of the user by comparing the personality type data with stored personality type data indicating a correlation between personality types and personality type data.
In some embodiments, the processed user data includes cognitive-emotional ability data, and server 10 may determine one or more states of one or more cognitive-emotional abilities of the user by comparing the cognitive-emotional ability data with stored cognitive-emotional ability data indicative of a correlation between states of cognitive-emotional ability and cognitive-emotional ability data.
In some embodiments, the server 10 may determine at least one of the following: one or more emotional states of the user, one or more attention states of the user, one or more social affiliation states of the user, one or more motivational states of the user, one or more reappraisal states of the user, and one or more insight states of the user. Server 10 may determine one or more states of one or more cognitive-emotional abilities of the user based on at least one of these states.
FIG. 3 illustrates example relationships between different forms of user data and how they relate to emotional marker metrics using cognitive-emotional abilities and personality types. The framework may be stored at server 10 (e.g., in database 12) as code instructions and data records that map parameters of cognitive-emotional abilities and personality types to user data and product variables. Row 32 shows techniques and analysis procedures for determining different forms of user data associated with a user. Row 34 shows different types of cognitive-emotional state detection methods based on the type of user data captured. Server 10 may implement different types of cognitive-emotional state detection methods, detect the type of user data captured, and select the appropriate detection method to process the user data. For example, eye tracking data may enable system 100 to sense a user's attention level, while 3D modeling and analysis of a user's face and body may enable system 100 to sense one or more emotions of the user. Rows 36 and 38 show different types of cognitive-emotional abilities. Rows 31 and 33 show different levels or states of different aspects of the user's personality type. The user data also provides different types of data to calculate body markers for the user. There may be different components, dimensions, and sub-dimensions for the body markers. Example body marker components may be body parameters (size, volume, weight, height); mobility parameters (leg, arm, chest, body); activity (high/low intensity); preferences (desired sensations); and sensory parameters (touch, taste, smell, sound, light). These may be broken down into individual biomechanical components.
Fig. 4 illustrates a method 400 of providing an interface 32 for product personalization.
The method 400 may be implemented by the server 10. For example, the server 10 may select a product category (apparel, hardware, software, content, services). The server 10 may quantify all the variables of the product. Example variables include weight, stretch, color, temperature, time, environment, and the like. The server 10 may associate a variable with a user sensory state. For example, for products and services involving taste, smell and color are variables that can be associated with taste. The server 10 may take the identified variables and define a quantifiable target based on the strongest determinants of the desired sensory state. For example, smell X and color Y = sweet. The server 10 may further prioritize the identified variables (via threshold parameters) to produce unique quantified targets based on the user's body markers or emotional markers, which may correspond to values representing the individual's bodily and emotional uniqueness and their optimal sensory states. For example, the smell X may be a target for the general population, but the color Y may be personalized for the individual to obtain the best sensory state. As an example of a product or service involving taste, server 10 may calculate taste metrics or values, such as, for person A: smell X and color Y (0.7) = sweet. The server 10 calculates these product variables personalized to the user, which can be used as input data for the recommendation or creation process.
The server 10 may define quantifiable targets for different desired sensory states. The server 10 may use different forms of statistical analysis (e.g., principal component analysis) to define quantifiable targets by identifying the variables that have the greatest impact on the sensory state. These variables are then placed in an experimental protocol whereby the server 10 varies the quantifiable metrics to see which values drive the sensation. Experimental data can be captured from the laboratory as look-up tables and logic gates. For example, the data may indicate that skin moisture is the highest-impact variable for the 'comfortably warm' sensation. The server 10 may then test different percentages of skin moisture to find that skin moisture between 30-50% is the quantified target for perceived warm comfort. Bra comfort variables are another example. The server 10 can define the peak acceleration of the chest as an important variable and can define specific acceleration targets in m/s² for different chest volumes.
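To make the statistical step concrete, the following sketch ranks candidate variables by absolute correlation with a rated sensory score and stores the winning variable's experimentally derived target range; the trial data and ranges are invented, and simple correlation stands in for whatever analysis (e.g., principal component analysis) the server actually applies.

```python
def impact_ranking(trials: list, variables: list, score_key: str) -> list:
    """Rank variables by absolute Pearson correlation with the sensory score."""
    def corr(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (vx * vy) if vx and vy else 0.0
    scores = [t[score_key] for t in trials]
    return sorted(variables,
                  key=lambda v: abs(corr([t[v] for t in trials], scores)),
                  reverse=True)

trials = [  # lab trials: variable settings plus the rated "warm comfort" score
    {"skin_moisture": 20, "fabric_weight": 180, "warm_comfort": 0.3},
    {"skin_moisture": 40, "fabric_weight": 150, "warm_comfort": 0.6},
    {"skin_moisture": 60, "fabric_weight": 160, "warm_comfort": 0.9},
]
print(impact_ranking(trials, ["skin_moisture", "fabric_weight"], "warm_comfort"))

# The experimentally derived target is then stored as a look-up entry:
QUANTIFIED_TARGETS = {("warm_comfort", "skin_moisture"): (30, 50)}  # percent
```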
As another example, particular HRV data may be associated with emotional sensations such as calm or stress. For example, in the case of a bra, server 10 may define the bra structure as underwire X + material modulus Y, determine whether a given bra structure meets an acceleration target, and thus provide a value for the comfort feel of the bra for the user. As a further example, in the case of pants, hip compression A and thigh compression B may provide the user with a feeling of confidence. As another example, the server 10 may define smell X and light color Y as targets for a feeling of calm.
The server 10 may further prioritize the variables using unique quantified targets based on individual bodily and emotional uniqueness and the individual's optimal sensory state. The server 10 may update the product variables and adjust the weights for different target sensory states. The order or number of variables may be altered individually. For example, comfort for person 1 may be defined as requiring a peak acceleration of 3-5 m/s², while person 2 may need a peak acceleration of 7-9 m/s². Each individual may also have unique needs for under-band compression or other components of the bra. Instead of defining bra comfort for all users as a peak acceleration of 5 m/s² and an under-band compression of 13 mmHg, server 10 may generate a personalized product (bra) with variable values customized for the individual. As another example, thermal comfort may be a unique variable for each user. Person 1 may have a specific requirement of 30% skin moisture, and this may be combined with a specific material surface roughness or conductivity measure to customize the product for the user. Similarly, the server 10 may normalize the range of HRV values to match the user's ability to manage stress. By way of example, person A with HRV x and person B with HRV y may feel equally calm given their respective abilities to manage stress.
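A sketch of how population-level targets might be overridden per user; all variable names, units, and ranges are illustrative only.

```python
# Population defaults vs. per-user overrides for quantified targets; a sketch
# of how the server might personalize variable ranges (values are invented).
POPULATION_TARGETS = {
    "peak_acceleration_m_s2": (4.0, 6.0),
    "underband_compression_mmHg": (12.0, 14.0),
    "skin_moisture_pct": (30.0, 50.0),
}

def personalized_targets(user_overrides: dict) -> dict:
    """Start from population targets and replace any variable for which the
    user's body/emotional markers imply a unique range."""
    targets = dict(POPULATION_TARGETS)
    targets.update(user_overrides)
    return targets

def meets_targets(product_variables: dict, targets: dict) -> bool:
    """True when every present product variable falls inside its target range."""
    return all(lo <= product_variables.get(v, lo) <= hi
               for v, (lo, hi) in targets.items())

person_1 = personalized_targets({"peak_acceleration_m_s2": (3.0, 5.0)})
person_2 = personalized_targets({"peak_acceleration_m_s2": (7.0, 9.0)})
bra = {"peak_acceleration_m_s2": 4.5, "underband_compression_mmHg": 13.0,
       "skin_moisture_pct": 40.0}
print(meets_targets(bra, person_1), meets_targets(bra, person_2))  # True False
```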
At 402, user device 16 captures user data for a user session over a period of time. The server 10 may also obtain user data from different sources or use predictive models to predict user data. For product recommendation or creation, the server 10 obtains user data to make measurements of the user or guest. The user data may be used to make physical measurements of the user, and additional data may be derived by the server 10 or the user device 16 through a predictive model. The server 10 may process user data captured by the user device 16 to extract and test the personalized metrics. For example, the user device 16 may measure a particular chest acceleration of the user. By prediction, the server 10 may use a data model, so the user device 16 may not need to physically measure all aspects of the user but may still obtain data representing those aspects to generate a product. Server 10 may take a known bra size, in conjunction with the user's fit preferences and activities, as input to the predictive model to accurately predict bra acceleration.
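As an illustration of such prediction, a toy model standing in for the server's data model; the factors and formula below are invented, not a disclosed predictor.

```python
# Toy predictor: estimate peak chest acceleration from known band size,
# fit preference, and activity intensity instead of measuring it directly.
BAND_SIZE_FACTOR = {"32": 0.9, "34": 1.0, "36": 1.1}
FIT_FACTOR = {"compressive": 0.8, "regular": 1.0, "relaxed": 1.2}

def predict_peak_acceleration(band: str, fit: str, activity_intensity: float) -> float:
    """Predicted peak acceleration in m/s^2 (hypothetical linear formula)."""
    base = 3.0 + 4.0 * activity_intensity  # higher intensity, more movement
    return base * BAND_SIZE_FACTOR[band] * FIT_FACTOR[fit]

print(f"{predict_peak_acceleration('34', 'compressive', 0.8):.1f} m/s^2")
```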
The user data may be quantitative (e.g., body measurements) or qualitative (e.g., perceived preferences). The user data is processed by a generative model that creates a product or service matching the input user data. The interface 32 may display output and receive input commands to modify or purchase/use the generated product (or service). If a product is purchased, manufacturing instructions may be sent to the manufacturing queue 34 to manufacture the product and deliver it to the user. As described above, the term product as used herein extends to services, and if a service is selected, the content may be delivered electronically to the user device 16. Once the user has used the product or service, the user can close the loop by providing feedback data, which is transmitted and stored, to improve the accuracy of the generative model. If a user receives a physical product, it can be returned to be broken down at the end of its life to create a new product.
For example, at 402, a sensor may capture user data of a user session over a period of time. The captured user data is processed at the server 10 to extract user attributes. The server 10 stores and updates in memory the attribute database 12 of product measurement records, movement characteristics, perceptual preference characteristics, body marker characteristics, emotional marker characteristics, user records, and generated design models. Server 10 may extract user attributes from the user data and associate them with the product variables, the user attributes including measurement metrics, movement metrics, perceptual preference metrics, body marker metrics, and emotional marker metrics.
The server 10 processes the user data of different data types to select different categories of products (or services) quantified by different product variables. For example, different product or service subcategories may be identified to determine the variables of the product. For example, the product may be a personalized brassiere or pants. Example services include video content and services for instruction and activity.
For a personalized bra, server 10 captures movement data and sensory preferences as input data to a generative design model that creates digital artwork or patterns, to be laser cut, applied, or screen printed, or other design processes that impart specific mechanical properties to achieve movement management and the desired sensory state for the user. The server 10 may generate associated files for automated assembly and manufacture of the product. The product is assembled and shipped to the user. The movement data may be associated with parts of the user's body, such as chest movement data.
For personalized pants, the server 10 captures leg movement data, including movement variability and gait, as well as user preferences for feelings of balance, speed, and flow, as input data to a generative design model that creates digital artwork or patterns, to be laser cut, applied, or screen printed, or other design processes that impart specific mechanical characteristics to achieve movement management and the desired sensory state for the user. The server 10 may generate associated files for automated assembly and manufacture of the product. The product is assembled and shipped to the user.
For personalized service content, examples are exercise and nutrition programs involving different types of content. The server 10 may capture measurements, activity preferences, dietary restrictions, etc. via the user device 16. Server 10 may input this data into a generative model to generate exercise content compiled to match the input. A meal plan menu may be generated to match the input and the user may be provided with the planned content to follow and a meal plan or package may be sent according to the generated menu.
The server 10 uses the body marker metrics and the emotional marker metrics to generate a product or service. For example, the server 10 provides user content and products (diaries, meditation devices) based on personality trait values that may be calculated from user data captured along a personalized emotional fitness journey. The server 10 classifies the body marker data, emotional marker data, or social marker data to map to different variables of a product or service. For example, the server 10 may process the user data to determine that the user needs support in handling anxiety and deliver content and products that help control anxiety. As an example of body marker data, the server 10 may process user data to determine that the user has low flexibility and deliver content and products intended to improve flexibility. As an example of social marker data, the server 10 may process the user data to determine that the user is introverted, and the server 10 may generate a product that provides the user with more inwardly reflective content. The server 10 may process the user data to determine that the user is extroverted, and the server 10 may generate a product that provides the user with group classes or content based on group discussion.
The method 400 may involve, for example, a user providing credentials to the user device 16 at the user interface 24 to trigger product personalization and real-time data capture to improve the user's overall physical or emotional health over a current time period based on real-time user data. For example, a user activates an emotional wellness application (not shown) stored in memory 20 on user device 16 to trigger the user interface 24. The emotional wellness application invites the user to input user data to user device 16. User device 16 receives user data from user interface 24 that may be collected from the different sensors 24, 26, 28 in real time to provide input data for generating a product based on (near) real-time calculations of emotional well-being metrics. For example, in response to activating the emotional wellness application, the user may be prompted to complete a series of exercises and/or questionnaires, and the user interface 24 collects real-time user data throughout the series of exercises or other prompts. For example, a questionnaire may be presented to the user on the user interface 24, and the user may be asked to answer one or more questions included in the questionnaire. Alternatively, the user may be prompted to speak aloud about emotionally difficult events or how the user feels toward others in their life. The user interface 24 may collect the captured audio data for provision to the server 10. In other instances, where consent is obtained from the user through the user interface 24, various forms of biometric data captured from the different sensors 24, 26, 28 in real time may be passively recorded in the user's daily life. In addition, non-biometric data, such as positioning data associated with the user, may also be recorded at the user device 16. Such data may be processed to detect and quantify changes in the levels of cognitive-emotional abilities, as well as any other information used to measure emotional markers of the user, as described in further detail below.
The user may provide an answer, for example, by entering text into the user interface 24, or alternatively may speak the answer. Microphone 28 may detect a spoken answer, and processor 22 may convert the utterance into audio data. Before, during, or after completion of the questionnaire, the emotional wellness application may send control commands to cause the camera 26 to record images and/or video of the user. The images may include at least a portion of the user's body, at least a portion of the user's face, or a combination of the two. The captured images are then converted into image data (which may include video data) that forms part of the overall user data received at the user device 16.
The combination of audio data, text data and image data, as well as any other data entered into the user device 16 and related to the user, may be referred to hereinafter as user data. Other suitable forms of data may be included in the user data. For example, the user data may include other observable data collected by one or more internet of things devices, social network data obtained by social network analysis, GPS or other positioning data, activity data (e.g., number of steps), heart rate data, heart rate variability data, data indicating a duration of time spent using the user device or one or more particular applications on the user device, data indicating a reaction time to notifications appearing on the user device, social graph data, phone log data, and call recipient data.
For example, the server 10 may store the user data in a record indexed by the user's identifier. The user device 16 may transmit the captured user data to the server 10 for storage in the database 12. In some embodiments, user device 16 may use the emotional wellness application to pre-process user data prior to transmission to the server 10. For example, preprocessing by the emotional wellness application may involve extracting features from the raw data. The user device 16 may transmit the extracted features to server 10 instead of or in addition to the raw data. The extracted features may facilitate efficient transmission and reduce the amount of data transmitted between the user device 16 and the server 10.
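A minimal sketch of this on-device preprocessing, assuming a numeric sensor trace summarized into a few features before upload; the feature choices are illustrative.

```python
import json
import statistics

def extract_features(raw_samples: list) -> dict:
    """On-device preprocessing: summarize a raw sensor trace into a few
    features so only the summary needs to be transmitted to the server."""
    return {
        "mean": statistics.fmean(raw_samples),
        "stdev": statistics.pstdev(raw_samples),
        "peak": max(raw_samples),
        "n": len(raw_samples),
    }

raw = [0.1, 0.4, 0.35, 0.9, 0.2] * 200  # e.g. 1000 accelerometer readings
payload = json.dumps(extract_features(raw))
print(len(json.dumps(raw)), "->", len(payload), "bytes")  # large -> small
```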
According to some embodiments, in addition to user data captured by the sensors 24, 26, 28 of the user device 16, wearable sensors (e.g., heart rate monitors, blood pressure sensors) located on the user may provide additional data (such as the user's physical activity level) and may be input to the user device 16 and may form part of the user data received at the user device 16.
At 404, in response to receiving the product request from the interface 32, the server 10 generates a product and an associated code file. For example, server 10 generates the product and associated code file by processing the extracted user attributes and the target parameters of the target sensory state using the generative design model and the attributable database. As part of the product generation, the server 10 selects a product category and product variables. For example, the product may be a personalized bra and the associated code file may be a pattern and material file generated by a 3D modeling system coupled to or in communication with server 10.
For product generation, an existing product or service type may be identified from a table and then customized for the user based on user data. For example, a physical or emotional marker of the user may be used to select the product type. The components of the product are identified, personalized to the user data, and assembled to produce the product. The server 10 may encode the components that may be assembled together to produce the product. Multiple users may receive the same product type, each with components personalized based on their own user data.
The server 10 may automatically generate a bill of materials (BOM) file for the product, listing the components needed to create it.
The server 10 may generate customized content and files for assembler instructions. Inputs (e.g., time, activity preferences, device preferences, physical markers, emotional markers) may be combined with pre-captured video to create output content. The server 10 may generate on-demand content with fully virtual facilitators and instructors based on the inputs. The server 10 may identify and arrange the subcomponents of the content as part of generating the product on demand.
At 406, the interface 32 displays a visualization of the product together with selectable purchase options. The interface 32 then receives purchase instructions for the product in response to selection of a selectable purchase option displayed with the visualization of the product. The server 10 may generate data for the product visualization using a rendering operation.
The server 10 may generate a visualization of the product generated by the generative design model. There may be coded instructions for the product, and the rendering engine may display the product visualization as part of the system interface 32. The visualization data is linked to the product and the generative model through a backend server. For the bra example, movement data from a chest sensor is captured and used to create a movement intervention in the product, and the visualization may show a visual movement trajectory ("butterfly") to show the user the effect of the user's data on the product. As another example, a visualization of the user's balance may show the user why the user should attend a recommended course to help improve their overall balance.
The server 10 may generate custom manufacturing instructions as part of the product generation process. The server 10 may convert the image into knitting instructions, 3D printing instructions, or laser cutting instructions, and encode the manufacturing data in an associated file.
At 408, the server 10 triggers a product manufacturing process in response to receiving a purchase instruction for the product. The server 10 transmits manufacturing instructions for the product to the manufacturing queue 34 to trigger production. The system 100 may coordinate delivery of the product. Throughout the product lifecycle, up to its expiration, the server 10 may receive user feedback. As described above, the product may also contain content delivered by a content platform or service. The server 10 may receive data regarding the user's consumption of the content as feedback.
At 410, the server 10 receives user feedback from different sources and updates its database 12 over the product lifecycle. The server 10 updates the generative design model using the feedback.
In some embodiments, the product is a service related to content. At 412, the server 10 generates content and delivers the content to the user or another device at the interface 32.
Fig. 5A and 5B illustrate in more detail the operation of a method 400 of providing an interface for product personalization. For example, the operations of 402, 404, 406, 408, 410 may be implemented as described with respect to fig. 4, 5A, 5B.
As described above, at 402, user device 16 captures user data for a user session over a period of time. Different capture techniques may be used to capture the user data. The captured user data is processed at the server 10 to extract user attributes or metrics. For example, the user attributes may include shape/measurement metrics, movement metrics, perceptual preference metrics, emotion marker metrics, social marker metrics, traditional CRM metrics, preferred sensory state metrics, device metrics, and the like. The server 10 may extract different user attributes from the user data and associate these metrics with the product variables. For example, the metrics may be associated with the product variables through a predefined lookup table or logic gates of the server 10.
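A minimal sketch of such a predefined lookup table, assuming illustrative metric names, values, and product variables:

```python
# Illustrative lookup table associating extracted user metrics with product
# variables; the category names and mappings are assumptions for the sketch.
PRODUCT_VARIABLE_TABLE = {
    ("movement", "high_impact"): {"support_level": "high"},
    ("movement", "low_impact"): {"support_level": "light"},
    ("thermal_preference", "runs_warm"): {"fabric": "breathable_mesh"},
    ("thermal_preference", "runs_cool"): {"fabric": "brushed_knit"},
}

def metrics_to_variables(metrics: dict[str, str]) -> dict[str, str]:
    """Look up each (metric, value) pair and merge the product variables."""
    variables: dict[str, str] = {}
    for metric_name, value in metrics.items():
        variables.update(PRODUCT_VARIABLE_TABLE.get((metric_name, value), {}))
    return variables

print(metrics_to_variables(
    {"movement": "high_impact", "thermal_preference": "runs_warm"}))
# {'support_level': 'high', 'fabric': 'breathable_mesh'}
```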
User device 16 transmits user data to server 10 via network 14. The server 10 then uses different processing techniques to process the user data. For example, the server 10 may process the image data using any of a variety of facial and/or body analysis or modeling techniques known or to be discovered in the art. In addition, the server 10 may process the voice data using any voice analysis technique known or yet to be discovered in the art, including pitch analysis techniques. In addition, the server 10 may process user input data (which may include audio data or text data) using different voice, text, social networking, or behavioral analysis techniques (including pitch analysis techniques and semantic analysis techniques) to extract features or metrics that may be used to calculate a user's real-time emotion signature. The real-time emotion markup may be used for product generation (or recommendation) that is displayed as a visualization through the interface 32 or user interface 24 of the user device 16. The server 10 may process user input data (which may contain audio data or text data) using different voice, text, social networking, or behavioral analysis techniques (including pitch analysis techniques and semantic analysis techniques) to extract features or metrics that may be used to calculate real-time body markers for the user.
By processing the user data in this manner, server 10 is able to identify one or more emotion levels or states of the user. In addition to emotion sensing (e.g., determining a current emotional state of a user), server 10 is capable of performing operations to calculate different metrics corresponding to attention sensing (e.g., determining the user's external attention allocation, internal attention allocation, etc.), prosocial sensing (e.g., determining emotional expressions and behaviors of the user toward others, etc.), motivational state sensing, reappraisal state sensing, and insight state sensing. Such sensing techniques are examples of state detection sensing techniques that may be used to quantify the cognitive-affective capability level of an individual user and to determine the personality type of the individual based on the collected data.
For example, metrics or data corresponding to attention sensing may be determined by processing eye tracking data and by 3D modeling of the user's face and/or body and the context or environment in which the user is located, as well as the object of interest (object of attention) or the lack of such an object. Prosocial sensing involves detecting positive/negative behaviors of a user toward another person or toward themselves (e.g., giving praise, conveying positive/negative emotions such as smiles, mentioning positive/negative behaviors that another person has taken, etc.).
Motivational sensing involves computation of metrics based on detection and differentiation of two motivational subsystems, referred to as the approach and avoidance systems, that typically guide user behavior based on rewards or punishments (e.g., identifying a user's motivations by the way they describe their reasons for completing a task, specific emotions displayed during goal-oriented behavior, etc.). Such motivations may be determined by processing the user's data input and activity data.
Reappraisal state sensing involves computation of metrics based on the detection of a user's recall of events and their emotional associations, which are attenuated over time by either active or passive means (e.g., having the user recall difficult events over time and monitoring changes in emotional expression during recall). Extinction and reconsolidation may depend on many factors, such as the level of processing, emotional salience, the degree of attention paid to the stimulus, and expectations formed at the time of encoding, or reinforcement mediated by reconsolidation, regarding how the memory will be appraised later. Extinction does not erase the original association; rather, it is a novel learning process that occurs when a memory (explicit or implicit) is retrieved and stimuli previously conditioned to elicit a particular behavior or set of behavioral responses are temporarily destabilized and their associations weakened by active or passive means. Such recall may be determined by processing the user's data input, biometric data, and historical body and emotion markers and associated recommendations.
Insight sensing involves computation of metrics based on detecting "non-self" or non-attachment, which is the distinction between a person's phenomenal experience of self and their thoughts, emotions, and sensations as "object-like", and is described as "release from mental fixations". Insight sensing also involves decentration, which introduces "space between a person's perception and response", allowing the individual to disengage from or "step outside" their immediate experience and take an observer's perspective in order to gain insight into and analyze their habitual emotional and behavioral patterns. Insight sensing can detect how an individual relates to their thoughts, feelings, emotions, or physical sensations, rather than identifying with them, by how the individual describes their experience and other non-verbal cues. Such sensing may be determined by processing the user's data input, biometric data, and historical body and emotion markers and associated recommendations.
The personality type of the user may generally be estimated by metrics corresponding to values of one or more states or levels of any of a variety of different models of personality types, such as the five-factor model: openness/intellect, conscientiousness, extraversion, agreeableness, and neuroticism/emotional stability. The emotional state of the user may be determined by calculated metrics comprising one or more of the following: amusement, anger, awe, boredom, confusion, contempt, contentment, coyness, desire, disgust, embarrassment, fear, gratitude, happiness, interest, love, pain, pride, relief, sadness, shame, surprise, sympathy, and triumph. The cognitive-affective capabilities of the user can include one or more of the following: intention and motivation, attention regulation, emotion regulation, memory extinction and reconsolidation, prosociality, non-attachment, and decentration.
Automatic detection/recognition of a person's emotional characteristics may be performed by processing user data to extract and evaluate features related to those emotional characteristics.
Based on the determined personality type and states of cognitive-affective capabilities of the user, server 10 uses data received from user device 16 to determine an emotion signature for the user. According to some embodiments, user data stored in database 12, as captured (near) real-time by user device 16, is accessed by server 10, and the emotion signature is a combination of data values corresponding to the determined personality type and states of cognitive-affective capabilities of the user. The emotion signature may serve as a unique metric (or combination of metrics) that identifies the user's current overall emotional health.
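One way to picture the emotion signature as "a combination of data values" is a record holding personality-type scores and cognitive-affective capability states that can be flattened into a vector; the field names and scales below are assumptions for the sketch, not the patent's schema.

```python
# A minimal sketch of an emotion signature: five-factor personality scores
# plus cognitive-affective capability states, stored as one record per user.
from dataclasses import dataclass

@dataclass
class EmotionSignature:
    personality: dict[str, float]   # e.g. five-factor scores in [0, 1]
    capabilities: dict[str, float]  # e.g. attention/emotion regulation levels

    def as_vector(self) -> list[float]:
        """Flatten to a fixed-order vector for storage or comparison."""
        keys = sorted(self.personality) + sorted(self.capabilities)
        values = {**self.personality, **self.capabilities}
        return [values[k] for k in keys]

sig = EmotionSignature(
    personality={"openness": 0.7, "conscientiousness": 0.6,
                 "extraversion": 0.4, "agreeableness": 0.8,
                 "emotional_stability": 0.5},
    capabilities={"attention_regulation": 0.55, "emotion_regulation": 0.62},
)
print(sig.as_vector())
```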
At 502, the server 10 stores and updates the attributable database 12 in memory using the categorized metrics. The server 10 processes shape and measurement metrics, movement metrics, perceptual preference metrics, body marker metrics, emotion marker metrics, social marker metrics, traditional CRM metrics, preferred sensory state metrics, and device metrics to update the attributable database 12, generating categorized features corresponding to these metrics.
The server 10 may convert the user data into features of the attributable database 12. As an example of measurement data, server 10 maps a right bicep measurement under the right bicep attribute in the attributable database 12. There may also be machine learning or classification that uses Natural Language Processing (NLP) to label and classify open text input.
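As a hedged illustration of labeling open text input, the sketch below uses simple keyword matching; a production system would presumably use a trained NLP classifier, and the vocabulary here is invented for the example.

```python
# Rule-based stand-in for NLP labeling of open text: each attribute label is
# triggered when the text contains one of its (assumed) keywords.
LABEL_KEYWORDS = {
    "thermal_preference": {"warm", "hot", "sweaty", "cold", "chilly"},
    "movement": {"run", "running", "yoga", "cycling", "lift"},
    "fit_preference": {"tight", "loose", "compressive", "relaxed"},
}

def label_open_text(text: str) -> set[str]:
    """Return the attribute labels whose keywords appear in the text."""
    tokens = {t.strip(".,!?").lower() for t in text.split()}
    return {label for label, words in LABEL_KEYWORDS.items()
            if tokens & words}

print(label_open_text("I run hot in yoga class and like a loose fit"))
# {'thermal_preference', 'movement', 'fit_preference'}
```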
At 404, the server 10 generates a product and associated code file. For example, server 10 generates the product and associated code file by processing the extracted user attributes and the target parameters of the target sensory state using the generative design model and the attributable database. As part of the product generation, the server 10 selects a product category and product variables, and selects a generative design model for the product generation. The associated code file may contain a BOM file, and the generative design model may generate the BOM file. The associated code file may contain a content assembly file, and the generative design model may generate the content assembly file.
Example generative design models include models for 3D modeling software, or plug-ins for generating geometric representations of a product as part of its personalization. Example generative design models also include software simulations. A generative design model may additionally generate a BOM file and content assembly instructions, so the resulting product may be accompanied by a BOM file and content assembly instructions. The manufacturing queue 34 may use the BOM file and content assembly instructions to manufacture the personalized product.
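The following sketch illustrates, under assumed sizing heuristics and component names, how a generative design step might turn user attributes and product variables into a parametric spec plus a BOM file for the manufacturing queue; it is not the patent's actual model.

```python
# Hypothetical generative step for the bra example: user attributes become a
# parametric spec, and the spec becomes a BOM file (the "associated code file").
import json

def generate_bra_spec(chest_cm: float, support_level: str) -> dict:
    band = round(chest_cm * 0.92, 1)          # assumed sizing heuristic
    strap_width = 2.5 if support_level == "high" else 1.5
    return {"band_length_cm": band, "strap_width_cm": strap_width}

def generate_bom(spec: dict, fabric: str) -> list[dict]:
    return [
        {"component": "band", "material": fabric,
         "length_cm": spec["band_length_cm"]},
        {"component": "straps", "material": "elastic",
         "width_cm": spec["strap_width_cm"], "quantity": 2},
    ]

spec = generate_bra_spec(chest_cm=88.0, support_level="high")
bom_file = json.dumps(generate_bom(spec, fabric="breathable_mesh"), indent=2)
print(bom_file)  # the file a manufacturing queue could consume
```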
At 406, the interface 32 displays a visualization of the product together with selectable purchase options. The server 10 may generate the product and associated code file containing descriptive visualizations of the data as instructions for the interface 32 to display the product visualization. The server 10 may also generate simulated product options. The associated code file may contain educational services and content. The associated code file may contain content that supports building an improved practice or habit. For example, the content may contain physical activities.
In an example embodiment, the product is a garment. The attributable database 12 may have simulated garment records. At 406, hardware server 10 generates simulated product options as part of the product and associated code files. Interface 32 displays a visualization of the simulated product options. The simulated product options may include software physical simulations, hardware physical simulations, static 3D viewers, and AR/VR experience content.
The interface 32 then receives purchase instructions for the product in response to selection of a selectable purchase option displayed with the visualization of the product.
At 408, the server 10 triggers a product manufacturing process in response to receiving a purchase instruction for the product. The server 10 may support on-demand and on-site product manufacturing. For example, the server 10 may support distribution center (DC) level or vendor-level on-demand manufacturing.
The system 100 may trigger instructions to deliver the product to the user and track data about the delivery process. The tracking data may include location data, transportation data, and event retrieval data.
The server 10 transmits manufacturing instructions for the product to the manufacturing queue 34 to trigger production. The system 100 may coordinate delivery of the product. The system 100 may monitor the product lifecycle to receive data about the product until its end of life. The data may relate to resale or recycling of the product. The data for product recovery may include data for polymer decomposition, biodegradation, polymer composting, bio-production, and the like.
The server 10 may receive different types of feedback regarding the product. As described above, the product may also contain content delivered by a content platform or service. The server 10 may receive data regarding the user's consumption of the content as feedback. The consumption data may include location data, household data, and event data. The data may also be analytical data regarding product engagement, usage type and duration data, sensor data, and data indicating whether the product is shared with other users. The data may also be review data and trial data about the product.
At 410, the server 10 receives user feedback and updates the generative design model using the feedback data. The server 10 uses the feedback data to refine the generative design model over time. For example, the user may provide feedback data indicating that the user is not satisfied with the recommended product. The server may use this feedback data to redefine the user's preferred product attribute ranges in the generative design model to better conform to the user's preferences.
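A minimal sketch of redefining a preferred attribute range from negative feedback, assuming preferences are stored as numeric ranges; the attribute, values, and shrink factor are illustrative assumptions.

```python
# When a user rejects a recommended product, pull the nearest range boundary
# away from the rejected attribute value so future picks avoid it.
def update_preferred_range(low: float, high: float,
                           rejected_value: float,
                           shrink: float = 0.25) -> tuple[float, float]:
    """Move the boundary closest to the disliked value inward."""
    mid = (low + high) / 2
    if rejected_value <= mid:
        low = low + shrink * (mid - low)
    else:
        high = high - shrink * (high - mid)
    return low, high

# User disliked a very compressive fit (compression score 0.9 of 1.0):
print(update_preferred_range(0.2, 1.0, rejected_value=0.9))  # (0.2, 0.9)
```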
FIG. 6 illustrates a method 600 of providing an interface for product recommendation. Method 600 may involve operations similar to those of method 400 for generating a product. For example, the operations of 402 and 502 may be implemented as described with respect to fig. 4, 5A, 5B. The server 10 implements a method 600 for providing product recommendations to the interface 32.
The user data may be quantitative (e.g., body measurements, biometric data) or qualitative (e.g., perceptual preferences). The user data is processed by the server 10 (with a recommendation engine), which queries the attributable database of all products and services to pick out those that most closely match the inputs. Selected items from the database are presented for selection by the user. Once the user selects and tries the product, the user can close the loop by providing feedback data regarding whether the recommendation met the user's needs. The feedback is stored and used to improve the accuracy of the recommendation engine.
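The query step might look like the following sketch, which scores each catalog record against the user's inputs and returns the closest matches; the catalog, attribute names, and distance function are assumptions for illustration.

```python
# Best-match query: rank products by mean absolute difference between the
# user's attribute values and each product's attribute values (lower wins).
def match_score(user: dict[str, float], product: dict[str, float]) -> float:
    shared = user.keys() & product.keys()
    return sum(abs(user[a] - product[a]) for a in shared) / max(len(shared), 1)

def recommend(user: dict[str, float],
              catalog: list[dict], top_n: int = 2) -> list[str]:
    ranked = sorted(catalog,
                    key=lambda p: match_score(user, p["attributes"]))
    return [p["name"] for p in ranked[:top_n]]

catalog = [
    {"name": "studio tight", "attributes": {"compression": 0.8, "warmth": 0.3}},
    {"name": "trail pant", "attributes": {"compression": 0.4, "warmth": 0.7}},
    {"name": "relaxed jogger", "attributes": {"compression": 0.2, "warmth": 0.5}},
]
print(recommend({"compression": 0.25, "warmth": 0.55}, catalog))
# ['relaxed jogger', 'trail pant']
```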
As described above, at 402, user device 16 captures user data for a user session over a period of time. For example, the hardware processor 18 may be programmed with executable instructions for the interface to obtain user data for a user session over a period of time. The processor 18 may transmit a product request for the user session. The interface 32 provides product recommendations for the user session in response to the product request and receives a selection of one of the recommended products. The interface 32 may receive feedback data regarding the selected product. The user device 16 has one or more sensors for capturing user data during the time period. The user device 16 has a transmitter for transmitting the captured user data over a network to the interface 32 of the hardware processor 18 or the hardware server 10 for calculating the product recommendation.
At 502, the hardware server 10 is coupled to memory to access the attributable database 12. The server 10 uses the categorized metrics to store and update the attributable database 12 in memory. The server processes the shape/measurement metrics, movement metrics, perceptual preference metrics, body marker metrics, emotion marker metrics, social marker metrics, traditional CRM metrics, preferred sensory state metrics, and device metrics to generate categorized features.
At 602, the server 10 generates a product recommendation. The server 10 selects a product category and a product variable in response to receiving the product request from the interface 32. The server 10 calculates a target parameter of a target sensory state of the user using the extracted user attribute.
The server 10 calculates product recommendations using a recommendation engine to process the extracted user attributes and target parameters of the target sensory state.
In some instances, instead of creating a product from components based on input, server 10 may use a recommendation engine to search product database 12 for a best match to user input. The recommended products may then be personalized using the method 400 described herein. The server 10 may implement an intermediate step between recommendation and full personalization.
Different features may be extracted from the user data for product recommendation. Not every product/service recommendation requires all the data captured, but different products or services may access different data sets to generate product or service recommendations.
Recommended products may be displayed for selection within interface 32. The recommended products may be moved in a 3D display, such as rotated on a turntable.
At 604, the server 10 transmits the product recommendation to the interface 32 over a network. In an example embodiment, the product is a garment. The attributable database 12 may have simulated garment records. At 604, hardware server 10 generates simulated product options as part of the recommended product and associated code file. Interface 32 displays a visualization of the simulated product recommendations. The simulated product options may include software physical simulations, hardware physical simulations, static 3D viewers, and AR/VR experience content.
The interface 32 displays product recommendations. The server 10 receives a notification of the selected product from the interface 32.
At 606, the server 10 receives feedback data regarding the selected product. The server 10 may update the recommendation engine based on feedback data regarding the selected product. The server 10 may also use the feedback data to update the information capture process.
In some embodiments, the user attributes comprise quantitative user attributes, such as body metrics, and qualitative user attributes. The qualitative user attributes comprise emotion marker metrics of the user.
Fig. 7 illustrates in more detail the operation of the method 600 of providing an interface for product recommendation. For example, the operations of 402, 502, 602, 604, 606 may be implemented as described with respect to fig. 4, 5A, 5B, 6.
At 402, user device 16 captures user data for a user session over a period of time. The captured user data is processed at the server 10 to extract user attributes or metrics. For example, the user attributes may include shape/measurement metrics, movement metrics, perceptual preference metrics, emotion marker metrics, social marker metrics, traditional CRM metrics, preferred sensory state metrics, device metrics, and the like. The server 10 may extract different user attributes from the user data and associate these metrics with the product variables.
At 502, the server 10 stores and updates the attributable database 12 in memory using the categorized metrics. The server processes the shape/measurement metrics, movement metrics, perceptual preference metrics, body marker metrics, emotion marker metrics, social marker metrics, traditional CRM metrics, preferred sensory state metrics, and device metrics to generate categorized features.
At 602, the server 10 generates a product recommendation. The server 10 selects a product category and product variables in response to receiving the product request from the interface 32. The server 10 calculates target parameters of a target sensory state of the user using the extracted user attributes. For example, the server 10 generates the product recommendation by processing the extracted user attributes and the target parameters of the target sensory state using the generative design model and the attributable database. As part of the product recommendation, the server 10 selects a product category and product variables.
In some embodiments, generating the product recommendation involves the server 10 generating the product and associated code file. The associated code file may contain a BOM file, and the generative design model may generate the BOM file. The associated code file may contain a content assembly file, and the generative design model may generate the content assembly file.
At 604, the server 10 transmits the product recommendation to the interface 32 to display a visualization of the product recommendation together with selectable purchase options. The server 10 may generate product recommendations containing descriptive visualizations of the recommended products as instructions for the interface 32 to display visualizations of the product recommendations. The server 10 may also generate simulated product options. The server 10 may also generate associated content such as educational services and practices. Product recommendations may contain content that supports building an improved practice or habit. For example, the content may contain physical activities. The interface 32 may provide selectable purchase options for recommended products.
The interface 32 receives purchase instructions for the product in response to selection of a selectable purchase option displayed with the visualization of the product.
At 606, the server 10 receives user feedback on the product recommendation and updates the recommendation engine with the feedback data. The server 10 uses the feedback data to improve the recommendation engine over time. The feedback data may include review data and trial data about the product.
The method 600 for providing an interface for product recommendation may involve operations similar to the method 400 for providing an interface for product personalization. In some embodiments, method 600 involves generating product recommendations for a user based on physical or emotional markers of the user, and providing the recommendations through interface 32. In some embodiments, method 400 involves generating a product for a user based on a physical or emotional marker of the user, and providing a visualization of the generated product through interface 32.
Fig. 8 illustrates an example operation of capturing user data in more detail. As described above, at 402, user device 16 captures user data for a user session over a period of time. Different types of user data for a user session over a period of time may be captured using different capture techniques. Different capture techniques may be implemented and different types of user data captured using different types of user devices 16. In some embodiments, the server 10 may process the user data to generate a product. In some embodiments, the server 10 may process the user data to recommend a product. For example, recommended products may also be generated by the server 10.
The captured user data may include shape/measurement data, movement metrics, perception preference data, body marker data, emotion marker data, social marker data, resonance data, connectivity data, CRM data, device metrics, and the like. The server 10 may extract different user attributes from the user data and associate these metrics with the product variables as part of the product generation.
Shape/measurement data may be captured by different capture techniques such as 3D scanning, machine learning predicted shape or measurement output, manual measurement data, garment measurement data, and the like.
The movement metrics may be captured by different capture techniques, such as inertial measurement unit-based devices, computer vision devices, pressure-based devices, and the like. The sensory preference data may comprise different types of garment preference data, such as pant feel data, preferred hand feel data, thermal preference data (qualitative or quantitative), and movement feel data (qualitative). The body marker data may contain different types of physiological data, which may contain different types of quantitative or qualitative data such as preference data (e.g., a preferred exercise type), objective data, strengths and weaknesses data, medical history data (e.g., past injuries), physiological data (e.g., blood pressure), and the like. The emotion marker data may include different types of qualitative or quantitative data such as personality data, emotional state data, emotional fitness data, personal values data, objective data, strengths and weaknesses data (e.g., life cycle data), physiological data (HRV, respiration), and the like. The CRM data may include purchase history data, activity intention data, interaction history data, review data, and the like. Device data may be collected using radio frequency techniques such as NFC, RFID, or UWB.
The server 10 may derive biometric data through computer vision, such as deriving heart rate variability by processing facial video using photoplethysmography imaging. The server 10 may also derive metrics by classifying linguistic features or body-language emotion cues.
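As one concrete example of a biometric metric derivable from such signals, heart rate variability can be computed from inter-beat intervals (here via RMSSD), whatever the capture method; the interval values are illustrative.

```python
# Heart rate variability as the root mean square of successive differences
# (RMSSD) between consecutive inter-beat intervals.
import math

def rmssd(ibi_ms: list[float]) -> float:
    """RMSSD over a series of inter-beat intervals in milliseconds."""
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

intervals = [812.0, 845.0, 790.0, 860.0, 828.0]  # ms between beats
print(f"HRV (RMSSD): {rmssd(intervals):.1f} ms")  # ~50.1 ms
```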
The captured user data is processed at the server 10 to extract user attributes or metrics. For example, user attributes may include shape/measurement metrics, movement metrics, perceptual preference metrics, body marker metrics, emotion marker metrics, social marker metrics, traditional CRM metrics, preferred sensory state metrics, device metrics, and the like. The server 10 may extract different user attributes from the user data and associate these metrics with the product variables.
User device 16 transmits user data to server 10 via network 14. The server 10 then uses different processing techniques to process the user data. For example, the server 10 may process the image data using any of a variety of facial and/or body analysis or modeling techniques known or to be discovered in the art. In addition, the server 10 may process the voice data using any voice analysis technique known or yet to be discovered in the art, including pitch analysis techniques. In addition, the server 10 may process user input data (which may include audio data or text data) using different voice, text, social networking, or behavioral analysis techniques (including pitch analysis techniques and semantic analysis techniques) to extract features or metrics that may be used to calculate a user's real-time emotion signature. The server 10 may use voice, text, and social networking data to calculate body marker data. The real-time emotion markup may be used for product generation (or recommendation) that is displayed as a visualization through the interface 32 or user interface 24 of the user device 16.
In some embodiments, the server 10 may generate a product. Further, for example, recommended products may also be generated by the server 10.
Fig. 9 illustrates example operations of the product manufacturing process in more detail. As described above, at 408, the server 10 triggers the product manufacturing process in response to receiving a purchase instruction for the product. As shown, the server 10 may support on-demand and on-site product manufacturing. For example, the server 10 may support distribution center (DC) level or vendor-level on-demand manufacturing. The product manufacturing process may involve different operations such as 3D printing, cutting and sewing, laser cutting and bonding, knitting, and growing biological materials. The product manufacturing process may involve job preparation and queuing data.
In some embodiments, the product is a service related to content. At 412, the server 10 generates content and delivers the content to the user. Fig. 10 illustrates example operations of content generation and delivery in more detail. Content delivery may occur on-site, at home, or at an event. The content may be delivered to the interface 32 or to another device such as an immersive hardware device. The content may be for a community experience or an individual experience.
Fig. 11 illustrates example operations on the feedback data in more detail. At 410, the server 10 receives user feedback from different sources and updates its database 12 over the product lifecycle. The server 10 updates the generative design model using the feedback. Feedback may be collected during activities involving the product. The feedback data may be extracted from an interaction database, which may involve deleting tags from the user content. Feedback data may be extracted from review data using natural language processing of the text. The feedback data may come from engagement data captured as user data, such as eye tracking data related to the content, interaction tracking data, viewing time of the content, repeated viewing of the content, and so forth. The feedback data may include user-type data and duration data from device or equipment statistics. The feedback data may also come from sensor data such as position data.
The server 10 generates one or more personalized products or product recommendations in response to calculating data for the user's body or emotion markers. For example, the recommendation may include accessing or using specific content, coaches, events, or other experience recommendations to improve the body marker of the user. For example, the body marker may indicate that the user has difficulty performing cardiovascular exercise for long periods due to low oxygen intake. In response, the product recommendation may contain content to consume (e.g., video, one-to-one personal training) aimed at improving cardiovascular endurance and maximal oxygen uptake (VO2 max). As another example, the recommendation may include a recommendation to access or use specific content, coaches, events, groups, platonic/romantic matches, or other social or emotional learning experiences to improve the user's emotion marker. For example, the emotion marker may indicate that the user has difficulty interrupting negative psychological rumination due to low levels of decentration and non-attachment. In response, the product recommendation may contain content (e.g., video, audio, one-to-one therapy) aimed at teaching a particular meditation focused on decentration.
The personalized product or product recommendation generated by the system 100 and output to the interface 32 may take the form of a training program executed by the user. For example, the training program may include one or more microcycle phases (daily-to-weekly plans), one or more mesocycle phases (2-6 week plans), and one or more macrocycle phases (yearly plans). The intensity and volume of the training sessions may vary linearly or non-linearly. Although the level of cognitive-affective capability may vary over time, the level is typically trainable. Thus, by repeating interventions (e.g., meditation), a person's tendency to process, for example, emotional stimuli in a negative or positive way may vary based on training duration and consistency.
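A sketch of the periodized program structure, assuming a simple nested representation and an illustrative undulating intensity pattern; the cycle names, lengths, and loads are assumptions for the example.

```python
# A macrocycle built from mesocycles of weekly microcycles, with intensity
# varying non-linearly (three loading weeks, then a lighter deload week).
from dataclasses import dataclass

@dataclass
class Microcycle:          # daily-to-weekly plan
    week: int
    intensity: float       # relative intensity in [0, 1]

@dataclass
class Mesocycle:           # 2-6 week block
    name: str
    weeks: list[Microcycle]

def build_mesocycle(name: str, start_week: int, n_weeks: int) -> Mesocycle:
    pattern = [0.6, 0.75, 0.9, 0.5]  # assumed undulating load pattern
    weeks = [Microcycle(start_week + i, pattern[i % len(pattern)])
             for i in range(n_weeks)]
    return Mesocycle(name, weeks)

macrocycle = [build_mesocycle("base", 1, 4), build_mesocycle("build", 5, 4)]
for meso in macrocycle:
    print(meso.name, [w.intensity for w in meso.weeks])
```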
The personalized product or product recommendation generated by the system 100 and output to the interface 32 may take the form of a recommendation based on the physical fitness and skill level of the user. For example, a user interested in yoga lessons may receive a recommendation of a beginner lesson that is more appropriate for the user's skill level than an intermediate or advanced lesson, based on the user's limited history of similar physical exercise. As the user's physical fitness and skills improve, the system 100 may recommend more challenging courses.
The personalized product or product recommendation generated by the system 100 and output to the interface 32 may take the form of a recommendation based on the sensory state results desired by the user. For example, a user who expresses a desire to feel energetic may receive a recommendation for high energy physical activity, while a user who expresses a desire to feel calm may receive a recommendation for directed meditation aimed at relaxing and calm.
The system 100 can store data of product courses or product recommendations in the database 12 and server 10 and generate personalized products or recommendations by identifying one or more product recommendations from the stored data. For example, the recommendation may be generated based on known recommendations stored in association with known personality types and states of cognitive-affective capabilities, or based on known recommendations stored in association with known physical characteristics. Such associations may be stored, for example, in database 12 and may be accessed by server 10.
Over time, by repeatedly collecting user data, the server 10 may track or monitor the user's body markers and/or emotion markers. The system 100 continues to receive user data from the user device 16 in real-time to recalculate the user's body markers and/or emotion markers based on the updated user data. The system 100 may continuously collect user data and recalculate body markers and/or emotion markers. For example, after generating a personalized product or product recommendation, the user may repeatedly or periodically interact with the user device 16 to obtain or capture additional user data for calculating updated body or emotion markers. The server 10 may compare the updated body or emotion marker with the last known or calculated body or emotion marker of the user. If the updated body or emotion marker shows improvement, the user's selection of a particular product recommendation for purchase may be understood to be beneficial to any other user having a similar body or emotion marker, as the case may be.
The server 10 may determine that the body marker exhibits improvement if, for example, a physical characteristic in the user's body marker has changed favorably, such as a decrease in the user's resting heart rate. Conversely, the body marker may show deterioration if a physical characteristic in the user's body marker has changed negatively, such as a rise in the user's blood pressure.
Similarly, the server 10 may determine that the emotion marker exhibits improvement if, for example, a level of cognitive-affective capability in the user's emotion marker has increased beneficially, such as when the user's emotional state is repeatedly assessed as positive. Conversely, the emotion marker may show deterioration if a level of cognitive-affective capability included in the user's emotion marker has decreased, such as when the user's emotional state is repeatedly assessed as negative.
Deterioration of the user's body or emotion markers may indicate that the recommendation performed by the user is not effective at improving the user's overall health, and that an alternative recommendation may be desired. In this case, the server 10 may adjust the generated recommendation in response to determining the user's updated body or emotion marker and determining that the updated marker has deteriorated relative to the user's last known body or emotion marker.
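A minimal sketch of this comparison step, assuming each marker is stored as named numeric fields with a known favorable direction (e.g., a lower resting heart rate is an improvement, per the earlier example); the field names and values are illustrative.

```python
# Compare an updated marker against the last stored one: each field has a
# direction indicating whether an increase is favorable (e.g., HRV) or
# unfavorable (e.g., resting heart rate).
HIGHER_IS_BETTER = {"hrv_rmssd_ms": True, "resting_hr_bpm": False}

def marker_trend(previous: dict[str, float],
                 updated: dict[str, float]) -> str:
    score = 0
    for field, better_up in HIGHER_IS_BETTER.items():
        delta = updated[field] - previous[field]
        direction = 1 if delta > 0 else -1 if delta < 0 else 0
        score += direction * (1 if better_up else -1)
    return ("improved" if score > 0
            else "deteriorated" if score < 0 else "unchanged")

prev = {"hrv_rmssd_ms": 42.0, "resting_hr_bpm": 68.0}
curr = {"hrv_rmssd_ms": 48.0, "resting_hr_bpm": 63.0}
print(marker_trend(prev, curr))  # improved -> keep the recommendation
```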
Thus, particular body or emotion markers may be associated with particular personalized products or recommendations that have been shown to improve those markers over time. Such associations, or data indicative of such associations, may be stored, for example, in database 12 for future use, and may be accessed by server 10 when generating recommendations for a user. Accordingly, when a new body or emotion marker is established for a user session, or a new user session is established for a user of the product system 100, the server 10 may access the database 12 to identify one or more product recommendations that have been shown to result in an improvement in similar body or emotion markers for other users of the system 100.
As an example, user Jane decides to use the product system 100 to determine her emotion marker by providing user data to the server 10 via her user device 16. Based on the information provided by Jane to her user device 16, and based on an analysis of the user data, including user data representing Jane's facial expression, body language, voice tone, measured biometric characteristics, and behavioral patterns (based on text input provided by Jane in response to questions posed by the emotional wellness application), system 100 determines that Jane's emotion marker is similar to Alice's (another user's) emotion marker. The system 100 recently (e.g., in a previous user session or as part of the same user session) recommended to Alice a product that would support her spending more time outdoors in nature (e.g., a nature-related product), because Alice's emotion marker indicates a positive correlation between her mood and how much time she spends outdoors in nature. Over time, by repeatedly interacting with the server 10 to provide updated user data, Alice's emotion marker calculated by the server 10 shows improvement as she spends more time outdoors. Thus, server 10 makes the same product recommendation to Jane, given that her emotion marker is similar to Alice's.
As another example, user Paul decides to use the product system 100 to determine his body marker by providing user data to the server 10 through his user device 16. Based on the information provided by Paul, including data representing Paul's physical activity, measured biometrics, fitness goals, and behavioral patterns (based on text input provided by Paul in response to questions posed by a nutrition application), system 100 determines that Paul's body marker is similar to Andy's (another user's). The system 100 has recently recommended to Andy a product that supports his consumption of leafy vegetables, because Andy's body marker indicates a positive correlation between his fitness goal progress and how many leafy vegetables he eats. Over time, by repeatedly interacting with server 10 to provide updated user data, Andy's body marker calculated by server 10 shows improvement as he eats more leafy vegetables. Thus, server 10 makes the same product recommendation to Paul, given that his body marker is similar to Andy's.
By generating and monitoring body or emotion markers for each user, the system 100 is able to construct a dataset of body or emotion markers (stored as body or emotion marker records) and the corresponding products or recommendations that may improve an individual's body or emotion markers.
Additionally, the system 100 may enable individual users with similar physical and/or emotional markers to contact each other, for example, by providing access to related contact information. According to some embodiments, the system 100 may be used by a team leader, such as a manager, to form a suitable team. For example, the system 100 may be used to identify individuals that have similar emotional markers and may therefore work more effectively or cooperate better when placed on the same team, or the system 100 may be used to identify individuals that have similar physical markers and may be well suited to become fitness partners of each other. For example, system 100 may establish communication sessions between a plurality of user devices 16.
According to some embodiments, the system 100 may be configured to match persons based on their physical and/or emotional markers such that the matched persons may develop deep and meaningful romantic relationships or friendships, or the system 100 may be used to match a person with coaches or matching content. The system 100 may be used to identify individuals that have similar emotional markers and thus may connect in a deep and meaningful manner. For example, based on a user's input data (facial analysis, voice analysis, body analysis, text input, activity input, biometric input, etc.), as well as the user's level of cognitive-affective capability and personality type, the system 100 may identify contacts with a high probability of becoming multi-year relationships, or recommend activities of compatible communities or coaches with a high probability of producing lasting connections between users and improved health.
The server 10 generates personalized products or product recommendations by matching the user with certain (recommended) products to improve the user's health. The server 10 aggregates and processes user data across multiple channels to extract metrics for determining body or emotion markers, to provide improved product recommendations, and to trigger effects in the user's environment by actuating sensory actuators that affect the user's sensory environment. The server 10 is connected to the interface 32 to display recommendations derived based on user data, activity metrics, and the user's body or emotion markers calculated by a hardware processor accessing memory storing the user data and the extracted metrics. The server 10 receives user data from a plurality of channels, such as different hardware devices, digital communities, events, live streaming, etc. Server 10 has a hardware processor that can implement different data processing operations to extract activity metrics, cognitive-affective capability metrics, and social metrics by processing the user data from the different channels.
The server 10 receives the user data and processes the user data to determine body markers or emotion markers of such users according to the methods previously described herein. The server 10 may exchange data with other components and receive output data for generating personalized products or recommendations, or for determining body or emotion markers used to generate personalized products.
The server 10 may receive input data from different data sources or channels, such as different content providers (e.g., coaches, advisors, influencers). The server 10 may aggregate and store the content in a content center. When new input data is collected over an updated period of time, the server 10 may recalculate updated body markers and/or emotion markers. Based on the user's body and/or emotion markers, the server 10 may recommend products that, for example, help improve the user's health and/or achieve his/her goals. During use of the product, server 10 may receive data indicative of user performance from a data stream from an immersive hardware device (channel), such as a smart watch, a smart phone, a smart mirror, or any other smart exercise machine (e.g., a connected stationary bicycle), as well as any other sensors, such as the sensors 24, 26, 28. Based on the collected data and the user's body and/or emotion markers, the server 10 may dynamically adjust personalized products or product recommendations. In one embodiment, the recommendations generated by the server 10 may take the form of a program to guide or shape matched pairing/community interactions or experiences. For example, the program may include one or more phases (daily, weekly, monthly, yearly planning). A program may be a series of activities that may be mapped to time segments or intervals during the time period of a user session. Different activities and sessions may be recommended based on the phases. The server 10 may map the activity data to the phases. The intensity and volume of the recommended sessions and products may vary linearly or non-linearly. Over time, updated user data is captured and transmitted to the server 10 through repeated interactions of the user with the user device 16, and the body and emotion markers of each user may be tracked or monitored based on the updated user data collected over time. The server 10 may change the recommendation based on the current body and emotion markers of the matched persons to maintain a deep and meaningful connection between the matched users. The server 10 may calculate updated body and emotion markers at different intervals during the time period of the user session.
The body markers and emotion markers may be data structures of values (stored as records in non-transitory memory accessible by the hardware processor) that the system 100 may compare, using, for example, different similarity metrics or functions, with other data structures of values representing other body markers or emotion markers. Different similarity metrics may be used to identify similar body markers or emotion markers.
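For example, cosine similarity is one such similarity function over marker records stored as value vectors; the vectors and the matching threshold below are illustrative assumptions.

```python
# Cosine similarity between two marker records stored as value vectors.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (math.sqrt(sum(x * x for x in a))
            * math.sqrt(sum(y * y for y in b)))
    return dot / norm if norm else 0.0

jane = [0.7, 0.4, 0.8, 0.55]    # emotion marker record as a value vector
alice = [0.68, 0.45, 0.75, 0.6]
if cosine_similarity(jane, alice) > 0.95:  # assumed matching threshold
    print("markers are similar: reuse recommendations that helped Alice")
```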
In some embodiments, the methods and systems described herein may use physical or emotional markers of a user to make personalized products or product recommendations. The user group may increase social connections by sharing the product experience. Thus, the server 10 may be used to identify individuals with similar physical or emotional markers and connect the individuals by recommending the same or similar products. The server 10 may also generate social metrics for users to make personalized products or recommendations.
In some embodiments, the server 10 may manipulate the external sensory environment (e.g., sound, lighting, smell, temperature, airflow in a room) to alter the internal sensory perception of an individual (or group of individuals) to deliver greater physiological and psychological benefits during the product experience. Server 10 may manipulate the external sensory environment based on activity inputs received at user device 16 (e.g., type of activity, content, intensity of the lesson, duration of the lesson), biometric inputs of the user measured in real-time during the lesson using user device 16, and the user's individual body or emotion markers calculated by server 10 during previous sessions. For example, based on the emotion markers of a user or group of users, the server 10 may recommend products to such users or groups of users, and then may change the products to match the characteristics of the user or group. Based on the recommended product and the emotion markers of the user or group of users, the server 10 may dynamically change the external sensory environment during the duration of the activity or experience to match the sequence/intensity of the activity/experience and the user's biometric, visual, or audio cues/inputs.
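A hedged sketch of such dynamic adjustment, with actuator settings keyed to the activity phase and a live biometric input; the rules and setting names are invented for illustration, not taken from this disclosure.

```python
# Rule-based sensory environment control: pick baseline settings from the
# activity phase, then back off when a live biometric crosses a threshold.
def sensory_settings(phase: str, heart_rate_bpm: float) -> dict:
    if phase == "warmup":
        settings = {"lighting": "soft amber", "sound_bpm": 100, "temp_c": 21}
    elif phase == "peak":
        settings = {"lighting": "bright cool", "sound_bpm": 128, "temp_c": 19}
    else:  # cooldown
        settings = {"lighting": "dim warm", "sound_bpm": 70, "temp_c": 22}
    if heart_rate_bpm > 170:  # assumed threshold: ease off if user is redlining
        settings["sound_bpm"] -= 10
        settings["temp_c"] -= 1
    return settings

for phase, hr in [("warmup", 110), ("peak", 175), ("cooldown", 120)]:
    print(phase, sensory_settings(phase, hr))
```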
The server 10 may use different data processing techniques to generate the body markers and emotion markers. For example, the server 10 may receive a data set (e.g., a data set extracted from an aggregated data source), extract metrics from the aggregated data source, and use the extracted insights to generate body markers and emotion markers for improving health. The server 10 may transmit the body markers and emotion markers to other components of the system 100. Interface 32 may be connected to server 10 to display visual effects based on the body markers and emotion markers. The interface 32 may be connected to the server 10 to display the generated product or recommendation, or to trigger an update to the interface based on the recommendation.
The server 10 monitors one or more users during a user session using one or more sensors. In some embodiments, the server 10 is connected with an interface 32 that provides product recommendations for a user session. The server 10 has a non-transitory memory storing product records, body mark records, emotion mark records, and user records storing user data received from multiple channels, for example.
The user data may relate to a series of data captured during a period of a user session (which may be combined with data from different user sessions and with data for different users). The user data may be image data related to the user, text input related to the user, data defining physical or behavioral characteristics of the user, and audio data related to the user.
User device 16 may be programmed with executable instructions for a user interface to obtain user data for a user session over a period of time. The user device 16 transmits a product request for the user session to the server 10 and updates its interface to provide product recommendations for the user session received in response to the product request.
The server 10 may be coupled to non-transitory memory to access product records, body mark records, emotion mark records, and user records.
The server 10 is programmed with executable instructions to transmit product recommendations to the interface 32 over the network in response to receiving a product request. In this example embodiment, the server 10 is programmed with executable instructions to calculate product recommendations based on metrics extracted from the received user data. Server 10 may use the user data and user records of the user session to calculate activity metrics, cognitive-affective metrics, and social metrics. The server 10 may extract metrics from the user data to represent physical metrics of the user and cognitive metrics of the user. The server 10 may use both the physical metrics and the cognitive metrics to determine the user's body or emotion marker during the time period of the user session. The server 10 may calculate a plurality of body and emotion markers of the user at certain time intervals during the period of the user session. Calculation of a plurality of body and/or emotion markers may trigger the calculation of updated product recommendations and updates to the interface. The body and emotion markers are based on both the physical metrics and the cognitive metrics of the user during the time period of the user session.
For example, server 10 may transmit the calculated body and emotion markers to interface 32 in response to the request. The server 10 may use the user data captured during the user session and may also use the user data captured during a previous user session or the user data of a different user. The server 10 may aggregate data from multiple channels to calculate product recommendations to trigger an update to the interface 32 or, in some instances, to an interface on the immersive hardware device.
The server 10 may process the different types of data as follows: for the image data and the data defining physical or behavioral characteristics of the user, using at least one of facial analysis, body analysis, eye tracking, behavioral analysis, social network or graph analysis, location analysis, and user activity analysis; for the audio data, using voice analysis; and for the text input, using text analysis.
Server 10 may calculate one or more physical characteristics or one or more states of cognitive-affective capabilities of the user based on the physical metrics, the cognitive-affective capability metrics, and the social metrics. Server 10 may calculate a body or emotion marker for the user based on the one or more physical characteristics of the user and the one or more states of cognitive-affective capabilities, and using the body or emotion marker records. The server 10 may calculate product recommendations based on the user's body marker, the user's emotion marker, activity metrics, product records, and user records.
FIG. 12 illustrates an example schematic diagram of a computing device 1200, such as the user device 16, server 10, database 12, system 100, or aspects or components of the interface 32, in which aspects of an embodiment may be implemented. As depicted, device 1200 includes at least one hardware processor 1202, non-transitory memory 1204, at least one I/O interface 1206, and at least one network interface 1208 for exchanging data. The I/O interface 1206 and the at least one network interface 1208 may comprise a transmitter, a receiver, and other hardware for data communication. For example, the I/O interface 1206 may capture user data for transmission to another device via the network interface 1208.
The server 10 monitors one or more users during a user session using the user device 16 with the sensor. In some embodiments, interface 32 displays product recommendations for a user session. The server 10 has a non-transitory memory storing product records, body mark records, emotion mark records, and user records storing user data received from multiple channels, for example.
The user data may relate to a series of data captured during a period of a user session (which may be combined with data from different user sessions and with data for different users). The user data may be image data related to the user, text input related to the user, data defining physical or behavioral characteristics of the user, and audio data related to the user.
The interface 32 resides on a hardware processor (which may be at the user device 16 or a separate computing device) programmed with executable instructions to obtain user data for a user session over a period of time. The interface 32 transmits a product request for the user session to the server 10 and updates with the personalized product or product recommendation for the user session received in response to the product request.
The server 10 is programmed with executable instructions to transmit a personalized product or product recommendation to the interface 32 over a network in response to receiving a product request from the interface 32. The server 10 is programmed with executable instructions to calculate product recommendations by calculating the activity metrics, cognitive affective capability metrics, and social metrics using the user data and user records of the user session. The server 10 may extract metrics from the user data to represent physical metrics and cognitive metrics of the user. The server 10 may use both the physical metrics and the cognitive metrics of the user to determine the body and/or emotional markers of the user during the time period of the user session. The server 10 may calculate a plurality of body and/or emotional markers of the user at intervals during a period of the user session. The calculation of a plurality of body markers and/or emotional markers may trigger the calculation of updated product recommendations and the updating of the interface 32. The body and emotional markers are derived from both the physical metrics and the cognitive metrics of the user during the time period of the user session.
The server 10 may use the user data captured during the user session and may also use the user data captured during a previous user session or the user data of a different user. The server 10 may aggregate data from multiple channels to calculate personalized products or product recommendations to trigger an update to the interface.
The server 10 may process different types of data as follows: for the image data and the data defining the physical or behavioral characteristics of the user, at least one of facial analysis, body analysis, eye tracking, behavioral analysis, social network or graph analysis, location analysis, and user activity analysis; for the audio data, voice analysis; and for the text input, text analysis. The data may be used to generate a personalized product.
The server 10 may calculate one or more states of one or more physical characteristics of the user based on the physical metrics. The server 10 may calculate a body marker of the user based on the one or more states of the one or more body characteristics of the user and using the body marker record.
Server 10 may calculate one or more states of one or more cognitive affective abilities of the user based on the cognitive affective ability metrics and the social metrics. Server 10 may calculate an emotional marker for the user based on the one or more states of the one or more cognitive affective abilities of the user and using the emotional marker record.
The server 10 may calculate personalized products or product recommendations based on the user's body markers, the user's mood markers, measurement and movement metrics, product records, and user records. The system 100 has a user device 16 comprising: one or more sensors for capturing user data during the time period; and a transmitter for transmitting the captured user data to the server 10 through the network to calculate the personalized product or product recommendation.
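The two-stage calculation described above (metrics to body and emotional markers, then markers to recommendations) might be organized as in the following sketch. All thresholds, field names, and the tag-matching rule are assumptions for illustration; the patent does not specify how markers are encoded or matched to products.

```python
# Hypothetical sketch: physical metrics -> body marker; cognitive-affective and
# social metrics -> emotional marker; both markers -> product recommendations.
# Thresholds, field names, and tags are invented.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Markers:
    body: str
    emotion: str

def compute_body_marker(physical_metrics: Dict[str, float]) -> str:
    # e.g., classify readiness from a heart-rate-derived metric.
    return "recovered" if physical_metrics.get("resting_hr", 60) < 65 else "fatigued"

def compute_emotion_marker(cognitive: Dict[str, float], social: Dict[str, float]) -> str:
    score = cognitive.get("mood", 0.0) + social.get("connectedness", 0.0)
    return "positive" if score > 0 else "neutral"

def recommend(markers: Markers, product_records: List[dict]) -> List[dict]:
    # Select products whose tags match the current body or emotional marker.
    return [p for p in product_records
            if markers.body in p["tags"] or markers.emotion in p["tags"]]

markers = Markers(
    body=compute_body_marker({"resting_hr": 58.0}),
    emotion=compute_emotion_marker({"mood": 0.4}, {"connectedness": 0.2}),
)
print(recommend(markers, [{"id": "p1", "tags": ["recovered", "yoga"]}]))
```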
The interface 32 receives the product request, transmits the request to the server 10, and updates to provide the personalized product or product recommendation in response to the request. Interface 32 is a means for providing a visualization of a product derived based on user data, activity metrics, and physical or emotional markers of the user.
The product request may relate to a time period and the product recommendation generated in response to the request may relate to the same time period. In some embodiments, the server 10 may determine a product recommendation. The interface 32 may display or otherwise provide product recommendations, such as via audio or video data. In this example, interface 32 is shown on a computing device having a hardware processor.
In some embodiments, the interface 32 may transmit a product request to the server 10 to determine a recommendation or to generate a personalized product. The interface 32 may transmit additional data related to the product request, such as a time period, a user identifier, an application identifier, or captured user data, to the server 10 to receive product recommendations in response to the request.
The server 10 may process the user data to determine body or emotional markers for such users, or the server 10 may communicate with other components of the system 100 to calculate the body or emotional markers. Server 10 may use the body or emotional markers of the user over the period of time to generate recommendations for interface 32.
For example, in some embodiments, interface 32 may determine a physical or emotional marker of the user for the period of time and send the physical or emotional marker for the period of time to server 10 along with the product request. Interface 32 may store instructions in memory to determine a physical or emotional marker of the user over a period of time. The interface 32 is shown on a computing device having non-transitory memory and a hardware processor that executes instructions to obtain user data and display product recommendations. For example, interface 32 may obtain user data through a connection to user device 16 along with sensors 24-28 that collect user data for a period of time. The interface 32 may be connected to a separate hardware server 10 to exchange data and receive output data for generating recommendations or determining body markers or emotional markers.
Interface 32 may obtain user data from multiple channels 1040 or collect user data from user device 16 (with sensors) in order to calculate body markers or emotional markers. In other embodiments, server 10 determines the physical or emotional indicia of the user for the period of time in response to receiving a product request from interface 32. For example, using server 10 to calculate a user's physical or emotional marker over the period of time may offload the calculation of the user's physical or emotional marker over the period of time (and the processing resources required) to server 10, which may have more processing resources than interface 32. For example, the server 10 may have a secure communication path to different sources to aggregate user data captured from the different sources, thereby offloading data aggregation operations to the server 10, which may have more processing resources than the interface 32.
In some embodiments, interface 32 may capture user data (via I/O hardware or sensors of the computing device) for determining physical or emotional markers and measurement or movement data of the user over the period of time. In some embodiments, one or more user devices 16 capture user data for use in determining personalized products or product recommendations. In some embodiments, interface 32 may reside on user device 16, or interface 32 may reside on a computing device separate from user device 16.
In some embodiments, interface 32 may transmit the captured user data to server 10 as part of or in connection with a product request. In some embodiments, interface 32 extracts measurement metrics, movement or activity metrics, physical metrics, cognitive affective capability metrics, and social metrics by processing the captured user data. The captured user data may be distributed across different devices and components of the system 100. Server 10 may receive and aggregate user data captured from a plurality of sources including channels, content centers, user devices 16, and server 10. In some embodiments, interface 32 may extract activity metrics, physical metrics, cognitive affective capability metrics, and social metrics by processing user data from multiple sources, and provide the extracted metrics to server 10 to calculate body or emotional markers and personalized products or product recommendations.
In some embodiments, in response to receiving a request from interface 32, server 10 may extract activity metrics, physical metrics, cognitive affective capability metrics, and social metrics by processing user data captured over the period of time. The server 10 may register different interfaces 32 to link application identifiers to user identifiers. In some embodiments, the server 10 may extract the application identifier from the request to locate the user identifier and retrieve the relevant records.
The server 10 may receive and aggregate user data captured from a plurality of sources including channels, content centers, user devices 16, and applications. In response to receiving the request from interface 32, server 10 may request additional captured user data related to the time period from a different source. Server 10 may use the aggregated user data from multiple sources to extract activity metrics, physical metrics, cognitive affective capability metrics, and social metrics by processing the user data captured over the period of time. User data from multiple sources may be indexed by identifiers (e.g., user identifiers) such that server 10 may identify user data related to a particular user, for example, across different data sets. Server 10 has a hardware processor that can implement different data processing operations to extract activity metrics, physical metrics, cognitive affective capability metrics, and social metrics by processing user data from different channels, content centers, user devices 16, and interfaces 32. The server 10 has a database of user records, body marker records, emotional marker records, and product records. For example, the user records may store the extracted activity metrics, physical metrics, cognitive affective capability metrics, and social metrics of the user across different time periods. For example, the user records may store product recommendations for users over different time periods based on the extracted activity metrics, physical metrics, cognitive affective capability metrics, and social metrics of those time periods.
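A minimal sketch of the source aggregation described above, assuming each record carries a user identifier under a hypothetical user_id field:

```python
# Sketch of aggregating user data from several sources keyed by user identifier.
# Source contents and record fields are assumptions for illustration.

from collections import defaultdict
from typing import Dict, List

def aggregate_by_user(sources: List[List[dict]]) -> Dict[str, List[dict]]:
    """Merge records from channels, content centers, devices, and applications,
    indexing each record by its user identifier."""
    aggregated: Dict[str, List[dict]] = defaultdict(list)
    for source in sources:
        for record in source:
            aggregated[record["user_id"]].append(record)
    return aggregated

channel_data = [{"user_id": "u1", "metric": "steps", "value": 8000}]
device_data = [{"user_id": "u1", "metric": "heart_rate", "value": 62}]
print(aggregate_by_user([channel_data, device_data])["u1"])
```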
The server 10 uses the extracted activity metrics, physical metrics, cognitive affective capability metrics, and social metrics to determine a personalized product or product recommendation for the period of time. Server 10 may extract the measurement metrics, activity metrics, physical metrics, cognitive affective capability metrics, and social metrics, or may receive the extracted metrics from, for example, interface 32 (or different channels 1040, content center 1020, user device 16), or a combination thereof. Server 10 may aggregate the user's extracted metrics over the period of time to determine the user's body or emotional markers and product recommendations.
In some embodiments, server 10 aggregates user data from multiple sources (channel 1040, user devices 16, content center 1020) to utilize a distributed computing device such that interface 32 does not have to collect all user data from all different sources. The channel, user device 16, content center may have different hardware components to enable collection of different types of data. In some embodiments, the system 100 distributes the collection of user data across these different sources to effectively collect different types of data from the different sources. The server 10 may have secure communication paths to different sources to aggregate user data captured from the different sources in a secure manner at, for example, a central repository. User data captured from multiple sources may contain sensitive data and server 10 may provide a secure data store. This may alleviate the need to store user data captured from multiple sources (with sensitive data) locally on different devices, which may create security issues, for example. For example, this may offload data aggregation operations to server 10, which may have more processing resources than interface 32.
In some embodiments, the server 10 calculates a physical or emotional marker of the user for the period of time. Interface 32 exchanges data with server 10 to calculate body markers or emotional markers. As described above, the server 10 may send requests for updated user data, receive updated user data from multiple channels in response, and aggregate user data from multiple channels, such as different hardware devices, digital communities, events, live broadcasts, etc., for computing body markers or emotion markers. For example, the server 10 may store the aggregated user data in a user record.
When the server 10 (or interface 32, a channel, user device 16, or a content center) collects new input data over an updated time period, the server 10 may calculate the user's body or emotional markers over the updated time period. If the server 10 receives a new product request for an updated time period, the server 10 may calculate a body or emotional marker of the user for the updated time period. The body or emotional markers for the initial time period may be different from those for the updated time period. The body or emotional markers for the updated time period are used to determine product recommendations. Thus, an updated body or emotional marker for an updated time period may trigger a different product recommendation than the one determined based on a body or emotional marker of a previous time period.
In some embodiments, interface 32 sends a request to server 10 to calculate a physical or emotional marker for the updated time period. In response, the server 10 may calculate a new body marker or emotional marker for the updated time period, and may also determine a new product recommendation based on the emotional marker for the updated time period. In response, server 10 may send data of the body markers or emotion markers for the updated time period to interface 32, and may also send new product recommendations based on the body markers or emotion markers for the updated time period. The computation using the server 10 may offload processing requirements to a separate hardware processor of the server 10.
The server 10 stores the data of the body markers and the emotion markers in a database of body marker records and emotion marker records. For example, each body marker record and emotion marker record may be indexed by a user identifier. For example, each body marker record and emotion marker record may indicate a time period, a value corresponding to a calculated body marker or emotion marker within the time period, and an extracted metric. The body marker record and the emotional marker record may also store any product recommendations for the time period. The body marker record and the emotional marker record may contain historical data regarding previous body marker determinations and emotional marker determinations of the user at different time periods. The body marker record and the emotional marker record may contain historical data regarding previous body marker determinations and emotional marker determinations of all users of the system. The historical data of the body marker record and the emotion marker record may include time data corresponding to a period of time for calculating user data of the body marker and the emotion marker. Thus, the server 10 may calculate the physical or emotional markers of the user over a period of time and store the calculated physical or emotional markers in the database of physical marker records and emotional marker records together with the user identifier, the calculated values of the physical or emotional markers and the period of time. The body markers or emotional markers may be data structures of values. The server 10 may define parameters of a data structure for values that may be used to calculate values for body markers or mood markers based on user data captured during the time period. For example, server 10 may use different similarity metrics to compare with other data structures representing values of other body markers or emotional markers. Different similarity measures may be used to identify similar body markers or emotional markers. The server 10 maps body markers and emotion markers (data structures of values) to user records and product records.
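One way to realize a marker as "a data structure of values" with a similarity metric is sketched below. Cosine similarity is an assumption chosen for illustration; the patent leaves the metric unspecified, and the record fields are invented.

```python
# Sketch of a marker record as a data structure of values, with a similarity
# metric for comparing markers. Field names and the metric are assumptions.

import math
from dataclasses import dataclass
from typing import List

@dataclass
class MarkerRecord:
    user_id: str
    period_start: float   # epoch seconds
    period_end: float
    values: List[float]   # parameterized marker values

def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

m1 = MarkerRecord("u1", 0.0, 3600.0, [0.8, 0.1, 0.6])
m2 = MarkerRecord("u2", 0.0, 3600.0, [0.7, 0.2, 0.5])
print(cosine_similarity(m1.values, m2.values))  # close to 1.0 => similar markers
```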
In some embodiments, the server 10 has a database of user records with user identifiers and user data. For example, each user record may be indexed by a user identifier. The server 10 may identify a set of body marker records and emotional marker records based on the user identifier, for example, to identify a body marker or emotional marker determined for a particular user or to compare body markers or emotional markers of a particular user over different time periods.
The server 10 stores the data of the product recommendations in a database of product records. For example, each product recommendation may be indexed by a product identifier. The product records may define different products, parameters or variables of the products, identifiers of the products, and other data. The product record may contain historical data regarding previous product recommendations for the user, as well as previous product recommendations for all users of the system 100. The historical data of the product record may contain time data that may be mapped to time periods of the body or emotional markers. The user record and/or body or emotional marker record may also indicate an activity identifier to connect the user record and/or body or emotional marker to a particular product record. For example, the server 10 may calculate a body or emotional marker of the user over a period of time based on the user data and determine a product recommendation for the user over the period of time. The product recommendations may correspond to product recommendation records indexed by product identifiers. The user record may store a product identifier and a time period that connect the user record to a particular product record. The body or emotional marker record may also indicate a user identifier or an emotional marker identifier. The user record may also indicate a body marker identifier or an emotional marker identifier to connect the user record, the specific product record, and the body or emotional marker record. The body or emotional marker record may also indicate parameters for calculating different types of body or emotional markers using different types of data. The body or emotional marker record may also have a model for calculating the body or emotional marker for a certain period of time. The body or emotional marker record may also indicate different product identifiers to connect the body or emotional marker to the product recommendation records.
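The identifier links among user, marker, and product records described above could be modeled as in the following sketch, here using an in-memory SQLite database with invented table and column names:

```python
# Sketch of identifier-linked records: user -> marker -> product.
# Table and column names are illustrative only.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user_record    (user_id TEXT PRIMARY KEY, data TEXT);
CREATE TABLE marker_record  (marker_id TEXT PRIMARY KEY, user_id TEXT,
                             period_start REAL, period_end REAL, value REAL,
                             product_id TEXT);
CREATE TABLE product_record (product_id TEXT PRIMARY KEY, parameters TEXT);
""")
conn.execute("INSERT INTO user_record VALUES ('u1', '{}')")
conn.execute("INSERT INTO product_record VALUES ('p1', '{\"size\": \"M\"}')")
conn.execute("INSERT INTO marker_record VALUES ('m1', 'u1', 0.0, 3600.0, 0.8, 'p1')")

# Follow the identifier links: user -> markers -> recommended products.
rows = conn.execute("""
SELECT m.marker_id, p.product_id
FROM marker_record m JOIN product_record p ON m.product_id = p.product_id
WHERE m.user_id = 'u1'
""").fetchall()
print(rows)
```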
Based on the user's physical or emotional indicia, the server 10 may transmit data to the application to update the interface 32. For example, the data may be a visualization for displaying a product (or recommended product) on the interface 32 or instructions for generating audio or video data at the interface 32.
The server 10 and interface 32 may be connected using an Application Programming Interface (API) and exchange commands (including product requests) and data using the API. The interface 32 may receive instructions from the server 10 to provide product recommendations. For example, interface 32 may provide a virtual coaching interface that provides product recommendations over a period of time to help improve the user's health and/or achieve his/her goals. The interface 32 may use the API to exchange commands and data with the server 10 to receive product recommendations and automatically update the virtual trainer interface to automatically provide the product recommendations. Interface 32 may use a virtual trainer interface to prompt the user for data and may use an API to transmit the collected user data to server 10.
The interface 32 may automatically update the displayed visualization to provide product recommendations for the time period. The interface 32 may continue to monitor the user (by collecting user data) during use of the product to collect feedback data, which may be referred to as user data. The interface 32 may receive positive or negative feedback regarding product recommendations for the time period. For example, the interface 32 updates to provide the first product recommendation for the time period and receives negative feedback regarding the first product recommendation for the time period. The interface 32 may exchange commands and data with the server 10 using the API to receive the second product recommendation for the period of time and communicate negative feedback. For example, the server 10 may store negative feedback in the user record along with the activity identifier of the first product recommendation for the period of time, or otherwise store negative feedback associated with the first product recommendation.
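The request-and-feedback exchange described above might look like the following sketch, where a stand-in Server object plays the role of server 10 and plain dictionaries stand in for the API messages; a production server would use the stored feedback to alter subsequent recommendations.

```python
# Sketch of the recommend/feedback loop: request -> recommendation -> feedback
# stored against the recommendation -> new request. Message fields are invented.

from typing import Dict, List

class Server:
    def __init__(self) -> None:
        self.feedback_log: List[Dict] = []

    def recommend(self, request: Dict) -> Dict:
        # A real server would consult markers, metrics, and prior feedback here.
        return {"recommendation_id": "r1", "product_id": "p1",
                "time_period": request["time_period"]}

    def record_feedback(self, recommendation_id: str, positive: bool) -> None:
        self.feedback_log.append(
            {"recommendation_id": recommendation_id, "positive": positive})

server = Server()
rec = server.recommend({"user_id": "u1", "time_period": "2021-06-01"})
# User gives negative feedback; interface requests a second recommendation.
server.record_feedback(rec["recommendation_id"], positive=False)
second = server.recommend({"user_id": "u1", "time_period": "2021-06-01"})
```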
During an activity, system 100 may receive data indicative of a user's performance from a data stream from a different channel, such as an immersive hardware device (e.g., user device 16), such as a smart watch, a smart phone, a smart mirror, or any other smart exercise machine (e.g., a connected stationary bicycle), as well as any other sensor, such as sensors 24-26. For example, the user device 16 may be a smart mirror with a camera and sensor to capture user data. The user device 16, which is a smart mirror, may also have an interface 32, for example, to provide the user with product recommendations over the period of time. The application 1010 may send a product request to the server 10 using the API along with user data captured from the user device 16 (smart mirror) to receive product recommendations over the period of time to update the interface. Thus, the interface 32 may provide product recommendations for different time periods and may also have sensors that capture user data for the time periods.
Based on the collected data and the user's body or emotional markers, the server 10 may dynamically adapt by providing updated product recommendations over different time periods, or over the same time period based on feedback from the interface on previous product recommendations. In one embodiment, the recommendations generated by the server 10 may take the form of a program of multiple product recommendations over a period of time (or time segment) to guide or shape matched pairing/community interactions or experiences. For example, the program may include product recommendations for one or more phases over different time periods (daily, weekly, monthly, yearly programs). The server 10 may calculate different product recommendations and sessions based on the phase and the current time period. Over time, through repeated interactions of the user with the interface 32 on his or her user device 16, the interface 32 captures updated user data and sends it to the server 10 for tracking and storage. Over time, server 10 may track and monitor the body or emotional markers of each user based on updated user data collected over time. The server 10 may define the program as a set of product recommendations. The server 10 may change the set of product recommendations. The server 10 may change the products based on the current body or emotional markers of the matched persons to adjust the set of product recommendations to help maintain a deep, meaningful connection between the matched users.
The server 10 may use body markers or emotional markers to make product recommendations for a group of users. The server 10 may generate the same product recommendation for each user in the group, for example, based on the physical or emotional markers calculated for each user in the group. Group exercise improves individual health and increases social connections by sharing emotion and movement. Thus, the server 10 may be used to identify individuals with similar physical or emotional markers and connect them by generating the same product recommendations for a group of identified users or peers that are relevant to their community exercises (or other community activities). Each user in the group may be linked to the interface 32 and the server 10 may send the same product recommendation for the set of identified users to each of the interfaces 32 and continue to monitor the user data of the set of identified users by capturing additional user data after the same product recommendation is provided. The server 10 may also generate social metrics for the user to make recommendations for the set of identified users.
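Grouping users by similar markers, as described above, could be sketched as a greedy pass over marker vectors; the cosine measure and the 0.9 threshold are assumptions, and any clustering method could serve.

```python
# Sketch of grouping users with similar marker vectors so the same product
# recommendation can be sent to the whole group. Greedy strategy for brevity.

import math
from typing import Dict, List

def cosine(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def group_similar_users(markers: Dict[str, List[float]],
                        threshold: float = 0.9) -> List[List[str]]:
    """Each user joins the first group whose seed marker is similar enough,
    otherwise starts a new group."""
    groups: List[List[str]] = []
    for user, vec in markers.items():
        for group in groups:
            if cosine(markers[group[0]], vec) >= threshold:
                group.append(user)
                break
        else:
            groups.append([user])
    return groups

print(group_similar_users({"u1": [0.9, 0.1], "u2": [0.85, 0.15], "u3": [0.1, 0.9]}))
# [['u1', 'u2'], ['u3']] -- u1 and u2 would receive the same recommendation.
```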
In some embodiments, the server 10 may manipulate the external sensory environment as part of the product experience by controlling connected sensory actuators (e.g., sound, lighting, smell, temperature, airflow in the room). The sensory actuator may be part of a building automation system, for example, to control components of the building system to coordinate with content delivered to the interface 32 or the user device 16. The server 10 may transmit control commands to the sensory actuator as part of the process of generating product recommendations, calculating body or mood markers, or user monitoring by capturing additional user data.
The server 10 may control the connected sensory actuators to alter the user's (or group of users) internal sensory capabilities to deliver greater physiological and psychological benefits during the course/experience. Server 10 may manipulate the connected sensory actuators based on product recommendations (e.g., activity type, content, intensity of lessons, duration of lessons), feedback received at user device 16 or interface 32, user biometric input measured in real-time during the lesson using user device 16, and individual body or emotional markers of the user calculated by system 100 during previous sessions.
For example, based on the body or emotional markers of the user or group of users, the server 10 may generate a product recommendation for such user or group of users, and then may change the sound tempo or volume associated with the product recommendation to match the recommendation via the sensory actuators controlled by the server 10. Based on the recommended products and the body or emotional markers of the user or group of users, the server 10 may dynamically change the external sensory environment during the duration of the activity or experience to match the order/intensity of the activity/experience and the user's biometric or visual or audio cues/inputs.
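As a hedged illustration of this actuator control, the sketch below maps a recommendation and an emotion marker to commands for connected sensory actuators; the command vocabulary and the tempo/lighting rules are entirely invented.

```python
# Sketch of deriving sensory-actuator commands (sound, lighting) from a product
# recommendation and the user's current emotion marker. All values are invented.

from typing import Dict, List

def actuator_commands(recommendation: Dict, emotion_marker: str) -> List[Dict]:
    intensity = recommendation.get("intensity", "medium")
    commands = [{"actuator": "lighting",
                 "level": {"low": 0.3, "medium": 0.6, "high": 0.9}[intensity]}]
    # Calmer soundscape when the emotion marker indicates stress.
    tempo = 80 if emotion_marker == "stressed" else 120
    commands.append({"actuator": "sound", "tempo_bpm": tempo, "volume": 0.5})
    return commands

print(actuator_commands({"activity": "yoga", "intensity": "low"}, "stressed"))
```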
Interface 32 or server 10 may use different data processing techniques to generate body markers or emotional markers. For example, the interface 32 or the server 10 may receive a data set (e.g., a data set that may be extracted from an aggregated data source), extract metrics from the aggregated data source, and use the extracted metrics to generate body markers or mood markers for improving health.
In some embodiments, interface 32 may transmit the body or emotional markers to server 10 along with the product request. In response, interface 32 updates its display to show visual effects based on the body or emotional markers and on the product recommendations received from server 10. The interface 32 may connect to the server 10 to display the generated recommendations at the interface, or trigger other updates to the interface based on the recommendations (e.g., changing the visualization provided by the interface 32).
Although in the above embodiments the processing of user data, the determination of body markers or mood markers and the generation of personalized products or recommendations have been described as being performed by the hardware server 10, in other embodiments such steps may be performed by the user device 16 provided that the user device 16 has access to the required instructions, techniques and processing power. The server 10 may have access to greater processing power and resources than the user device 16 and, thus, may be more suitable for relatively resource intensive processing of user data obtained by the user device 16 and obtained across channels.
In some embodiments, the server 10 stores a classifier for generating data defining physical or behavioral characteristics of the user. Server 10 may use the classifier and the features extracted through multimodal feature extraction to calculate activity metrics, physical metrics, cognitive affective capability metrics, and social metrics. Multimodal feature extraction may extract features from image data, video data, text data, and the like. The classifier may be a model that generates output data corresponding to different activity metrics, physical metrics, cognitive affective capability metrics, and social metrics. The classifier may be trained on user data and may be updated based on feedback data.
In some embodiments, the server 10 stores a user model corresponding to a user. The server 10 may retrieve a user model corresponding to the user and use the user model to calculate a physical or emotional marker for the user. The server 10 may use the user model to calculate a recommendation or personalization of the product based on the user's physical or emotional markers.
In some embodiments, user device 16 is connected to or integrated with an immersive hardware device that captures audio data, image data, and data defining physical or behavioral characteristics of the user. The user device 16 may transmit the captured data to the server 10 for processing to calculate a recommendation or personalization of the product. User device 16 connects to the immersive hardware device using Bluetooth or another communication protocol.
In some embodiments, the server 10 stores a content repository and has a content syndication engine that maps product recommendations to recommended content and transmits the recommended content to the interface 32.
In some embodiments, interface 32 further includes a voice interface for communicating product recommendations for a user session received in response to a product request. The voice interface may use speech/text processing, natural language understanding, and natural language generation to convey product recommendations and capture user data. The interface 32 may implement speech-to-text translation and process text data regarding specific word usage and changes using natural language processing.
In some embodiments, interface 32 may access a memory storing an emotion classifier to capture data defining physical or behavioral characteristics of a user.
In some embodiments, the server 10 uses classifiers to calculate activity metrics, cognitive affective capability metrics, and social metrics using the user data and user records of user sessions, together with multimodal feature extraction that processes data from multiple modalities. The server 10 uses multimodal feature extraction to extract features and correlations from image data, data defining physical or behavioral characteristics of the user, audio data, and text input. For example, multimodal signal processing analyzes the user data and extracts features from the processed data across several types of metrics or modalities, such as facial analysis, body analysis, eye tracking, behavioral analysis, social network or graph analysis, location analysis, user activity analysis, voice analysis, and text analysis.
In some embodiments, the non-transitory memory stores classifiers for generating data defining physical or behavioral characteristics of the user, and the server 10 uses the classifiers and features extracted through multimodal feature extraction to calculate activity metrics, physical metrics, cognitive affective capability metrics, and social metrics.
The interface 32 receives a product request and provides product recommendations or personalized products in response to the request. Interface 32 provides recommendations derived based on user data, activity metrics, and the body or emotional markers of the user. The interface 32 may provide product recommendations for different user sessions, which may be defined by a time period. The server 10 may process the user data based on different user sessions defined by time periods. For example, interface 32 may send a product request to server 10 to start a user session for a period of time. The user session is mapped to the user by means of a user identifier. The user session may define a set of captured user data (including captured real-time data), one or more body or emotional markers, and one or more product recommendations. In some examples, the user session links a group of users. Each user session may have a product request and corresponding one or more product recommendations. The system 100 may identify each user session using a session identifier stored in a record of the database 12. The product request may indicate the session identifier, or the server 10 may generate and assign the session identifier in response to receiving the product request. For example, the server 10 and the interface 32 may exchange session identifiers through an API. The server 10 may store the extracted metrics associated with the session identifier to map the data values to the user session. The server 10 may use the data values from a previous user session to calculate body or emotional markers and product recommendations for a new user session. The previous user session may involve the same user or a different user.
The interface 32 may provide product recommendations or personalized products for the user session. The user device 16 may also have a sensor to capture (near) real-time user data during (or near) a time period of the user session to determine a physical or emotional marker of the user during the time period. A user session may be defined by one or more time periods or segments of time periods. The user session may be mapped to a user identifier or a plurality of user identifiers.
The server 10 or hardware server 10 receives input data from different data sources, such as content center 1020, user devices 16, and channel 1040, to calculate different metrics for calculating body markers and emotion markers. The server 10 or hardware server 10 uses the captured (near) real-time user data as well as other user data to calculate a physical or emotional marker of the user over the period of the user session. For example, the server 10 may access records in the database 12. The server 10 may calculate a similarity measure across records for calculating physical or emotional markers of the user over a period of the user session.
The product request may relate to a time period of the user session and the product recommendation or personalized product generated in response to the request may relate to the same time period. The system 1000 may store the product in a record along with the session identifier. In some embodiments, interface 32 may determine a product or body or mood marker. In some embodiments, interface 32 may extract metrics from the captured user data and transmit the extracted metrics to server 10. The interface 32 may display or otherwise provide product recommendations, such as via audio or video data.
By way of illustration, there may be a plurality of user devices 16 having sensors. User device 16 may connect to server 10 to exchange data for a user session. The server 10 or hardware server 10 may aggregate data from multiple user devices 16 and send product recommendations to the interface 32. The server 10 may coordinate the timing of real-time data collection from a group of users corresponding to a group of user devices 16 and may coordinate the timing and content of product recommendations for the interface 32 of each user in the group of users. For example, user groups may be assigned to user sessions to coordinate data and messages. For example, the server 10 may generate the same product recommendation for each user in the group of users of the user session for transmission to the interface 32. The interface 32 may be linked to the user by a user identifier that may be provided as credentials at the interface or generated using data retrieved by the user device 16. The user identifier may be mapped to a user record in database 12. The session identifier may also be mapped to one or more user identifiers in the database 12. For example, during the registration process, interface 32 may exchange user identifiers with server 10 or hardware server 10 through an API.
The examples may involve the server 10 exchanging data between multiple interfaces 32 and multiple user devices 16 with sensors. The server 10 or hardware server 10 may have increased computing power to efficiently calculate data values from the aggregated user data. Each user device 16 does not have to store aggregated user data and does not have to process similarity measures across user groups. Each user device 16 does not have to exchange data with all user devices 16 in order to access the benefits of data aggregation. Instead, user device 16 may exchange data with server 10. Server 10 may store the aggregated user data and process the similarity measure across user groups and exchange data with user devices 16 based on the results of the computations of the user devices. The user device 16 may capture real-time user data during a user session of the server 10 or the hardware server 10, or may use the real-time user data and data received from the server 10 to perform calculations of the user session. The user device 16 may extract metrics from the captured user data and transmit the extracted metrics to the server 10. User device 16 may exchange data and commands with server 10 during a user session using an API. For example, the extracted metrics may correspond to parameters of the API. The user device 16 may use the API to transmit the extracted metrics to the server 10. The user device 16 may extract metrics from the captured user data such that the metrics may not reveal all sensitive user data. In some embodiments, user device 16 may use an API to transmit metrics to server 10 instead of all sensitive user data.
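The division of labor described above, where the device reduces raw samples to metrics and transmits only the metrics rather than the sensitive underlying data, might look like this sketch (field names are illustrative):

```python
# Sketch: the user device reduces a raw sensor stream to summary metrics locally
# and sends only the metrics to the server. Metric names are assumptions.

from statistics import mean, pstdev
from typing import Dict, List

def extract_session_metrics(heart_rate_samples: List[float]) -> Dict[str, float]:
    """Reduce a raw heart-rate stream to summary metrics suitable as API parameters."""
    return {
        "hr_mean": mean(heart_rate_samples),
        "hr_variability": pstdev(heart_rate_samples),
    }

raw_samples = [61.0, 63.5, 62.0, 70.2, 68.9]    # stays on the device
payload = extract_session_metrics(raw_samples)  # only this is sent to the server
print(payload)
```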
The server 10 or hardware server 10 may serve a large number of user devices 16 and interfaces 32 to extend the system 100 to collect a corresponding large amount of data for computation. For example, the system 100 may have multiple hardware servers 10 to serve multiple groups of user devices 16 and provide increased processing power and data redundancy.
The server 10 may receive user data related to a user of a user session from a plurality of channels. The user data relates to different types of data, such as image data related to the user, text input related to the user, data defining physical or behavioral characteristics of the user, and audio data related to the user. The server 10 may perform preprocessing on the raw data received from the different channels. Examples include: database import; data cleaning and checking for missing values/data; smoothing or removal of noisy data and outliers; data integration; data transformation; and data normalization and aggregation.
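The preprocessing steps listed above might be chained as in the following pandas sketch; the column names, the plausibility bounds used for outlier removal, and the smoothing window are assumptions.

```python
# Sketch of a preprocessing pipeline: missing-value handling, outlier removal,
# smoothing, normalization, aggregation. Column names and bounds are invented.

import pandas as pd

def preprocess(df: pd.DataFrame, value_col: str = "value") -> pd.DataFrame:
    df = df.dropna(subset=[value_col])        # check/drop missing values
    df = df[df[value_col].between(30, 220)]   # drop outliers (plausible HR bounds)
    df = df.copy()
    df[value_col] = df[value_col].rolling(3, min_periods=1).mean()   # smooth noise
    lo, hi = df[value_col].min(), df[value_col].max()
    df[value_col] = (df[value_col] - lo) / ((hi - lo) or 1.0)        # normalize to [0, 1]
    return df.groupby("user_id", as_index=False)[value_col].mean()   # aggregate

df = pd.DataFrame({"user_id": ["u1"] * 5,
                   "value": [60.0, 61.0, None, 250.0, 62.0]})
print(preprocess(df))
```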
The interface 32 may use the API to exchange data and commands with the server 10, such as metrics extracted from the captured user data. Interface 32 or server 10 may generate activity metrics, cognitive affective capability metrics, and social metrics by processing the user data using one or more hardware processors configured to process user data from the plurality of channels. This includes user data captured for a time period for which a product recommendation corresponding to that time period is given. The captured user data for the time period is used to calculate a body or emotional marker of the user during the time period.
The activity metrics, cognitive affective capability metrics, and social metrics define the "body" metrics and "cognitive" metrics of the system 100. Raw data is ingested by the system 100 from different channels and mapped by the system 100 to these definitions of "body" metrics and "cognitive" metrics. The metrics may have corresponding values based on the processed user data. By extracting "body" metrics and "cognitive" metrics from the raw user data, the system 100 provides an improved way to calculate the user's body and emotional marker values over the period of time.
For example, the system 100 uses sensors (accelerometer, heart rate monitor, respiration rate monitor) to measure the physiological condition of the user to capture real-time user data, and processes the user data by assigning values to different metrics to measure the physiological condition (e.g., measure heart rate, heart rate variability). The system 100 may define a "body" metric or mobility score during an exercise activity that may be calculated using user data captured, for example, using physiological sensors of the user device 16 with or without a camera. As another example, system 100 may use heart rate and heart rate variability during exercise activities involving a product to define a measure of connectivity.
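For example, the heart-rate and heart-rate-variability values mentioned above can be derived from beat-to-beat (RR) intervals; RMSSD is one standard variability statistic, chosen here only for illustration since the patent does not specify which measure is used.

```python
# Sketch: derive heart rate (bpm) and HRV (RMSSD, ms) from RR intervals in ms.

import math
from typing import Dict, List

def heart_metrics(rr_intervals_ms: List[float]) -> Dict[str, float]:
    mean_rr = sum(rr_intervals_ms) / len(rr_intervals_ms)
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return {"heart_rate_bpm": 60000.0 / mean_rr, "hrv_rmssd_ms": rmssd}

print(heart_metrics([820.0, 810.0, 845.0, 830.0, 815.0]))
```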
For example, the system 100 measures cognitive metrics using definitions based on: text input (free-text answers and predefined answers, with predefined features extracted for predefined questions); daily activities (e.g., extracted from user device 16, such as application usage, music consumption, number of outgoing calls); speech (the power spectrum of a speech signal can correlate with emotions such as neutrality, anger, happiness, sadness, etc.); body language extracted from image data (gestures and the specific positions and orientations of joints such as the wrists and hands may correlate with emotions such as happiness, sadness, surprise, fear, anger, disgust, neutrality, etc.); and eye movement (saccade duration, fixation duration, and pupil diameter may relate to positive, neutral, or negative emotional states). Another example is brain activity data (e.g., the N400 response).
As another example, the system 100 may use higher-level state definitions to measure cognitive metrics such as intent/awareness, attention, motivation, emotion regulation, perspective-taking/insight, self-compassion, and compassion for others.
The system 100 may measure body metrics and cognitive metrics from the captured user data for the user session and then calculate body markers or emotional markers of the user session using the body metrics and cognitive metrics to generate product recommendations or personalized products. The system 100 may measure physical metrics and cognitive metrics from the text-based interactions and the free text responses and extract features from the free responses that are used to calculate physical or emotional markers. The user may be located in front of a mirror device with a camera to capture images of the user's conversational gestures and audio data of the speech, which may be used to calculate additional metrics such as tone or body posture.
The system 100 may measure a physical metric as a state metric; for example, "happy" may be detected from a smile in the image data or from a gesture or tone in the audio data. The system 1000 may measure a trait metric, i.e., a more constant trait of character. For example, to measure the attention or concentration level a user has at a given time, the interface may prompt a predefined question, such as "How much attention do you feel at present?", answered with a Likert-scale response of level 1-7. Interface 32 may ask specific or general questions and extract any features related to the feeling of concentration from the free-text response. The system 100 may also consider communication messages between users, such as text conversation data between two users, and extract features related to users describing the feeling of concentration. The system 100 may also consider the reaction time to a digital interaction (e.g., a button click) on a phone or other device. The system 100 may also consider the device 16 usage data to measure how much time during the day the user is concentrating or focusing, or distracted and not focusing. The system 100 may use visual eye tracking to measure attention and concentration on a particular task.
The interface 32 or the server 10 may extract metrics from the image data and data defining the physical or behavioral characteristics of the user using at least one of: facial analysis; body analysis; eye tracking; behavioral analysis; social network or graph analysis; location analysis; and user activity analysis. Examples of different facial feature extraction and image processing techniques include observational techniques, such as those based on the Facial Action Coding System, wherein observable activity of a particular muscle group is labeled and coded as an action unit by a human coder, with muscle activity recorded by facial electromyography; and the Facial Expression Coding System (FACES) developed at Berkeley (https://esilab.berkeley.edu/wp-content/uploads/2017/12/Kring-Sloan-2007.pdf), the entire contents of which are hereby incorporated by reference.
The interface 32 or the server 10 may use different speech processing techniques to extract metrics from the audio data. For example, the metric may be a value of a non-verbal interaction based on an emotional state (e.g., laugh, sigh).
Interface 32 or server 10 may use text analysis and different natural language understanding techniques to extract metrics from text input to extract features from text data, including meaning and emotion analysis.
Interface 32 or server 10 may calculate activity metrics, cognitive affective capability metrics, and social metrics. Interface 32 or server 10 may determine one or more states of one or more cognitive affective capabilities of the user based on the cognitive affective capability metrics and the social metrics generated from the processed user data. Examples of state classifications are: happy, sad, disgusted, insightful (a moment of insight), giving, compassionate, compelled to help, jealous, energetic, focused, surprised, fearful, angry, curious, conscious, and unconscious. Interface 32 or server 10 may define a plurality of states and select a state for a user session or period of time. For example, the definition of a state may relate to "readiness to grow".
Interface 32 or server 10 may calculate a body or emotional marker for the user of the user session based on one or more states of one or more cognitive affective capabilities of the user. The system 100 may map the states to body marker or emotional marker values or parameters. Taking physical health as an example, years of training contribute to a general health level that does not change rapidly. If the user has recently performed hard training, the user may be genuinely tired the day after the training session, and thus the user's readiness for training may be low. The system 100 may consider metrics for a user calculated based on data captured before the time period of a user session, together with metrics calculated based on data captured during the time period of the user session. The system 100 may use weights or ratios of the metrics to calculate body or emotional markers or additional metrics for the session. Emotional markers can be calculated using measures of emotion in different dimensions, such as the awareness, regulation, and compassion (ARC) dimensions. Within each dimension, there are different states that can be detected by the system 100 and attributed to that dimension. For awareness, the system 100 may define sub-dimensions such as reflectiveness, mindfulness, and purposefulness. User device 16 may display an initial questionnaire to receive input data for the user session to measure trait-level metrics. However, with different real-time data inputs, the system 100 may measure discrete states at different time intervals (using data corresponding to those intervals) during a period of time or across different user sessions. For example, a user will be in a reflective state when the user is labeling a current or past experience and expressing, in spoken or written language, what the experience produced in them. To detect such a state, a person's spoken or written language may be processed and features related to emotional expression about the event extracted.
Interface 32 or server 10 may define a body or emotional marker as a function or a set of variables or values. The emotional marker definition can model the ARC dimensions and treat the values of the ARC dimension measures as a profile (measure 1, measure 2, measure 3) with different versions or combinations of values, depending on the values that can be assigned. An example is a profile of values (A, R, C), where each value may be high or low, giving different versions of the profile: (high-high-high), (high-high-low), (high-low-high), (high-low-low), (low-high-high), (low-high-low), (low-low-high), (low-low-low). Different versions of the profile may map to different emotional markers. For example, the profile may be stored in a record in database 12.
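The eight profile versions enumerated above can be generated and mapped to emotional markers as in the following sketch; the binning threshold and the marker names are invented.

```python
# Sketch of the ARC profile: each of the A, R, C measures is binned high/low,
# and each of the eight resulting profiles maps to an emotional marker.

from itertools import product
from typing import Dict, Tuple

Profile = Tuple[str, str, str]  # (A, R, C), each "high" or "low"

# Enumerate all eight profile versions; in practice each would map to a
# distinct emotional marker stored in a database record.
PROFILE_TO_MARKER: Dict[Profile, str] = {
    p: f"marker_{'_'.join(p)}" for p in product(("high", "low"), repeat=3)
}

def profile(a: float, r: float, c: float, threshold: float = 0.5) -> Profile:
    binned = tuple("high" if v >= threshold else "low" for v in (a, r, c))
    return binned  # type: ignore[return-value]

print(profile(0.8, 0.3, 0.9))                     # ('high', 'low', 'high')
print(PROFILE_TO_MARKER[profile(0.8, 0.3, 0.9)])  # marker_high_low_high
```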
For example, interface 32 or server 10 may select a body or emotional marker from a set of body or emotional markers using confidence scores or distribution rules. As an example, a rule may correspond to a default value that best represents the profiles of the largest demographic of users, such as lower self-compassion or higher flexibility. The interface may also prompt for more information to capture additional user data (e.g., via a digital or virtual coach or conversational-agent-style interface) to select a body or emotional marker from the group of body or emotional markers.
The server 10 may automatically generate one or more product recommendations or personalized products for display at the interface 32 based on the user's body or emotional markers and activity metrics. The server 10 transmits product recommendations to the interface 32 in response to the product request. A recommendation may be based on a score threshold from predefined questions and Likert-scale responses. A recommendation may also be based on advanced data points and more complex data collection and analysis methods.
The interface 32 may provide an automated guidance application to use physical and cognitive metrics extracted from the captured user data to provide automated product recommendations or personalized products for the user session.
The interface 32 may be a mobile companion application (resident on a computing device) for a separate hardware device that captures user data. The separate hardware device may also have an interface that can deliver recommendations or products in coordination with interface 32. Within the companion application, the interface 32 has a conversational agent interface to provide product recommendations. The system 1000 may combine a hardware device with sensors for capturing user data with a companion mobile interface 32 on a separate hardware device to exchange data with the server 10. The hardware device with the companion mobile interface 32 may trigger a digital coaching session to recommend different products to complement styles and types of mental training activities (e.g., focused meditation, open-monitoring meditation, compassion meditation), physical activities (yoga, walking, spinning, etc.), partner coaching activities (e.g., discussion of various topics of emotional development, mirroring or eye gazing, practicing listening to the other party without speaking), and the like.
Server 10 may implement a state-based personality metric for a physical marker or an emotional marker. Status-based personality is a measurement that changes over a period of time based on the collected data. Initially, the server 10 may collect a brief trait measure. Then, over time, through the collection of states, the server 10 may dynamically recalculate the body or emotional markers over the period of time of the user session (e.g., at certain intervals, upon detection of an event) such that the body or emotional markers will dynamically change during each user session based on the states over time. The server 10 may use a rolling average, for example, based on the measured state.
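The state-based measurement described above, an initial trait value blended with a rolling average of recently observed states, might be sketched as follows; the window size and blend weight are assumptions.

```python
# Sketch of a state-based marker: a trait baseline blended with a rolling
# average of recent state observations, so the marker drifts as states arrive.

from collections import deque
from typing import Deque

class StateBasedMarker:
    def __init__(self, trait_baseline: float, window: int = 10,
                 trait_weight: float = 0.3) -> None:
        self.baseline = trait_baseline
        self.trait_weight = trait_weight
        self.states: Deque[float] = deque(maxlen=window)

    def observe(self, state_value: float) -> None:
        self.states.append(state_value)

    def value(self) -> float:
        if not self.states:
            return self.baseline
        rolling = sum(self.states) / len(self.states)
        return self.trait_weight * self.baseline + (1 - self.trait_weight) * rolling

marker = StateBasedMarker(trait_baseline=0.5)
for s in (0.7, 0.8, 0.6):   # states detected at intervals during the session
    marker.observe(s)
print(marker.value())       # drifts toward the recent state average
```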
The interface 32 may implement natural language generation techniques for conveying product recommendations or outputs received from the server 10. The interface 32 may use advanced data points and user preferences, various types of psychological and demographic data, transaction data regarding products related to various health activities (running, yoga, etc.), and other contextual information regarding life goals and values. The interface 32 may use this data to further contextualize the output received from the server 10 in order to develop a customized interface experience for the user.
FIG. 13 illustrates an example system 100 for providing personalized products or product recommendations.
The system 100 has one or more computing devices 1302 (with hardware processors 18), one or more hardware devices 1306, a data channel 30, one or more hardware servers 10, and a manufacturing queue 34 communicatively coupled to each other through a network 14. The manufacturing queue 34 is communicatively coupled to the manufacturing device 1320 through another network 1314 such that other elements of the system 100 can only communicate with the manufacturing device 1320 through the manufacturing queue 34 and not directly.
The one or more computing devices 1302 have a hardware processor 18 with computer-readable non-transitory memory storing an application program that provides the interface 32. The user may interact with the interface, for example, to provide user data or request product recommendations. Each computing device 1302 has one or more sensors 1304. These sensors may be used to collect user data, such as physiological parameter data or cognitive affective capability data. The sensor 1304 may be, for example, a camera, a microphone, a biometric sensor (heart monitor, blood pressure monitor, skin moisture monitor), a position or location sensor, or a motion detection or motion capture sensor.
The one or more hardware devices 1306 may be used to collect user data. Hardware device 1306 may be, for example, a smart watch, a smart phone, a smart mirror, or any other smart exercise machine equipped to collect user data (e.g., a connected stationary bicycle). For example, the hardware device 1306 may have cameras and sensors to capture user data. The hardware device 1306 is communicatively coupled to the computing device 1302, such as by bluetooth, RF, or any other suitable communication protocol.
The data channel 30 may receive data from different data sources, such as a product database or different content providers (e.g., coaches, advisors, influencers). Different channels 30 may provide different kinds of data. For example, there may be channels providing CRM data, biometric data, speech data, text data, input data, activity data, material data, measurement data, BOM data, and content data.
The one or more hardware servers 10 each have a hardware processor with computer-readable non-transitory memory storing a data processing system (e.g., database 12 on non-transitory memory). The data processing system may receive data from computing device 1302 and data channel 30 and store this data in product records, user records, and content records. The data processing system has multimodal feature extraction software for extracting data from the product records, user records, and content records and storing the data in an attribute database. The hardware server 10 has a recommendation/personalization system with a user model, a generative model, a content syndication/creation engine, and a product syndication/creation engine. Attributes from the attribute database may be used as input to the user model, which is used with the generative model to power the content and product syndication/creation engines of the recommendation/personalization system.
In response to a product or recommendation request provided by a user at the interface 32, the recommendation/personalization system uses the attribute database, the user model, and the generative design model to aggregate or create a product recommendation or personalized product. The product recommendation or personalized product is then displayed to the user at the interface 32. If the product recommendation or personalized product is a product that can be manufactured, such as a garment, the user may request that the product be manufactured. The manufacturing request is sent to the manufacturing queue 34 along with any data from the data channel 30 or the hardware server 10 that is required to complete the request. For example, in the case of apparel, the manufacturing queue may require apparel material data provided by the data channel 30, product data stored in a product record of the hardware server 10, and the user's measurements and shipping information stored in a user record of the hardware server 10. The manufacturing queue 34 provides instructions to the manufacturing device 1320 to manufacture the product.
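Assembling such a manufacturing request can be sketched as follows; the record layouts and field names are invented for illustration and simply combine the three inputs named above (material data, the product record, and the user record):

    def build_manufacturing_request(product_id: str, user_id: str,
                                    materials: dict, products: dict,
                                    users: dict) -> dict:
        product = products[product_id]
        user = users[user_id]
        return {
            "product_id": product_id,
            "pattern": product["pattern"],
            "material": materials[product["material_id"]],   # from data channel 30
            "measurements": user["measurements"],            # from the user record
            "ship_to": user["shipping_address"],             # from the user record
        }

    request = build_manufacturing_request(
        "jacket-001", "user-1",
        materials={"m-7": {"fabric": "nylon", "weight_gsm": 180}},
        products={"jacket-001": {"pattern": "raglan", "material_id": "m-7"}},
        users={"user-1": {"measurements": {"chest_cm": 96},
                          "shipping_address": "123 Example St"}},
    )
    print(request["material"])  # {'fabric': 'nylon', 'weight_gsm': 180}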
FIG. 14 illustrates an example of a user interface 1400 for product recommendation. The user interface 1400 has selectable indicia 1420, including a selectable indicium for recommending one or more products that triggers a product request to update the user interface 1400 with a visualization of one or more product recommendations 1410. For example, upon selection of the selectable indicium 1420, the user interface 1400 may transmit a product request to the server 10. In response, the user interface 1400 receives data for generating a visualization of one or more product recommendations 1410 (which may include associated recommended content) and updates the user interface 1400 to display or communicate the product recommendations 1410 or the associated recommended content. The product recommendations may include content provided to the user as messages, images, or videos displayed in the user interface 1400. The messaging may also request additional input data (to be captured at the user interface 1400) before the product recommendations 1410 are generated. The product recommendations 1410 are generated automatically, for example, by the server 10. The selectable indicia 1420 may include selectable purchase options to select a recommended product for purchase. This may trigger instructions to be transmitted to the manufacturing queue 34 to manufacture the selected product of the product recommendations 1410. The selectable indicia 1420 may include selectable feedback options to provide feedback data regarding the recommended products 1410 and/or purchased products. The user interface 1400 may be stored on a non-transitory computer-readable medium and executable by a hardware processor to implement the operations described herein.
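The request/response loop run by such an interface can be sketched as below; server_recommend is a hypothetical stand-in for the call to server 10, and the class models only the update behavior described above:

    def server_recommend(product_request: dict) -> list:
        # stand-in for the server-side recommendation computation
        return [{"product_id": "jacket-001", "content": "message, image, or video"}]

    class RecommendationUI:
        def __init__(self) -> None:
            self.visualizations: list = []

        def on_select_recommend(self, user_id: str) -> None:
            # selecting the indicium transmits a product request ...
            recommendations = server_recommend({"user_id": user_id})
            # ... and the returned data updates the interface
            self.visualizations = recommendations
            for rec in recommendations:
                print(f"showing {rec['product_id']}: {rec['content']}")

    RecommendationUI().on_select_recommend("user-1")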
FIG. 15 illustrates an example of a user interface 1500 for product personalization. The user interface 1500 has selectable indicia 1520, including a selectable indicium for generating one or more personalized products that triggers a product request to update the user interface 1500 with a visualization of the one or more personalized products 1530. For example, upon selection of the selectable indicium 1520, the user interface 1500 may transmit a product request to the server 10. In response, the user interface 1500 receives data for generating a visualization of the one or more personalized products 1530 (which may include associated personalized content) and updates the user interface 1500 to display or communicate the one or more personalized products 1530 or the associated personalized content. The product may include content provided to the user as a message, image, or video displayed in the user interface 1500. The messaging may also request additional input data (to be captured at the user interface 1500) before the one or more personalized products 1530 are generated. The one or more personalized products 1530 are generated automatically, for example, by the server 10. The selectable indicia 1520 may include selectable purchase options to select a personalized product for purchase. This may trigger instructions to be transmitted to the manufacturing queue 34 to manufacture the personalized product 1530. The selectable indicia 1520 may include selectable feedback options to provide feedback data regarding the personalized product 1530. The user interface 1500 may be stored on a non-transitory computer-readable medium and executable by a hardware processor to implement the operations described herein.
In one aspect, a method for providing an interface for generating a product is provided. The method involves storing, in memory, an attribute database of product measurement records, movement features, perceptual preference features, body and emotion marker features, user records, and a generative design model; capturing user data of a user session over a period of time; in response to receiving a product request from the interface, selecting a product category and product variables; extracting user attributes from the user data and associated with the product variables, the user attributes including at least one of a measurement metric, a movement metric, a perceptual preference metric, and a body and emotion marker metric; calculating target parameters of a target sensory state of the user using the extracted user attributes; generating a product and associated manufacturing instructions by processing the extracted user attributes and the target parameters of the target sensory state using the generative design model and the attribute database, wherein generating the product is based on emotional and physical markers of the user; displaying a visualization of the product at the interface with a selectable purchase option; receiving purchase instructions for the product in response to selection of the selectable purchase option at the interface; transmitting the manufacturing instructions of the product to a manufacturing queue to trigger production and delivery of the product; receiving feedback data regarding the product; and updating the attribute database or a user model based on the feedback data.
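Read as pseudocode, the aspect above reduces to a single driver; a compressed, hypothetical sketch follows, in which every helper value stands in for the named step rather than the actual models or databases:

    def handle_product_request(user_data: dict) -> dict:
        product_category, product_variables = "apparel", ["fit", "fabric"]  # select
        user_attributes = {key: user_data[key] for key in
                           ("measurement", "movement") if key in user_data}  # extract
        target_parameters = {"sensory_state": "calm"}  # calculate target state
        product = {"category": product_category, "variables": product_variables,
                   "attributes": user_attributes, "target": target_parameters}  # generate
        manufacturing_instructions = {"steps": [f"cut {product_category}", "sew"]}
        return {"product": product, "manufacturing": manufacturing_instructions}

    result = handle_product_request({"measurement": {"chest_cm": 96},
                                     "movement": "running"})
    print(result["product"]["target"])  # {'sensory_state': 'calm'}
    # on purchase: transmit result["manufacturing"] to the manufacturing queue;
    # on feedback: update the attribute database or user model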
In some embodiments, the method involves receiving a modification request for the product at the interface; and updating the product and the associated manufacturing instructions based on the modification request.
The terms "a" or "an" when used in conjunction with the terms "comprising" or "including" in the claims and/or specification may mean "one" or "one," but it is also consistent with the meaning of "one or more" or "more", "at least one" and "one or more/one or more than one (one or more than one)" unless the context clearly dictates otherwise. Similarly, the word "another" may mean at least a second or more, unless the context clearly dictates otherwise.
The terms "coupled", "coupled" or "connected" as used herein may have several different meanings, depending on the context in which the terms are used. For example, the terms coupled, or connected may have a mechanical or electrical meaning. For example, as used herein, the terms coupled, or connected may refer to two elements or devices being connected to each other directly or through one or more intervening elements or devices via electrical, or mechanical elements, depending on the particular context. When used in connection with a list of items, the term "and/or" herein means any one or more of the items comprising the list.
As used herein, reference to "about" or "approximately" a number or "substantially" equal to a number means within +/-10% of the number.
While the present disclosure has been described in connection with specific embodiments, it is to be understood that the present disclosure is not limited to those embodiments, and that alterations, modifications and variations may be made to these embodiments by those skilled in the art without departing from the scope of the present disclosure.
It is also contemplated that any portion of any aspect or embodiment discussed in this specification may be implemented or combined with any portion of any other aspect or embodiment discussed in this specification.

Claims (71)

1. A system for providing an interface for product personalization using body and emotion markers of a user, the system comprising:
a non-transitory memory storing an attribute database of at least one of product measurement records, movement features, perceived preference features, body marker features, emotion marker features, user records, product records, and a generative design model;
a hardware processor programmed with executable instructions for an interface to: obtaining user data for a user session over a period of time, transmitting a product request for the user session, displaying a visualization of a product generated for the user session in response to the product request, and receiving quantitative and qualitative feedback data regarding the product;
a hardware server coupled to the memory to access the attribute database, the hardware server programmed with executable instructions to:
in response to receiving the product request from the interface, selecting a product category and a product variable;
extracting user attributes from the user data of the user session and associated with the product variables, the user attributes including at least one of a measurement metric, a movement metric, a perceptual preference metric, a body marker metric, an emotion marker metric, a purchase history, and an activity intent;
calculating target parameters of a target sensory state of the user using the extracted user attributes;
generating the product and associated manufacturing instructions by processing the extracted user attributes and the target parameters of the target sensory state using the generative design model and the attribute database;
transmitting the visualization of the product to the interface;
updating the attribute database or the user record based on the feedback data regarding the product; and
a user device, the user device comprising: one or more sensors for capturing the user data of the user session during the time period; and a transmitter for transmitting the captured user data to the interface of the hardware processor or the hardware server over a network to generate the product for the user session.
2. The system of claim 1, wherein the interface receives purchase instructions for the product, and wherein the hardware server transmits manufacturing instructions for the product in response to receiving purchase instructions.
3. The system of claim 1, wherein the hardware server generates the product and the associated manufacturing instructions by generating a bill of materials file.
4. The system of claim 1, wherein the product comprises video content, and wherein the hardware server generates the product and associated code file by assembling a content file for the video content.
5. The system of claim 1, wherein the interface receives a modification request for the product, and wherein the hardware server updates the product and the associated manufacturing instructions based on the modification request.
6. The system of claim 1, wherein the user device captures the user data from a plurality of channels, wherein the user data comprises at least one of image data related to the user, text input related to the user, data defining physical or behavioral characteristics of the user, and audio data related to the user.
7. The system of claim 6, wherein the hardware server is programmed with executable instructions to calculate activity metrics, cognitive affective ability metrics, and social metrics using the user data and the user attributes of the user session by: for the image data and the data defining the physical or behavioral characteristics of the user, using at least one of facial analysis, body analysis, eye tracking, behavioral analysis, social network or graph analysis, location analysis, and user activity analysis; for the audio data, using voice analysis; and for the text input, using text analysis; calculating one or more states of one or more cognitive affective abilities of the user based on the cognitive affective ability metrics and the social metrics; calculating the emotion marker metrics of the user based on the one or more states of the one or more cognitive affective abilities of the user; and generating the product based on at least one of the emotion marker metrics of the user, the activity metrics, the product records, and the user records.
8. The system of claim 1, wherein the hardware server generates the measurement metrics using at least one of 3D scanning, machine learning prediction, and user measurements.
9. The system of claim 8, wherein the product comprises a garment, and wherein the hardware server uses garment measurements to capture garment data of the product data to generate the measurement metrics.
10. The system of claim 1, wherein the hardware server generates the movement metric based on at least one of inertial measurement unit (IMU) data, computer vision, pressure data, and radio frequency data.
11. The system of claim 1, wherein the hardware server extracts the user attributes from user data, the user attributes including at least one of purchase history, activity intent, interaction history, and comment data of the user.
12. The system of claim 1, wherein the hardware server generates the perceived preference metrics based on at least one of clothing feel, preferred hand feel, thermal preference, and movement feel.
13. The system of claim 1, wherein the hardware server generates the emotion marker metrics based on at least one of personality data, affective state data, mood adaptation data, personal values data, objective data, and physiological data.
14. The system of claim 1, wherein the hardware processor calculates a preferred sensory state as part of the extracted user attributes.
15. The system of claim 1, wherein the hardware server calculates social marker metrics, connectivity metrics, and/or resonance marker metrics.
16. The system of claim 1, wherein the user device is connected to or integrated with an immersive hardware device that captures audio data, image data, and data defining physical or behavioral characteristics of the user as part of the user data.
17. The system of claim 1, wherein the non-transitory memory has a content repository and the hardware server has a content syndication engine that generates content as part of the product and transmits the generated content to the interface.
18. The system of claim 1, wherein the hardware processor receives object identification data and calculates a preferred sensory state as part of the object identification data.
19. The system of claim 1, wherein the product comprises content for display or play on the hardware processor or the user device.
20. The system of claim 1, wherein the product relates to apparel, wherein the attribute database includes simulated apparel records, wherein the hardware server generates simulated product options as part of the product and the associated manufacturing instructions, and wherein the interface displays visualizations of the simulated product options.
21. The system of claim 20, wherein the simulated product options include at least one of software physical simulation, hardware physical simulation, static 3D viewer, and AR/VR experience content.
22. The system of claim 1, wherein the hardware server uses multi-modal feature extraction to extract product variables or attributes and categorize the product variables and attributes.
23. The system of claim 1, wherein the hardware server classifies different types of data streams for the user data for multimodal feature extraction.
24. The system of claim 1, wherein the user data comprises image data related to the user, text input related to the user, data defining physical or behavioral characteristics of the user, and audio data related to the user, and wherein the hardware server uses multi-modal feature extraction to extract the user attributes:
for the image data and the data defining the physical or behavioral characteristics of the user, the multi-modal feature extraction implements at least one of: facial analysis; body analysis; eye tracking; behavioral analysis; social network or graph analysis; location analysis; and user activity analysis;
for the audio data, the multi-modal feature extraction performs speech analysis;
for the text input, the multi-modal feature extraction performs text analysis; and
calculating one or more states of one or more cognitive affective abilities of the user based on the cognitive affective ability metrics and the social metrics.
25. The system of claim 1, wherein the hardware server extracts the user attributes by calculating the emotion marker metrics based on one or more states of one or more cognitive affective abilities of the user and social metrics of the user.
26. The system of claim 1, wherein the non-transitory memory stores a classifier for generating the data defining physical or behavioral characteristics of the user, and the hardware server extracts the user attributes by calculating activity metrics, cognitive affective ability metrics, and social metrics using the classifier.
27. The system of claim 1, wherein the non-transitory memory stores a user model corresponding to the user, and the hardware server uses the user model to calculate the emotion marker metrics for the user.
28. The system of claim 1, further comprising one or more modulators in communication with one or more environmental fixtures to change an external sensory environment based on the product, the one or more modulators in communication with the hardware server to automatically modulate the external sensory environment of the user during the user session.
29. The system of claim 28, wherein the one or more environmental fixtures comprise at least one of: a lighting fixture, an audio system, a fragrance diffuser, and a temperature regulation system.
30. The system of claim 1, further comprising a plurality of data channels for a plurality of different types of sensors for capturing different types of user data during the user session, each data channel of the plurality of data channels transmitting the captured different types of user data to the hardware server over the network to generate the product.
31. The system of claim 1, wherein the hardware server is configured to: determine emotion markers of one or more additional users; determine users with similar emotion markers; predict connectivity between users with similar emotion markers; and generate the product using data corresponding to the users having similar emotion markers.
32. The system of claim 31, wherein the interface is capable of transmitting another product request for the user session and providing additional visualizations of another product for the user session received in response to the other product request.
33. The system of claim 1, wherein the product comprises a program for display or playback on a computing device, wherein the program comprises two or more phases, each phase having different content, intensity, or duration.
34. The system of claim 1, wherein the user data comprises personality type data, and wherein the hardware server calculates the emotion marker metrics by determining a personality type of the user based on the user data, comparing the personality type data with stored personality type data indicative of a correlation between personality types and personality type data.
35. The system of claim 1, wherein the hardware server calculates, as part of the emotion marker metrics, at least one of: one or more emotional states of the user, one or more attentive states of the user, one or more sociophilic states of the user, one or more motivational states of the user, one or more reappraisal states of the user, and one or more insight states of the user.
36. The system of claim 1, wherein the interface is a coaching application for improving the health of the user based on the product and at least one of the body marker metric, the emotion marker metric, and the perceptual preference metric.
37. A method for providing an interface for generating a product, the method comprising:
storing, in memory, an attribute database of product measurement records, movement features, perceptual preference features, body and emotion marker features, user records, and a generative design model;
capturing user data of a user session over a period of time;
in response to receiving a product request from the interface, selecting a product category and a product variable;
extracting user attributes from the user data and associated with the product variables, the user attributes including at least one of a measurement metric, a movement metric, a perceptual preference metric, and a body and emotion marker metric;
calculating target parameters of a target sensory state of the user using the extracted user attributes;
generating a product and associated manufacturing instructions by processing the extracted user attributes and the target parameters of the target sensory state using the generative design model and the attribute database, wherein generating the product is based on emotional and physical markers of the user;
displaying a visualization of the product at the interface with a selectable purchase option;
receiving purchase instructions for the product in response to selecting the selectable purchase option at the interface;
transmitting manufacturing instructions of the product to a manufacturing queue to trigger production and delivery of the product;
receiving feedback data regarding the product; and
updating the attribute database or a user model based on the feedback data.
38. The method of claim 37, further comprising:
receiving a modification request for the product at the interface; and
updating the product and the associated manufacturing instructions based on the modification request.
39. A system for providing an interface with product recommendations, the system comprising:
a non-transitory memory storing an attribute database of at least one of product measurement records, movement features, perceived preference features, body and emotion marker features, user records, and a generative design model;
a hardware processor programmed with executable instructions for an interface to: obtaining user data for a user session over a period of time, transmitting a product request for the user session, providing a product recommendation for the user session in response to the product request, receiving a selected product of the product recommendation, and receiving feedback data regarding the selected product;
a hardware server coupled to the memory to access the attribute database, the hardware server programmed with executable instructions to:
in response to receiving the product request from the interface, selecting a product category and a product variable;
extracting user attributes from the user data of the user session and associated with the product variables, the user attributes including at least one of a measurement metric, a movement metric, a perceptual preference metric, a body marker metric, and an emotion marker metric;
calculating target parameters of a target sensory state of the user using the extracted user attributes;
calculating a product recommendation using a recommendation system to process the extracted user attributes and the target parameters of the target sensory state, the product recommendation being calculated using emotional and physical markers;
transmitting the product recommendation to the interface over a network;
receiving a notification of the selected product from the interface;
receiving feedback data regarding the selected product; and
updating the attribute database based on the feedback data regarding the selected product;
at least one data channel, the at least one data channel having: one or more sensors for capturing user data during the time period; and a transmitter for transmitting the captured user data to the interface of the hardware processor or the hardware server through the network to calculate the product recommendation.
40. The system of claim 39, wherein the user attributes include quantitative user attributes of physical metrics and qualitative user attributes, the qualitative user attributes including the emotion marker metrics and/or perceived preferences of the user.
41. The system of claim 39, wherein the interface further comprises a voice interface for communicating the product recommendation and the product request.
42. The system of claim 39, wherein the hardware server generates the selected product by processing the extracted user attributes and the target parameters of the target sensory state using a generative design model and the attribute database.
43. The system of claim 42, wherein the hardware server receives personalization data to generate the selected product.
44. The system of claim 42, wherein the hardware server generates the selected product and associated code file by generating a bill of materials file.
45. The system of claim 42, wherein the hardware server generates the selected product and associated code file by compiling content files.
46. The system of claim 39, wherein the interface receives a modification request for the selected product, and wherein the hardware server updates the selected product and the associated code file based on the modification request.
47. The system of claim 39, wherein the hardware server is programmed with executable instructions to calculate activity metrics, cognitive affective ability metrics, and social metrics using the user data and the user attributes of the user session by: for image data and data defining physical or behavioral characteristics of the user, using at least one of facial analysis, body analysis, eye tracking, behavioral analysis, social network or graph analysis, location analysis, and user activity analysis; for audio data, using voice analysis; and for text input, using text analysis; calculating one or more states of one or more cognitive affective abilities of the user based on the cognitive affective ability metrics and the social metrics; calculating the emotion marker metrics of the user based on the one or more states of the one or more cognitive affective abilities of the user; and calculating the product recommendation based on the emotion marker metrics of the user, the activity metrics, the product records, and the user records.
48. The system of claim 39, wherein the hardware server generates the measurement metrics using at least one of 3D scanning, machine learning prediction, user measurements, and garment measurements.
49. The system of claim 39, wherein the hardware server generates the movement metric based on at least one of inertial measurement unit (IMU) data, computer vision, pressure data, and radio frequency data.
50. The system of claim 39, wherein the hardware server extracts the user attributes from user data, the user attributes including at least one of purchase history, activity intent, interaction history, and comment data of the user.
51. The system of claim 39, wherein the hardware server generates the perceived preference metrics based on at least one of movement, touch, temperature, vision, smell, sound, taste, clothing feel, preferred hand feel, thermal preference, and movement feel.
52. The system of claim 39, wherein the hardware server generates the emotion marker metrics based on at least one of personality data, affective state data, mood adaptation data, personal values data, objective data, and physiological data.
53. The system of claim 39, wherein the hardware processor calculates a preferred sensory state as part of the extracted user attributes.
54. The system of claim 39, wherein the emotion marker metrics include social marker metrics, connectivity metrics, and/or resonance marker metrics.
55. The system of claim 39, wherein a user device is connected to or integrated with an immersive hardware device that captures audio data, image data, and data defining physical or behavioral characteristics of the user as part of the user data.
56. The system of claim 39, wherein the selected product comprises content for display or play on a computing device.
57. The system of claim 56, wherein the non-transitory memory has a content store and the hardware server has a content syndication engine that generates content as part of the selected product and transmits the generated content to the interface.
58. The system of claim 39, wherein the hardware processor receives object identification data and calculates a preferred sensory state as part of the user data.
59. The system of claim 39, wherein the attribute database includes simulated clothing records, wherein the hardware server generates simulated product options as part of the product recommendation, and wherein the interface displays a visualization of the simulated product options.
60. The system of claim 59, wherein the simulated product options include at least one of software physical simulation, hardware physical simulation, static 3D viewer, and AR/VR experience content.
61. The system of claim 39, wherein the user data comprises image data related to the user, text input related to the user, data defining physical or behavioral characteristics of the user, and audio data related to the user, and wherein the hardware server uses multi-modal feature extraction to extract the user attributes:
for the image data and the data defining the physical or behavioral characteristics of the user, the multi-modal feature extraction implements at least one of: facial analysis; body analysis; eye tracking; behavioral analysis; social network or graph analysis; location analysis; and user activity analysis;
for the audio data, the multi-modal feature extraction performs speech analysis;
for the text input, the multi-modal feature extraction performs text analysis; and
calculating one or more states of one or more cognitive affective abilities of the user based on the cognitive affective ability metrics and the social metrics.
62. The system of claim 39, wherein the hardware server extracts the user attributes by calculating the emotion marker metrics based on one or more states of one or more cognitive affective abilities of the user and social metrics of the user.
63. The system of claim 39, wherein the non-transitory memory stores a classifier for generating the data defining physical or behavioral characteristics of the user, and the hardware server extracts the user attributes by calculating activity metrics, cognitive affective ability metrics, and social metrics using the classifier.
64. The system of claim 39, wherein the non-transitory memory stores a user model corresponding to the user, and the hardware server uses the user model to calculate the emotion marker metrics for the user.
65. The system of claim 39, further comprising a plurality of user devices each having a different type of sensor for capturing different types of user data during the user session, each device of the plurality of devices transmitting the captured different types of user data to the hardware server over the network to generate the product recommendation.
66. The system of claim 39, wherein the hardware server is configured to: determine emotional and physical markers of one or more additional users; determine users with similar emotional or physical markers; predict connectivity between users with similar emotional or physical markers; and generate the product recommendation using data corresponding to the users having similar emotional or physical markers.
67. The system of claim 39, wherein the interface is capable of transmitting another product request for the user session and providing additional visualizations of other product recommendations for the user session received in response to the other product request.
68. The system of claim 39, wherein the product comprises a program for display or playback on the hardware processor or the user device, wherein the program comprises two or more phases, each phase having different content, intensity, or duration.
69. The system of claim 39, wherein the user data comprises personality type data, and wherein the hardware server calculates the emotion marker metrics by determining a personality type of the user based on the user data, comparing the personality type data with stored personality type data indicative of a correlation between personality types and personality type data.
70. The system of claim 39, wherein the hardware server calculates, as part of the emotion marker metrics, at least one of: one or more emotional states of the user, one or more attentive states of the user, one or more sociophilic states of the user, one or more motivational states of the user, one or more reappraisal states of the user, and one or more insight states of the user.
71. The system of claim 39, wherein the interface is a coaching application for improving the health of the user based on the product recommendation and the emotion marker metrics.
CN202180055866.0A 2020-07-16 2021-03-03 Method and system for interface for product personalization or recommendation Pending CN116529750A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202063052836P 2020-07-16 2020-07-16
US63/052,836 2020-07-16
CAPCT/CA2020/051454 2020-10-29
PCT/CA2021/050282 WO2022011448A1 (en) 2020-07-16 2021-03-03 Method and system for an interface for personalization or recommendation of products

Publications (1)

Publication Number Publication Date
CN116529750A true CN116529750A (en) 2023-08-01

Family

ID=79555886

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180055866.0A Pending CN116529750A (en) 2020-07-16 2021-03-03 Method and system for interface for product personalization or recommendation

Country Status (4)

Country Link
EP (1) EP4182875A1 (en)
CN (1) CN116529750A (en)
CA (1) CA3189350A1 (en)
WO (1) WO2022011448A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116738072B (en) * 2023-08-15 2023-11-14 深圳大学 Multidimensional recommendation method combining human factor information
CN116744063B (en) * 2023-08-15 2023-11-03 四川中电启明星信息技术有限公司 Short video push system integrating social attribute information

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013196553A (en) * 2012-03-22 2013-09-30 Dainippon Printing Co Ltd Clothing management and recommendation device
US10726465B2 (en) * 2016-03-24 2020-07-28 International Business Machines Corporation System, method and computer program product providing eye tracking based cognitive filtering and product recommendations
KR102520627B1 (en) * 2017-02-01 2023-04-12 삼성전자주식회사 Apparatus and method and for recommending products

Also Published As

Publication number Publication date
WO2022011448A1 (en) 2022-01-20
CA3189350A1 (en) 2022-01-20
EP4182875A1 (en) 2023-05-24

Similar Documents

Publication Publication Date Title
US20210248656A1 (en) Method and system for an interface for personalization or recommendation of products
US20220270738A1 (en) Computerized systems and methods for military operations where sensitive information is securely transmitted to assigned users based on ai/ml determinations of user capabilities
US11839473B2 (en) Systems and methods for estimating and predicting emotional states and affects and providing real time feedback
Aranha et al. Adapting software with affective computing: a systematic review
Liu et al. Technology-facilitated diagnosis and treatment of individuals with autism spectrum disorder: An engineering perspective
Lottridge et al. Affective interaction: Understanding, evaluating, and designing for human emotion
US20180096738A1 (en) Method for providing health therapeutic interventions to a user
EP4348665A1 (en) System and method for generating treatment plans to enhance patient recovery based on specific occupations
CN109479110A (en) The system and method that dynamic creation individualizes exercise videos
US10504379B2 (en) System and method for generating an adaptive embodied conversational agent configured to provide interactive virtual coaching to a subject
US11986300B2 (en) Systems and methods for estimating and predicting emotional states and affects and providing real time feedback
Guthier et al. Affective computing in games
Lindner Molecular politics, wearables, and the aretaic shift in biopolitical governance
CN116529750A (en) Method and system for interface for product personalization or recommendation
Novak et al. Linking recognition accuracy and user experience in an affective feedback loop
Betances et al. On the convergence of affective and persuasive technologies in computer-mediated health-care systems
WO2023159305A1 (en) Method and system to provide individualized interventions based on a wellness model
AU2022361223A1 (en) Mental health intervention using a virtual environment
US20210327559A1 (en) System for Optimizing Behavioral Changes of a User to Improve the User's Wellbeing
Albraikan InHarmony: A Digital Twin for emotional well-being
US20220137992A1 (en) Virtual agent team
US11783723B1 (en) Method and system for music and dance recommendations
US20240081689A1 (en) Method and system for respiration and movement
Karolus Proficiency-aware systems: designing for user skill and expertise
Taheri Multimodal Multisensor attention modelling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination