WO2021061699A1 - Adaptive interface for screen-based interactions - Google Patents

Adaptive interface for screen-based interactions

Info

Publication number
WO2021061699A1
WO2021061699A1 PCT/US2020/052103
Authority
WO
WIPO (PCT)
Prior art keywords
user
data
change
output
adaptive interface
Prior art date
Application number
PCT/US2020/052103
Other languages
English (en)
Inventor
Hannes BENDFELDT
Original Assignee
Bendfeldt Hannes
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US16/579,747 (external priority: US11561806B2)
Application filed by Bendfeldt Hannes
Publication of WO2021061699A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/038Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/038Indexing scheme relating to G06F3/038
    • G06F2203/0381Multimodal input, i.e. interface arrangements enabling the user to issue commands by simultaneous use of input devices of different nature, e.g. voice plus gesture on digitizer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Definitions

  • Machine learning processes sensor data and outputs dynamic screen- and audio-based interaction, e.g., a graphical user interface (GUI), computer-generated images (CGI), a visual and/or audio environment, and user experience design (UX). The screen and/or audio work environment and interactions are modified and adapted to the user's data using machine learning based on sensor input.
  • The present disclosure relates generally to data processing and, more particularly, to customizing output data on an electronic device associated with a user based on biological data of the user.
  • a digital device may be configured to automatically adjust display parameters to suit the environmental conditions and the content a user is currently viewing.
  • a digital device may have ‘an adaptive display’ feature that may enable the digital device to automatically adjust a color range, contrast, and sharpness of a display according to the current usage of the digital device by the user.
  • the digital device may sense the environmental conditions, determine the type of content the user is viewing, determine a particular application the user is using, and analyze all collected data to select parameters for optimizing the viewing experience of the user.
  • a machine learning system for customizing output based on user data may include at least one sensor, at least one computing resource, and an adaptive interface.
  • the at least one sensor may be configured to continuously capture the user data associated with a user during perception of output data by the user.
  • the at least one computing resource may include a first processor and a first memory and may be configured to analyze the user data received from the at least one sensor and, based on the analysis, determine dependencies between the user data and the output data.
  • the at least one computing resource may be further configured to determine, based on predetermined criteria, that an amount of the user data and the dependencies is sufficient to customize the output data.
  • the adaptive interface may include a second processor and a second memory and may be configured to continuously customize the output data using at least one machine learning technique based on the analysis of the user data and the dependencies.
  • the customized output data may be intended to elicit a personalized change.
  • a method for customizing an output based on user data may commence with continuously capturing, by at least one sensor, the user data associated with a user during perception of output data by the user. The method may then continue with analyzing, by at least one computing resource, the user data received from the at least one sensor. The method may further include determining dependencies between the user data and the output data based on the analysis. The method may then continue with determining, based on predetermined criteria, that an amount of the user data and the dependencies is sufficient to customize the output data. The method may further include continuously customizing, by an adaptive interface, the output data using at least one machine learning technique based on the analysis of the user data and the dependencies. The customized output data may be intended to elicit a personalized change.
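The capture–analyze–customize loop described above can be summarized in a short, hypothetical sketch. The class and method names (`AdaptiveLoop`, `capture`, `customize`) and the brightness adjustment are illustrative assumptions, not part of the patent:

```python
from dataclasses import dataclass, field

@dataclass
class AdaptiveLoop:
    # "Predetermined criteria" reduced to a minimum sample count for illustration.
    min_samples: int = 5
    samples: list = field(default_factory=list)

    def capture(self, user_datum, output_datum):
        # Operation 302: continuously capture user data during perception of output.
        self.samples.append((user_datum, output_datum))

    def dependencies(self):
        # Operations 304-306: analyze user data and derive user/output dependencies
        # (here simply paired observations).
        return [(u, o) for u, o in self.samples]

    def sufficient(self):
        # Operation 308: the amount of user data and dependencies meets the criteria.
        return len(self.samples) >= self.min_samples

    def customize(self, output):
        # Operation 310: customize the output; a trivial placeholder adjustment.
        if not self.sufficient():
            return output
        return {**output, "brightness": output["brightness"] * 0.9}
```

The disclosure contemplates richer criteria and machine learning techniques in place of the placeholder sufficiency check and brightness rule.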
  • FIG.1 illustrates an environment within which systems and methods for customizing output based on user data can be implemented, in accordance with some embodiments.
  • FIG.2 is a block diagram showing various modules of a machine learning system for customizing output based on user data, in accordance with certain embodiments.
  • FIG.3 is a flow chart illustrating a method for customizing output based on user data, in accordance with some example embodiments.
  • FIG.4 illustrates a further example environment within which systems and methods for customizing an output based on user data may be implemented, in accordance with some example embodiments.
  • FIG.5 illustrates a further environment within which systems and methods for customizing output based on user data can be implemented, in accordance with some example embodiments.
  • FIG.6 is a schematic diagram that illustrates operations performed by components of a machine learning system for customizing output based on user data, in accordance with some example embodiments.
  • FIG.7 is a schematic diagram that illustrates operations performed by an adaptive interface to customize output on a user device based on user data, in accordance with some example embodiments.
  • FIG.8 is a flow chart illustrating customization of output data of a user device based on user data, in accordance with some example embodiments.
  • FIG.9 is a schematic diagram showing customization of output data on a user device based on biological data of a user, according to an example embodiment.
  • FIG.10 is a schematic diagram illustrating processing data from a sensor using machine learning processing, according to an example embodiment.
  • FIG.11 is a flow chart illustrating continuous customization of output based on user data, according to an example embodiment.
  • FIG.12 is a schematic diagram showing operations performed by an adaptive interface to continuously customize output data using machine learning techniques, according to an example embodiment.
  • FIG.13 is a schematic diagram showing operations performed by an adaptive interface to continuously customize output data using machine learning techniques, according to an example embodiment.
  • FIG.14 is a block diagram illustrating continuous personalization of a brightness level on a user device based on data related to respiration or heart rate of a user, according to an example embodiment.
  • FIG.15 is a block diagram illustrating continuous personalization of a volume level on a user device based on data related to respiration or heart rate of a user, according to an example embodiment.
  • FIG.16 is a block diagram illustrating continuous personalization of an odorant level on a user device based on data related to respiration or heart rate of a user, according to an example embodiment.
  • FIG.17 is a schematic diagram showing a user interface of a mobile device customized by a machine learning system for customizing output based on user data, according to an example embodiment.
  • FIG.18 is a schematic diagram showing output data of headphones customized by a machine learning system for customizing output based on user data, according to an example embodiment.
  • FIG.19 is a schematic diagram showing output data of an artificial olfactory device customized by a machine learning system for customizing output based on user data, according to an example embodiment.
  • FIG.20 is a schematic diagram showing customizing output of a user device based on user data captured by a digital camera of the user device, according to an example embodiment.
  • FIG.21 is a schematic diagram showing an analysis of captured user data by an adaptive interface, according to an example embodiment.
  • FIG.22 is a schematic diagram showing output data continuously adapted by an adaptive interface, according to an example embodiment.
  • FIG.23 is a flow chart showing a method for customizing output based on user data, according to an example embodiment.
  • FIG.24 shows a computing system that can be used to implement a method for customizing output based on user data, according to an example embodiment.
  • the system for customizing output based on user data of the present disclosure may continuously customize an output provided by a digital device associated with a user to elicit a personalized change of biological parameters of the user.
  • the system may collect user data, such as biological data of the user or other users, historical data of the user or other users, ambient data, and the like.
  • the biological data may include data related to physical parameters of the user, e.g., a heart rate, body temperature, a blood oxidation level, presence of a substance in the blood, a blood glucose level, and the like.
  • the user data may be collected by sensors affixed to the user, such as a heart-rate monitor; sensors located in proximity to the user, such as a thermal imaging camera and a digital camera; sensors embedded into the digital device of the user; and the like.
  • the system may analyze the collected user data. Based on the analysis, the system may determine dependencies between the user data and the output data.
  • the system includes an adaptive interface unit, also referred to herein as an adaptive interface, that uses the results of the analysis to continuously customize output data on the digital device of the user, also referred to herein as a user device.
  • the adaptive interface applies machine learning techniques to process the results of analysis of the collected user data and the dependencies between the user data and the output data and select changes to a graphics output and/or an audio output on the user device to cause the change in biological data of the user.
  • the adaptive interface may continuously analyze the relationship between the biological data of the user, such as a heart rate, and the graphics and audio the user experiences when using the digital device.
  • the results of continuous analysis of dependencies between the biological data of the user and the output data provided to the user on the user device during perception of output data by the user via the user device may be stored in a database in the form of historic data associated with the user.
  • data on dependencies between biological data of a plurality of users and output data on digital devices of the plurality of users may be stored in the database (for example, in the form of historic data associated with the plurality of users).
  • the system may continuously collect the user data via the sensor without a change in output data for a period of time. Specifically, the system may continuously analyze the collected user data and the determined dependencies and determine whether an amount of the collected user data and the dependencies is sufficient to customize the output data. The determination of whether the amount of the collected user data and the dependencies is sufficient may be performed based on predetermined criteria.
  • the predetermined criteria may include a predetermined amount of collected user data, a predetermined amount of determined dependencies, a predetermined period of time, and other criteria associated with machine learning techniques.
  • the determination of whether the amount of the collected user data and the dependencies is sufficient may be performed using machine learning techniques, such as deep learning techniques.
  • the system may continuously collect user data and continuously determine dependencies between the biological data of the user and the output data provided to the user on the user device to collect a predetermined amount of data sets.
  • the system may determine, e.g., by using deep learning techniques, that the collected data sets are sufficient for the adaptive interface to determine how to customize the output data to elicit an automatic and immediate personalized change of a biological response of the user to perception of the customized output data.
  • the determination that the amount of the user data and the dependencies is sufficient to customize the output data may trigger customization of the output data by the adaptive interface.
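One hedged reading of the "predetermined criteria" for sufficiency combines a data threshold, a dependency threshold, and a collection period. All threshold values below are assumptions for illustration, not values from the disclosure:

```python
import time

def is_sufficient(samples, dependencies, started_at,
                  min_samples=100, min_dependencies=10, min_seconds=60.0):
    # Sufficiency to trigger customization: enough collected user data, enough
    # determined dependencies, and enough elapsed collection time.
    return (len(samples) >= min_samples
            and len(dependencies) >= min_dependencies
            and time.time() - started_at >= min_seconds)
```

The disclosure also allows this determination to be made by machine learning (e.g., deep learning) rather than fixed thresholds.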
  • the adaptive interface may determine, based on the analysis of the collected user data, that the user has an increased heart rate at the current moment of time, and adapt the output data on the interface of the digital device to elicit reduction of the heart rate of the user.
  • the adaptive interface may determine, based on the historic data, that the heart rate of the user typically changes in response to change of the volume of the audio output and brightness of the video output. Based on such determination, the adaptive interface may reduce the volume of the audio output and decrease the brightness of a display of the user device to cause the reduction of the heart rate of the user.
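The heart-rate example above might be sketched as the following hypothetical rule; the resting rate, step size, and parameter names are illustrative assumptions rather than values from the disclosure:

```python
def adapt_output(heart_rate_bpm, output, resting_bpm=70, step=0.05):
    # When the heart rate exceeds the user's typical range, lower volume and
    # brightness to elicit a reduction; otherwise, gradually restore them.
    out = dict(output)
    if heart_rate_bpm > resting_bpm:
        out["volume"] = max(0.0, out["volume"] - step)
        out["brightness"] = max(0.0, out["brightness"] - step)
    else:
        out["volume"] = min(1.0, out["volume"] + step)
        out["brightness"] = min(1.0, out["brightness"] + step)
    return out
```

In the disclosed system, which outputs to adjust and by how much would be learned from the historic dependency data rather than hard-coded.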
  • the adaptive interface may relate to bio-adaptive technology and may perform the adaptation of an output of the user device based on biological parameters of the user. The adaptation of the output of the user device may be directed to eliciting the change of the biological parameters of the user when the biological parameters of the user fall outside predetermined ranges or values.
  • FIG.1 illustrates an environment 100 within which systems and methods for customizing output based on user data can be implemented, in accordance with some embodiments.
  • the environment 100 may include a frontend 101 and a backend 103.
  • the frontend 101 may include a sensor 106 and a user device 104 associated with a user 102.
  • the backend 103 may include a machine learning system 200 for customizing output based on user data (also referred to as a system 200), a server 108, a data network shown as a network 110 (e.g., the Internet or a computing cloud), and a database 112.
  • the user device 104, the system 200, the server 108, the sensor 106, and the database 112 may be connected via the network 110.
  • the user 102 may be associated with the user device 104.
  • the user device 104 may include a smartphone 114, a laptop 116, headphones 118, a retinal implant 120, an artificial olfaction device 122, and so forth.
  • the artificial olfaction device 122 may include an electronic system operating as an electronic nose of the user 102.
  • the network 110 may include a computing cloud, the Internet, or any other network capable of communicating data between devices.
  • Suitable networks may include or interface with any one or more of, for instance, a local intranet, a corporate data network, a data center network, a home data network, a Personal Area Network, a Local Area Network (LAN), a Wide Area Network (WAN), a Metropolitan Area Network, a virtual private network, a storage area network, a frame relay connection, an Advanced Intelligent Network connection, a synchronous optical network connection, a digital T1, T3, E1 or E3 line, Digital Data Service connection, Digital Subscriber Line connection, an Ethernet connection, an Integrated Services Digital Network line, a dial-up port such as a V.90, V.34 or V.34bis analog modem connection, a cable modem, an Asynchronous Transfer Mode connection, or a Fiber Distributed Data Interface or Copper Distributed Data Interface connection.
  • communications may also include links to any of a variety of wireless networks, including Wireless Application Protocol, General Packet Radio Service, Global System for Mobile Communication, Code Division Multiple Access or Time Division Multiple Access, cellular phone networks, Global Positioning System, cellular digital packet data, Research in Motion, Limited duplex paging network, Bluetooth radio, or an IEEE 802.11-based radio frequency network.
  • the data network can further include or interface with any one or more of a Recommended Standard 232 (RS-232) serial connection, an IEEE-1394 (FireWire) connection, a Fiber Channel connection, an IrDA (infrared) port, a Small Computer Systems Interface connection, a Universal Serial Bus (USB) connection or other wired or wireless, digital or analog interface or connection, mesh or Digi® networking.
  • the sensor 106 may be affixed to any body part of the user 102. Alternatively, the sensor 106 may be located in proximity to the user 102. In a further example embodiment, the sensor 106 may be integrated into the user device 104.
  • the sensor 106 may include a biological sensor (e.g., a heart-rate monitor), a thermal imaging camera, a breath sensor, a radar sensor, and the like.
  • the sensor 106 may collect user data 124 and provide the collected user data 124 to the user device 104.
  • the user device 104 may provide the user data 124 to the system 200.
  • the system 200 may be running on the user device 104 or in the computing cloud.
  • the system 200 may have an access to output data reproduced by the user device 104, such as graphics and audio.
  • the system 200 may include a computing resource 204 and an adaptive interface 206.
  • the computing resource 204 of the system 200 may analyze the user data 124.
  • the adaptive interface 206 may apply machine learning techniques 126 to the results of the analysis to customize the output data of the user device 104 so as to cause a change in the biological data of the user 102.
  • the adaptive interface 206 may include a combination of sensors, machine learning algorithms, processing units, and computing resources.
  • the adaptive interface 206 may reside in the user device 104 or remotely to the user device, e.g., in the computing cloud.
  • the adaptive interface 206 may also send the data obtained based on processing of the user data using machine learning algorithms to the server 108 to update the data of an application running on the user device 104.
  • the server 108 can include at least one controller and/or at least one processor.
  • An alternate implementation of the server 108 can include an application or software running on the user device 104.
  • the server 108 can update and improve code associated with the application using data associated with the plurality of individual users.
  • the server 108 can then send the updated output data associated with the application to the adaptive interface 206 via the network 110 for further displaying on the user device 104.
  • FIG.2 is a block diagram showing various modules of a machine learning system 200 for customizing output based on user data, in accordance with certain embodiments.
  • the system 200 may include at least one sensor shown as sensor 202, at least one computing resource shown as a computing resource 204, an adaptive interface unit shown as an adaptive interface 206, and optionally a database 208.
  • the database 208 may include computer-readable instructions for execution by the computing resource 204 and the adaptive interface 206.
  • each of the computing resource 204 and the adaptive interface 206 may be implemented as one or more processors.
  • the processor may include a programmable processor, such as a microcontroller, a central processing unit (CPU), and so forth.
  • the processor may include an application-specific integrated circuit or programmable logic array, such as a field programmable gate array, designed to implement the functions performed by the system 200.
  • the system 200 may be installed on a user device or may be provided as a cloud service residing in a cloud storage.
  • the sensor 202 may be affixed to a user, integrated into the user device, or located in proximity to the user.
  • the sensor 202 may include at least one of the following: a thermal imaging camera, a digital camera, a breath sensor, a depth sensor, a radar sensor, a gyroscope, and so forth.
  • the sensor 202 may include a biological sensor.
  • the sensor 202 may be configured to continuously capture the user data.
  • the user data may include at least one of the following: biological data of a user, biological data of a plurality of users, historical data of the user, historical data of the plurality of users, ambient data, and so forth.
  • the biological data may include at least one of the following: a respiratory rate, a heart rate, a heart rate variability, an electroencephalography, an electrocardiography, an electromyography, an electrodermal activity, a mechanomyography, a haptic interaction, a motion, a gesture, pupil movement, a biological analyte, a biological structure, a microorganism, a color of skin of the user, a blood glucose level, blood oxygenation, blood pressure, and so forth.
  • the ambient data may be associated with at least one of the following: light, heat, motion, moisture, pressure, and so forth.
  • the computing resource 204 may be configured to analyze the user data received from the sensor 202.
  • the computing resource 204 may include at least one of the following: an application programming interface (API), a server, a cloud computing resource, a database, a network, a blockchain, and so forth.
  • the at least one computing resource that may be implemented as the user device associated with the user may include one of the following: a smartphone, a tablet computer, a phablet computer, a laptop computer, a desktop computer, an augmented reality device, a virtual reality device, a mixed reality device, a retinal implant, an artificial olfaction device, headphones, an audio output device, and so forth.
  • the computing resource 204 may include one of a CPU, a graphics processing unit (GPU), and a neural processing unit (NPU). Based on the analysis, the computing resource 204 may determine dependencies between the user data and the output data.
  • the computing resource 204 may determine, based on predetermined criteria, that an amount of the user data and the dependencies is sufficient to customize the output data.
  • the predetermined criteria may include a predetermined amount of collected user data, a predetermined amount of determined dependencies, a predetermined period of time, and other criteria needed by machine learning techniques.
  • the determination that the amount of the user data and the dependencies is sufficient to customize the output data may be performed based on the analysis of the user data and the dependencies by the machine learning techniques, such as a deep learning technique.
  • the adaptive interface 206 may be triggered to customize the output data.
  • the adaptive interface 206 may be configured to continuously customize the output data of the user device using at least one machine learning technique based on the analysis of the user data.
  • the at least one machine learning technique may include one or more of the following: an artificial neural network, a convolutional neural network, a Bayesian neural network, a supervised machine learning algorithm, a semi-supervised machine learning algorithm, an unsupervised machine learning algorithm, a reinforcement learning, a deep learning, and so forth.
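As a minimal stand-in for the listed techniques, a linear model fitted by gradient descent can learn a dependency between an output parameter and a biological response; a deployed system would more likely use one of the neural-network approaches named above. The function name and hyperparameters here are illustrative assumptions:

```python
def fit_dependency(outputs, responses, lr=0.1, epochs=500):
    # Fit response ≈ w * output + b by plain gradient descent, so the adaptive
    # interface can predict how an output change shifts a biological signal.
    w, b = 0.0, 0.0
    n = len(outputs)
    for _ in range(epochs):
        grad_w = sum((w * x + b - y) * x for x, y in zip(outputs, responses)) / n
        grad_b = sum((w * x + b - y) for x, y in zip(outputs, responses)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```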
  • the customized output data may be intended to elicit a personalized change.
  • the personalized change may include an automatic and immediate personalized change of a biological response of the user to perception of the customized output data by the user without requiring the user to take an action in response to the customized output data.
  • the automatic and immediate personalized change of the biological response may include at least a change in the biological data of the user.
  • the personalized change in the user data may include at least one of the following: a change of perception time, a change of a respiratory rate, a change of a breathing rate, a change of a heart rate, a change of a heart rate variability, a change of a haptic interaction, a change of an electroencephalographic signal, a change of an electrocardiographic signal, a change of an electromyographic signal, a change of a mechanomyographic signal, a change of an electrodermal activity, a change of a motion, a change of a gesture, a change of a pupil movement, a change of a biological structure, a change of a microorganism, a change of a color of skin of the user, a change of blood glucose levels, a change of a blood oxygenation, a change of a blood pressure, a change of a biological analyte, and so forth.
  • the customized output data may be associated with the user device of the user, such as a smartphone, a laptop, a retinal implant, and so forth.
  • the customized output data may include an audio output and/or a graphics output.
  • the audio output and/or the graphics output may be associated with an application running on a user device.
  • the output data of the application running on the user device can be customized.
  • FIG.3 is a flow chart illustrating a method 300 for customizing an output based on user data, in accordance with some example embodiments. In some embodiments, the operations may be combined, performed in parallel, or performed in a different order. The method 300 may also include additional or fewer operations than those illustrated.
  • the method 300 may be performed by processing logic that may comprise hardware (e.g., decision making logic, dedicated logic, programmable logic, and microcode), software (such as software run on a general-purpose computer system or a dedicated machine), or a combination of both.
  • the method 300 may commence at operation 302 with continuously capturing, by at least one sensor, the user data associated with a user during perception of output data by the user.
  • the sensor may be configured to continuously capture the user data in real-time (e.g., when the user is awake and asleep), capture the user data during the usage of the user device by the user, or capture the user data at predetermined times.
  • the at least one sensor may include a thermal imaging camera, a digital camera, a breath sensor, a depth sensor, a radar sensor, a gyroscope, and so forth.
  • the at least one sensor may include a device for analyzing electronic signals emitted by a user.
  • the method 300 may further include extracting, by the device for analyzing electronic signals, one of a physiological parameter of the user and an activity associated with the user.
  • the method 300 may continue at operation 304 with analyzing, by at least one computing resource, the user data received from the at least one sensor.
  • the method 300 may include determining dependencies between the user data and the output data based on the analysis of the user data.
  • the method 300 may include determining, based on predetermined criteria, that an amount of the user data and the dependencies is sufficient to customize the output data. At operation 310, upon determining that the amount of the user data and the dependencies is sufficient to customize the output data, the method 300 may proceed with continuously customizing, by an adaptive interface, the output data using at least one machine learning technique based on the analysis of the user data and the dependencies.
  • the customized output data may be intended to elicit a personalized change.
  • the personalized change may include a change of biological parameters of the user.
  • the continuous customizing of the output data may include at least one of the following: changing a color, playing audio-perceived stimuli, providing a haptic feedback, changing a font, changing a shape of the font, changing a brightness, changing a contrast, changing an illuminance (e.g., changing values of lux of light), changing warmth, changing a saturation, changing a fade, changing a shadow, changing a sharpness, changing a structure, generating computer images, changing a tone, changing a bass, changing a volume, changing a pitch of a sound, changing a treble, changing a balance, changing a GUI, changing a UX, and so forth.
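As an illustrative sketch only (not part of the disclosure), the continuous customizing described above can be modeled as bounded updates to a display/audio parameter state. The parameter names, value ranges, and clamping rule below are assumptions made for the example.

```python
# Hypothetical display/audio state; parameter names and ranges are
# illustrative assumptions, not part of the disclosure.
DISPLAY_PARAMS = {
    "brightness": 0.5,   # normalized 0.0-1.0
    "contrast": 0.5,
    "saturation": 0.5,
    "volume": 0.5,
    "font_size_px": 16,
}

def customize_output(params, adjustments):
    """Apply a set of deltas to the current output state and return the
    updated state, clamping normalized parameters to [0, 1]."""
    updated = dict(params)
    for name, delta in adjustments.items():
        if name not in updated:
            raise KeyError(f"unknown output parameter: {name}")
        value = updated[name] + delta
        # Clamp continuous (float) parameters; integer sizes pass through.
        if isinstance(updated[name], float):
            value = min(1.0, max(0.0, value))
        updated[name] = value
    return updated
```

The adaptive interface would call such an update each cycle, with the deltas chosen by the machine learning technique rather than fixed by hand.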
  • the method 300 may optionally include an operation 312, at which the at least one computing resource may aggregate further user data associated with a plurality of users into federated user data.
  • the at least one computing resource may analyze the federated user data using collaborative machine learning.
  • the method 300 may further include adapting, by the at least one computing resource, the at least one machine learning technique for individual users based on the results of the analysis of the federated user data at optional operation 316.
  • the method 300 may further include continuously adapting, by the adaptive interface, a media output based on user interactions with the adaptive interface.
  • FIG.4 illustrates a further example environment 400 in which systems and methods for customizing an output based on user data may be implemented, according to an example embodiment.
  • the environment 400 includes a client side, shown as a frontend 101, and a backend 103.
  • the frontend 101 can include a sensor 106 and a user device 104.
  • the backend 103 can include a system 200, machine learning techniques 126, and a blockchain 402.
  • the system 200 may include an API 404 to communicate with the user device 104, an adaptive interface 206, and a computing resource 204.
  • the user device 104 may include a smartphone 114, a laptop 116, headphones 118, a retinal implant 120, and an artificial olfaction device 122, as well as a tablet computer, a phablet computer, a desktop computer, an augmented reality device, a virtual reality device, a mixed reality device, an audio output device, and so forth.
  • the sensor 106 can detect biological data of the user 102. Though two sensors 106 are shown on FIG.4, any number of sensors, e.g., one, two, or more, may be attached to the user 102, integrated into the user device 104, or located in proximity to the user 102. In an example embodiment, the sensor 106 may detect a number of breaths per minute of the user 102. In other example embodiments, the sensor 106 may detect any other biological activity of the user 102, such as a heart rate, heart rate variability, electroencephalography, electromyography, mechanomyography, and so forth.
  • the sensor 106 may be a device that analyzes electronic signals emitted by the user 102, i.e., frequencies emitted by a body of the user 102, and depicts the analyzed electronic signals as a biometric parameter or activity of the user 102.
  • the sensor 106 can be a thermal imaging camera.
  • the adaptive interface 206 may use deep learning algorithms of machine learning techniques 126 to analyze the heart rate and breathing rates of the user 102 based on data collected by the thermal imaging camera.
  • the sensor 106 may act as a passive sensor or an active sensor. When acting as a passive sensor, the sensor 106 may sense data emitted by the user 102, such as emitted thermal wavelengths.
  • the thermal wavelengths may be analyzed by the adaptive interface 206 using deep learning algorithms of machine learning techniques 126 to determine a breathing pattern or a heart rate of the user 102.
  • the sensor 106 may send towards the user 102 and receive back ultrasound or radar waves.
  • the waves received upon being reflected from the body of the user 102 can be analyzed by the sensor 106 to detect the physiological state of the user 102.
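A minimal sketch of how a breathing rate might be recovered from a periodic signal such as passively sensed thermal wavelengths or reflected radar waves. The zero-crossing counting method, sampling rate, and mean-centering step are assumptions for illustration; the disclosure itself leaves the analysis to deep learning algorithms.

```python
def breaths_per_minute(samples, sample_rate_hz):
    """Estimate a breathing rate by counting rising zero crossings of the
    mean-centered signal, assuming one rising crossing per breath cycle.

    Illustrative sketch only; a deployed system would use a trained model
    rather than this simple heuristic.
    """
    mean = sum(samples) / len(samples)
    centered = [s - mean for s in samples]
    # A rising crossing occurs where the signal moves from below the mean
    # to at-or-above it.
    crossings = sum(1 for a, b in zip(centered, centered[1:]) if a < 0 <= b)
    duration_min = len(samples) / sample_rate_hz / 60.0
    return crossings / duration_min
```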
  • the API 404 may include a Representational State Transfer (REST) API, an OpenAPI, a set of subroutine definitions, protocols, and tools for receiving data from a server (such as a server 108 shown on FIG.1).
  • the API 404 may provide graphics and audio on the user device 104 based on data received from the adaptive interface 206.
  • the API 404 may be associated with one or more of the following: a web-based system, an operating system, a database system, a computer hardware, and so forth.
  • the adaptive interface 206 may apply machine learning techniques 126 including artificial neural network, convolutional neural network, Bayesian neural network, or other machine learning techniques to enable the automatic feature learning, the machine learning inference process, and deep learning training of the adaptive interface 206.
  • the adaptive interface 206 may receive data from the sensor 106, the user device 104, the API 404, a computing resource 204, and the network 110.
  • the computing resource 204 may be implemented as a component of the user device 104.
  • the adaptive interface 206 may communicate with and transfer data to the user device 104 for data processing to use the processing units such as a GPU and CPU in the user device 104, and may apply predictive modeling and machine learning processes.
  • the machine learning techniques 126 applied by the adaptive interface 206 may include supervised machine learning, semi-supervised machine learning, unsupervised machine learning, federated machine learning, collaborative machine learning, and so forth.
  • the supervised machine learning in the adaptive interface 206 is based on a training dataset with labeled data, already installed in the adaptive interface 206 and/or sent from the API 404 and/or network 110. For the supervised machine learning in the adaptive interface 206, the data are labeled and the algorithms learn to predict the output from the input data, namely user data 124.
  • the semi-supervised learning in the adaptive interface 206 uses a large amount of user data 124, data of the user device 104, and/or the API 404, and only some preinstalled data and/or data from the network 110, by using a mixture of supervised and unsupervised machine learning techniques.
  • in unsupervised machine learning in the adaptive interface 206, all data are unlabeled and the algorithms learn the inherent structure from the user data 124, data of the user device 104, and/or the API 404.
  • the adaptive interface 206 collaboratively learns a shared prediction model while keeping all the training data on the user device 104 and sensor 106, decoupling the ability to do machine learning from the need to store the data in the network 110.
  • the user device 104 downloads the current model, improves it via adaptive interface 206 by learning from user data 124 related to the interaction of the user 102 with the user device 104 and user data 124 received from the sensor 106, and then summarizes the changes as a small focused update. Only this update to the model is sent to the cloud, using encrypted communication, where it is immediately averaged with other user updates to improve the shared model. All the training data remain on the user device 104 and adaptive interface 206, and no individual updates are stored in the cloud. For the federated and collaborative machine learning setting, the data is distributed across millions of devices in a highly uneven fashion.
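The on-device update and server-side averaging described above can be sketched as follows. The flat weight-vector model, the gradient-step local update, and the sample-count weighting (a FedAvg-style rule) are illustrative assumptions; the disclosure does not fix a particular aggregation scheme.

```python
def local_update(global_weights, gradient, learning_rate=0.1):
    """On-device step: improve the downloaded model from local user data
    and return only the small, focused weight delta -- the raw training
    data never leaves the device."""
    new_weights = [w - learning_rate * g
                   for w, g in zip(global_weights, gradient)]
    return [n - w for n, w in zip(new_weights, global_weights)]

def federated_average(global_weights, updates, sample_counts):
    """Server step: average the client deltas, weighted by how much data
    each device trained on, and apply the result to the shared model."""
    total = sum(sample_counts)
    averaged = [
        sum(u[i] * n for u, n in zip(updates, sample_counts)) / total
        for i in range(len(global_weights))
    ]
    return [w + d for w, d in zip(global_weights, averaged)]
```

In this sketch, only the output of `local_update` would be transmitted (over an encrypted channel) and immediately averaged, matching the description above.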
  • the user data from the sensor 106 related to biological parameters of the user 102 and user interaction with the user device 104 can be communicated to the network 110 in all machine learning models, but can also remain as personalized and customized data sets in the adaptive interface 206 and user device 104.
  • the adaptive interface 206 learns about the specifications for routines, data structures, object classes, variables and programming of APIs, and computing resource 204, network 110, user device 104, and sensor 106.
  • the adaptive interface 206 processes data about the user interaction with an application running on the user device 104, such as graphics and audio from the user device 104, and the user data 124, such as biological data, from the sensor 106.
  • the adaptive interface 206 learns how the biometric data and activity data of the user 102 are changing in real-time when the user 102 interacts with the user device 104.
  • the adaptive interface 206 dynamically customizes the output data from API 404 using the machine learning techniques 126, and sends the customized output data back to the API 404.
  • the API 404 sends the adapted and customized output data in real-time to the user device 104.
  • the adaptive interface 206 may also collect data associated with the user 102 over time by receiving the biological data of the user 102 and/or data on interaction of the user 102 with the user device 104 and analyzes the collected data using deep learning techniques or other trained learning techniques. The results of the analysis of the adaptive interface 206 and the biological data of the user 102 can be processed and applied to the output data of the API 404 and the usage of the output data of the API 404 in the user device 104 to customize the graphics and audio provided on the user device 104 to the user 102. [0071] The adaptive interface 206 can customize the output data based on the biological data of the user 102 (for example, for faster perception time of the graphics and audio in the user device 104 by the user 102).
  • the adaptive interface 206 can also customize the graphics and audio to the physiological state of the user 102 detected and analyzed based on the biological data of the user 102. For example, a heart rate may be sensed by the sensor 106, and the adaptive interface 206 may customize the graphics and audio on the user device 104 to decrease or increase the heart rate of the user 102 in real-time while the user interacts with the user device 104. [0072] If the sensor 106 is a breathing sensor, the adaptive interface 206 may employ thermal imaging and machine learning techniques to adapt and customize the graphics and audio to elicit slower or longer inhales and exhales. [0073] The adaptive interface 206 may also send the data of the machine learning to the network 110.
  • the network 110 can send the data to the server 108 to update the data of the application running on the user device 104.
  • the server 108 can update and improve a code associated with the application with data associated with the plurality of individual users.
  • the server 108 can then send the updated output data to the adaptive interface 206 via the network 110.
  • the adaptive interface 206 can also interact with code storing networks such as blockchain 402 or cloud computing (not shown) as alternate implementations of the server 108 shown on FIG.1.
  • the adaptive interface 206 may send the data of the machine learning to the blockchain 402 for further processing.
  • the adaptive interface 206 may also add customization and adaptation of the output data in the user device 104 via the user data 124 of the user 102.
  • the customization and adaptation of the adaptive interface 206 can enable faster processing speeds for user interaction with the application state (graphics, UX, GUI, and audio) in the user device 104.
  • the adaptive interface 206 can also add stress-reduction and changes of personal biological data via the adapted and customized output data in the user device 104.
  • the adaptive interface 206 can also improve the network speed between adaptive interface 206, API 404, and user 102 via customized and adapted information processing.
  • the adaptive interface 206 may use different types of machine learning techniques to achieve smarter models, lower latency, and less power consumption to potentially ensure privacy when data of user 102 remain on the user device 104 and in adaptive interface 206.
  • This approach has another immediate benefit: in addition to providing an update to the shared model to the network 110 and customized output data, the improved and updated output data on the user device 104 can also be used immediately in real-time to provide user experience personalized based on the way the user 102 uses the user device 104.
  • the adaptive interface 206 can also replace the API 404 with the API generated based on the output data.
  • the adaptive interface 206 can implement all the programming tasks and steps employed in an API and replace the API by connecting sensors, user devices, and networks for customized and improved applications, performances, and experiences.
  • FIG.5 illustrates an environment 500 within which systems and methods for customizing output based on user data can be implemented, in accordance with some embodiments.
  • the system 200 may be in communication with the blockchain 402 and provide the user data 124 to the blockchain 402.
  • the blockchain 402 may be in communication with a developer community 502 and may provide results of analysis of the user data 124 to the developer community 502.
  • the developer community 502 may use the analysis of the user data 124 to develop further machine learning models for processing the user data 124 by the blockchain 402.
  • FIG.6 is a schematic diagram 600 that illustrates operations performed by components of a machine learning system for customizing output based on user data, according to an example embodiment.
  • the user device 104 may display graphics and play audio to a user 102.
  • the user device 104 may include a user interface 602, a processing unit 604, an NPU 606, and a displaying unit 608.
  • An adaptive interface 206 of the machine learning system 200 for customizing output based on user data may be located remotely with respect to the user device 104 (e.g., in a computing cloud).
  • the NPU 606 may be not the component of the user device 104, but may be located remotely with respect to the user device 104 (e.g., in the computing cloud).
  • the user 102 may perceive the output in a form of visual data and/or audio data provided by the user device 104 via the user interface 602, UX, CGI, common gateway interface, a work environment (e.g., a social media platform), and other forms of graphics and potential audio files or audio-perceived frequencies.
  • a screen of the user device 104 may display visually-perceived stimuli, for instance, when the user device 104 uses a retinal implant technology.
  • the user device 104 can be configured in a form of a computing and processing unit and may be further configured to provide visual stimuli and play audio-perceived stimuli.
  • the user 102 may interact with the user device 104.
  • the interaction can be in any manner, for example, by perceiving, visually or audibly, an application running on the user device 104, by changing the visuals or audio on the user device 104, for example, by haptically interacting with the user device 104.
  • the user 102 can interact with the user device 104 by pressing buttons displayed on the user interface 602 of the user device 104 with fingers.
  • pupil movements of the user 102 or other forms of interaction of the user 102 may be tracked.
  • a sensor 106 may be affixed to the user 102 or may be located in proximity to the user 102 and may sense data related to physical parameters of the user 102 and convert the data into an electrical signal. The sensed data may be considered to be an input from the user 102.
  • the input may include light, heat, motion, moisture, pressure, or any other physical parameters of the body of the user 102 that can be sensed by the sensor 106.
  • the sensor 106 that detects changes of the physical parameters of the user 102 may be a biosensor configured to detect the presence or concentration of a biological analyte, such as a biomolecule, a biological structure, or a microorganism in/at the body of the user 102.
  • the sensor 106 in the form of a biosensor may include three parts: a component that recognizes the analyte and produces a signal, a signal transducer, and a reader device.
  • the sensor 106 may provide an output in a form of a signal that may be transmitted electronically to the adaptive interface 206 for reading and further processing.
  • the sensor 106 may further be a camera configured to detect changes of physical parameters of the user 102, such as the color of the skin of the user 102.
  • the images can be analyzed using machine learning algorithms, e.g., using the NPU 606 or the adaptive interface 206, to evaluate the changes of biological parameters of the user 102 (for example, a heart rate of the user 102).
  • Some types of the sensor 106 may require the use of learning algorithms and machine learning techniques in the adaptive interface 206, the NPU 606, and/or processing unit 604.
  • the sensor 106 may also be configured in the form of a thermal imaging camera to detect a stress level, a breathing rate, a heart rate, a blood oxygen level, and other biological parameters of the user 102.
  • the adaptive interface 206 may use machine learning algorithms and neural networks to analyze thermal imagery and detect the stress level, the breathing rate, the heart rate, the blood oxygen level, and other parameters of the user 102.
  • the user device 104 may communicate with the sensor 106 to obtain the time of interaction of the user 102 with the user device 104.
  • some types of sensor 106 may use the processing unit 604 in the user device 104 for applying the machine learning techniques and performing the analysis.
  • the biological parameters, the time of the detection of biological parameters of the user by the sensor 106, and data related to the user interaction with the user device 104 may be sent by the sensor 106 and the user device 104 to the adaptive interface 206.
  • the adaptive interface 206 may customize the data displayed by the user interface 602 based on data received from the sensor 106 and/or the user device 104.
  • the adaptive interface 206 may use the processing units of the user device 104, such as a CPU, a GPU, and/or the NPU 606 of the user device 104.
  • the adaptive interface 206 may use different types of machine learning techniques, such as supervised machine learning, semi-supervised machine learning, unsupervised machine learning, federated and/or collaborative machine learning, to customize the output data of the user device 104, such as graphics and audio, for the user 102 based on the biological data received from the sensor 106 and data on the user interaction from the user device 104.
  • the adaptive interface 206 may send the adapted and customized output data to the user interface 602.
  • the user interface 602 may display the adapted and customized data on a screen of the user device 104.
  • the user device 104 may use the displaying unit 608 to display the output data in a customized format provided by the adaptive interface 206.
  • the cycle of customizing the output data of the user device 104 repeats in real-time so that the continuously collected user data and data on user interaction with the user device 104 are used to update the graphics and audio of the user device 104 by the adaptive interface 206.
  • the analysis of the user data by using machine learning of the adaptive interface 206 to adapt the output data in order to elicit the change of the biological parameters of the user 102 may result in a faster processing of the user data and a customized user experience for the user 102 using the user device 104.
  • the user data may be captured during a loading time of the application.
  • the predetermined criteria for determining that the amount of the collected user data and determined dependencies is sufficient to customize the output data can be an expiration of the loading time of the application.
  • the loading of the application starts and loading wheels, graphs, and/or advertisements are usually shown on a screen of the user device when the application loads.
  • the loading time of the application can be used to capture user data via a sensor, for example, via a camera of the user device.
  • output data such as visual and/or audio data
  • output data can be provided to the user via the user device during the loading time of the application.
  • several types of visual and/or audio data may be presented to the user, such as various pictures, sounds, animated media, and so forth, and the user data that the user has when viewing and/or listening to each type of visual and/or audio can be collected. These various types of visual and/or audio data can be used to analyze how the user data change in response to perceiving various types of visual and/or audio data by the user.
  • the user data collected during the loading time can be used to determine dependencies between the user data and the output data perceived by the user.
  • the adaptive interface can continuously customize the output data related to the application, such as a video and audio output, based on the collected user data and determined dependencies.
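The loading-time probing described above might be sketched as follows. The stimulus set, the callback-based presentation, the single scalar sensor response, and the "pick the lowest response" selection rule are all assumptions made for this example.

```python
def probe_stimuli_during_load(stimuli, present, read_sensor):
    """While the application loads, present each candidate stimulus
    (e.g. a picture, sound, or animation), record the sensed user
    response, and return the observed stimulus-to-response dependencies.

    `present` and `read_sensor` are hypothetical callbacks standing in
    for the user device's display path and the sensor, respectively.
    """
    dependencies = {}
    for stimulus in stimuli:
        present(stimulus)                       # show/play the stimulus
        dependencies[stimulus] = read_sensor()  # e.g. breaths per minute
    return dependencies

def select_calming_stimulus(dependencies):
    """Pick the stimulus associated with the lowest sensed response."""
    return min(dependencies, key=dependencies.get)
```

Once the loading time expires, the collected dependencies would seed the adaptive interface's continuous customization of the application's video and audio output.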
  • General-purpose approximate computing explores a third dimension—error— and trades the accuracy of computation for gains in both energy and performance. Techniques to harvest large savings from small errors have proven elusive.
  • the present disclosure describes an approach that uses machine learning-based transformations to accelerate approximation-tolerant programs.
  • the core idea is to train a learning model how an approximable region of a code — a code that can produce imprecise but acceptable results — behaves and replace the original code region with an efficient computation of the learned model.
  • Neural networks are used to learn code behavior and approximate the code behavior.
  • the Parrot algorithmic transformation may be used, which leverages a simple programmer annotation ("approximable") to transform a code region from a von Neumann model to a neural model.
  • the compiler replaces the original code with an invocation of a low-power accelerator called an NPU.
  • the NPU is tightly coupled to the processor to permit profitable acceleration even when small regions of code are transformed. Offloading approximable code regions to the NPU is faster and more energy efficient than executing the original code.
  • NPU acceleration provides an average whole-application speedup of 2.3 times and average energy savings of 3 times, with an average quality loss of at most 9.6%.
  • the NPU forms a new class of accelerators and shows that significant gains in both performance and efficiency are achievable when the traditional abstraction of near-perfect accuracy is relaxed in general-purpose computing. It is widely understood that energy efficiency now fundamentally limits microprocessor performance gains.
  • FIG.7 is a schematic diagram 700 that illustrates operations performed by an adaptive interface to customize output on a user device based on user data, according to an example embodiment.
  • the user 102 may interact with a user device 104 by reviewing graphics 702 shown on a user interface 716 of the user device 104 and listening to audio 704 produced by the user device 104.
  • a sensor 106 may continuously sense user data 124 during the interaction of the user 102 with the user device 104.
  • the user device 104 may provide data related to output data currently shown to the user on the user interface 716 to the adaptive interface 206.
  • the sensor 106 may provide the sensed user data 124 to the user device 104, and the user device 104 may analyze the sensed user data 124 and provide the results of the analysis to the adaptive interface 206.
  • the adaptive interface 206 may process the data related to output data currently shown to the user on the user interface 716 and the analyzed user data 124 using machine learning techniques 126. Based on the processing, the adaptive interface 206 may continuously customize output data of the user device 104 and provide customized output data 706 to the API 404 of the user device 104. Upon receipt of the customized output data 706, the API 404 may provide the customized output data 706 to the user 102 on a display of the user device 104.
  • the customized output data 706 may include one or more of the following: an adapted microservice 708, adapted image files 710 (e.g., in JPEG, JPG, GIF, PNG, and other formats), adapted audio files 712 (e.g., in WAV, MP3, WMA, and other formats), adapted files 714 (e.g., in HTM, HTML, JSP, AXPX, PHPH, XML, CSHTML, JS, and other formats), and so forth.
  • FIG.8 is a flow chart 800 illustrating customizing output of a user device based on user data, in accordance with some example embodiments.
  • a user device may receive programming data from an API associated with the user device.
  • the user device may provide the received programming data to the user at step 804 by displaying graphics on a display of the user device and playing audio using a speaker of the user device.
  • the user may interact with the user device, e.g., by viewing the graphics displayed on the user device and listening to the audio played by the user device, as shown by step 806.
  • User data, such as biological data of the user, may be continuously sensed by a sensor, as shown by step 808. Additionally, the user device may communicate with the sensor at step 810 to obtain the time of the user interaction with the user device. The time of interaction may be used for determining a dependency of the user data on the interaction of the user with the user device at each moment of time.
  • the user device and the sensor may send user data and data on user interaction to an adaptive interface.
  • the adaptive interface may customize output data of the API of the user device by using machine learning techniques at step 814.
  • the adaptive interface may analyze the user data and select changes to be done to the output data of the API to cause changing of the user data.
  • the adaptive interface may analyze the blood pressure of the user, determine that the blood pressure exceeds a predetermined value, review historical data related to the dependency of the blood pressure on visual and audio data provided to the user on the user device, and customize the visual and audio data to elicit a decrease of the blood pressure of the user.
  • the adaptive interface may send the customized output data to the API of the user device at step 816.
  • FIG.9 is a schematic diagram 900 showing customization of output data on a user device based on biological data of a user, according to an example embodiment.
  • the breath of a user 102 may be continuously monitored by a biosensor 902, such as a breath sensor.
  • User data collected by the biosensor 902 may be provided to a user device 104 as an input 904 from the biosensor 902.
  • the user device 104 may provide the input 904 from the biosensor 902 to a computing resource.
  • the computing resource may analyze the input 904 from the biosensor 902.
  • the analysis may include determining breath depth 906 and breath frequency 908.
  • the computing resource may provide the results of the analysis to an adaptive interface.
  • the user device 104 may also provide, to an adaptive interface, data related to output 910 viewed by the user 102 at the time the biosensor 902 collected the user data.
  • the output 910 provided to the user 102 on the user device 104 may include graphics 912 shown to the user 102 using the user device 104, such as background, fonts, GUI elements, CGI, UX, and the like.
  • the adaptive interface may process the results of the analysis, the output 910 viewed by the user 102, and historical data previously collected for the user 102 and/or a plurality of users on dependency of biological data of the user 102 and/or the plurality of users on the output 910 viewed on the user device 104.
  • the adaptive interface may perform the processing 922 using machine learning techniques. Based on the processing, the adaptive interface may customize the output 910 to provoke changing of the user data (e.g., to provoke deeper inhales of the user 102).
  • the adaptive interface may provide customized output 914 to the user device 104.
  • the customized output 914 provided to the user 102 on the user device 104 may include customized graphics 916 shown to the user 102 using the user device 104, such as customized background, fonts, GUI elements, CGI, UX, and the like.
  • the biosensor 902 may continue monitoring the user data and provide the data collected based on the monitoring to the computing resource.
  • the computing resource may analyze the input 904 from the biosensor 902. In an example embodiment, the analysis may include determining breath depth 918 and breath frequency 920 that the user has after reviewing the customized output 914.
  • the user data continuously collected by the biosensor 902 and the customized output 914 provided to the user 102 may be continuously analyzed by the adaptive interface.
  • the adaptive interface may continue applying the machine learning techniques for the analysis 924.
  • FIG.10 is a schematic diagram 1000 illustrating processing data from a sensor using machine learning techniques, according to an example embodiment.
  • the sensor, such as a breath sensor, may provide detected data as an input to an adaptive interface.
  • the adaptive interface may process the input from the sensor in a machine learning environment using machine learning algorithms.
  • the adaptive interface may continuously learn about user response to providing customized visual and audio output data to the user using the user device.
  • the user response may include changing of the biological parameters of the user invoked by reviewing the customized visual and audio output data by the user.
  • the biological parameters may include an average breath force 1004 sensed by the sensor and an average breath force 1006 and 1008 further sensed by the sensor upon providing the customized visual and audio output data to the user.
  • FIG.11 is a flow chart 1100 illustrating continuous customization of output based on user data, according to an example embodiment. Data related to biological parameters of the user and interaction of the user with a user device may be continuously captured by a sensor and the user device in a form of user biodata and interaction 1102.
  • the captured biological data, also referred to as biodata, and data on interaction 1102 may be provided to an adaptive interface 206 as input 1104.
  • the adaptive interface 206 may detect the user biodata and interaction 1102 and send the data to be displayed by a user device as output 1108.
  • a displaying unit of the user device may process the data received from the adaptive interface 206 and provide the output on the user device, as shown by block 1110.
  • the output may be provided by displaying visual data, playing audio data, providing haptic feedback, and so forth.
  • the sensor and the user device may continuously provide further user biodata and data on interaction, as shown by blocks 1112, 1114, 1116, and 1118.
  • the adaptive interface 206 may apply the machine learning techniques to customize the output of the user device and send the customized output to the user device, as shown by blocks 1120, 1122, 1124, and 1126.
  • the displaying unit of the user device may process the customized data received from the adaptive interface 206 and provide the updated data on the user device, as shown by blocks 1128, 1130, 1132, 1134, and 1136.
  • FIG.12 is a schematic diagram 1200 showing operations performed by an adaptive interface to continuously customize output data using machine learning techniques, according to an example embodiment.
  • the adaptive interface may continuously receive input 1202 from a user device.
  • the input 1202 may include user data sensed by a sensor and time of providing data, e.g., graphics, on a display of the user device to the user.
  • the adaptive interface may determine which user data the user had at a time of providing the data on the display. For example, the adaptive interface may determine the blood pressure the user had when the user read information displayed in a green font on a webpage on the user device.
  • the adaptive interface may apply machine learning techniques and neural networks 1206 to determine whether the user data needs to be changed according to predetermined criteria (e.g., whether the blood pressure of the user is above a predetermined value at the current moment of time).
  • the adaptive interface may further apply machine learning techniques and neural networks 1206 to determine specific changes 1208 to be applied to the output data of the user device to cause changing of the user data (e.g., to cause decreasing of the blood pressure of the user).
  • the adaptive interface may send the changed data to be displayed to the user device.
  • a displaying unit of the user device may process and display the changed data as the output 1210 of the user device.
  • the adaptive interface may continue receiving input 1202 from the user device. Specifically, the adaptive interface may receive user data sensed by the sensor and time of providing changed data on the display of the user device to the user. Based on the input 1202, the adaptive interface may determine which user data the user had at a time of providing the changed data on the display, as shown by block 1212. The adaptive interface may determine whether the user data still needs to be changed (e.g., if the blood pressure of the user is still above the predetermined value).
  • the adaptive interface may determine, at block 1214, which adjustments of data to be displayed to the user need to be made.
  • the adaptive interface may send the adjusted data to be displayed to the user device.
  • the displaying unit of the user device may process and display the adjusted data as the output 1210 of the user device.
  • the adaptive interface may continue receiving input 1202 from the user device. Specifically, the adaptive interface may receive user data sensed by the sensor and time of providing the adjusted data on the display of the user device to the user. Based on the input 1202, the adaptive interface may determine which user data the user had at a time of providing the adjusted data on the display, as shown by block 1216.
  • the adaptive interface may determine whether the adjustment of data to be displayed to the user led to a personalized change (i.e., a change of the biological parameters of the user).
  • the adaptive interface may perform continuous adjustment of data to be displayed to the user, as shown by block 1220.
  • the adaptive interface may continuously provide the adjusted data as the output 1210 to the user device.
  • FIG.13 is a schematic diagram 1300 showing operations performed by an adaptive interface to continuously customize output data using machine learning techniques, according to an example embodiment.
  • the adaptive interface may continuously receive input 1302 from a user device.
  • the input 1302 may include user data sensed by a sensor and time of providing data, e.g., output data in the form of graphics, on a display of the user device to the user.
  • the adaptive interface may determine which user data the user had at a time of providing the data on the display at block 1304.
  • the adaptive interface may apply machine learning techniques and neural networks 1306 to determine whether the user data needs to be changed according to predetermined criteria.
  • the adaptive interface may further apply machine learning techniques and neural networks 1306 to determine specific changes 1308 to be applied to the output data, e.g., graphics, audio, or olfactory data, of the user device to cause changing of the user data.
  • the adaptive interface may send the changed data to be displayed in a frontend, i.e., on a display of the user device.
  • a displaying unit of the user device may process and display the changed data as the output 1310 of the user device.
  • the adaptive interface may receive further input 1312 from the user device.
  • the input 1312 may include changed user data sensed by the sensor and time of providing changed data on the display of the user device to the user.
  • the adaptive interface may determine whether the change of the output data led to personalized or desired change of the user data, as shown by block 1314. Specifically, the adaptive interface may determine in which way the user data changed in response to providing the changed output data to the user.
  • the adaptive interface may further apply machine learning techniques and neural networks 1316 to determine adjustments 1318 to be applied to the changed output data of the user device to cause further changing of the user data.
  • the adaptive interface may send the adjusted data to be displayed in the frontend (i.e., on the display of the user device).
  • FIG.14 is a block diagram 1400 illustrating continuous personalization of a brightness level on a user device based on data related to respiration or heart rate of a user, according to an example embodiment.
  • the respiration may be determined based on respiratory muscle movements, thermal changes of skin, movement of belly or chest, a heart rate, and so forth.
  • An input 1402 may be continuously provided to an adaptive interface 206.
  • the adaptive interface 206 may process the input 1402 and provide an output 1404 for displaying on the user device.
  • user data 1406 may be provided as the input 1402 to the adaptive interface 206.
  • the user data 1406 may include data related to respiration or a heart rate of the user at the time of interaction of the user with the user device having a particular brightness level.
  • the user data 1406 may further include a time when the user interacted with or perceived the particular brightness level of the user device.
  • the adaptive interface 206 may determine, at block 1408, which user data the user had at a particular time when the user device had a particular brightness level (e.g., what respiration and heart rate the user had at the time when the brightness level of the user device was 5), as shown by block 1410.
  • At block 1412, the adaptive interface 206 may change the brightness level on a scale from 1 to 10 to cause the change of the respiration or the heart rate of the user.
  • the determination whether the brightness level needs to be changed and to what value may be made using machine learning techniques based on historical data of the user or a plurality of users.
  • the adaptive interface 206 may receive further user input data 1416.
  • the further user data 1416 may include data related to respiration or the heart rate of the user at the time of interaction of the user with the user device having the brightness level 1414 from 1 to 10.
  • the user data 1406 may further include a time when the user interacted with or perceived the brightness level 1414 from 1 to 10 of the user device.
  • the adaptive interface 206 may determine, at block 1418, which user data the user had at a particular time when the user device had the brightness level 1414 (e.g., what respiration and heart rate the user had at the time when the brightness level 1414 of the user device was from 1 to 10). At block 1420, the adaptive interface 206 may select an adjusted, i.e., personalized, value of brightness level intended, for example, to slow the respiration or the heart rate of the user.
  • the personalized brightness level 1422 (e.g., 3-4) selected by the adaptive interface 206 may be set on the user device.
  • the adaptive interface 206 may receive continuously detected user data 1424.
  • the adaptive interface 206 may determine which user data the user had at a particular time when the user device had the personalized brightness level 1422.
  • the adaptive interface 206 may perform continuous personalization of the brightness level at block 1428 to elicit a personalized change of user data, such as the respiration or the heart rate of the user.
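  • The selection of a personalized brightness level from logged user data, as in FIG.14, can be sketched as follows. The log contents and the selection rule (pick the level on the 1-10 scale historically associated with the slowest mean heart rate) are illustrative assumptions, not the claimed method.

```python
# Sketch of selecting a personalized brightness level (cf. FIG. 14):
# from logged (brightness_level, heart_rate) pairs, pick the level on
# the 1-10 scale historically associated with the slowest mean heart
# rate. The data and the selection rule are illustrative assumptions.

def personalized_brightness(history):
    """history: iterable of (brightness_level, heart_rate_bpm) pairs."""
    by_level = {}
    for level, hr in history:
        by_level.setdefault(level, []).append(hr)
    # mean heart rate observed at each brightness level
    means = {lvl: sum(v) / len(v) for lvl, v in by_level.items()}
    return min(means, key=means.get)

log = [(5, 74), (5, 72), (8, 80), (3, 63), (4, 64), (3, 61)]
best = personalized_brightness(log)   # level with the lowest mean rate
```

A production system would replace the mean-per-level lookup with the machine learning models described above, but the principle is the same: associate each output setting with the biological response observed while it was active, then prefer the setting tied to the desired response.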
  • FIG.15 is a block diagram 1500 illustrating continuous personalization of a volume level on a user device based on data related to respiration or a heart rate of a user, according to an example embodiment.
  • An input 1502 may be continuously provided to an adaptive interface 206.
  • the adaptive interface 206 may process the input 1502 and provide an output 1504 for displaying on the user device.
  • user data 1506 may be provided to the adaptive interface 206.
  • the user data 1506 may include data related to respiration or the heart rate of the user at the time of interaction of the user with the user device having a particular volume level.
  • the user data 1506 may further include a time when the user interacted with or perceived the particular volume level of the user device.
  • the adaptive interface 206 may determine, at block 1508, which user data the user had at a particular time when the user device had a particular volume level (e.g., what respiration and heart rate the user had at the time when the volume level of the user device was 5), as shown by block 1510.
  • At block 1512, the adaptive interface 206 may change the volume level on a scale from 1 to 10 to cause the change of the respiration or heart rate of the user.
  • the determination whether the volume level needs to be changed and to what value may be made using machine learning techniques based on historical data of the user or a plurality of users.
  • the adaptive interface 206 may receive further user input data 1516.
  • the further user data 1516 may include data related to respiration or the heart rate of the user at the time of interaction of the user with the user device having the volume level 1514 from 1 to 10.
  • the user data 1506 may further include a time when the user interacted with or perceived the volume level 1514 from 1 to 10 of the user device.
  • the adaptive interface 206 may determine, at block 1518, which user data the user had at a particular time when the user device had the volume level 1514 (e.g., what respiration and heart rate the user had at the time when the volume level 1514 of the user device was from 1 to 10). At block 1520, the adaptive interface 206 may select an adjusted value of the volume level intended, for example, to slow the respiration or the heart rate of the user.
  • the personalized volume level 1522 (e.g., 3-4) selected by the adaptive interface 206 may be set on the user device.
  • the adaptive interface 206 may receive continuously detected user data 1524.
  • FIG.16 is a block diagram 1600 illustrating continuous personalization of an odorant level on a user device based on data related to respiration or a heart rate of a user, according to an example embodiment.
  • An input 1602 may be continuously provided to an adaptive interface 206.
  • the adaptive interface 206 may process the input 1602 and provide an output 1604 for displaying on the user device.
  • the user device may include an artificial olfaction device 122 as shown on FIG.1.
  • User data 1606 may be provided to the adaptive interface 206.
  • the user data 1606 may include data related to respiration or the heart rate of the user at the time of interaction of the user with the user device having a particular odorant level.
  • the user data 1606 may further include a time when the user interacted with or perceived the particular odorant level of the user device.
  • the adaptive interface 206 may determine, at block 1608, which user data the user had at a particular time when the user device had a particular odorant level (e.g., what respiration and heart rate the user had at the time when the odorant level of the user device was 5), as shown by block 1610.
  • the adaptive interface 206 may change the odorant level on a scale from 1 to 10 to cause the change of the respiration or heart rate of the user.
  • the determination whether the odorant level needs to be changed and to what value may be made using machine learning techniques based on historical data of the user or a plurality of users.
  • the adaptive interface 206 may receive further user input data 1616.
  • the further user data 1616 may include data related to respiration or the heart rate of the user at the time of interaction of the user with the user device having the odorant level 1614 from 1 to 10.
  • the user data 1616 may further include a time when the user interacted with or perceived the odorant level 1614 from 1 to 10 of the user device.
  • the adaptive interface 206 may determine, at block 1618, which user data the user had at a particular time when the user device had the odorant level 1614 (e.g., what respiration and heart rate the user had at the time when the odorant level 1614 of the user device was from 1 to 10).
  • the adaptive interface 206 may select a personalized value of the odorant level intended, for example, to slow the respiration or the heart rate of the user.
  • the personalized odorant level 1622 (e.g., 3-4) selected by the adaptive interface 206 may be set on the user device.
  • the adaptive interface 206 may receive continuously detected user data 1624. At block 1626, the adaptive interface 206 may determine which user data the user had at a particular time when the user device had the personalized odorant level 1622. The adaptive interface 206 may perform continuous personalization of the odorant level at block 1628 to elicit a personalized change of user data, such as the respiration or the heart rate of the user.
  • FIG.17 is a schematic diagram 1700 showing a user interface of a mobile device customized by a machine learning system for customizing output based on user data, according to an example embodiment.
  • FIG.17 illustrates customizing a graphics output on a user device 104 based on user data 124 sensed by a sensor 106.
  • a user interface 1702 may display output data 1704 on a screen of the user device 104.
  • the adaptive interface 206 of the machine learning system 200 for customizing output based on user data may customize the output data 1704 and send customized output data 1706 to the user interface 1702.
  • the user interface 1702 may display the customized data 1706 on the screen of the user device 104.
  • the customized output data 1706 may include a changed font, changed colors, changed brightness, changed contrast, and the like.
  • the adaptive interface 206 may send further customized output data 1708 to the user interface 1702.
  • the user interface 1702 may display the further customized data 1708 on the screen of the user device 104.
  • the customized output data 1708 may include a changed font, changed colors, changed brightness, a changed contrast, a changed background, and the like.
  • the adaptive interface 206 may continuously customize the output data and provide the further customized output data 1710 to the user interface 1702.
  • the user interface 1702 may display the further customized data 1710 on the screen of the user device 104.
  • FIG.18 is a schematic diagram 1800 showing output data of a user device customized by a machine learning system for customizing output based on user data, according to an example embodiment. Specifically, FIG.18 illustrates customizing an audio output on headphones 118 based on user data 124 of a user 102 sensed by a sensor 106.
  • the output data such as a pitch 1802 and a volume 1804 of the sound, may be provided to the headphones 118.
  • the adaptive interface 206 of the machine learning system 200 for customizing output based on user data may customize the pitch 1802 and the volume 1804 and send data associated with customized pitch 1806 and customized volume 1808 to the headphones 118.
  • the headphones 118 may reproduce the audio output with the customized pitch 1806 and customized volume 1808.
  • the adaptive interface 206 may send further customized pitch 1810 and further customized volume 1812 to the headphones 118.
  • the headphones 118 may reproduce the audio output with the further customized pitch 1810 and further customized volume 1812.
  • FIG.19 is a schematic diagram 1900 showing output data of a user device customized by a machine learning system for customizing output based on user data, according to an example embodiment. Specifically, FIG.19 illustrates customizing olfactory data on an artificial olfaction device 120 based on user data 124 of a user 102 sensed by a sensor 106.
  • the output data such as units 1902 of a perceptual axis 1904 of odorant pleasantness that ranges from very pleasant (e.g., rose as shown by element 1906) to very unpleasant (e.g., skunk as shown by element 1908), may be provided to the user 102 by the artificial olfaction device 120.
  • the adaptive interface 206 of the machine learning system 200 for customizing output based on user data may customize the units 1902 of the perceptual axis 1904 and send customized units 1910 to the artificial olfaction device 120.
  • the artificial olfaction device 120 may set the olfactory data according to the customized units 1910.
  • the adaptive interface 206 may send further customized units 1912 of the perceptual axis 1904 to the artificial olfaction device 120.
  • the artificial olfaction device 120 may set the olfactory data according to the customized units 1912.
  • the adaptive interface 206 may continuously customize the olfactory data and provide the further customized units 1914 of the perceptual axis 1904 to the artificial olfaction device 120.
  • the artificial olfaction device 120 may set the olfactory data according to the customized units 1914.
  • FIG.20 is a schematic diagram 2000 showing customizing output of a user device based on user data captured by a digital camera of the user device, according to an example embodiment.
  • the user device may include a smartphone.
  • the digital camera shown as camera 2002 may be disposed at a distance 2004 from the user 102.
  • the distance 2004 at which the camera 2002 may be configured to capture user data may be up to several meters or any other distance depending on parameters of the camera 2002 and environmental conditions.
  • the camera 2002 may be selected from a charge-coupled device, a complementary metal-oxide semiconductor image sensor, or any other type of image sensor.
  • the camera 2002 of the user device may be used as a non-contact and non-invasive device to measure user data.
  • the user data may include a respiratory rate, pulse rate, blood volume pulse, and so forth.
  • the camera 2002 may be used to capture an image of the user 102.
  • the user data captured by the camera 2002 may be processed by the adaptive interface 206 of the system 200. The processing may be performed using CPU, GPU, and/or NPU.
  • the user data captured by the camera 2002 may be the input for the adaptive interface 206 and may be processed together with the data concerning the time of the visuals displayed to the user and audio provided to the user at the time of capture of the user data, and together with the data concerning the time of the recording of the user data.
  • the respiratory rate, the heart rate, and the blood volume pulse may be sensed simultaneously using the camera 2002.
  • the camera 2002 may capture an image 2006 of the user.
  • a part of the skin 2008 of the user 102 may be detected on the image 2006.
  • a region of interest 2010 may be selected as shown by step 2012.
  • the changes in the average image brightness of the region of interest 2010 over a short time can be measured.
  • the selection of the region of interest 2010 on the face may be used to obtain blood circulation features and obtain the raw blood volume pulse signal.
  • the selection of a region of interest may influence the following heart rate detection steps. First, the selection of a region of interest may affect the tracking directly since a commonly applied tracking method uses a first frame of the captured region of interest. Second, the selected regions of interest are regarded as the source of cardiac information.
  • the pixel values inside the region of interest can be used for intensity-based methods, while feature point locations inside the region of interest can be used for motion-based methods.
  • the time-lapse image of a part of the skin of the user 102 may be consecutively captured, and the changes in the average brightness of the region of interest 2010 can be measured for a period of time, for example, for 30 seconds.
  • the brightness data can be processed by a series of operations of interpolation using a first-order derivative, a low-pass filter of 2 Hz, and a sixth-order auto-regressive spectral analysis.
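  • The processing chain above can be sketched as follows: a first-order derivative, a 2 Hz low-pass filter, and a sixth-order auto-regressive (AR) spectrum estimated via the Yule-Walker equations. The sampling rate, filter design, and synthetic test signal are illustrative assumptions, not specifics from the disclosure.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import butter, filtfilt

def ar_spectrum(x, order=6, fs=30.0, nfreq=512):
    """Yule-Walker AR power spectral density of signal x."""
    x = x - x.mean()
    r = np.correlate(x, x, "full")[x.size - 1:] / x.size   # autocorrelation
    a = solve_toeplitz(r[:order], r[1:order + 1])          # AR coefficients
    sigma2 = r[0] - a @ r[1:order + 1]                     # residual variance
    freqs = np.linspace(0.0, fs / 2, nfreq)
    ks = np.arange(1, order + 1)
    denom = np.abs(1 - np.exp(-2j * np.pi * np.outer(freqs, ks) / fs) @ a)
    return freqs, sigma2 / denom ** 2

fs = 30.0                                  # assumed camera frame rate
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)
# synthetic brightness trace with a 1.2 Hz pulse component plus noise
brightness = np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(t.size)

derivative = np.gradient(brightness, 1 / fs)   # first-order derivative
b, a_lp = butter(3, 2.0 / (fs / 2))            # 2 Hz low-pass filter
smoothed = filtfilt(b, a_lp, derivative)

freqs, psd = ar_spectrum(smoothed)
peak_hz = freqs[np.argmax(psd)]                # dominant pulse frequency
```

On this synthetic trace, the AR spectrum peaks near the embedded 1.2 Hz pulse component, which is the quantity the heart-rate analysis below reads out.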
  • Remote photoplethysmography may be used for contactless monitoring of the blood volume pulse using the camera.
  • Various optical models can be applied to extract the intensity of color changes caused by the pulse.
  • It is possible to capture heart rate signals at a frame rate of eight frames per second (fps) under the hypothesis that the human heartbeat frequency lies between 0.4 and 4 Hz. A frame rate between 15 and 30 fps is sufficient for heart rate detection. The estimation of the heart rate is performed by directly applying noise reduction algorithms and optical modeling methods. Alternatively, manifold learning methods that map multidimensional face video data into a one-dimensional space can be used to reveal the heart rate signal.
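  • Restricting the search to the 0.4-4 Hz band mentioned above can be sketched as follows: a synthetic 30 fps green-channel trace containing a 1.2 Hz pulse (72 bpm), slow drift, and noise is band-pass filtered, and the heart rate is read from the spectral peak. The filter design and signal parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, periodogram

fps = 30.0                                          # assumed frame rate
t = np.arange(0, 30, 1 / fps)                       # 30 s of video
rng = np.random.default_rng(1)
signal = (0.02 * np.sin(2 * np.pi * 1.2 * t)        # 1.2 Hz pulse (72 bpm)
          + 0.5 * t / t[-1]                         # slow illumination drift
          + 0.01 * rng.standard_normal(t.size))     # sensor noise

# band-pass filter restricted to plausible heartbeat frequencies
b, a = butter(3, [0.4 / (fps / 2), 4.0 / (fps / 2)], btype="band")
filtered = filtfilt(b, a, signal)

# read the heart rate off the dominant spectral peak
freqs, power = periodogram(filtered, fs=fps)
heart_rate_bpm = 60.0 * freqs[np.argmax(power)]
```

The drift falls outside the pass band and is suppressed, so the dominant peak of the periodogram lands on the pulse frequency and converts directly to beats per minute.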
  • FIG.21 is a schematic diagram 2100 showing an analysis of captured user data by an adaptive interface, according to an example embodiment.
  • Intensity-based methods 2106, 2108, 2110 can be used to process photoplethysmography (PPG) signals captured by the camera.
  • Normalized color intensity can be analyzed. Using auto-regressive spectral analysis, two clear peaks can be detected at approximately 0.3 Hz and 1.2 Hz, corresponding to the respiratory rate and the heart rate, respectively. The peak 2102 at 0.3 Hz corresponds to the respiratory rate, and the peak 2104 at 1.2 Hz corresponds to the heart rate. The green channel provides the strongest signal-to-noise ratio and can consequently be used for extracting the heart rate.
  • Referring back to FIG. 20, upon detecting the red signal 2020, green signal 2022, and blue signal 2024, an average RGB signal can be determined at step 2026. The signal de-trending of the average RGB signal can be performed at step 2028.
  • the processed signal can be normalized at step 2030, and filtering of the normalized signal may then be performed.
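  • The pre-processing chain of FIG.20 (averaging the RGB channels, de-trending, normalizing) can be sketched as follows. The equal channel weighting and the input layout are illustrative assumptions.

```python
import numpy as np
from scipy.signal import detrend

# Sketch of the FIG. 20 pre-processing chain: average the RGB channels
# per frame, de-trend the averaged signal, then normalize it to zero
# mean and unit variance. Equal channel weights are an assumption.

def preprocess_rgb(frames):
    """frames: array of shape (n_frames, 3) with mean R, G, B per frame."""
    avg = frames.mean(axis=1)                # average RGB signal (step 2026)
    trendless = detrend(avg)                 # signal de-trending (step 2028)
    return (trendless - trendless.mean()) / trendless.std()   # step 2030

rng = np.random.default_rng(2)
# synthetic per-frame channel means with a slow random-walk drift
frames = 100 + 0.1 * np.cumsum(rng.standard_normal((200, 3)), axis=0)
normalized = preprocess_rgb(frames)
```

The normalized trace is what the subsequent filtering and frequency analysis operate on; removing the trend and scale first keeps the spectral peaks comparable across lighting conditions.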
  • the analysis of captured user data can further include capturing brightness level, contrast level, saturation level, and vibrance level of data shown to the user on the user device. The user perceives the visuals shown on the user device with a continuously changing degree of brightness level, contrast level, saturation level, and vibrance level.
  • the adaptive interface may receive an input in the form of the time of displaying the degree or level of brightness, contrast, saturation, and vibrance, and the time of the heart rate and respiratory rate as analyzed from user data detected by the camera.
  • As shown in FIG. 21, the inputs are levels of brightness 2112, contrast 2114, saturation 2116, and vibrance 2118, which may be mapped to the time, as well as to the time and analysis 2120, 2122, 2124 of values of heart rate and respiratory rate.
  • the adaptive interface can perform a continuous processing 2126 using a neural processing unit for predictive modeling of the datasets captured by the sensor (the camera) and the visual adjustments.
  • the input of datasets may be processed using deep learning techniques.
  • the deep learning technique may apply specific and differing machine learning techniques to the datasets to learn how to process the datasets and adapt the visual adjustments on the user device to support slower heart rate and respiratory rate.
  • Machine learning techniques can be supervised, semi-supervised, and unsupervised.
  • the adaptive interface may analyze correlations between visual adjustments and heart rate and respiratory rates, process the datasets, and create predictive models to adapt the visual adjustments to the desired outcome of slower heart rate and slower respiratory rate personalized to the user in real-time and continuously.
  • the adaptive interface may identify features of the visual adjustments that are predictive of the outcomes to predict heart rate and respiratory rate.
  • the adaptive interface may use classification, regression, clustering, convolutional neural networks, and other machine learning techniques based on which the datasets are analyzed and the customized output data are predicted for the fastest lowering and slowing of the heart rate and respiratory rate of the user.
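  • A minimal instance of the regression approach named above can be sketched as follows: fit a linear model of heart rate as a function of brightness and contrast from logged sessions, then choose the candidate adjustment with the lowest predicted heart rate. The linear fit stands in for the richer models (clustering, convolutional neural networks) the disclosure mentions, and all data values are illustrative.

```python
import numpy as np

def fit_model(features, heart_rates):
    """Least-squares linear model: heart rate ~ features + bias."""
    X = np.column_stack([features, np.ones(len(features))])
    coef, *_ = np.linalg.lstsq(X, heart_rates, rcond=None)
    return coef

def best_adjustment(coef, candidates):
    """Pick the candidate setting with the lowest predicted heart rate."""
    X = np.column_stack([candidates, np.ones(len(candidates))])
    return candidates[int(np.argmin(X @ coef))]

# logged (brightness, contrast) settings and observed heart rates
features = np.array([[8, 7], [6, 6], [4, 5], [3, 4], [7, 8]], float)
heart_rates = np.array([78, 72, 66, 62, 77], float)

coef = fit_model(features, heart_rates)
candidates = np.array([[2, 3], [5, 5], [9, 9]], float)
choice = best_adjustment(coef, candidates)   # setting predicted calmest
```

Here both fitted coefficients come out positive (higher brightness and contrast predict a faster heart rate in the synthetic log), so the model selects the dimmest, lowest-contrast candidate, which is exactly the "features predictive of the outcome" step described above in its simplest form.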
  • the probability and predictive modeling performed by the adaptive interface may be adaptive to learn how to adapt the visual adjustments of the user device to the user with varying heart rate and respiratory rate.
  • the adaptive interface may learn from the observations. When exposed to more observations, the predictive performance of the adaptive interface may be improved.
  • the analysis performed by the adaptive interface may include step 2126, at which signal extraction may be performed.
  • the signal extraction may include detection, definition, and tracking of a region of interest, and raw signal extraction to obtain a raw signal.
  • the visual adjustments may be selected based on the signal extraction.
  • the visual adjustments may include adjustments of the vibrance level, saturation level, contrast level, and brightness level.
  • the display spectrum may be determined and applied to a display of the user device.
  • the analysis may further include step 2128, at which signal estimation may be performed by applying filtering and dimensionality reduction to the signal to obtain an RGB signal. Furthermore, the heart rate and respiratory rate may be estimated using frequency analysis and peak detection.
  • the analysis may further include step 2130, at which adaptive modeling may be performed.
  • the adaptive modeling may include deep learning techniques, machine learning techniques for adaptation of visuals based on the heart rate and respiratory rate, model learning using regression, clustering, feature selection, and convolutional neural networks.
  • the analysis may further include step 2132, at which adaptive implementation may be performed.
  • FIG.22 is a schematic diagram 2200 showing output data continuously adapted by an adaptive interface, according to an example embodiment.
  • the adaptive interface 206 may send the analyzed, adapted and customized output data to a CPU or GPU for processing of the personalized datasets.
  • the user 102 may be presented with user interfaces 2202, 2204, 2206, 2208, 2210, which may be continuously personalized in real-time.
  • the user interfaces 2202, 2204, 2206, 2208, 2210 may have varying brightness, contrast, saturation, and vibrance levels.
  • the user device 104 may also provide to the adaptive interface data relating to processing speed. The adaptive interface may thus learn to make its data processing more efficient, so that the visual adjustments for a slower heart rate and respiratory rate can be delivered more quickly.
  • the adaptive interface may receive updates from a network and database about methods to analyze and process the datasets. The updates may be based on scientific studies and tests as to what visuals are supportive of slowing the heart rate and the respiratory rate. The updates may also include data on how the heart rate and the respiratory rate can be analyzed and what a “slower” heart rate and respiratory rate mean. The focus of the adaptive interface may be to promote a slower, calmer, and deeper heart rate and respiratory rate.
  • the definition of a slower, calmer, and deeper heart rate and respiratory rate can change over time based on scientific and other data, and the adaptive interface may be updated from the network to integrate these changes and adapt the processing of the input data to the updates.
  • Each adaptive interface used by each of a plurality of users may provide data relating to datasets and processing to a database, so the machine learning techniques may use data from a plurality of adaptive interfaces associated with the plurality of users to improve the visual adjustments to the heart rate and respiratory rate of the user.
  • the updates relating to data learned from the plurality of adaptive interfaces and users may be provided to each adaptive interface.
  • the adaptive interface may be directed to analyzing and evaluating which datasets of visual adjustments, i.e., adjustments of brightness level, contrast level, saturation level, and vibrance level, support a slower heart rate and respiratory rate.
  • FIG.23 is a flow chart 2300 showing a method for customizing output based on user data, according to an example embodiment.
  • the adaptive interface may continuously process user data, as shown by operation 2302. At operation 2304, the adaptive interface may determine whether the heart rate and the respiratory rate of the user slow down.
  • the method may continue with operation 2306, at which the adaptive interface may change visuals, i.e., the output data on the user device, using adjustments of brightness level, contrast level, saturation level, and vibrance level of data shown to the user on the user device.
  • the adaptive interface may again determine whether the heart rate and the respiratory rate of the user slow down. If the heart rate and the respiratory rate slow down, the adaptive interface may perform operation 2312, at which data related to the visuals with adjustments of brightness level, contrast level, saturation level, and vibrance level may be processed.
  • the adaptive interface may continue processing of the user data.
  • the adaptive interface may employ other machine learning techniques at operation 2316 to perform further adaptation of the visuals.
  • the adaptive interface may determine, at operation 2316, whether tendencies of adjustment of visuals support slower heart rate and respiratory rate. If the tendencies of adjustment of visuals support slower heart rate and respiratory rate, the adaptive interface may continue with operation 2308. If tendencies of adjustment of visuals do not support slower heart rate and respiratory rate, the adaptive interface may perform operation 2318, at which the adaptive interface may search for adjustments of visuals that may support slower heart rate and slower respiratory rate.
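  • The decision flow of FIG.23 can be sketched as a simple control loop: keep the current visual adjustments while the sensed rates are slowing, and search for alternative adjustments when they are not. The threshold, the candidate list, and the search rule are illustrative assumptions.

```python
# Sketch of the FIG. 23 decision flow: keep the current visual
# adjustments while the heart rate slows between readings; otherwise
# try the next candidate adjustment. Thresholds and the candidate
# adjustments are illustrative assumptions.

def adjust_visuals(samples, adjustments, threshold=-1.0):
    """samples: consecutive heart-rate readings (bpm);
    adjustments: ordered candidate visual settings.
    Returns the adjustment in effect after the readings."""
    current = adjustments[0]
    remaining = list(adjustments[1:])
    for prev, cur in zip(samples, samples[1:]):
        if cur - prev <= threshold:      # rate slows down: keep visuals
            continue
        if remaining:                    # otherwise search for another
            current = remaining.pop(0)   # adjustment to try next
    return current

rates = [80, 78, 79, 77, 75]
chosen = adjust_visuals(rates, ["baseline", "dim+warm", "low-contrast"])
```

In this trace the rate rises once (78 to 79), so the loop abandons the baseline visuals for the next candidate and then keeps it while the rate resumes falling, mirroring the branch at operations 2304-2318.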
  • FIG.24 illustrates an exemplary computing system 2400 that may be used to implement embodiments described herein.
  • the exemplary computing system 2400 of FIG.24 may include one or more processors 2410 and memory 2420.
  • Memory 2420 may store, in part, instructions and data for execution by the one or more processors 2410.
  • Memory 2420 can store the executable code when the exemplary computing system 2400 is in operation.
  • the exemplary computing system 2400 of FIG.24 may further include a mass storage 2430, portable storage 2440, one or more output devices 2450, one or more input devices 2460, a network interface 2470, and one or more peripheral devices 2480.
  • The components shown in FIG.24 are depicted as being connected via a single bus 2490. The components may be connected through one or more data transport means.
  • the one or more processors 2410 and memory 2420 may be connected via a local microprocessor bus, and the mass storage 2430, one or more peripheral devices 2480, portable storage 2440, and network interface 2470 may be connected via one or more input/output buses.
  • Mass storage 2430, which may be implemented with a magnetic disk drive or an optical disk drive, is a non-volatile storage device for storing data and instructions for use by the one or more processors 2410.
  • Mass storage 2430 can store the system software for implementing embodiments described herein for purposes of loading that software into memory 2420.
  • Portable storage 2440 may operate in conjunction with a portable non-volatile storage medium, such as a compact disk (CD) or digital video disc (DVD), to input and output data and code to and from the computing system 2400 of FIG.24.
• The system software for implementing embodiments described herein may be stored on such a portable medium and input to the computing system 2400 via the portable storage 2440.
  • One or more input devices 2460 provide a portion of a user interface.
• The one or more input devices 2460 may include an alphanumeric keypad, such as a keyboard, for inputting alphanumeric and other information, or a pointing device, such as a mouse, a trackball, a stylus, or cursor direction keys. Additionally, the computing system 2400 as shown in FIG.24 includes one or more output devices 2450. Suitable output devices 2450 include speakers, printers, network interfaces, and monitors.
  • Network interface 2470 can be utilized to communicate with external devices, external computing devices, servers, and networked systems via one or more communications networks such as one or more wired, wireless, or optical networks including, for example, the Internet, intranet, LAN, WAN, cellular phone networks (e.g., Global System for Mobile communications network, packet switching communications network, circuit switching communications network), Bluetooth radio, and an IEEE 802.11-based radio frequency network, among others.
  • Network interface 2470 may be a network interface card, such as an Ethernet card, optical transceiver, radio frequency transceiver, or any other type of device that can send and receive information.
• Other examples of such network interfaces may include Bluetooth®, 3G, 4G, and WiFi® radios in mobile computing devices, as well as USB.
  • One or more peripheral devices 2480 may include any type of computer support device to add additional functionality to the computing system.
• The one or more peripheral devices 2480 may include a modem or a router.
  • The components contained in the exemplary computing system 2400 of FIG. 24 are those typically found in computing systems that may be suitable for use with embodiments described herein and are intended to represent a broad category of such computer components that are well known in the art.
  • The exemplary computing system 2400 of FIG.24 can be a personal computer, handheld computing device, telephone, mobile computing device, workstation, server, minicomputer, mainframe computer, or any other computing device.
  • The computer can also include different bus configurations, networked platforms, multi-processor platforms, and so forth.
• Various operating systems (OS) may be used with the computing system 2400.
  • Some of the above-described functions may be composed of instructions that are stored on storage media (e.g., computer-readable medium). The instructions may be retrieved and executed by the processor. Some examples of storage media are memory devices, tapes, disks, and the like. The instructions are operational when executed by the processor to direct the processor to operate in accord with the example embodiments. Those skilled in the art are familiar with instructions, processor(s), and storage media.
• Any hardware platform suitable for performing the processing described herein is suitable for use with the example embodiments.
  • Non-volatile media include, for example, optical or magnetic disks, such as a fixed disk.
  • Volatile media include dynamic memory, such as RAM.
• Transmission media include coaxial cables, copper wire, and fiber optics, among others, including the wires that comprise one embodiment of a bus. Transmission media can also take the form of acoustic or light waves, such as those generated during radio frequency and infrared data communications.
• Computer-readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic medium, a CD-read-only memory (ROM) disk, DVD, any other optical medium, any other physical medium with patterns of marks or holes, a RAM, a PROM, an EPROM, an EEPROM, a FLASH EPROM, any other memory chip or cartridge, a carrier wave, or any other medium from which a computer can read.
  • Various forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to a CPU for execution.
• A bus carries the data to system RAM, from which a CPU retrieves and executes the instructions.
  • The instructions received by system RAM can optionally be stored on a fixed disk either before or after execution by the CPU.

Abstract

Systems and methods for customizing an output based on user data are provided. An exemplary method for customizing an output based on user data may commence with continuously capturing, by at least one sensor, the user data. The method may continue with analyzing, by at least one computing resource, the user data received from the at least one sensor and determining dependencies between the user data and output data. The method may further include determining, based on predetermined criteria, that an amount of the user data and the dependencies is sufficient to customize the output data. The method may continue with continuously customizing, by an adaptive interface, the output data using at least one machine learning technique based on the analysis of the user data. The customized output data may be intended to trigger an individualized change.
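The method in the abstract (capture user data, determine its dependencies on the output, check that the data is sufficient, then customize the output) can be sketched as follows. The Pearson-correlation stand-in for the dependency analysis, the 30-sample sufficiency threshold, and the fixed update step are illustrative assumptions rather than the patented technique.

```python
# Hypothetical sketch of the abstract's method; names, the correlation-based
# "dependency" measure, and the update rule are illustrative assumptions.

MIN_SAMPLES = 30  # assumed stand-in for the "predetermined criteria"

def pearson(xs, ys):
    """Dependency between user data and output data (Pearson correlation)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def customize_output(user_samples, output_samples, current_output):
    """One pass of the adaptive loop.

    Returns an adjusted output value, or the unchanged output while the
    captured data is not yet sufficient to customize it.
    """
    if len(user_samples) < MIN_SAMPLES:
        return current_output  # not enough user data and dependencies yet
    dependency = pearson(user_samples, output_samples)
    # Nudge the output against a positive dependency (e.g. soften a visual
    # property that correlates with an elevated heart rate); a real system
    # would apply a learned model here rather than a fixed step.
    return current_output - 0.1 * dependency
```

A production system would replace `pearson` and the fixed step with the machine learning technique the claims describe; the sketch only shows the control flow of analyzing, checking sufficiency, and customizing.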
PCT/US2020/052103 2019-09-23 2020-09-23 Adaptive interface for screen-based interactions WO2021061699A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US16/579,747 US11561806B2 (en) 2017-08-04 2019-09-23 Adaptive interface for screen-based interactions
US16/579,747 2019-09-23

Publications (1)

Publication Number Publication Date
WO2021061699A1 true WO2021061699A1 (fr) 2021-04-01

Family

ID=75166422

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/052103 WO2021061699A1 (fr) Adaptive interface for screen-based interactions

Country Status (1)

Country Link
WO (1) WO2021061699A1 (fr)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050010102A1 (en) * 2003-03-20 2005-01-13 Marchesini Renato Angelo Apparatus for the characterisation of pigmented skin lesions
US20080294012A1 (en) * 2007-05-22 2008-11-27 Kurtz Andrew F Monitoring physiological conditions
US20130322711A1 * 2012-06-04 2013-12-05 Verizon Patent And Licensing Inc. Mobile dermatology collection and analysis system
US20140078301A1 (en) * 2011-05-31 2014-03-20 Koninklijke Philips N.V. Method and system for monitoring the skin color of a user
US20140316235A1 (en) * 2013-04-18 2014-10-23 Digimarc Corporation Skin imaging and applications


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230091371A1 (en) * 2021-09-20 2023-03-23 Nvidia Corporation Joint estimation of heart rate and respiratory rate using neural networks
US11954862B2 (en) * 2021-09-20 2024-04-09 Nvidia Corporation Joint estimation of heart rate and respiratory rate using neural networks
CN117910539A (zh) * 2024-03-19 2024-04-19 电子科技大学 A household feature recognition method based on heterogeneous semi-supervised federated learning

Similar Documents

Publication Publication Date Title
US11561806B2 (en) Adaptive interface for screen-based interactions
US10423893B2 (en) Adaptive interface for screen-based interactions
US10854103B2 (en) Modular wearable device for conveying affective state
JP7152950B2 (ja) Drowsiness onset detection
US10796246B2 (en) Brain-mobile interface optimization using internet-of-things
WO2017193497A1 (fr) Intellectualized health management server and system based on a fusion model, and control method therefor
CN112005311B (zh) System and method for delivering sensory stimulation to a user based on a sleep architecture model
US20230259208A1 (en) Interactive electronic content delivery in coordination with rapid decoding of brain activity
US20190336009A1 (en) Predictively controlling operational states of wearable device and related methods and systems
WO2018042799A1 (fr) Information processing device, information processing method, and program
WO2021061699A1 (fr) Adaptive interface for screen-based interactions
CN112384131A (zh) Systems and methods for enhancing sensory stimulation delivered to a user using neural networks
US20240152208A1 (en) Brain-computer interface
Scrugli et al. An adaptive cognitive sensor node for ECG monitoring in the Internet of Medical Things
WO2018222589A1 (fr) System and method for treating disorders using a virtual reality system
JP2024056697A (ja) Cluster-based sleep analysis
JP2023109741A (ja) Deep learning sleep assistant system based on biorhythm optimization
CN113272908A (zh) Systems and methods for enhancing REM sleep with sensory stimulation
CN113053492B (zh) Adaptive virtual reality intervention system and method based on user background and emotion
EP4193913A1 (fr) Vital sign detection method and electronic device
US20230380793A1 (en) System and method for deep audio spectral processing for respiration rate and depth estimation using smart earbuds
EP4186414A1 (fr) Electronic device and method for controlling an electronic device
CN111430006A (zh) Emotion adjustment method and apparatus, computer device, and storage medium
KR20240059539A (ko) Method and apparatus for providing an artificial-intelligence-based weight management service using biometric information
CN117936113A (zh) An AIGC-based multi-parameter mobile wearable health monitoring model and application method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20869272

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20869272

Country of ref document: EP

Kind code of ref document: A1