WO2020102459A1 - Systems and methods for evaluating affective response in a user via human generated output data - Google Patents

Systems and methods for evaluating affective response in a user via human generated output data

Info

Publication number
WO2020102459A1
WO2020102459A1 (PCT/US2019/061331)
Authority
WO
WIPO (PCT)
Prior art keywords
user
data
sensors
state
vaporizer
Prior art date
Application number
PCT/US2019/061331
Other languages
English (en)
Inventor
John T. WELLEHAN
Eric Robert HENDRIES
Original Assignee
Cloudmode Corp.
Cloudmode Analytics Corp.
Priority date
Filing date
Publication date
Application filed by Cloudmode Corp. and Cloudmode Analytics Corp.
Publication of WO2020102459A1

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V 10/00 Arrangements for image or video recognition or understanding
            • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
              • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
          • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
            • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
              • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
                • G06V 40/161 Detection; localisation; normalisation
                  • G06V 40/165 Detection; localisation; normalisation using facial parts and geometric relationships
                • G06V 40/168 Feature extraction; face representation
                  • G06V 40/171 Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
                • G06V 40/174 Facial expression recognition
                • G06V 40/179 Metadata assisted face recognition
    • A HUMAN NECESSITIES
      • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
        • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
          • A61B 5/00 Measuring for diagnostic purposes; identification of persons
            • A61B 5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; evaluating a cardiovascular condition not otherwise provided for
              • A61B 5/024 Detecting, measuring or recording pulse rate or heart rate
                • A61B 5/02405 Determining heart rate variability
            • A61B 5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
              • A61B 5/316 Modalities, i.e. specific diagnostic methods
                • A61B 5/318 Heart-related electrical modalities, e.g. electrocardiography [ECG]
                • A61B 5/389 Electromyography [EMG]
            • A61B 5/48 Other medical applications
              • A61B 5/4845 Toxicology, e.g. by detection of alcohol, drug or toxic products

Definitions

  • the present disclosure relates generally to systems and methods for efficiency improvements in networks comprising humans and machines and for optimizing industrial hemp and cannabis production.
  • a human-machine interface system includes a display device, a camera, a communication module, and one or more storage mediums.
  • the camera is configured to capture an image.
  • the communication module is configured to transmit and receive data between a first remote user device and a second remote user device.
  • the one or more storage mediums have, individually or in combination, code sections stored thereon. When executed by one or more processors, the code sections cause a computing device to detect a face in the image captured by the camera, and to extract a plurality of physical features from the face detected in the image. Based at least in part on the extracted physical features, the code sections cause the computing device to generate a first avatar.
  • a computer-readable medium stores a computer program executable by a computing device to process facial landmark data.
  • the computer program includes code sections for causing the computing device to receive, from a camera, images.
  • the computing device is caused to detect a face of a user in an image captured by the camera, extract a plurality of physical features from the face detected in the image, and determine an affective state of the user. Based at least in part on the plurality of physical features, the computing device is caused to generate an avatar representative of the determined affective state of the user and display the avatar.
  • a hemp and cannabis production system includes a hemp and cannabis plant, an agronomic sensor, one or more processors, and a communication module.
  • the agronomic sensor is configured to generate agronomic data associated with the plant field.
  • the one or more processors are configured to analyze the generated agronomic data to determine a chemical composition of the plant.
  • the communication module is coupled to the one or more processors and configured to transmit at least a portion of the determined chemical composition of the plant.
  • a system for evaluating compound intoxication and affective response in a user includes a processor and a non-transitory computer readable medium storing instructions thereon such that executing the instructions causes the system to perform the steps including:
  • a system for de-virtualizing a social network includes a first vaporizer and a mobile device.
  • the first vaporizer is configured to: deliver a compound via a vapor to a user, determine a physical location of the first vaporizer, determine locations of a group of vaporizers, and send a meetup signal for initiating a convergence of a subset of vaporizers in the group of vaporizers and the first vaporizer.
  • the mobile device is configured to: receive the meetup signal, and broadcast a meetup message to the group of vaporizers.
  • FIG. 1A is a block diagram of a generic system involving an interaction between a biological machine and a computing device, according to some implementations of the present disclosure
  • FIG. 1B is a block diagram of a system involving an interaction between a human and a server, according to some implementations of the present disclosure
  • FIG. 2 illustrates an example vaporizing device according to some implementations of the present disclosure
  • FIG. 3 is a block diagram of a system for determining physiological and affective change associated with a substance, according to some implementations of the present disclosure.
  • FIG. 4 is a block diagram of a system for optimizing hemp and cannabis production, according to some implementations of the present disclosure.
  • For example, two machines can exchange data natively in binary. Machine 1 can convert the binary to comma separated values (CSV) form and transmit the data to Machine 2.
  • the human can process the CSV data with fewer cognitive processing cycles, less energy, and less time.
  • the translation from binary to CSV by Machine 1 and the reverse conversion (from CSV back to binary) by Machine 2, however, introduce significant additional processing cycles, energy, and time.
  • maintenance and transmission of data in native formats can lead to greater efficiency.
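A minimal sketch of the translation overhead described above, with hypothetical readings and packing format; the point is that the CSV detour adds two conversion steps that the native binary path avoids:

      import struct

      # Native binary exchange: Machine 1 sends raw bytes; Machine 2 reads them directly.
      readings = (101, 202, 303)
      native_payload = struct.pack("<3i", *readings)   # Machine 1: no conversion
      decoded = struct.unpack("<3i", native_payload)   # Machine 2: no conversion

      # CSV detour: two extra translation steps for the same information.
      csv_payload = ",".join(str(v) for v in readings).encode()          # binary -> CSV
      reparsed = tuple(int(v) for v in csv_payload.decode().split(","))  # CSV -> binary

      assert decoded == reparsed  # identical information; extra cycles spent on the CSV path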
  • Biological animals communicate information in various native formats.
  • Physiological state of a biological machine includes sensory data obtained from the biological machine.
  • Affective state of a biological machine describes an emotional experience of the biological machine.
  • two humans can consume alcohol, with both the first human and the second human reaching a blood alcohol level of 0.08. Even though the blood alcohol level of both humans indicates the same physiological state, the two humans can experience the 0.08 blood alcohol level differently.
  • the first human can be quite unaffected cognitively and/or emotionally, thus showing no outward indication of intoxication.
  • the second human on the other hand can be very garrulous, smiling, and using multiple facial expressions. Although the physiological response to alcohol is similar in both humans, their affective response (or state) is very different.
  • FIG. 1A is a block diagram of a generic system 100 involving an interaction between a biological machine 102 and a computing device 106, according to some implementations of the present disclosure.
  • the system 100 includes an interface 104 that allows information capture from the biological machine 102.
  • Optimized use of native data formats on machines can also yield superior capabilities.
  • a developer can use a host of potential tools and languages.
  • the two dominant mobile platforms are iOS and Android.
  • developers can use tools such as React Native, which creates a single programming layer in which the developers can develop the app.
  • React Native then translates the data into a form that can simulate and interact with the native iOS or Android language on the phone.
  • the savings associated with reduced developer hours can come at a cost beyond the expected increase in processing cycles, energy and time.
  • These non-native layers often inhibit the refinement and control of the phone, making lower level access and the phone’s full feature set less accessible.
  • the non-native layer reduces functional potential and capabilities.
  • interactions with humans, which as biological systems are in a sense biological machines, through less native channels not only increase the processing cycles, energy consumption, and time, but also reduce the functional potential and capabilities of the human interaction.
  • FIG. 1B illustrates a block diagram of a system 101 involving an interaction between a human 103 and a server 107, according to some implementations of the present disclosure
  • FIG. 1B is a subset of FIG. 1A, where the human 103 is an example of the biological machine 102, the sensor 105 is an example of the interface 104, and the server 107 is an example of the computing device 106.
  • the sensor 105 can include a camera that captures movements or facial expressions of the human 103.
  • the server 107 can apply algorithms to analyze the movements or facial expressions of the human 103 to determine an affective state of the human 103.
  • the human 103 is able to communicate the affective state of the human 103 to the server 107, via the sensor 105.
  • a biological machine is thus able to be understood by a computing device.
  • This dynamic should be thought of as extensible to other native layers of human interface and transmission protocols, such as, but not limited to the acoustical qualities of vocal production as well.
  • data collected from the human 103 by the sensor 105 is referred to herein as human generated output data (HGOD).
  • Examples of data that the sensor 105 can collect include electrodermal activity (EDA), cardiac time series such as electrocardiogram (ECG) and heart rate variability (HRV), electroencephalogram (EEG), electromyogram (EMG), photoplethysmography (PPG), pupillometry, facial landmark data, etc.
  • HGOD collected by the sensor 105 can be further processed by non-biological machines (e.g., the server 107) for signal decomposition and feature extraction.
  • the resulting data sets, in isolation or in a conjoined manner, can be used for objective classification of affective state of the human 103.
  • although affective state can be mapped in a low-dimensional manner, such as the two-dimensional valence-by-arousal continua, it can be described by any number of dimensions.
  • the system 101 uses more than two dimensions in describing affective state.
  • the server 107 can then map the processed HGOD, either in isolation or conjoined, to the dimensions of affective state to generate an objective and quantitative description of what the human 103 is communicating via the HGOD.
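The disclosure does not specify the mapping function; the following is a minimal sketch, assuming a pre-trained linear model and hypothetical feature names, of mapping conjoined HGOD features onto a two-dimensional valence-by-arousal space:

      import numpy as np

      # Hypothetical conjoined HGOD feature vector: [EDA level, HRV (ms), pupil diameter (mm)]
      features = np.array([0.42, 55.0, 3.1])

      # Stand-in weights; in practice these would be learned from labeled HGOD.
      W = np.array([[0.8, -0.01, 0.3],    # valence row
                    [1.2, -0.02, 0.9]])   # arousal row
      b = np.array([-0.5, -0.2])

      valence, arousal = np.tanh(W @ features + b)  # squash into the [-1, 1] continua
      print(f"affective state: valence={valence:+.2f}, arousal={arousal:+.2f}")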
  • facial landmark data can contain rich and descriptive affective and physiological data. Many of these data types appear stable not only cross-culturally but also across species, implying an evolutionarily stable, hardwired rooting. Unfortunately, some of the observed changes in facial landmark observation can vary depending upon context. As such, the server 107 performing conjoined analysis and incorporating contextual data improves the accuracy of conclusions drawn from facial landmark data.
  • Contextual data can include data describing an aggregate setting of the human 103. For example, this can include whether the human 103 is around other humans, whether the human 103 is consuming a substance, whether the human 103 recently exercised, etc.
  • Contextual data can also lead to determining that the human 103 is communicating subtle information.
  • HGOD including facial landmarks can be hijacked by a higher layer of processing to convey intention and to deceive for social signaling.
  • the higher layer that sits atop the lower layers can mask the affective and physiological data at times.
  • Facial landmark data is provided as an example of HGOD that can mask affective state, but other forms of HGOD can be relatively more or relatively less susceptible to signal interference.
  • the system 101 leverages the facial landmark data format and communication channel as the native format to communicate information from the non-biological machine (i.e., the computing device 106 or the server 107) to the biological machine 102 or the human 103.
  • HGOD from facial landmarks can be readily transmitted and processed from a distance and with minimal translation, much like the example above where Machine 1 and Machine 2 operate most efficiently maintaining communication in binary. Transmitting facial landmark data without translation reduces processing cycles, energy and time required when communicating from the server 107 to the human 103.
  • Humans can process facial expressions (or faces) in about 40 ms, approximately one-fifth the time for a human to process a conscious thought. While the number and types of facial landmarks can vary, in some implementations of the present disclosure, the server 107 can use 128 facial landmarks and can generate a three-dimensional model of their relative positions, including capturing textural elements (such as, but not limited to, wrinkles) as well as movement. In some implementations, in addition to potentially using facial landmark data (facial HGOD), the system 101 can construct a model of an affective space of the human 103 using other affective HGODs, such as EDA and HRV.
  • affective HGODs can be used in rendering an estimated facial expression or facial landmark data such that interference from higher layer processing is eliminated. For example, when an individual is taking a picture, the facial HGOD data can be ignored, and other HGOD data can be used to construct an estimated facial HGOD that conveys the individual’s actual affective state, as if the individual were not forcing a smile.
  • the system 101 can be used by a doctor to monitor a large number of patients efficiently and effectively, or to readily observe material changes in a given patient’s status immediately without significant mental processing cycles or delays.
  • each of the humans 103 can be coupled to multiple sensors 105.
  • the multiple sensors 105 can generate HGOD data provided to the server 107 for feature extraction.
  • the server 107 can run the extracted features through a machine learning (ML) model, which would map the features into a multidimensional affective space.
  • the server 107 can then map the affective state or affective space migration on to a model of facial HGOD.
  • the server 107 can then render the model on a screen of the server 107 or on a remote display.
  • the server 107 can render the model by adjusting 128 landmark features on a three-dimensional output of a facial avatar.
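A schematic sketch of this pipeline, in which every function is a hypothetical placeholder: features are extracted from multi-sensor HGOD, mapped into an affective space, and used to displace the 128 three-dimensional landmarks of a facial avatar:

      import numpy as np

      N_LANDMARKS = 128  # rendered as a three-dimensional facial avatar

      def extract_features(hgod_streams):
          # Placeholder: one summary statistic per sensor stream.
          return np.concatenate([np.atleast_1d(np.mean(s)) for s in hgod_streams])

      def to_affective_space(features, model):
          # Placeholder ML model mapping features to an n-dimensional affective state.
          return model(features)

      def affect_to_landmarks(state, neutral_face):
          # Placeholder: displace each neutral landmark along an assumed affect basis.
          basis = np.full((N_LANDMARKS, 3), 0.01)
          return neutral_face + float(np.sum(state)) * basis

      neutral_face = np.zeros((N_LANDMARKS, 3))
      streams = [np.random.rand(100) for _ in range(3)]   # e.g., EDA, HRV, PPG streams
      state = to_affective_space(extract_features(streams), np.tanh)
      avatar = affect_to_landmarks(state, neutral_face)   # ready to render on a display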
  • a doctor can visually scan a number of facial avatars rapidly, for example, at rates of up to 25 faces per second, to look for any anomalies that the doctor might want to address. Quickly scanning facial avatars allows the doctor to process information much more quickly than painstakingly reading patient data records one at a time.
  • a doctor who is evaluating a patient attempting to maintain a stoic demeanor can otherwise be fooled by the patient’s attempts to control their outward appearance.
  • the doctor can note any divergent affect and address the patient’s needs more effectively.
  • a series of avatars can be rendered, each one reflecting the patient’s affective state in a moment in time. These renderings can be linked, creating a time series of affective states. The doctor can then view the time series either in an animated fashion, or avatar by avatar, to quickly apprehend the progression of the patient’s status over time.
  • a user can efficiently review their own affective status without technical knowledge.
  • the user would be capturing HGOD, from a wearable for example, which would be passed via Bluetooth or other means of transmission to a mobile device (e.g., cell phone).
  • the mobile device would conduct feature extraction and machine learning on the HGOD and display the facial avatar on the screen of the mobile device.
  • the user without technical knowledge can review a time series of facial avatars to efficiently understand the patterns and trends in their own affective state.
  • the user without technical knowledge can share their facial avatar or facial avatar time series with another user via text or other means of transmission, allowing the recipient to efficiently apprehend the affective state of the sender.
  • the shared avatar can be contextualized, with reference points such as, but not limited to, events, activities, locations, and other users referenced in the avatar or time series avatar.
  • groups of people can have their facial avatars compared, contrasted or aggregated, efficiently providing the recipient of the avatar or time series avatar with insight into a collective.
  • the server 107 in the system 101 can group facial avatars of people of a same group to determine similar or dissimilar features within the people of the same group.
  • a collective facial avatar can then be developed from the feature analysis to represent the people of the same group.
  • the server 107 can send this collective facial avatar, which can be a time series avatar, to a recipient.
  • the recipient can efficiently apprehend an affective state or an affective state time series, synchronously and asynchronously, of the people of the same group.
  • Examples of people of a same group include a group of people within a room, a series of riders in the same seat of a roller coaster, a group of people sipping from their cups of coffee or inhaling from their vaporizers, etc.
  • the facial avatars or visualizations generated can convey affective response from stimuli within these contexts for rapid apprehension by a biological machine (e.g., the biological machine 102 or the human 103) with fewer processing cycles, less energy and time.
  • affective visualization data (i.e., the facial avatars described above) can be used to map social cascades that form and spread within groups of humans. Like other networked machines, humans often solve individual problems collectively with great success. At times, information processing is performed on a single biological machine and is then aggregated, as illustrated through "the wisdom of the crowds." At other times, the information processing is performed in a localized manner on one or more biological machines and then later spread through other machines. An example of this information processing can be observed in herding responses that can prevent a biological machine from meeting its end at the hands of a predator.
  • networked biological machines solve problems in an iterative manner through the cascades of machine-influence-weighted information spread across a network of machines. Solving individual problems collectively not only determines a solution to discrete problems, but also serves to generate consensus affective states of a network of biological machines.
  • human machines use facial HGOD to send social signals for deception and negotiation.
  • when displayed facial HGOD diverges from the affective state inferred from other HGOD, a new facial avatar can be generated flagging the divergence.
  • the new avatar could be used, for example in an augmented reality application on a mobile device (e.g., a phone or a pair of smart glasses), to flag the points of divergence.
  • the new avatar can be displayed in a manner where a user can immediately apprehend the points of divergence.
  • time series or live representations of these divergences and their resulting cascades can be used in an augmented reality application that would allow a user to observe influence and her impact across a network of biological machines in real-time.
  • Identification of thought or mood leaders with augmented reality through a native format can enable a user to navigate an evolving social context more adeptly, because when the user is flashed a facial avatar image for 40 ms, their brain would mark the significant point before the user becomes consciously aware of it.
  • this system can be used with augmented reality (AR) to determine whether a person’s attempt to influence or persuade are being met with resonance. This could be done in person or virtually, such as over video conferencing or a like medium.
  • AR augmented reality
  • a facial avatar can be generated that reflects a particular affective state and flashed to a human to prompt a faster response. For example, if a machine mounted on a car were to detect ice below the car with LIDAR and wanted to send a signal to a human driver, the machine can flash a facial avatar that conveys an affective state of significant concern or alertness to the human driver, prompting a faster driver response. The driver would not have to be consciously aware of the facial avatar to respond appropriately.
  • a cascade of facial avatars of affective states can be generated by a machine and inserted into media such as a movie or a videogame to elicit a cascading response from a biological machine watching the movie or playing the videogame, thereby guiding the affective state of the biological machine in a precise and calculated manner. Guiding the biological machine in this manner can also mitigate the impact of the uncanny valley by changing the immersiveness of the experience, connecting to the biological machine preattentively through a native channel. Further, the system could observe and dynamically respond to the potentially changing affective state of the biological machine in an iterative or guiding manner.
  • the facial avatar can take on an array of realistic human forms, including representations of the human users themselves.
  • the facial avatar can take on the images of notable or imaginary persons, or even non-persons such as a creature with a highly detailed anthropomorphic face.
  • the facial avatar can use a specially constructed neutral face as a basal avatar in order to mitigate impact of biases associated with perceptions of trust or status.
  • facial avatars can be generated by the server 107 based on facial HGOD obtained by the sensors 105.
  • the facial avatars can be provided to humans via their electronic devices (e.g., smartphone, laptops, desktops, televisions, etc.).
  • the facial avatars are chosen as a native format to communicate information relating to affective state quickly to the humans.
  • the humans can potentially absorb and process the communicated information within 40 ms.
  • the fast information processing indicates that humans can subconsciously process such information.
  • computing devices 106 can analyze features of facial HGODs, or in some cases can analyze generated facial avatars, to determine affective states of humans.
  • Embodiments of the present disclosure further provide systems and methods for analyzing HGOD data produced in response to one or more humans consuming a substance that can alter the one or more humans’ affective states.
  • Affective state altering substances are of interest because many of these substances are consumed by humans without a specific understanding of how impairment can occur. For example, while the general mechanism and nature of impairment from alcohol consumption are well studied and understood, the mechanisms and nature of impairment from other compounds, such as but not limited to cannabinoids and terpenes, are less well understood. While tools and techniques for objectively identifying alcohol intoxication, such as breathalyzers or blood tests that determine blood alcohol level, are readily available and used by law enforcement amongst other groups, no such effective analogue exists for cannabis. This is in part due to the complexity of the chemical composition of cannabis, but also due to the persistence of these compounds for a period of time post intoxication.
  • cannabinoids such as tetrahydrocannabinol (THC) and cannabidiol (CBD) contribute to this complexity.
  • Other phytochemicals, such as terpenes, can add to the complexity. Beyond intoxication, how these compounds impact a user's affective state, and the nature of the user's affective migration, are even less well understood than intoxication.
  • FIG. 2 illustrates an example vaporizing device 200 for delivering a substance to a user.
  • the user inserts a cartridge 230 containing a substance 240 into the vaporizing device 200
  • the vaporizing device 200 uses an array of onboard cartridge sensors to collect data on one or more of the cartridge, substance, and label specification.
  • the cartridge sensors quantify the attributes of the substance 240 along multiple dimensions, including, but not limited to, any one or more of direct and indirect measures of turbidity, color, chemical composition, viscosity, and flavor.
  • the vaporizing device 200 can be used with cannabis.
  • the substance turbidity is measured using one or more optical sensors emitting light and measuring refraction.
  • the substance color is quantified using a spectrometer sensor, detecting absorption.
  • the chemical composition of the substance 240 is measured using a nondispersive infrared sensor to identify specific compounds, such as, but not limited to, cannabinoids, terpenes, terpenoids, and flavonoids, by their resonance frequency.
  • the substance capacitance is measured using capacitive sensors.
  • the sensing can be supplemented with a reference data set (associated with, for example, an RFID tag).
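A sketch, with hypothetical field names and readings, of how the multi-dimensional cartridge-sensor data could be combined into a distinctive substance fingerprint and checked against a reference data set such as one keyed to an RFID tag:

      from dataclasses import dataclass, asdict

      @dataclass
      class SubstanceFingerprint:
          turbidity: float         # optical sensor, refraction-based
          color_absorption: float  # spectrometer reading
          ndir_peaks: tuple        # resonance frequencies flagged by the NDIR sensor
          capacitance: float       # capacitive sensor

      def matches_reference(reading, reference, tol=0.05):
          # Scalar dimensions must agree within a tolerance; NDIR peak sets must match exactly.
          a, b = asdict(reading), asdict(reference)
          if a["ndir_peaks"] != b["ndir_peaks"]:
              return False
          return all(abs(a[k] - b[k]) <= tol * max(abs(b[k]), 1e-9)
                     for k in ("turbidity", "color_absorption", "capacitance"))

      reference = SubstanceFingerprint(0.12, 0.57, (1200, 1450), 3.3)  # e.g., RFID-linked data
      reading = SubstanceFingerprint(0.13, 0.56, (1200, 1450), 3.3)    # fresh sensor readings
      print(matches_reference(reading, reference))  # True: cartridge matches its label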
  • the one or more sensors can be activated by a number of triggers.
  • the one or more sensors are triggered by a pressure-sensitive electrical or mechanical switch that is activated through the process of cartridge insertion.
  • the sensors are triggered by an on/off switch 220 on the vaporizing device 200.
  • the sensors are triggered by a computing device, e.g., a cellphone, a virtual assistant, a laptop, a desktop, a server, etc.
  • the cartridge sensors are triggered by the activation of an onboard accelerometer within the vaporizing device 200.
  • Upon activation, the cartridge sensors generate data describing the physical attributes of the substance 240 contained within the cartridge 230 along multiple dimensions, such as those described above. The data from each of these dimensions generates a distinctive pattern of data for a given substance 240, which can be analyzed.
  • Cartridge and substance data generated by the cartridge sensors can be recorded. Brand data and label specifications for the cartridge 230 and substance 240 can be recorded via the vaporizing device 200.
  • the vaporizing device 200 uses a sensor tag, such as, but not limited to, an RFID chip embedded in the cartridge 230.
  • an app running on a smartphone of the user enables a drop-down menu for selection.
  • package or sales transaction receipt information embedded in machine-readable optical label is captured with sensors, such as a camera or scanner, linking the information to the label specifications.
  • brand and product information is captured by obtaining an image of the packaging or cartridge 230, and communication sensors 264 then transmit the image data to a server for further processing.
  • the cartridge sensors also, directly or indirectly, measure the volume of the substance 240 within the cartridge 230.
  • an ultrasonic emitter generates a sonic chirp into the cartridge 230; the ultrasonic receiver captures the sonic response as the waves travel through the air bubble within the cartridge 230.
  • the sensors record the changing nature of the sonic chirp as it passes through the air bubble, thereby collecting data for subsequent processing by a server to determine an amount of remaining substance 240.
  • a light sensor can measure the growing size of the air bubble.
  • a light sensor can measure the substance 240 directly.
  • an ultrasonic sensor can measure the substance 240 directly.
  • the quantity can be determined by an estimate developed by a trained machine learning (ML) model using visual input.
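A sketch of the time-of-flight idea under assumed cartridge geometry and sound speeds (none of which are specified in the disclosure): the chirp crosses the air bubble faster than it would cross liquid, so a single transit time yields the bubble height and, from it, the remaining volume:

      # Assumed geometry and material constants, for illustration only.
      CARTRIDGE_HEIGHT_M = 0.04   # 4 cm tall cartridge
      CROSS_SECTION_M2 = 1.0e-4   # 1 cm^2 cross-section
      SPEED_AIR = 343.0           # m/s
      SPEED_LIQUID = 1450.0       # m/s, rough value for a viscous oil

      def remaining_volume(transit_time_s):
          """Infer liquid volume from one top-to-bottom ultrasonic transit time."""
          # transit = h_air/SPEED_AIR + h_liquid/SPEED_LIQUID, with h_air + h_liquid fixed.
          h = CARTRIDGE_HEIGHT_M
          h_air = (transit_time_s - h / SPEED_LIQUID) / (1.0 / SPEED_AIR - 1.0 / SPEED_LIQUID)
          h_air = min(max(h_air, 0.0), h)          # clamp to physical bounds
          return (h - h_air) * CROSS_SECTION_M2    # cubic meters of substance left

      full = remaining_volume(CARTRIDGE_HEIGHT_M / SPEED_LIQUID)  # no air bubble
      print(f"{full * 1e6:.2f} mL remaining")                     # 4.00 mL when full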
  • the user may begin vaping.
  • the user activates the vaporizing device 200, which turns on a vaporizing element 280 and vapor sensors 210b, through a number of potential triggers.
  • a thermal blanket 270 can be provided to protect sensors and other sensitive electronics from heat produced by the vaporizing element 280.
  • the vaporizing device 200 can house a cartridge warmer 250 that can dynamically adjust substance viscosity to a desired level depending upon factors such as substance fingerprint, ambient temperature, or other factors.
  • the vaporizing element 280 and vapor sensors 210b are triggered by a sensor on a mouthpiece 210a of the vaporizing device 200.
  • the sensor can be a pH sensor, an airflow sensor, a pressure sensor, or another sensor.
  • the vaporizing element 280 and vapor sensors 210b are triggered by an on/off switch 220 on the vaporizing device 200.
  • the sensors are triggered by a computing device, e.g., a smartphone, or some other device.
  • the vaporizing element 280 and vapor sensors 210b are triggered by the activation of an onboard accelerometer sensor within the vaporizing device 200.
  • the vapor sensors 210b can record a continuous description of the inhalation of vapor 290 from the vaporizing device 200 into the user's respiratory apparatus or lungs.
  • activation of the vaporizing element 280 can trigger the activation of cartridge sensors, vapor sensors 210b, physiological sensors 263, environmental sensors 262, performance sensors 261, affectivity sensors, and/or communication sensors 264, the activation of which may in turn prompt data transfer and storage between system components.
  • a specific request generated by another component of the system can trigger the activation of cartridge sensors, vapor sensors 210b, physiological sensors 263, environmental sensors 262, performance sensors 261, affectivity sensors, and/or communication sensors 264, the activation of which may in turn prompt data transfer and storage between system components.
  • the vaporizing device 200 of FIG. 2 can be incorporated into an overall system 300 depicted in FIG. 3 for detecting, classifying, and reporting intoxication and affective change associated with a substance ingested by a user 340.
  • the system 300 includes the user 340 who consumes a substance 240.
  • the substance 240 can get into a body of the user 340 via contact with a solid/liquid form of the substance 240 or by inhalation of the vapor 290 of the substance 240.
  • the substance 240 can be a combination of multiple chemical substances or a combination of different chemical substances delivered via different cartridges.
  • the user 340 can provide physiological data and/or affective data to logic circuitry 350.
  • the logic circuitry 350 is a computing device, e.g., a server, an application specific integrated circuit, a laptop computer, a cloud server, etc.
  • the logic circuitry 350 can collect other information, e.g., contextual data, to further facilitate classifying intoxication or affective change of the user 340.
  • the other information collected can include data from social media accounts and other internet-available information 351 of the user 340, environmental characteristics 352, personal characteristics 353, user subjective responses 354, and data from other systems 360.
  • User subjective responses 354 can include user subjective ratings, user response times, user survey data, etc.
  • the logic circuitry 350 can collect information from disparate sources to contextualize affective data and physiological data obtained from sensors measuring HGOD data of the user 340.
  • the logic circuitry 350 can be used to adjust heat energy 310 of the vaporizing device 200 such that an affective state of the user 340 can be maintained at a certain level. In some implementations, the logic circuitry 350 can adjust a composition of the solid/liquid substance 240 being delivered to the user 340 to maintain the affective state of the user 340 at the certain level.
  • Embodiments of the present disclosure provide a system and method for detecting, classifying and reporting intoxication and affective change associated with cannabis consumption.
  • the system 300 can include one or more sensors that provide physiological data and/or affective data to the logic circuitry 350.
  • the logic circuitry 350 can ingest the data from the one or more sensors, extract features from the data, apply one or more machine learning algorithms to the data to obtain an output for informing the user 340 of her intoxication level or for maintaining and controlling the intoxication level of the user 340 via adjusting delivery of cannabis to the user 340.
  • each component of the system 300 can be one or more components such that the logic circuitry 350 can monitor multiple users using multiple substance delivery devices or vaporizing devices 200.
  • the sensors can initially be used to build data sets that record a number of physiological and affective measures such as, but not limited to, ECG, PPG, EMG, EDA, HRV, pupilometry, facial landmark data, facial texture data, and the movement and dynamics of each.
  • the data can be obtained by affixing or training the sensors on human subjects in a laboratory-like setting, or by gathering them in an unobtrusive manner in a naturalistic setting where subjects are less cognizant of the sensing.
  • a given human subject can be monitored to gain baseline physiological and affect data.
  • the subject is then given cannabis, with a specific phytochemical profile, including but not limited to specific cannabinoids and terpenes in known concentrations and ratios, and new readings are taken. This process is repeated over a large number of subjects.
  • the phytochemical profile is changed, reflecting new concentrations and ratios, and a group of human subjects consumes the compound after generating baseline data. The new response data is recorded. The process is then repeated with another specific phytochemical profile.
  • the logic circuitry 350 ingests the response data from multiple recordings and extracts and stores relevant features within the data.
  • the logic circuitry 350 can then apply machine learning to analyze the recorded data.
  • the machine learning applied can involve using a neural network to analyze the data sets.
  • the logic circuitry 350 can then identify patterns of physiological and affective response which it associates with affective and intoxication states. These data sets can supplement reference data sets which can include subject surveys, questionnaires, or less obtrusive observation.
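A condensed sketch of this training loop, using scikit-learn as a stand-in for the unspecified machine learning machinery; the feature columns and labels are hypothetical:

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)

      # Hypothetical data set: rows are subjects, columns are baseline-adjusted features
      # (e.g., delta-HRV, delta-EDA, facial landmark migration magnitude).
      X = rng.normal(size=(300, 3))
      y = rng.integers(0, 3, size=300)  # label: which phytochemical profile was consumed

      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
      model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
      model.fit(X_train, y_train)

      # The fitted model can then map new sensor readings to an estimated pattern of
      # affective and intoxication response for a given compound.
      print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")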
  • the subjects are handed a vaporizer (e.g., the vaporizing device 200) with onboard physiological and affective sensors such as, but not limited to an EDA sensor, PPG sensor and a camera.
  • the vaporizer can record phytochemicals, concentration and ratios of the phytochemicals, and consumption behavior of each subject. Consumption behavior includes a schedule of consumption, amount consumed per consumption event, etc. Referring to FIG. 3, as the user 340 holds and uses the vaporizing device 200, baseline physiological and affective measures are taken.
  • an onboard camera system captures facial landmark data.
  • the facial landmark data can be used for biometric identification, assigning the captured data to the appropriate subject, and enabling personalized features and modes of system access.
  • the captured facial data can be used in a manner that classifies the subject to prevent unauthorized usage, for example, because the system determines that it is highly probable that the user 340 is underage for consumption.
  • the same facial image data can be used to establish a baseline facial affect. That is, at the beginning of a consumption event, the facial image data captured by the onboard camera system can be used to determine a baseline prior to consuming the substance 240. After consumption, the sensors can collect, in a time series, additional facial image data for the logic circuitry 350 to establish a transformation of an affective state of the subject.
  • In another embodiment, the camera or cameras can be on another device or a multitude of devices.
  • a pattern in individual measures such as HRV and skin conductance or facial landmark migration can be classified reflecting a common pattern of response to a phytochemical compound.
  • the pattern in the union of individual measures of subject response can also be classified, providing additional affective and intoxication insights.
  • the nature of these responses and insights can be trained to specific phytocannabinoid compounds, and particular subject cohorts that exhibit patterns of distinct response.
  • the subject can have their data collected from the sensor-laden vaporizer while consuming, for example, from the camera collecting facial data, and then have subsequent facial data collected from another device, such as a smartphone of the subject.
  • the smartphone can obtain subsequent facial data at an opportune time when the subject checks the smartphone to inquire how the affective state has changed.
  • the physiological sensors could be embedded in wearables, a phone, or other peripheral device or networked sensors.
  • the onboard camera could be used as a novel means to facilitate the establishment of a data account.
  • the owner or possessor of a sensor-laden vaporizer (e.g., the vaporizing device 200) can use in-app offerings from a data analytics service to collect and analyze data collected by the vaporizer, allowing the owner to share data from the vaporizer with a friend and also potentially share the vaporizer with the friend.
  • the vaporizer and app can enable the owner to consume and record her facial data as well as separately allow the friend to consume and record facial data.
  • Facial data can be recorded in, for example, a 128-landmark three-dimensional representation using a 640x480 pixel image, amongst other means.
  • the owner of the vaporizer can subsequently be prompted in-app with an image of her friend, amongst other means, and a question asking whether to text a link to her friend enabling the automatic set up of an account.
  • the friend who has already enjoyed the vaporizer experience can then easily download the app populated with their own data, at their convenience. This would improve the convenience and experience for all parties and increase the virality of adoption.
  • the logic circuitry 350 can be used for descriptive purposes. That is, the logic circuitry 350 can take physiological and affective data and classify a state of the user 340.
  • the logic circuitry 350 can describe a nature and intensity of cannabis intoxication and the affective state of the user 340.
  • the description can be provided to the user 340 on a real-time basis.
  • the description can be used by the user 340 or another party (e.g., a friend of the user 340 sharing the vaporizing device 200 with the user 340) to understand a nature and intensity of a person’s intoxication for a multitude of purposes.
  • the logic circuitry 350 can make predictions on future affective states of the user 340, as well as a future nature and magnitude of an intoxication level of the user 340.
  • the prediction can be used to guide consumers of the substance 240 such that the consumers make informed decisions prior to consumption of the substance 240.
  • a nature and magnitude of cannabis intoxication can be determined from a single data type. For example, patterns in the facial landmark and texture data, the facial landmark migration, movement, and kinesthetic patterns can be used to characterize the nature and intensity of cannabis intoxication.
  • classification of the nature and magnitude of intoxication can be accomplished on an app on a smartphone, or by an Internet of Things (IoT) device.
  • the classification of the nature and magnitude of intoxication can be realized via a camera within a facility, such as but not limited to a security camera.
  • classification of the nature and magnitude of intoxication can be determined via EDA conductance sensors in a steering wheel, or a combination of EDA conductance sensors and a camera or other sensors within a car.
  • the system can be used as the basis for an alternative to a breathalyzer or blood test to determine cannabis intoxication.
  • the logic circuitry can also classify or refine a classification of cannabis intoxication through the collection of one or more subjects’ observation of affective stimuli.
  • the logic circuitry 350 shows the user 340 facial affect data in the form of a face, either real or manufactured.
  • the logic circuitry 350 can then observe and record responses of the user 340 made post-consumption to the stimuli (the shown facial affect data).
  • some embodiments of the present disclosure can be applied to other compounds or chemicals, such as coffee, etc.

Dynamic Modulation of Vapor to Change an Affective State of a User

  • Embodiments of the present disclosure provide systems and methods for dynamic modulation of affective state and/or intoxication/sobriety of the user 340.
  • the system 300 of FIG. 3 can include one or more drug delivery systems (e.g., the vaporizing device 200), one or more sensors for gathering physiological and affective data, the logic circuitry 350 including one or more data storage devices and processors, and output components.
  • the output components can be in a single form or distributed.
  • the output devices can include lights, visual displays, speakers, haptic components, or any combination thereof.
  • the system 300 further includes a mobile device of the user 340.
  • the mobile device can execute an app.
  • the system 300 can further include wearable devices (e.g., a wearable smart watch, wearable smart headband, wearable smart jewelry like rings, bracelets, etc., or any combination thereof).
  • the system 300 can include any other IoT device, including those owned and/or managed by others apart from the user 340.
  • the system 300 includes the vaporizing device 200 with multiple cartridges (e.g., two, three, five, ten, fifty, etc.) that each contain one of a variety of compounds such as, for example, cannabinoids, terpenes, nicotine, other phytochemicals and mycochemicals, and/or other compounds, or any combination thereof. That is, each cartridge includes a single substance therein. Alternatively, one or more of the cartridges can include a mixture of two or more substances.
  • the system 300 can dynamically modulate affective state and/or degree or nature of intoxication/sobriety across a set of changing needs of the user 340.
  • the changing needs of the user 340 can include a diurnal cycle, a need for sleep or wakefulness, management of work, a sudden need to parent or to address other responsibilities, and dynamic shifts in interests, such as a desire to work out or socialize while maintaining a desired affective state or level or nature of intoxication/sobriety.
  • the system 300 can automatically vary ratios of substances in a vapor (e.g., the vapor 290) based on the environmental characteristics 352 which can include a time of day, a geographical location (e.g., at a home location of the user 340, at a work location of the user 340, at a home location of a parent of the user 340, at a home location of a friend of the user 340, at a mall, at a movie theater, etc.).
  • the system 300 can automatically vary ratios of substances in the vapor based on other factors such as patterns of past use, with or without user input or with or without user modifications.
  • the logic circuitry 350 can set a vapor mixture in the morning to have a ratio of THC to CBD of 1 to 1, whereas the logic circuitry 350 can set a vapor mixture in the evening to have a ratio of THC to CBD of 10 to 1.
  • the logic circuitry 350 can set a vapor mixture when the logic circuitry 350 determines that the user 340 is at the home location of the user 340.
  • the vapor mixture at the home location of the user 340 can be set to a ratio of THC to CBD of 20 to 1.
  • the logic circuitry 350 can set the vapor mixture when it determines that the user 340 is at the home location of the parents of the user 340.
  • the vapor mixture at the home location of the parents of the user 340 can be set to a ratio of THC to CBD of 5 to 1, etc. That way, the system 300 can dynamically adjust a compound that the user 340 consumes based on the environmental characteristics 352 obtained. For example, environmental stressors on the affective state of the consumer can become determining factors for a model.
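A sketch of the context-driven mixture policy, using the specific ratios from the examples above; the lookup logic itself is purely illustrative:

      from datetime import datetime

      def target_thc_cbd_ratio(now, location):
          """Return the THC:CBD ratio for the current context (illustrative rules)."""
          if location == "parents_home":
              return 5.0   # 5:1 when at the parents' home
          if location == "home":
              return 20.0  # 20:1 at the user's own home
          return 1.0 if now.hour < 12 else 10.0  # 1:1 mornings, 10:1 evenings

      print(target_thc_cbd_ratio(datetime(2019, 11, 14, 9, 0), "work"))   # 1.0
      print(target_thc_cbd_ratio(datetime(2019, 11, 14, 21, 0), "home"))  # 20.0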
  • a vaporizer uses two or more cartridges together to create an infinite variability of ratios of compounds that can be delivered by the vaporizer.
  • two cartridges, one containing THC and the other containing CBD, can be coupled in the vaporizing device 200.
  • the vaporizing device 200 drives two vaporizing elements associated with respective ones of the cartridges.
  • the vaporizing device 200 can drive the vaporizing elements using pulse width modulation, with distinct duty cycles for each, generating different rate levels of vaporization for each of the two cartridges.
  • the vaporizing device 200 can adjust the duty cycles to establish the new vapor mixture.
  • the system 300 can allow the user 340 to consume an infinite number of different ratios of compounds without having to buy or insert additional cartridges (i.e., additional cartridges in addition to the two already in the vaporizing device 200).
  • a level or granularity of adjustment of the ratio of the two cartridges can be continuous. In some implementations, the level or granularity of adjustment of the ratio of the two cartridges can be discontinuous.
  • the level or granularity of adjustment of the ratio of the two cartridges can be at discrete intervals or even binary, or any combination thereof.
  • the multi-cartridge setup for the vaporizing device 200 that supports an infinite ratio vapor mixing enables the system 300 to precisely, efficiently, and conveniently deliver vapor to meet a modelled need of the user 340.
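A sketch of translating a desired compound ratio into per-cartridge duty cycles for the pulse-width-modulation mixing described above, assuming for simplicity that each cartridge's vaporization rate scales linearly with its duty cycle:

      def duty_cycles(ratio_a_to_b, total_output=1.0):
          """Split a total vapor output between two cartridges at the requested ratio.

          Assumes vapor mass per unit time is proportional to duty cycle; a real
          device would calibrate this curve per cartridge and substance.
          """
          share_a = ratio_a_to_b / (1.0 + ratio_a_to_b)
          return total_output * share_a, total_output * (1.0 - share_a)

      thc_duty, cbd_duty = duty_cycles(10.0, total_output=0.8)  # evening 10:1 mix at 80% power
      print(f"THC duty: {thc_duty:.2f}, CBD duty: {cbd_duty:.2f}")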
  • the system 300 can include substances or compounds that mitigate and/or reverse impacts of intoxicants.
  • the user 340 first consumes a first vapor mixture with a relatively high ratio of the intoxicating cannabinoid THC and subsequently becomes intoxicated. Shortly thereafter, the user 340 desires sobriety and/or a lesser level of intoxication. The user 340 then consumes a second vapor mixture with a relatively high level of THC antagonistic compound, thereby speeding up or accelerating sobriety.
  • antagonistic compound consumption can be used to mitigate and/or reduce an affective state of a user of THC, thereby, for example, resulting in a sobering impact/effect.
  • Other compounds or combinations of compounds can also be used to mitigate and/or reduce an affective state of a consumer/user of THC. Examples of such other compounds include cannabinoids, terpenoids, and other compounds.
  • Benefits of being able to accelerate sobriety are immeasurable. For example, a parent can enjoy a high THC vapor socially at a party. When the parent returns home, he is surprised by a sick child that needs care.
  • the parent can use the vaporizing device 200, according to some implementations of the present disclosure, to increase a ratio of a sobering substance so as to consume a second vapor with a different ratio of substances to aid in sobering the parent.
  • the adjustment in vapor mixture ratios to aid a consumer/user from switching from insobriety to sobriety can be automatically or manually initiated and supported with guidance from the system 300 via an app executing on a mobile device of the consumer/user and/or via the vaporizing device itself.
  • the system 300, via the logic circuitry 350, which operates as an integrated distributed classification, prediction, and response system, can leverage affective, physiological, environmental, and consumption data collected by sensors and analyzed with machine learning techniques to provide personalized decision support.
  • the system 300 can generate a modelled prediction of an amount of time needed for the user 340 to return to or achieve a given level of sobriety.
  • the system 300 can provide live behavioral guidance based, at least in part, on vapor mixture(s) consumed by the user 340 and the vapor mixture to be consumed by the consumer to achieve sobriety.
  • the user 340 can input a level of intoxication and adjust the desired sobriety levels without modelled assistance. In some implementations, such adjustments can be implemented via the vaporizing device 200.
  • the system 300 can use machine learning to classify the user 340 into a like cohort of users whose data informs personalized prediction for dosing, impact and timing.
  • the system 300 can obtain current affective, physiological, and environmental data of the user 340 to generate modeled prediction of timing of intoxication, timing of sobriety, etc.
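A sketch of such a modelled prediction, substituting a simple exponential decay of intoxication level for the unspecified personalized model; the half-life parameter stands in for a cohort-derived value and is not a clinical figure:

      import math

      def hours_to_sobriety(current_level, target_level, half_life_h=1.5):
          """Time until intoxication decays to the target, assuming exponential decay."""
          if current_level <= target_level:
              return 0.0
          k = math.log(2) / half_life_h  # decay constant from the assumed half-life
          return math.log(current_level / target_level) / k

      print(f"{hours_to_sobriety(0.8, 0.1):.1f} h to reach the target level")  # 4.5 h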
  • the system 300 can use pulse width modulation or like techniques to modify an intensity of a vapor mixture and a time duration and inhalation amount required to consume a desired level of a compound or compounds.
  • Pulse width modulation is a technique used for electrical signal manipulation in which continuously increasing or decreasing levels of output can be generated from a binary signal input in proportion to the duty cycle.
  • the pulse width modulation approach is able to generate a continuum of variable vaporization.
  • the system 300 can apply independent/different pulse width modulation to each of the cartridges in the vaporizing device 200 such that the output vaporization from each cartridge can be set independently or set in coordination with any other cartridge.
  • Embodiments of the present disclosure provide a system and method for facilitating a navigation of a complex social space with an array of constraints and opportunities.
  • Most social networks are largely virtual where participants engage remotely, often from their own homes, using an electronic medium (e.g., mobile device, tablet, computer, etc.).
  • Social media interactions typically require a reason or rationalization for connection and purpose. For example, Facebook “friends” may be connections with whom you went to school and want to share life experiences with images, comments, and other posts or likes.
  • Instagram connections tend to be a broad collection of people with whom one shares common aesthetic interests, passions, or hobbies, and with whom one wants to share one's curated collection of aesthetically appealing or otherwise interesting images and videos.
  • Embodiments of the present disclosure provide an electronic social network that facilitates de-virtualization of social connections.
  • the vaporizing device 200 includes assisted GPS to approximately place the user 340 in a mapped physical region geospatially.
  • the vaporizing device 200 can use sensors to refine the approximate placement for a very precise location (e.g., sub-decimeter location) on a three-dimensional geospatial map, relative to other objects, landmarks, people, and/or others who are nodes of the social network of the user 340.
  • Node information can be determined using the social media accounts 351 of the user 340.
  • the system 300 via the logic circuitry 350 can enable nodes of the social network 340 to efficiently converge.
  • one node (e.g., a user of a vaporizer) can request to converge with another node at a meeting location. The meeting location can be, for example, at a concert.
  • the system 300 can provide instructions and/or guidance for the two nodes to converge (e.g., by providing directions using an app executing on a mobile device). This convergence and de-virtualization of the social network can be used to enjoy a cannabis sharing ritual in person or for another purpose, for example, meeting for lunch in a park.
  • the system 300 can use a number of sensing methods to determine the geolocation of a node/user.
  • the system 300 can use a Bluetooth sensor (or any wireless sensor technology) in one location to gauge signal strength to another Bluetooth sensor at another node.
  • the system 300 can obtain signal strength between the Bluetooth sensors over time.
  • time-staggered triangulation can be accomplished using the Bluetooth sensors.
  • multiple antennas on a single Bluetooth device, multiple Bluetooth sensors on a single vaporizer, or the union of multiple Bluetooth sensors across multiple devices can be used.
  • the assisted GPS would guide the nodes to within roughly 100 meters of one another, and the sub-decimeter geolocation on the three-dimensional map would enable the system 300 to guide the nodes to convergence (a triangulation sketch follows this item).
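As a rough illustration of the signal-strength approach, the sketch below converts RSSI readings to distances with a log-distance path-loss model and solves for position against known reference nodes. The path-loss constants, anchor positions, and readings are illustrative assumptions.

```python
# Minimal sketch: estimating inter-node distance from Bluetooth
# signal strength, then refining position against reference points.
import numpy as np
from scipy.optimize import least_squares

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_n=2.0):
    """Log-distance path-loss model: distance in meters."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_n))

# Known 2D positions of reference nodes (m) and RSSI readings (dBm)
# gathered over time for a time-staggered triangulation.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
rssi_readings = np.array([-65.0, -70.0, -62.0])
ranges = rssi_to_distance(rssi_readings)

def residuals(p):
    # Difference between modeled and measured distances to anchors.
    return np.linalg.norm(anchors - p, axis=1) - ranges

estimate = least_squares(residuals, x0=np.array([5.0, 4.0])).x
print(f"Estimated position: {estimate}")
```

In practice, readings gathered over time from multiple antennas or devices would feed the same least-squares refinement.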
  • the nodes can experience the guidance from the system 300 through any number of modalities. For example, as the nodes converge on a common location, a haptic element (e.g., a vibrating motor) in the vaporizer can signal a reduction in distance or which direction a user should head. Similarly, light or sound emitted from the vaporizer can aid in guiding the nodes/users to converge. Alternatively, a smart phone or a wearable device can provide the guidance (a sketch of distance-paced haptics follows this item).
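One way to realize the haptic guidance modality is to shorten the interval between vibration pulses as the remaining distance shrinks. The interval bounds and the 100 m range in this sketch are illustrative assumptions.

```python
# Minimal sketch: mapping remaining distance to a haptic pulse
# interval, so vibration quickens as two nodes converge.
def haptic_pulse_interval_s(distance_m: float,
                            min_interval: float = 0.2,
                            max_interval: float = 2.0,
                            max_distance: float = 100.0) -> float:
    """Closer nodes -> shorter interval between vibration pulses."""
    fraction = min(max(distance_m / max_distance, 0.0), 1.0)
    return min_interval + fraction * (max_interval - min_interval)

for d in (100.0, 50.0, 10.0, 1.0):
    print(f"{d:>5.0f} m -> pulse every {haptic_pulse_interval_s(d):.2f} s")
```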
  • All of the calculations required to provide guidance could be done remotely in the cloud, e.g., at the logic circuitry 350.
  • calculations can be performed on the vaporizer (e.g., the vaporizing device 200).
  • the vaporizing device 200 can include, for example, a tensor processing unit (TPU) for performing calculations.
  • the calculations can be performed by another computing device, such as, for example, a mobile device, a tablet, etc.
  • the system 300 calculates the guidance using a distributed approach, leveraging the collective processing power of all devices associated with the system 300 and the social network.
  • the social network and the system 300 can also function as a mesh network thereby optimizing data flow and communication among nodes within the social network.
  • the social network used in this context includes computing devices such as the mobile phones, tablets, and vaporizers of other users associated with the user 340 via the social media account 351 of the user 340.
  • Nodes of the social network both direct (e.g., friends) and indirect nodes (e.g., friend of friends), can be guided to meet at a particular space at a precise time, by guiding nodes to a location using other devices or beacons as reference points. As such, meeting a friend (or a friend of a friend that the user 340 has not yet met) can be more convenient.
  • sensors such as, for example, optical sensors and RFID sensors, onboard the vaporizer, mobile phone, or other device can allow users to authenticate the identity of another node to make meeting safer.
  • Verification of the identity of the other party can be performed, for example, by confirming that one or more biometric measurements (e.g., fingerprint, face scan, iris scan, etc., or any combination thereof) of the other party match a corresponding biometric measurement that is stored in the system 300 and is associated with the node.
  • the system 300 is configured to authenticate users to aid in enabling safe purchases of restricted goods (e.g., cannabis, alcohol, etc.) as well as enabling financial transactions (e.g., using credit cards or the like).
  • the system 300 authenticates the user 340 by using the vaporizer, mobile device, or other component of the system 300 to capture an image of a credit card, identification (ID) card, or other official document of the user 340. This image is fed through a machine learning model with optical character recognition, as well as other visual pattern recognition, to digitally transcribe and ingest the requisite features and information for comparison against reference datasets. This process authenticates the person, documents, credit card, and status on the system 300 (a sketch of this capture-OCR-compare flow follows below).
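A minimal sketch of the capture-OCR-compare flow follows. pytesseract is used here purely as a stand-in for the unspecified machine learning OCR model, and the field names and matching rule are illustrative assumptions.

```python
# Minimal sketch: OCR the captured ID image and compare extracted
# fields to a stored reference record. A production system would use
# layout-aware models and visual pattern recognition as well.
import re
from PIL import Image
import pytesseract

def extract_id_fields(image_path: str) -> dict:
    text = pytesseract.image_to_string(Image.open(image_path))
    # Naive regex field extraction over the raw OCR text.
    name = re.search(r"Name[:\s]+([A-Za-z ]+)", text)
    dob = re.search(r"DOB[:\s]+(\d{2}/\d{2}/\d{4})", text)
    return {
        "name": name.group(1).strip() if name else None,
        "dob": dob.group(1) if dob else None,
    }

def authenticate(image_path: str, reference: dict) -> bool:
    fields = extract_id_fields(image_path)
    return (fields["name"] == reference["name"]
            and fields["dob"] == reference["dob"])

# Example usage against a stored reference record:
# ok = authenticate("id_capture.jpg", {"name": "Jane Doe", "dob": "01/02/1990"})
```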
  • a node of the system can select a social identifier, such as, for example, a personalized music snippet or song or other audio output.
  • the social identifier of the node can be displayed/played upon a given trigger, such as, for example, upon vapor inhalation by the node using a connected vaporizer.
  • the node is able to express its social identity through “walk-on music” when consuming the vapor.
  • a social identifier can be used to help other nodes identify and/or authenticate the node/user.
  • the vaporizer (e.g., the vaporizing device 200) is associated with the user 340 such that the vaporizer is able to communicate with the user 340 as the user 340 consumes vapor.
  • the vaporizer can signal an amount of vapor that the user 340 has consumed through a live output element, such as a haptic component or light.
  • the output element provides sensory guidance on the vaporizer and/or a computing device (e.g., a mobile device, tablet, etc.).
  • An app executing on a mobile device can also reflect consumption data, but with less sensory immediacy than the live feedback provided by the vaporizer.
  • complementary sensor data from the devices of other users can help improve accuracy.
  • the system 300 can also use complementary sensor types, such as but not limited to accelerometer and gyroscopic data, to further refine a user’s geolocation as the user moves. Amongst other things, this would provide precise triangulation even in dynamic contexts (a dead-reckoning sketch follows this item).
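A minimal dead-reckoning sketch, assuming a flat 2D model and noise-free forward acceleration and yaw-rate samples, shows how accelerometer and gyroscope data can refine position between radio fixes.

```python
# Minimal sketch: refining a moving user's position between radio
# fixes by integrating accelerometer and gyroscope samples.
import numpy as np

def dead_reckon(position, velocity, heading_rad, samples, dt=0.02):
    """Integrate (forward_accel_m_s2, yaw_rate_rad_s) samples."""
    position = np.asarray(position, dtype=float)
    for accel, yaw_rate in samples:
        heading_rad += yaw_rate * dt                    # gyroscope
        velocity += accel * dt                          # accelerometer
        step = velocity * dt
        position += step * np.array([np.cos(heading_rad),
                                     np.sin(heading_rad)])
    return position, velocity, heading_rad

samples = [(0.5, 0.0)] * 50 + [(0.0, 0.1)] * 50  # speed up, then turn
pos, vel, hdg = dead_reckon([0.0, 0.0], 1.0, 0.0, samples)
print(pos, vel, hdg)
```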
  • the system 300 can use sensor data to estimate the progression of the other node(s) toward convergence and indicate such estimate to at least some of the nodes (e.g., via an app, the vaporizer, etc., or a combination thereof). For example, if the accelerometer data (e.g., from a vaporizer) indicated a particular set of features and the convergence slowed, a model could indicate that the other node was slowed or stopped (e.g., the other node/user slipped on the ice and needs help, the other node ran into another friend on the way to converge/meet, etc.).
  • the system 300 can provide support by suggesting a status of the other node (e.g., fallen, meeting with another node, etc.) and by providing a current location of the other node (e.g., the site of the modelled fall). A sketch of such status inference from accelerometer data follows this item.
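The status inference might look like the following sketch, which classifies a short window of accelerometer magnitudes; the thresholds are illustrative assumptions, not validated fall-detection parameters.

```python
# Minimal sketch: inferring a node's status from a window of
# accelerometer magnitudes (in units of g).
import numpy as np

def infer_status(accel_mag_g: np.ndarray) -> str:
    """Classify a short window of acceleration magnitudes."""
    peak = accel_mag_g.max()
    spread = accel_mag_g.std()
    if peak > 2.5 and accel_mag_g[-10:].std() < 0.05:
        return "possible fall (impact spike, then stillness)"
    if spread < 0.05:
        return "stopped (perhaps meeting another node)"
    return "moving toward convergence"

# Synthetic window: walking noise, an impact spike, then stillness.
window = np.concatenate([np.full(40, 1.0) + 0.2 * np.random.randn(40),
                         [3.1], np.full(20, 1.0)])
print(infer_status(window))
```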
  • the system is able to bring together nodes of a social network in a synchronized collective experience through coordinated communications. For example, a collection of nodes at a concert can experience a collective response to a stimulus, such as a musical rhythm or drum beat, causing all vaporizers, phones, or ancillary devices to light up, vibrate, or generate another output in unison.
  • an application programming interface can allow a party, such as a musician, to dynamically control experiential elements for nodes of a social network or components of the system 300.
  • the musician can dynamically control the color of lights or intensity on vaporizers, phones or other devices within the social network.
  • the system itself would become part of the show as waves of light wash over segments of the audience in which the nodes are present.
  • nodes/users can themselves initiate a shared collective experience, such as a geospatially manifested output: for example, a wave of light spreading across a crowd or emanating like a ripple from themselves, much like people standing and then sitting at a football game to generate an engaging illusion (a ripple-timing sketch follows this item).
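The ripple effect reduces to assigning each device an activation delay proportional to its distance from the originating node, as in this sketch; the propagation speed and node layout are illustrative assumptions.

```python
# Minimal sketch: per-device activation delays so a wave of light
# appears to ripple outward from an originating node.
import math

def ripple_delays(origin, node_positions, wave_speed_m_s=20.0):
    """Map each node to the delay (s) before its light turns on."""
    ox, oy = origin
    return {node: math.hypot(x - ox, y - oy) / wave_speed_m_s
            for node, (x, y) in node_positions.items()}

crowd = {"node_a": (0.0, 5.0), "node_b": (30.0, 0.0), "node_c": (60.0, 10.0)}
for node, delay in sorted(ripple_delays((0.0, 0.0), crowd).items()):
    print(f"{node}: light on after {delay:.2f} s")
```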
  • shared experiential data can be collectively experienced across a group of people, such as, for example, sound, music, or system driven vaporization.
  • collections of nodes can consume a substance in-person together (e.g., by using their respective vaporizers).
  • collections of nodes can consume the substance at a same time but at different locations via an electronic network.
  • the nodes can share data and other common experiential elements such as dynamic information, games, entertainment, messaging, video, audio and other forms of communication.
  • geospatial node density maps can be used to drive people to gather at a single location or multiple locations for a spontaneous social event (e.g., involving consumption of substances via vaporizers).
  • vendors of experience such as, for example, restaurants, could guide nodes to a geospatial and temporal convergence.
  • a single node or a group of nodes can guide the collective nodes to a geospatial and temporal convergence for any reason.
  • affective or other data types derived from a node, or modelled from a collection of nodes, can be used for the modulation of devices within a relevant geospatial and temporal parameter, such as lights, temperature, noise, and sound in a venue.

Hemp and Cannabis Production
  • FIG. 4 illustrates a system 400 for optimizing industrial hemp and cannabis production, according to some embodiments of the present disclosure.
  • the system 400 includes a standard array of agronomic sensors 402 and data collection elements, such as, but not limited to, grow light data collection elements, nutrient data collection elements, watering data collection elements, and ambient air data collection elements, as well as non-standard data collection devices in the form of optical sensors and, in some embodiments, electronic-nose chemo-sensors. These optical sensors can collect light from both visible and non-visible spectra.
  • these optical sensors can be used to collect data for generation of growth and plant health metrics such as the normalized difference vegetation index (NDVI). Further, these optical sensors can be trained on specific parts of the hemp or cannabis plant; for example, the optical sensors can be trained on the inflorescence of the plant (an NDVI computation sketch follows this item).
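NDVI itself is a simple per-pixel computation over the red and near-infrared bands, as sketched below with synthetic band values standing in for calibrated reflectance images.

```python
# Minimal sketch: computing NDVI per pixel from the optical sensors'
# red and near-infrared (NIR) bands.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), in [-1, 1]."""
    denom = nir + red
    return np.where(denom == 0, 0.0, (nir - red) / denom)

nir = np.array([[0.60, 0.55], [0.20, 0.65]])   # healthy canopy ~0.6
red = np.array([[0.10, 0.12], [0.18, 0.08]])
print(ndvi(nir, red))
```

Values approaching +1 indicate dense, healthy vegetation; values near zero or below suggest stressed tissue or non-plant background.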
  • the agronomic sensors 402 can collect data for further processing by the computing device 406.
  • the data collected by the agronomic sensors 402 can be used to determine the chemical composition of the inflorescence of the plant and the resins that the inflorescence produces.
  • lean spectrographic sampling with infrared or ultraviolet light can be used to capture the nature of chemical bonds indicative of the chemical composition of the inflorescence and resins. It can also indicate the concentration of phytochemicals and provide guidance for harvest timing or necessary change of environmental variables.
  • the actuators 404 in the system 400 can automate watering, lighting, fertilizer, etc., of the plants based on computations by the computing device 406.
  • Every cannabis cultivar, often referred to as a “strain” by industry professionals, is different. As the strains grow and pass through different phenological states, they exhibit different sensitivities to inputs and materially different traits. For example, as a given cultivar matures to the point of flowering, the inflorescence and associated resins can differ wildly from another cultivar with respect to color, texture, shape, and scent. One cultivar may consistently develop a purple hue in the early stages of flowering, while another may develop a white hue. Furthermore, as they continue to mature phenologically, the characteristics of the inflorescence and resin continue to change in a material way.
  • the cultivar exhibiting a white hue to the inflorescence and resin may begin to exhibit a golden hue instead.
  • the color change may correspond with the THC oxidizing into CBN.
  • the array of sensors (e.g., the agronomic sensors 402) within the system 400 can record the changes collected by optical sensors and store this data in a database. This data would also be associated with other data identifying the particular cultivar and its phenological state.
  • olfactory chemo-sensors would also record and store the data quantitatively describing the scent.
  • the scent data would be stored in a database. This data would also be associated with other data identifying the particular cultivar and phenological state.
  • phytochemicals include, but are not limited to, terpenoids and cannabinoids.
  • other observable phytochemicals correspond to changes in terpenes and cannabinoids.
  • the system 400 uses electromagnetic radiation, optical sensors, or optical sensors and chemo-sensors to create time series records of each plant’s inflorescence and resins, while the standard set of agronomic sensors 402 tracks each plant’s growth, storing data on the plant’s growth in a database that associates the plant with its cultivar identity. Further, the system 400 contains a reference database with phytochemical composition confirmed by additional high-resolution chemical testing. After extracting the relevant features from each sensor data type, the system 400 applies machine learning to determine the current and future phytochemical composition of each plant available for extraction and distillation. The system then aggregates the level of distillates across all of the plants in production.
  • the system 400 also calculates the potential range in the levels of each distillate, depending upon harvest time and other factors. In some embodiments, the system performs optimization of harvest time and other factors for plants and groups of plants, for example, factoring in market pricing or projected pricing of individual distillates, to optimize production profits.
  • the system can optimize the production of distillates for any number of factors, or combinations of factors, including but not limited to the assurance of yield of a level of a given distillate or distillates and maximal profitability from the remaining distillate yield (a harvest-timing sketch follows this item).
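A toy version of the harvest-timing optimization follows: the yield curves and prices are illustrative assumptions, and a simple grid search stands in for whatever optimizer an implementation would use.

```python
# Minimal sketch: choosing a harvest day that maximizes projected
# revenue across distillates.
def projected_yield(day: int, peak_day: int, peak_grams: float) -> float:
    # Toy yield curve: rises to a peak, then declines (e.g., THC
    # oxidizing into CBN past maturity).
    return max(peak_grams - 0.5 * abs(day - peak_day), 0.0)

prices = {"THC": 4.0, "CBD": 3.0, "CBN": 6.0}          # $ per gram
peaks = {"THC": (60, 50.0), "CBD": (70, 30.0), "CBN": (85, 10.0)}

def revenue(day: int) -> float:
    return sum(prices[c] * projected_yield(day, *peaks[c]) for c in prices)

best_day = max(range(40, 100), key=revenue)
print(f"Harvest on day {best_day}: projected ${revenue(best_day):.2f}")
```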
  • the system would also reduce the need for labor and increase standardization and quality control across multiple facilities.
  • cannabis production companies rely on lead cultivation personnel to observe plant growth and subjectively determine when to commence harvesting. The subordinate cultivation personnel then begin the harvest. If a given cannabis production company were to open a second cultivation facility in an area far away, the lead cultivator would then need to travel to the new facility and begin the process of observation and assessment, ideally training the staff before returning to oversee the original facility.
  • the system would be able to provide cultivation and harvest decision support across 10,000 facilities as readily as it could one.
  • the system 400 would optimize the production facility for distillates and robotic harvesters would harvest the plants.
  • the system would optimize the yield for a combination of distillates and “flower.”
  • the system would incorporate additional exogenous supply and demand factors, such as satellite data of outdoor cannabis production, to determine the optimal cultivation and harvest path.
  • the system 400 would make recommendations for the planting or removal of specific cultivars, in order to optimize the production facility or groups of production facilities (new or existing). For example, the system could recommend the planting or removal of sets of cultivars in certain numbers and/or ratios to optimize a given facility for maximal flexibility with respect to changes, either real or projected or potential, in distillate pricing while maintaining maximal profitability of distillate sales. Similarly, the system could recommend the planting or removal of sets of cultivars in certain numbers and/or ratios to optimize a given facility for maximal modelled potential with respect to changes, either real or projected or potential, in distillate pricing while maintaining maximal profitability of distillate sales.
  • the system 400 would also be able to make the above recommendations under constraints, such as, but not limited to, base production levels of given distillates, base production levels of flower, combinations of base levels of flower and given distillates, availability of cultivars, input costs, timing, and legal and geopolitical constraints (a constrained-optimization sketch follows this item).
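Framed as a linear program, the constrained planting recommendation might look like the sketch below; the profit coefficients, per-plant yields, and capacity limit are illustrative assumptions.

```python
# Minimal sketch: recommending cultivar counts under constraints
# using linear programming.
import numpy as np
from scipy.optimize import linprog

# Decision variables: number of plants of cultivars A and B.
profit = np.array([12.0, 9.0])          # $ profit per plant
# Constraints: total plants <= 1000, and a base-production floor of
# 800 g CBD (A yields 0.5 g, B yields 1.5 g per plant), written as
# -0.5*A - 1.5*B <= -800 for linprog's <= form.
A_ub = np.array([[1.0, 1.0], [-0.5, -1.5]])
b_ub = np.array([1000.0, -800.0])

result = linprog(-profit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(f"Plant counts (A, B): {result.x.round(1)}, profit ${-result.fun:.0f}")
```

With these numbers the optimum plants 700 of cultivar A and 300 of cultivar B, meeting the CBD floor while maximizing profit.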
  • the system 400 could optimize the production of phytochemicals, such as, but not limited to, cannabinoids and terpenes, that remain within the inflorescence and resin and are sold as flower instead of being extracted as distillate.
  • planting recommendations can be tailored to a new production facility in isolation; to the expansion or contraction of a facility or set of facilities; or to incrementally added facilities.
  • the system could make these recommendations with respect to the replacement of existing plants that are aging or maturing, performing sub-optimally or likely to perform sub-optimally along a given dimension of performance.
  • the system could also tailor the recommendations to global, local, or the optimized intersection of global and local supply and demand dynamics.
  • the system could optimize the production of distillates in a conjoined manner to yield optimal production of any number of conjoined units defined along any dimension.
  • groups of cannabinoids and terpenes in certain ratios could be defined by units of affective and/or medical response by consumers, which the system could optimize.
  • any reference signs placed between parentheses shall not be construed as limiting the claim.
  • the word “comprising” does not exclude the presence of elements or steps other than those listed in a claim.
  • the word “a” or “an” preceding an element does not exclude the presence of a plurality of such elements.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Library & Information Science (AREA)
  • Geometry (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

According to some embodiments of the present invention, a human-machine interface system includes a display device, a camera, a communication module, and one or more storage media. The camera is configured to capture an image. The communication module is configured to transmit and receive data between a first remote user device and a second remote user device. The storage medium or media have, individually or in combination, code sections stored thereon. When executed by one or more processors, the code sections cause a computing device to detect a face in the image captured by the camera, and to extract a plurality of physical features from the face detected in the image. Based, at least in part, on the extracted physical features, the code sections cause the computing device to generate a first avatar.
PCT/US2019/061331 2018-11-13 2019-11-13 Systems and methods for assessing an affective response in a user via human-generated output data WO2020102459A1 (fr)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201862760773P 2018-11-13 2018-11-13
US201862760731P 2018-11-13 2018-11-13
US62/760,731 2018-11-13
US62/760,773 2018-11-13
US201962819294P 2019-03-15 2019-03-15
US62/819,294 2019-03-15

Publications (1)

Publication Number Publication Date
WO2020102459A1 true WO2020102459A1 (fr) 2020-05-22

Family

ID=70731173

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/061331 WO2020102459A1 (fr) 2018-11-13 2019-11-13 Systems and methods for assessing an affective response in a user via human-generated output data

Country Status (1)

Country Link
WO (1) WO2020102459A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140267544A1 (en) * 2013-03-15 2014-09-18 Intel Corporation Scalable avatar messaging
KR101743763B1 (ko) * 2015-06-29 2017-06-05 (주)참빛솔루션 Method for providing smart learning education based on emotional avatar emoticons, and smart learning terminal device for implementing the same
WO2017152673A1 (fr) * 2016-03-10 2017-09-14 腾讯科技(深圳)有限公司 Method and apparatus for generating expression animation for a human face model

Similar Documents

Publication Publication Date Title
US11743527B2 (en) System and method for enhancing content using brain-state data
US11587272B2 (en) Intelligent interactive and augmented reality cloud platform
US20220084055A1 (en) Software agents and smart contracts to control disclosure of crowd-based results calculated based on measurements of affective response
US11200964B2 (en) Short imagery task (SIT) research method
US10387898B2 (en) Crowd-based personalized recommendations of food using measurements of affective response
US11494390B2 (en) Crowd-based scores for hotels from measurements of affective response
CN109564706B (zh) 基于智能交互式增强现实的用户交互平台
US10572679B2 (en) Privacy-guided disclosure of crowd-based scores computed based on measurements of affective response
US11269891B2 (en) Crowd-based scores for experiences from measurements of affective response
US10198505B2 (en) Personalized experience scores based on measurements of affective response
US20210248656A1 (en) Method and system for an interface for personalization or recommendation of products
US20230034337A1 (en) Animal data prediction system
US11483618B2 (en) Methods and systems for improving user experience
US20180115802A1 (en) Methods and systems for generating media viewing behavioral data
CA3189350A1 (fr) Procede et systeme d'interface pour la personnalisation ou la recommandation de produits
WO2020102459A1 (fr) Systems and methods for assessing an affective response in a user via human-generated output data
WO2022181080A1 (fr) Tendency determination device, display device having a reflecting body, tendency display system device, tendency determination method, display processing method, tendency display method, program, and recording medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19885127

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 11/10/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19885127

Country of ref document: EP

Kind code of ref document: A1