WO2020102459A1 - Systems and methods for evaluating affective response in a user via human generated output data
- Publication number
- WO2020102459A1 (PCT application No. PCT/US2019/061331)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- data
- sensors
- state
- vaporizer
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/179—Human faces, e.g. facial parts, sketches or expressions metadata assisted face recognition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/02—Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
- A61B5/024—Detecting, measuring or recording pulse rate or heart rate
- A61B5/02405—Determining heart rate variability
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/316—Modalities, i.e. specific diagnostic methods
- A61B5/318—Heart-related electrical modalities, e.g. electrocardiography [ECG]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/316—Modalities, i.e. specific diagnostic methods
- A61B5/389—Electromyography [EMG]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4845—Toxicology, e.g. by detection of alcohol, drug or toxic products
Definitions
- the present disclosure relates generally to systems and methods for efficiency improvements in networks comprising humans and machines and for optimizing industrial hemp and cannabis production.
- a human-machine interface system includes a display device, a camera, a communication module, and one or more storage mediums.
- the camera is configured to capture an image.
- the communication module is configured to transmit and receive data between a first remote user device and a second remote user device.
- the one or more storage mediums have, individually or in combination, code sections stored thereon. When executed by one or more processors, the code sections cause a computing device to detect a face of the image captured by the camera, and to extract a plurality of physical features from the face detected in the image. Based at least in part on the extracted physical features, the code sections cause the computing device to generate a first avatar.
- a computer-readable medium stores a computer program executable by a computing device to process facial landmark data.
- the computer program includes code sections for causing the computing device to receive, from a camera, images.
- the computer device is caused to detect a face of a user in an image captured by the camera, extract a plurality of physical features from the face detected in the image, and determine an affective state of the user. Based at least in part on the plurality of physical features, the computer device is caused to generate an avatar representative of the determined affective state of the user and display the avatar.
- a hemp and cannabis production system includes a hemp and cannabis plant, an agronomic sensor, one or more processors, and a communication module.
- the agronomic sensor is configured to generate agronomic data associated with the plant.
- the one or more processors are configured to analyze the generated agronomic data to determine a chemical composition of the plant.
- the communication module is coupled to the one or more processors and configured to transmit at least a portion of the determined chemical composition of the plant.
- a system for evaluating compound intoxication and affective response in a user includes a processor and a non-transitory computer readable medium storing instructions thereon such that executing the instructions causes the system to perform the steps including:
- a system for de-virtualizing a social network includes a first vaporizer and a mobile device.
- the first vaporizer is configured to: deliver a compound via a vapor to a user, determine a physical location of the first vaporizer, determine locations of a group of vaporizers, and send a meetup signal for initiating a convergence of a subset of vaporizers in the group of vaporizers and the first vaporizer.
- the mobile device is configured to: receive the meetup signal, and broadcast a meetup message to the group of vaporizers.
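For illustration only, the location-comparison step behind the meetup signal could be sketched as follows; the haversine great-circle distance, the 5 km radius, and the helper name `nearby_vaporizers` are assumptions not specified by the disclosure.

```python
import math

def nearby_vaporizers(origin, group, radius_km):
    """Select the subset of vaporizers within radius_km of the first
    vaporizer, using the haversine great-circle distance (one
    reasonable choice; the disclosure leaves the method open)."""
    def haversine_km(a, b):
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * math.asin(math.sqrt(h))  # Earth radius ~6371 km
    return {vid: loc for vid, loc in group.items()
            if haversine_km(origin, loc) <= radius_km}

# Hypothetical group: one vaporizer nearby, one hundreds of km away.
group = {"v1": (37.7750, -122.4195), "v2": (34.0522, -118.2437)}
subset = nearby_vaporizers((37.7749, -122.4194), group, radius_km=5.0)
```

The resulting subset would then be the target of the broadcast meetup message.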
- FIG. 1A is a block diagram of a generic system involving an interaction between a biological machine and a computing device, according to some implementations of the present disclosure
- FIG. 1B is a block diagram of a system involving an interaction between a human and a server, according to some implementations of the present disclosure
- FIG. 2 illustrates an example vaporizing device according to some implementations of the present disclosure
- FIG. 3 is a block diagram of a system for determining physiological and affective change associated with a substance, according to some implementations of the present disclosure.
- FIG. 4 is a block diagram of a system for optimizing hemp and cannabis production, according to some implementations of the present disclosure.
- Machine 1 can convert the binary to comma separated values (CSV) form and transmit the data to Machine 2.
- the human can process the data with fewer cognitive processing cycles, less energy, and less time.
- the conversion from binary to CSV by Machine 1 and the reverse conversion (from CSV back to binary) by Machine 2 introduce significant additional processing cycles, energy, and time.
- maintenance and transmission of data in native formats can lead to greater efficiency.
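The overhead of format translation can be shown with a minimal sketch; the sample values and the use of Python's `struct` and `csv` modules are illustrative stand-ins for Machine 1's native binary format.

```python
import csv
import io
import struct

# Hypothetical sensor samples held by Machine 1 in its native binary format.
samples = [0.25, 1.5, -3.0, 42.0]

# Native path: Machine 1 transmits raw bytes; Machine 2 unpacks them directly.
native_payload = struct.pack("4d", *samples)
received_native = list(struct.unpack("4d", native_payload))

# Translated path: binary -> CSV on Machine 1, then CSV -> binary on Machine 2.
buffer = io.StringIO()
csv.writer(buffer).writerow(samples)  # extra conversion step on Machine 1
csv_payload = buffer.getvalue()
received_csv = [float(v) for v in next(csv.reader(io.StringIO(csv_payload)))]  # reverse conversion on Machine 2

# Both paths deliver the same data, but the CSV round trip spends two extra
# conversion steps (and their processing cycles, energy, and time) to do it.
assert received_native == received_csv == samples
```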
- Biological animals communicate information in various native formats.
- Physiological state of a biological machine includes sensory data obtained from the biological machine.
- Affective state of a biological machine describes an emotional experience of the biological machine.
- two humans can consume alcohol, with both the first human and the second human reaching a blood alcohol level of 0.08. Even though the blood alcohol level of both humans indicates the same physiological state, the two humans can experience the 0.08 blood alcohol level differently.
- the first human can be quite unaffected cognitively and/or emotionally, thus showing no outward indication of intoxication.
- the second human, on the other hand, can be very garrulous, smiling, and using multiple facial expressions. Although the physiological response to alcohol is similar in both humans, their affective response (or state) is very different.
- FIG. 1A is a block diagram of a generic system 100 involving an interaction between a biological machine 102 and a computing device 106, according to some implementations of the present disclosure.
- the system 100 includes an interface 104 that allows information capture from the biological machine 102.
- Optimized use of native data formats on machines can also yield superior capabilities.
- a developer can use a host of potential tools and languages.
- the two dominant platforms are iOS and Android.
- developers can use tools such as React Native, which creates a single programming layer in which the developers can develop the app.
- React Native then translates the data into a form that can simulate and interact with the native iOS or Android language on the phone.
- the savings associated with reduced developer hours can come at a cost beyond the expected increase in processing cycles, energy and time.
- These non-native layers often inhibit the refinement and control of the phone, making lower level access and the phone’s full feature set less accessible.
- the non-native layer reduces functional potential and capabilities.
- interactions with humans, which as biological systems are in a sense biological machines, through less native channels not only increase the processing cycles, energy consumption, and time, but also reduce the functional potential and capabilities of the human interaction.
- FIG. 1B illustrates a block diagram of a system 101 involving an interaction between a human 103 and a server 107, according to some implementations of the present disclosure.
- FIG. 1B is a subset of FIG. 1A, where the human 103 is an example of the biological machine 102, the sensor 105 is an example of the interface 104, and the server 107 is an example of the computing device 106.
- the sensor 105 can include a camera that captures movements or facial expressions of the human 103.
- the server 107 can apply algorithms to analyze the movements or facial expressions of the human 103 to determine an affective state of the human 103.
- the human 103 is able to communicate the affective state of the human 103 to the server 107, via the sensor 105.
- a biological machine is thus able to be understood by a computing device.
- This dynamic should be thought of as extensible to other native layers of human interface and transmission protocols, such as, but not limited to the acoustical qualities of vocal production as well.
- HGOD (human generated output data)
- Examples of data that the sensor 105 can collect include electrodermal activity (EDA), cardiac time series such as electrocardiogram (ECG) and heart rate variability (HRV), electroencephalogram (EEG), electromyogram (EMG), photoplethysmography (PPG), pupillometry, facial landmark data, etc.
- HGOD collected by the sensor 105 can be further processed by non-biological machines (e.g., the server 107) for signal decomposition and feature extraction.
- the resulting data sets, in isolation or in a conjoined manner, can be used for objective classification of affective state of the human 103.
- while affective state can be mapped in a lower-dimensional manner, such as the two-dimensional valence-by-arousal continua, it can be described by any number of dimensions.
- the system 101 uses more than two dimensions in describing affective state.
- the server 107 can then map the processed HGOD, either in isolation or conjoined, to the dimensions of affective state to generate an objective and quantitative description of what the human 103 is communicating via the HGOD.
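A minimal sketch of such a mapping appears below; the feature names, the linear form, and the weights are all hypothetical stand-ins for whatever model the server 107 would actually apply to the processed HGOD.

```python
# Hypothetical HGOD feature names; a real system would learn its mapping from data.
FEATURES = ("eda_mean", "hrv_rmssd", "smile_intensity")

# Illustrative linear weights projecting features onto valence/arousal continua.
WEIGHTS = {
    "valence": {"eda_mean": -0.1, "hrv_rmssd": 0.3, "smile_intensity": 0.8},
    "arousal": {"eda_mean": 0.7, "hrv_rmssd": -0.4, "smile_intensity": 0.2},
}

def map_to_affective_space(features: dict) -> tuple:
    """Project conjoined HGOD features onto a 2-D (valence, arousal) point."""
    valence = sum(WEIGHTS["valence"][f] * features[f] for f in FEATURES)
    arousal = sum(WEIGHTS["arousal"][f] * features[f] for f in FEATURES)
    return (valence, arousal)

point = map_to_affective_space(
    {"eda_mean": 0.5, "hrv_rmssd": 0.2, "smile_intensity": 0.9})
```

The same projection generalizes to more than two dimensions by adding weight rows.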
- facial landmark data can contain rich yet descriptive affective and physiological data. Many of these data types appear stable not only cross-culturally but also across species, implying an evolutionarily stable, hardwired rooting. Unfortunately, some of the observed changes in facial landmark observation can vary depending upon context. As such, the server 107 performing conjoined analysis and incorporating contextual data improves the accuracy of conclusions drawn from facial landmark data.
- Contextual data can include data describing an aggregate setting of the human 103. For example, this can include whether the human 103 is around other humans, whether the human 103 is consuming a substance, whether the human 103 recently exercised, etc.
- Contextual data can also lead to determining that the human 103 is communicating subtle information.
- HGOD including facial landmarks can be hijacked by a higher layer of processing to convey intention and to deceive for social signaling.
- the higher layer that sits atop the lower layers can mask the affective and physiological data at times.
- Facial landmark data is provided as an example of HGOD that can mask affective state, but other forms of HGOD can be relatively more or relatively less susceptible to signal interference.
- the system 101 leverages the facial landmark data format and communication channel as the native format to communicate information from the non-biological machine (i.e., the computing device 106 or the server 107) to the biological machine 102 or the human 103.
- HGOD from facial landmarks can be readily transmitted and processed from a distance and with minimal translation, much like the example above where Machine 1 and Machine 2 operate most efficiently maintaining communication in binary. Transmitting facial landmark data without translation reduces processing cycles, energy and time required when communicating from the server 107 to the human 103.
- Humans can process facial expressions (or faces) in about 40 ms, approximately one-fifth the time for a human to process a conscious thought. While the number and types of facial landmarks can vary, in some implementations of the present disclosure, the server 107 can use 128 facial landmarks and can generate a three-dimensional model of their relative positions, including capturing textural elements (such as, but not limited to, wrinkles), as well as movement. In some implementations, in addition to potentially using facial landmark data (facial HGOD), the system 101 can construct a model of an affective space of the human 103 using other affective HGODs, such as EDA and HRV.
- affective HGODs can be used in rendering an estimated facial expression or facial landmark data such that interference from higher layer processing is eliminated. For example, when an individual is taking a picture, the facial HGOD data can be ignored, and other HGOD data can be used to construct an estimated facial HGOD that conveys the individual’s actual affective state, as if the individual were not forcing a smile.
- the system 101 can be used by a doctor to monitor a large number of patients efficiently and effectively, or to readily observe material changes in a given patient’s status immediately without significant mental processing cycles or delays.
- each of the humans 103 can be coupled to multiple sensors 105.
- the multiple sensors 105 can generate HGOD data provided to the server 107 for feature extraction.
- the server 107 can run the extracted features through a machine learning (ML) model, which would map the features into a multidimensional affective space.
- the server 107 can then map the affective state or affective space migration on to a model of facial HGOD.
- the server 107 can then render the model on a screen of the server 107 or on a remote display.
- the server 107 can render the model by adjusting 128 landmark features on a three dimensional output of a facial avatar.
- a doctor can visually scan a number of facial avatars rapidly, for example, at rates of up to 25 faces per second, to look for any anomalies that the doctor might want to address. Having the doctor quickly scan facial avatars allows the doctor to process information much more quickly than painstakingly reading patient data records one at a time.
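The pipeline from sensors to rendered avatar could be sketched as follows; every function body here is a placeholder, and only the 128-landmark, three-dimensional output reflects the implementation described above.

```python
from dataclasses import dataclass

NUM_LANDMARKS = 128  # the disclosure describes a 128-landmark, 3-D avatar

@dataclass
class Avatar:
    landmarks: list  # (x, y, z) positions driven by the inferred affective state

def extract_features(hgod: dict) -> list:
    # Placeholder feature extraction over the raw HGOD channels.
    return [sum(v) / len(v) for v in hgod.values()]

def infer_affective_state(features: list) -> tuple:
    # Stand-in for the ML model mapping features into affective space.
    return (features[0], features[-1])

def render_avatar(state: tuple) -> Avatar:
    # Displace a neutral landmark mesh in proportion to valence and arousal.
    valence, arousal = state
    neutral = [(0.0, 0.0, 0.0)] * NUM_LANDMARKS
    return Avatar([(x + valence, y + arousal, z) for x, y, z in neutral])

# One patient's hypothetical HGOD channels, run end to end.
hgod = {"eda": [0.2, 0.4], "hrv": [0.6, 0.8]}
avatar = render_avatar(infer_affective_state(extract_features(hgod)))
assert len(avatar.landmarks) == NUM_LANDMARKS
```

Running this per patient yields the array of avatars the doctor scans.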
- a doctor who is evaluating a patient attempting to maintain a stoic demeanor can otherwise be fooled by the patient’s attempts to control their outward appearance.
- the doctor can note any divergent affect and address the patient’s needs more effectively.
- a series of avatars can be rendered, each one reflecting the patient’s affective state in a moment in time. These renderings can be linked, creating a time series of affective states. The doctor can then view the time series either in an animated fashion, or avatar by avatar, to quickly apprehend the progression of the patient’s status over time.
- a user can efficiently review their own affective status without technical knowledge.
- the user would be capturing HGOD, from a wearable for example, which would be passed via Bluetooth or other means of transmission to a mobile device (e.g., cell phone).
- the mobile device would conduct feature extraction and machine learning on the HGOD and display the facial avatar on the screen of the mobile device.
- the user without technical knowledge can review a time series of facial avatars to efficiently understand the patterns and trends in their own affective state.
- the user without technical knowledge can share their facial avatar or facial avatar time series with another user via text or other means of transmission, allowing the recipient to efficiently apprehend the affective state of the sender.
- the shared avatar can be contextualized, with reference points such as, but not limited to, events, activities, locations, and other users referenced in the avatar or time series avatar.
- groups of people can have their facial avatars compared, contrasted or aggregated, efficiently providing the recipient of the avatar or time series avatar with insight into a collective.
- the server 107 in the system 101 can group facial avatars of people of a same group to determine similar or dissimilar features within the people of the same group.
- a collective facial avatar can then be developed from the feature analysis to represent the people of the same group.
- the server 107 can send this collective facial avatar, which can be a time series avatar, to a recipient.
- the recipient can efficiently apprehend an affective state or an affective state time series, synchronously and asynchronously, of the people of the same group.
- Examples of people of a same group include a group of people within a room, a series of riders in the same seat of a roller coaster, a group of people sipping from their cups of coffee or inhaling from their vaporizers, etc.
- the facial avatars or visualizations generated can convey affective response from stimuli within these contexts for rapid apprehension by a biological machine (e.g., the biological machine 102 or the human 103) with fewer processing cycles, less energy and time.
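One simple way the feature aggregation described above could be realized is averaging landmark positions across the group; the disclosure does not fix an aggregation method, so this sketch is illustrative only.

```python
def collective_avatar(avatars: list) -> list:
    """Aggregate per-person landmark sets into one collective facial
    avatar by averaging each (x, y, z) landmark across the group."""
    n = len(avatars)
    return [
        tuple(sum(person[i][axis] for person in avatars) / n for axis in range(3))
        for i in range(len(avatars[0]))
    ]

# Two hypothetical two-landmark avatars, e.g., two riders in the same seat.
group = [
    [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)],
    [(2.0, 2.0, 2.0), (3.0, 3.0, 3.0)],
]
merged = collective_avatar(group)
```

The merged landmark set can then be rendered and sent to a recipient like any single avatar, or computed per time step to form a collective time series avatar.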
- affective visualization data (i.e., the facial avatar described above in connection with some implementations of the present disclosure)
- affective visualization data can be used to map social cascades that form and spread within groups of humans. Like other networked machines, humans often solve individual problems collectively with great success. At times, information processing is performed on a single biological machine and is then aggregated, as illustrated through "the wisdom of the crowds." At other times, the information processing is performed in a localized manner on one or more biological machines and then later spread through other machines. An example of this information processing can be observed in herding responses that can prevent a biological machine from ending at the hands of a predator.
- networked biological machines solve problems in an iterative manner through the cascades of machine-influence-weighted information spread across a network of machines. Solving individual problems collectively not only determines a solution to discrete problems, but also serves to generate consensus affective states of a network of biological machines.
- human machines use facial HGOD to send social signals for deception and negotiation.
- a new facial avatar can be generated flagging the divergence.
- the new avatar could be used, for example in an augmented reality application on a mobile device (e.g., a phone or a pair of smart glasses), to flag the points of divergence.
- the new avatar can be displayed in a manner where a user can immediately apprehend the points of divergence.
- time series or live representations of these divergences and their resulting cascades can be used in an augmented reality application that would allow a user to observe influence and her impact across a network of biological machines in real-time.
- Identification of thought or mood leaders with augmented reality through a native format can enable a user to navigate an evolving social context more adeptly, because when the user is flashed a facial avatar image for 40 ms, their brain would mark the significant point before the user becomes consciously aware of it.
- this system can be used with augmented reality (AR) to determine whether a person’s attempts to influence or persuade are being met with resonance. This could be done in person or virtually, such as over video conferencing or a like medium.
- a facial avatar can be generated that reflects a particular affective state and flashed to a human to prompt a faster response. For example, if a machine mounted on a car were to detect ice below the car with LIDAR and wanted to send a signal to a human driver, the machine can flash a facial avatar that conveys an affective state of either significant concern or alertness to the human driver for a faster driver response. The driver would not have to be consciously aware of the facial avatar to respond appropriately.
- a cascade of facial avatars of affective states can be generated by a machine and inserted into media, such as a movie or a videogame, to elicit a cascading response from a biological machine watching the movie or playing the videogame, thereby guiding the affective state of the biological machine in a precise and calculated manner. Guiding the biological machine in this manner can also mitigate the impact of the uncanny valley by changing the immersiveness of the experience, connecting to the biological machine watching the movie or playing the videogame preattentively through a native channel. Further, the system could observe and dynamically respond to the potentially changing affective state of the biological machine in an iterative or guiding manner.
- the facial avatar can take on an array of realistic human forms, including representations of the human users themselves.
- the facial avatar can take on the images of notable or imaginary persons, or even non-persons such as a creature with a highly detailed anthropomorphic face.
- the facial avatar can use a specially constructed neutral face as a basal avatar in order to mitigate impact of biases associated with perceptions of trust or status.
- facial avatars can be generated by the server 107 based on facial HGOD obtained by the sensors 105.
- the facial avatars can be provided to humans via their electronic devices (e.g., smartphone, laptops, desktops, televisions, etc.).
- the facial avatars are chosen as a native format to communicate information relating to affective state quickly to the humans.
- the humans can potentially absorb and process the communicated information within 40 ms.
- the fast information processing indicates that humans can subconsciously process such information.
- computing devices 106 can analyze features of facial HGODs, or in some cases can analyze generated facial avatars, to determine affective states of humans.
- Embodiments of the present disclosure further provide systems and methods for analyzing HGOD data produced in response to one or more humans consuming a substance that can alter the one or more humans’ affective states.
- Affective state altering substances are of interest because many of these substances are consumed by humans without specific understanding of how impairment can occur. For example, while the general mechanism and nature of impairment from alcohol consumption are well studied and understood, the mechanisms and nature of impairment from other compounds, such as but not limited to cannabinoids and terpenes, are less well understood. While tools and techniques for objectively identifying alcohol intoxication are readily available and used by law enforcement amongst other groups, such as breathalyzers or blood tests that determine blood alcohol level, no such effective analogue exists for cannabis. This is in part due to the complexity of the chemical composition of cannabis, but also due to the persistence of these compounds for a period of time post-intoxication.
- cannabinoids such as tetrahydrocannabinol (THC) and cannabidiol (CBD)
- Other phytochemicals, such as terpenes, can add to the complexity. Beyond intoxication, how these compounds impact a user’s affective state, and the nature of the user’s affective migration, are even less well understood than intoxication itself.
- FIG. 2 illustrates an example vaporizing device 200 for delivering a substance to a user.
- the user inserts a cartridge 230 containing a substance 240 into the vaporizing device 200
- the vaporizing device 200 uses an array of onboard cartridge sensors to collect data on one or more of the cartridge, substance, and label specification.
- the cartridge sensors quantify the attributes of the substance 240 along multiple dimensions, including, but not limited to, any one or more of direct and indirect measures of turbidity, color, chemical composition, viscosity, and flavor.
- the vaporizing device 200 can be used with cannabis.
- the substance turbidity is measured using one or more optical sensors emitting light and measuring refraction.
- the substance color is quantified using a spectrometer sensor, detecting absorption.
- the chemical composition of the substance 240 is measured using a nondispersive infrared sensor to identify specific compounds, such as, but not limited to, cannabinoids, terpenes, terpenoids, and flavonoids, by their resonance frequency.
- the substance capacitance is measured using capacitive sensors.
- the sensing can be supplemented with a reference data set (associated with, for example, an RFID tag).
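The multi-dimensional sensing described above could produce a distinctive fingerprint vector along the lines of the following sketch; the channel names, tolerance, and matching rule are illustrative assumptions rather than the disclosed implementation.

```python
def substance_fingerprint(readings: dict) -> tuple:
    """Combine multi-channel cartridge-sensor readings into an ordered
    fingerprint vector. Channel names are illustrative."""
    channels = ("turbidity", "color_absorption", "ir_resonance", "viscosity")
    return tuple(readings[c] for c in channels)

def matches_reference(fingerprint, reference, tolerance=0.05):
    """Compare a measured fingerprint against a reference data set
    (e.g., read from an RFID tag) within a per-channel tolerance."""
    return all(abs(m - r) <= tolerance for m, r in zip(fingerprint, reference))

# Hypothetical normalized readings for an inserted cartridge.
measured = substance_fingerprint(
    {"turbidity": 0.31, "color_absorption": 0.72,
     "ir_resonance": 0.55, "viscosity": 0.40})
assert matches_reference(measured, (0.30, 0.70, 0.55, 0.42))
```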
- the one or more sensors can be activated by a number of triggers.
- the one or more sensors are triggered by a pressure-sensitive electrical or mechanical switch that is activated through the process of cartridge insertion.
- the sensors are triggered by an on/off switch 220 on the vaporizing device 200.
- the sensors are triggered by a computing device, e.g., a cellphone, a virtual assistant, a laptop, a desktop, a server, etc.
- the cartridge sensors are triggered by the activation of an onboard accelerometer within the vaporizing device 200.
- Upon activation, the cartridge sensors generate data describing the physical attributes of the substance 240 contained within the cartridge 230 along multiple dimensions, such as those described above. The data from each of these dimensions generates a distinctive pattern of data for a given substance 240, which can be analyzed.
- Cartridge and substance data generated by the cartridge sensors can be recorded. Brand data and label specifications for the cartridge 230 and substance 240 can be recorded via the vaporizing device 200.
- the vaporizing device 200 uses a sensor tag, such as, but not limited to, an RFID chip embedded in the cartridge 230.
- an app running on a smartphone of the user enables a drop-down menu for selection.
- package or sales transaction receipt information embedded in a machine-readable optical label is captured with sensors, such as a camera or scanner, linking the information to the label specifications.
- brand and product information is captured by obtaining an image of the packaging or cartridge 230, and communication sensors 264 then transmit the image data to a server for further processing.
- the cartridge sensors also, directly or indirectly, measure the volume of the substance 240 within the cartridge 230.
- an ultrasonic emitter generates a sonic chirp into the cartridge 230; the ultrasonic receiver captures the sonic response as the waves travel through the air bubble within the cartridge 230.
- the sensors record the changing nature of the sonic chirp as it passes the air bubble, thereby collecting data for subsequent processing by a server to determine an amount of remaining substance 240.
- a light sensor can measure the growing size of the air bubble.
- a light sensor can measure the substance 240 directly.
- an ultrasonic sensor can measure the substance 240 directly.
- the quantity can be determined by an estimate developed by a trained machine learning (ML) model using visual input.
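As one illustrative geometry-based complement to these measurements, remaining volume in an idealized cylindrical cartridge could be computed from the measured air-bubble height; real cartridge geometry will differ, so this is a sketch under stated assumptions.

```python
import math

def remaining_volume_ml(cartridge_height_mm, inner_diameter_mm, bubble_height_mm):
    """Estimate remaining substance in a cylindrical cartridge from the
    measured air-bubble height (e.g., derived from an ultrasonic
    time-of-flight or optical reading). The cylinder geometry is an
    illustrative assumption."""
    radius_mm = inner_diameter_mm / 2
    liquid_height_mm = cartridge_height_mm - bubble_height_mm
    volume_mm3 = math.pi * radius_mm ** 2 * liquid_height_mm
    return volume_mm3 / 1000.0  # 1 mL = 1000 mm^3
```

For example, a 50 mm tall, 10 mm bore cartridge whose air bubble has grown to fill it entirely reports 0 mL remaining.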
- the user may begin vaping.
- the user activates the vaporizing device 200, which turns on a vaporizing element 280 and vapor sensors 210b, through a number of potential triggers.
- a thermal blanket 270 can be provided to protect sensors and other sensitive electronics from heat produced by the vaporizing element 280.
- the vaporizing device 200 can house a cartridge warmer 250 that can dynamically adjust substance viscosity to a desired level depending upon factors such as substance fingerprint, ambient temperature, or other factors.
- the vaporizing element 280 and vapor sensors 210b are triggered by a sensor on a mouthpiece 210a of the vaporizing device 200.
- the sensor can be a pH sensor, an airflow sensor, a pressure sensor, or another sensor.
- the vaporizing element 280 and vapor sensors 210b are triggered by an on/off switch 220 on the vaporizing device 200.
- the sensors are triggered by a computing device, e.g., a smartphone, or some other device.
- the vaporizing element 280 and vapor sensors 210b are triggered by the activation of an onboard accelerometer sensor within the vaporizing device 200.
- the vapor sensors 210b can record a continuous description of the inhalation of vapor 290 from the vaporizing device 200 into the user's respiratory apparatus or lungs.
- activation of the vaporizing element 280 can trigger the activation of cartridge sensors, vapor sensors 210b, physiological sensors 263, environmental sensors 262, performance sensors 261, affectivity sensors, and/or communication sensors 264, the activation of which may in turn prompt data transfer and storage between system components.
- a specific request generated by another component of the system can trigger the activation of cartridge sensors, vapor sensors 210b, physiological sensors 263, environmental sensors 262, performance sensors 261, affectivity sensors, and/or communication sensors 264, the activation of which may in turn prompt data transfer and storage between system components.
- the vaporizing device 200 of FIG. 2 can be incorporated into an overall system 300 depicted in FIG. 3 for detecting, classifying, and reporting intoxication and affective change associated with a substance ingested by a user 340.
- the system 300 includes the user 340 who consumes a substance 240.
- the substance 240 can get into a body of the user 340 via contact with a solid/liquid form of the substance 240 or by inhalation of the vapor 290 of the substance 240.
- the substance 240 can be a combination of multiple chemical substances or a combination of different chemical substances delivered via different cartridges.
- the user 340 can provide physiological data and/or affective data to a logic circuitry 350.
- the logic circuitry 350 is a computing device, e.g., a server, an application specific integrated circuit, a laptop computer, a cloud server, etc.
- the logic circuitry 350 can collect other information, e.g., contextual data, to further facilitate classifying intoxication or affective change of the user 340.
- the other information collected can include data from social media accounts and other internet-available information 351 of the user 340, environmental characteristics 352, personal characteristics 353, user subjective responses 354, and data from other systems 360.
- User subjective responses 354 can include user subjective ratings, user response times, user survey data, etc.
- the logic circuitry 350 can collect information from disparate sources to contextualize affective data and physiological data obtained from sensors measuring HGOD data of the user 340.
- the logic circuitry 350 can be used to adjust heat energy 310 of the vaporizing device 200 such that an affective state of the user 340 can be maintained at a certain level. In some implementations, the logic circuitry 350 can adjust a composition of the solid/liquid substance 240 being delivered to the user 340 to maintain the affective state of the user 340 at the certain level.
- Embodiments of the present disclosure provide a system and method for detecting, classifying and reporting intoxication and affective change associated with cannabis consumption.
- the system 300 can include one or more sensors that provide physiological data and/or affective data to the logic circuitry 350.
- the logic circuitry 350 can ingest the data from the one or more sensors, extract features from the data, apply one or more machine learning algorithms to the data to obtain an output for informing the user 340 of her intoxication level or for maintaining and controlling the intoxication level of the user 340 via adjusting delivery of cannabis to the user 340.
- each component of the system 300 can be one or more components such that the logic circuitry 350 can monitor multiple users using multiple substance delivery devices or vaporizing devices 200.
- the sensors can initially be used to build data sets that record a number of physiological and affective measures such as, but not limited to, ECG, PPG, EMG, EDA, HRV, pupilometry, facial landmark data, facial texture data, and the movement and dynamics of each.
- the data can be obtained by affixing or training the sensors on human subjects in a laboratory-like setting, or by gathering them in an unobtrusive manner in a naturalistic setting where subjects are less cognizant of the sensing.
- a given human subject can be monitored to gain baseline physiological and affect data.
- the subject is then given cannabis, with a specific phytochemical profile, including but not limited to specific cannabinoids and terpenes in known concentrations and ratios, and new readings are taken. This process is repeated over a large number of subjects.
- the phytochemical profile is changed, reflecting new concentrations and ratios, and a group of human subjects consumes the compound after generating baseline data. The new response data is recorded. The process is then repeated with another specific phytochemical profile.
- the logic circuitry 350 ingests the response data from multiple recordings and extracts and stores relevant features within the data.
- the logic circuitry 350 can then apply machine learning to analyze the recorded data.
- the machine learning applied can involve using a neural network to analyze the data sets.
- the logic circuitry 350 can then identify patterns of physiological and affective response which it associates with affective and intoxication states. These data sets can supplement reference data sets which can include subject surveys, questionnaires, or less obtrusive observation.
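The ingest-then-classify pipeline described above can be sketched as follows. This is a minimal illustration only: a simple nearest-centroid rule stands in for the neural network the disclosure mentions, and the feature names, sensor units, and centroid values are all hypothetical.

```python
# Illustrative sketch of the ingest -> feature extraction -> classification
# pipeline. A nearest-centroid rule stands in for the neural network
# described in the disclosure; feature names and values are assumptions.
import math
import statistics


def extract_features(hrv_ms, eda_us):
    """Reduce raw HRV (ms) and EDA (microsiemens) series to summary features."""
    return [statistics.mean(hrv_ms), statistics.stdev(hrv_ms),
            statistics.mean(eda_us), statistics.stdev(eda_us)]


def classify_state(features, centroids):
    """Assign the feature vector to the nearest labelled centroid,
    e.g. 'baseline' vs. 'intoxicated'."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(features, centroids[label]))


centroids = {  # hypothetical per-cohort centroids learned from recordings
    "baseline":    [850.0, 50.0, 2.0, 0.3],
    "intoxicated": [700.0, 30.0, 5.0, 1.0],
}
```

In practice the reference centroids (or a trained network) would be derived from the baseline and post-consumption recordings gathered over the subject cohorts described above.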
- the subjects are handed a vaporizer (e.g., the vaporizing device 200) with onboard physiological and affective sensors such as, but not limited to, an EDA sensor, a PPG sensor, and a camera.
- the vaporizer can record phytochemicals, concentration and ratios of the phytochemicals, and consumption behavior of each subject. Consumption behavior includes a schedule of consumption, amount consumed per consumption event, etc. Referring to FIG. 3, as the user 340 holds and uses the vaporizing device 200, baseline physiological and affective measures are taken.
- an onboard camera system captures facial landmark data.
- the facial landmark data can be used for biometric identification, assigning the captured data to the appropriate subject, and enabling personalized features and modes of system access.
- the captured facial data can be used in a manner that classifies the subject to prevent unauthorized usage, for example, because the system determines that it is highly probable that the user 340 is underage for consumption.
- the same facial image data can be used to establish a baseline facial affect. That is, at the beginning of a consumption event, the facial image data captured by the onboard camera system can be used to determine a baseline prior to consuming the substance 240. After consumption, the sensors can collect, in a time series, additional facial image data for the logic circuitry 350 to establish a transformation of an affective state of the subject.
- the camera or cameras can be on another device or a multitude of devices.
- a pattern in individual measures such as HRV and skin conductance or facial landmark migration can be classified reflecting a common pattern of response to a phytochemical compound.
- the pattern in the union of individual measures of subject response can also be classified, providing additional affective and intoxication insights.
- the nature of these responses and insights can be trained to specific phytocannabinoid compounds, and particular subject cohorts that exhibit patterns of distinct response.
- the subject can have their data collected from the sensor laden vaporizer while consuming, for example, from the camera collecting facial data, and then have subsequent facial data collected from another device, such as a smartphone of the subject.
- the smartphone can obtain subsequent facial data at an opportune time when the subject checks the smartphone to inquire how the affective state has changed.
- the physiological sensors could be embedded in wearables, a phone, or other peripheral device or networked sensors.
- the onboard camera could be used as a novel means to facilitate the establishment of a data account.
- the owner or possessor of a sensor-laden vaporizer (e.g., the vaporizing device 200) can use in-app offerings from a data analytics service to collect and analyze data collected by the vaporizer, allowing the owner to share data from the vaporizer with a friend and also potentially share the vaporizer with the friend.
- the vaporizer and app can enable the owner to consume and record her facial data as well as separately allow the friend to consume and record facial data.
- Facial data can be recorded in, for example, a 128-landmark, three-dimensional representation using a 640x480 pixel image, amongst other means.
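A 128-landmark, three-dimensional representation like the one described above can be held as a simple list of 3-D points, and the "facial landmark migration" mentioned elsewhere in this disclosure can then be computed as a displacement between a baseline capture and a later capture. The sketch below is a hypothetical illustration; the mean-displacement measure and all names are assumptions.

```python
# Sketch of the facial-landmark representation described above: 128 landmarks,
# each a 3-D point, derived from a 640x480 image. Names and the particular
# displacement measure are illustrative assumptions.
import math

IMAGE_W, IMAGE_H, N_LANDMARKS = 640, 480, 128


def landmark_migration(baseline, current):
    """Mean per-landmark displacement between a baseline capture and a
    later capture -- one possible 'facial landmark migration' measure."""
    assert len(baseline) == len(current) == N_LANDMARKS
    total = sum(math.dist(b, c) for b, c in zip(baseline, current))
    return total / N_LANDMARKS
```

A time series of such migration values, taken before and after a consumption event, would feed the affective-state classification described above.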
- the owner of the vaporizer can subsequently be prompted in-app with an image of her friend, amongst other means, and a question asking whether to text a link to her friend enabling the automatic set up of an account.
- the friend who has already enjoyed the vaporizer experience can then easily download the app populated with their own data, at their convenience. This would improve the convenience and experience for all parties and increase the virality of adoption.
- the logic circuitry 350 can be used for descriptive purposes; that is, the logic circuitry 350 can take physiological and affective data and classify a state of the user 340.
- the logic circuitry 350 can describe a nature and intensity of cannabis intoxication and the affective state of the user 340.
- the description can be provided to the user 340 on a real-time basis.
- the description can be used by the user 340 or another party (e.g., a friend of the user 340 sharing the vaporizing device 200 with the user 340) to understand a nature and intensity of a person’s intoxication for a multitude of purposes.
- the logic circuitry 350 can make predictions on future affective states of the user 340, as well as a future nature and magnitude of an intoxication level of the user 340.
- the prediction can be used to guide consumers of the substance 240 such that the consumers make informed decisions prior to consumption of the substance 240.
- a nature and magnitude of cannabis intoxication can be determined from a single data type. For example, patterns in the facial landmark and texture data, the facial landmark migration, movement, and kinesthetic patterns can be used to characterize the nature and intensity of cannabis intoxication.
- classification of the nature and magnitude of intoxication can be accomplished on an app on a smartphone, or by an Internet of Things (IoT) device.
- the classification of the nature and magnitude of intoxication can be realized via a camera within a facility, such as but not limited to a security camera.
- classification of the nature and magnitude of intoxication can be determined via EDA conductance sensors in a steering wheel, or a combination of EDA conductance sensors and a camera or other sensors within a car.
- the system can be used as the basis for an alternative to a breathalyzer or blood test to determine cannabis intoxication.
- the logic circuitry can also classify or refine a classification of cannabis intoxication through the collection of one or more subjects’ observation of affective stimuli.
- the logic circuitry 350 shows the user 340 facial affect data in the form of a face, either real or manufactured.
- the logic circuitry 350 can then observe and record responses of the user 340 made post-consumption to the stimuli (the shown facial affect data).
- some embodiments of the present disclosure can be applied to other compounds or chemicals, such as coffee, etc.
Dynamic Modulation of Vapor to Change an Affective State of a User
- Embodiments of the present disclosure provide systems and methods for dynamic modulation of affective state and/or intoxication/sobriety of the user 340.
- the system 300 of FIG. 3 can include one or more drug delivery systems (e.g., the vaporizing device 200), one or more sensors for gathering physiological and affective data, the logic circuitry 350 including one or more data storage devices and processors, and output components.
- the output components can be in a single form or distributed.
- the output devices can include lights, visual displays, speakers, haptic components, or any combination thereof.
- the system 300 further includes a mobile device of the user 340.
- the mobile device can execute an app.
- the system 300 can further include wearable devices (e.g., a wearable smart watch, wearable smart headband, wearable smart jewelry like rings, bracelets, etc., or any combination thereof).
- the system 300 can include any other IoT device, including those owned and/or managed by others apart from the user 340.
- the system 300 includes the vaporizing device 200 with multiple cartridges (e.g., two, three, five, ten, fifty, etc.) that each contains one of a variety of compounds such as, for example, cannabinoids, terpenes, nicotine, other phytochemicals and mycochemicals, and/or other compounds, or any combination thereof. That is, each cartridge includes a single substance therein. Alternatively, one or more of the cartridges can include a mixture of two or more substances.
- the system 300 can dynamically modulate affective state and/or degree or nature of intoxication/sobriety across a set of changing needs of the user 340.
- the changing needs of the user 340 can include a diurnal cycle, a need for sleep or wakefulness, management of work, a sudden need to parent, or a sudden need to address other responsibilities, dynamic shifts in interests, such as a desire to workout or socialize while maintaining a desired affective state or level or nature of intoxication/sobriety.
- the system 300 can automatically vary ratios of substances in a vapor (e.g., the vapor 290) based on the environmental characteristics 352 which can include a time of day, a geographical location (e.g., at a home location of the user 340, at a work location of the user 340, at a home location of a parent of the user 340, at a home location of a friend of the user 340, at a mall, at a movie theater, etc.).
- the system 300 can automatically vary ratios of substances in the vapor based on other factors such as patterns of past use, with or without user input or with or without user modifications.
- the logic circuitry 350 can set a vapor mixture in the morning to have a ratio of THC to CBD of 1 to 1, whereas the logic circuitry 350 can set a vapor mixture in the evening to have a ratio of THC to CBD of 10 to 1.
- the logic circuitry 350 can set a vapor mixture when the logic circuitry 350 determines that the user 340 is at the home location of the user 340.
- the vapor mixture at the home location of the user 340 can be set to a ratio of THC to CBD of 20 to 1.
- the logic circuitry 350 can set the vapor mixture when it determines that the user 340 is at the home location of the parents of the user 340.
- the vapor mixture at the home location of the parents of the user 340 can be set to a ratio of THC to CBD of 5 to 1, etc. That way, the system 300 can dynamically adjust a compound that the user 340 consumes based on the environmental characteristics 352 obtained. For example, environmental stressors on the affective state of the consumer can become determining factors for a model.
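The context-driven mixture examples above (morning versus evening, home versus the parents' home) can be sketched as a small rule table. The lookup logic, the precedence of location over time of day, and the function names below are assumptions for illustration; the disclosure itself would derive these rules from a model.

```python
# Minimal sketch of the context-driven THC:CBD mixture rules described
# above. The rule table mirrors the examples in the text; the lookup
# logic and precedence are illustrative assumptions.


def thc_cbd_ratio(hour, location):
    """Return a THC:CBD ratio for the current context. Location rules
    take precedence over time-of-day rules here -- a design choice for
    this sketch, not something the disclosure specifies."""
    location_rules = {"home": 20.0, "parents_home": 5.0}
    if location in location_rules:
        return location_rules[location]
    return 1.0 if hour < 12 else 10.0  # morning 1:1, evening 10:1
```

In a full implementation, the environmental characteristics 352 (geolocation, time of day, and other stressors) would feed a learned model rather than a fixed table.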
- a vaporizer uses two or more cartridges together to create an infinite variability of ratios of compounds that can be delivered by the vaporizer.
- two cartridges one containing THC, the other containing CBD, can be coupled in the vaporizing device 200.
- the vaporizing device 200 drives two vaporizing elements associated with respective ones of the cartridges.
- the vaporizing device 200 can drive the vaporizing elements using pulse width modulation, with distinct duty cycles for each, generating different rate levels of vaporization for each of the two cartridges.
- the vaporizing device 200 can adjust the duty cycles to establish the new vapor mixture.
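The duty-cycle adjustment described above can be sketched as a mapping from a target compound ratio to per-cartridge duty cycles. The sketch assumes, purely for illustration, that vapor output scales linearly with duty cycle and that both cartridges vaporize at the same rate per duty unit; the names and the 0.9 duty cap are hypothetical.

```python
# Sketch of mapping a target THC:CBD ratio to per-cartridge PWM duty
# cycles. Assumes linear output vs. duty cycle and equal per-duty-unit
# vaporization rates for both cartridges -- illustrative assumptions.


def duty_cycles(ratio_thc_to_cbd, max_duty=0.9):
    """Return (thc_duty, cbd_duty) with the larger channel pinned at
    max_duty and the other scaled to hit the requested ratio."""
    if ratio_thc_to_cbd >= 1.0:
        return max_duty, max_duty / ratio_thc_to_cbd
    return max_duty * ratio_thc_to_cbd, max_duty
```

Because the ratio argument is continuous, this scheme yields the continuum of vapor mixtures the two-cartridge design is said to support.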
- the system 300 can allow the user 340 to consume an infinite number of different ratios of compounds without having to buy or insert additional cartridges (i.e., additional cartridges in addition to the two already in the vaporizing device 200).
- a level or granularity of adjustment of the ratio of the two cartridges can be continuous. In some implementations, the level or granularity of adjustment of the ratio of the two cartridges can be discontinuous.
- the level or granularity of adjustment of the ratio of the two cartridges can be at discrete intervals or even binary, or any combination thereof.
- the multi-cartridge setup for the vaporizing device 200 that supports an infinite ratio vapor mixing enables the system 300 to precisely, efficiently, and conveniently deliver vapor to meet a modelled need of the user 340.
- the system 300 can include substances or compounds that mitigate and/or reverse impacts of intoxicants.
- the user 340 first consumes a first vapor mixture with a relatively high ratio of the intoxicating cannabinoid THC and subsequently becomes intoxicated. Shortly thereafter, the user 340 desires sobriety and/or a lesser level of intoxication. The user 340 then consumes a second vapor mixture with a relatively high level of THC antagonistic compound, thereby speeding up or accelerating sobriety.
- antagonistic compound consumption can be used to mitigate and/or reduce an affective state of a user of THC, thereby, for example, resulting in a sobering impact/effect.
- Other compounds or combinations of compounds can also be used to mitigate and/or reduce an affective state of a consumer/user of THC. Examples of such other compounds include cannabinoids, terpenoids, and other compounds.
- Benefits of being able to accelerate sobriety are immeasurable. For example, a parent can enjoy a high THC vapor socially at a party. When the parent returns home, he is surprised by a sick child that needs care.
- the parent can use the vaporizing device 200, according to some implementations of the present disclosure, to increase a ratio of a sobering substance so as to consume a second vapor with a different ratio of substances to aid in sobering the parent.
- the adjustment in vapor mixture ratios to aid a consumer/user from switching from insobriety to sobriety can be automatically or manually initiated and supported with guidance from the system 300 via an app executing on a mobile device of the consumer/user and/or via the vaporizing device itself.
- the system 300, via the logic circuitry 350, which operates as an integrated distributed classification, prediction, and response system, can leverage affective, physiological, environmental, and consumption data collected by sensors and analyzed with machine learning techniques to provide personalized decision support.
- the system 300 can generate a modelled prediction of an amount of time needed for the user 340 to return to or achieve a given level of sobriety.
- the system 300 can provide live behavioral guidance based, at least in part, on vapor mixture(s) consumed by the user 340 and the vapor mixture to be consumed by the consumer to achieve sobriety.
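One simple way to sketch the "time to sobriety" prediction mentioned above is a first-order exponential decay of the modelled intoxication level. This is a hypothetical stand-in for the cohort-based model the disclosure describes; the half-life parameter and all names are assumptions.

```python
# Hypothetical sketch of the time-to-sobriety prediction described above,
# using first-order exponential decay of a modelled intoxication level.
# The half-life would in practice come from the user's cohort model.
import math


def hours_to_target(current_level, target_level, half_life_h=1.5):
    """Hours until an exponentially decaying intoxication level falls
    from current_level to target_level."""
    if current_level <= target_level:
        return 0.0
    k = math.log(2) / half_life_h
    return math.log(current_level / target_level) / k
```

Consumption of a THC-antagonistic mixture, as described above, could be modelled as shortening the effective half-life, thereby reducing the predicted time to the target level.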
- the user 340 can input a level of intoxication and adjust the desired sobriety levels without modelled assistance. In some implementations, such adjustments can be implemented via the vaporizing device 200.
- the system 300 can use machine learning to classify the user 340 into a like cohort of users whose data informs personalized prediction for dosing, impact and timing.
- the system 300 can obtain current affective, physiological, and environmental data of the user 340 to generate modeled prediction of timing of intoxication, timing of sobriety, etc.
- the system 300 can use pulse width modulation or like techniques to modify an intensity of a vapor mixture and a time duration and inhalation amount required to consume a desired level of a compound or compounds.
- Pulse width modulation is a technique used for electrical signal manipulation in which continuously increasing or decreasing levels of output can be generated from a binary signal input in proportion to the duty cycle.
- the pulse width modulation approach is able to generate a continuum of variable vaporization.
- the system 300 can apply independent/different pulse width modulation to each of the cartridges in the vaporizing device 200 such that the output vaporization from each cartridge can be set independently or set in coordination with any other cartridge.
- Embodiments of the present disclosure provide a system and method for facilitating a navigation of a complex social space with an array of constraints and opportunities.
- Most social networks are largely virtual where participants engage remotely, often from their own homes, using an electronic medium (e.g., mobile device, tablet, computer, etc.).
- Social media interactions typically require a reason or rationalization for connection and purpose. For example, Facebook “friends” may be connections with whom you went to school and want to share life experiences with images, comments, and other posts or likes.
- Instagram connections tend to be a broad collection of people with whom one shares common aesthetic interests, passions, or hobbies, and with whom one wants to share one's curated collection of aesthetically appealing or otherwise interesting images and videos.
- Embodiments of the present disclosure provide an electronic social network that facilitates de-virtualization of social connections.
- the vaporizing device 200 includes assisted GPS to approximately place the user 340 in a mapped physical region geospatially.
- the vaporizing device 200 can use sensors to refine the approximate placement for a very precise location (e.g., sub-decimeter location) on a three-dimensional geospatial map, relative to other objects, landmarks, people, and/or others who are nodes of the social network of the user 340.
- Node information can be determined using the social media accounts 351 of the user 340.
- the system 300, via the logic circuitry 350, can enable nodes of the social network of the user 340 to efficiently converge.
- one node (e.g., a user of a vaporizer) can request to converge with another node at a meeting location.
- the meeting location can be, for example, at a concert.
- the system 300 can provide instructions and/or guidance for the two nodes to converge (e.g., by providing directions using an app executing on a mobile device). This convergence and de-virtualization of the social network can be used to enjoy a cannabis sharing ritual in person or for another purpose, for example, meeting for lunch in a park.
- the system 300 can use a number of sensing methods to determine the geolocation of a node/user.
- the system 300 can use a Bluetooth sensor (or any wireless sensor technology) in one location to gauge signal strength to another Bluetooth sensor at another node.
- the system 300 can obtain signal strength between the Bluetooth sensors over time.
- a time staggered triangulation is accomplished using the Bluetooth sensors.
- multiple antennas on a single Bluetooth device or multiple Bluetooth sensors on a single vaporizer, or the union of multiple Bluetooth sensors across multiple devices can be used.
- the assisted GPS would guide the nodes to within the 100-meter range, and the sub-decimeter geolocation on the three-dimensional map would enable the system 300 to guide the nodes to convergence.
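The Bluetooth signal-strength ranging described above is commonly modelled with the log-distance path-loss formula: measured RSSI falls off logarithmically with distance from a calibrated 1-meter reference. The sketch below is illustrative; the calibration constants (RSSI at 1 m, path-loss exponent) vary by hardware and environment and are assumptions here.

```python
# Sketch of Bluetooth RSSI-based ranging using the standard log-distance
# path-loss model. Calibration constants are illustrative assumptions.


def rssi_to_distance_m(rssi_dbm, rssi_at_1m_dbm=-59.0, path_loss_exp=2.0):
    """Estimate distance in meters from a measured RSSI reading."""
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10 * path_loss_exp))
```

Repeating such estimates over time from multiple sensors or antennas yields the distance samples needed for the time-staggered triangulation described above.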
- the nodes can experience the guidance from the system 300 through any number of modalities. For example, as the nodes converge on a common location, a haptic element (e.g., a vibrating motor) in the vaporizer can signal a reduction in distance or which direction a user should head. Similarly, light or sound emitted from the vaporizer can aid in guiding the nodes/users to converge together. Alternatively, a smart phone or a wearable device can aid in providing guidance.
- All of the calculations required to provide guidance could be done remotely in the cloud, e.g., at the logic circuitry 350.
- calculations can be performed on the vaporizer (e.g., the vaporizing device 200).
- the vaporizing device 200 can include, for example, a tensor processing unit (TPU) for performing calculations.
- the calculations can be performed by another computing device, such as, for example, a mobile device, a tablet, etc.
- the system 300 calculates the guidance using a distributed approach, leveraging the collective processing power of all devices associated with the system 300 and the social network.
- the social network and the system 300 can also function as a mesh network thereby optimizing data flow and communication among nodes within the social network.
- the social network used in this context includes computing devices like mobile phones, tablets, vaporizers of other users associated with the user 340 via the social media account 351 of the user 340.
- Nodes of the social network both direct (e.g., friends) and indirect nodes (e.g., friend of friends), can be guided to meet at a particular space at a precise time, by guiding nodes to a location using other devices or beacons as reference points. As such, meeting a friend (or a friend of a friend that the user 340 has not yet met) can be more convenient.
- sensors such as, for example, optical sensors and RFID sensors, onboard the vaporizer, mobile phone, or other device can allow users to authenticate the identity of another node to make meeting safer.
- Verification of the identity of the other party can be performed, for example, by confirming that one or more biometrical measurements (e.g., fingerprint, face scan, iris scan, etc., or any combination thereof) of the other party matches a corresponding biometric measurement that is stored in the system 300 and is associated with the node.
- the system 300 is configured to authenticate users to aid in enabling safe purchases of restricted goods (e.g., cannabis, alcohol, etc.) as well as enabling financial transactions (e.g., using credit cards or the like).
- the system 300 authenticates the user 340 by using the vaporizer, mobile device, or other component of the system 300 to capture an image of a credit card, identification (ID) card, or other official document of the user 340. This image is fed through a machine learning model with optical character recognition as well as other visual pattern recognition to digitally transcribe and ingest the requisite features and information to compare reference datasets. This process authenticates the person, documents, credit card, and status on the system 300.
- a node of the system can select a social identifier, such as, for example, a personalized music snippet or song or other audio output.
- the social identifier of the node can be displayed/played upon a given trigger, such as, for example, upon vapor inhalation by the node using a connected vaporizer.
- the node is able to express its social identity through "walk-on music" when consuming the vapor.
- a social identifier can be used to help other nodes identify and/or authenticate the node/user.
- the vaporizer (e.g., the vaporizing device 200) is associated with the user 340 such that the vaporizer is able to communicate with the user 340 as the user 340 consumes vapor.
- the vaporizer can signal an amount of vapor that the user 340 has consumed through a live output element, such as a haptic component or light.
- the output element provides sensory guidance on the vaporizer and/or a computing device (e.g., a mobile device, tablet, etc.).
- An app executing on a mobile device can also reflect consumption data, but with less sensory immediacy than a live vaporizer feedback provided by the vaporizer.
- complementary sensor data from the devices of other users can help improve accuracy.
- the system 300 can also use complementary sensor types, such as but not limited to accelerometer and gyroscopic data, to further refine a user’s geolocation as the user moves. Amongst other things, this would provide precise triangulation even in dynamic contexts.
- the system 300 can use sensor data to estimate the progression of the other node(s) toward convergence and indicate such estimate to at least some of the nodes (e.g., via an app, the vaporizer, etc. or a combination thereof). For example, if the accelerometer data (e.g., from a vaporizer) indicates a particular set of features, and the convergence were slowed, a model could indicate that the other node was slowed or stopped (e.g., the other node/user slipped on the ice and needs help, the other node ran into another friend on the way to converge/meet, etc.).
- the system 300 can provide support by suggesting a status of the other node (e.g., fallen, meeting with another node, etc.), by providing a current location of the other node (e.g., the site of the modelled fall, etc.).
- the system is able to bring together nodes of a social network in a synchronized collective experience through coordinated communications. For example, a collection of nodes at a concert can experience a collective response to a stimulus, such as a musical rhythm or drum beat, causing all vaporizers, phones, or ancillary devices to light up, vibrate, or generate another output in unison.
- an application programming interface can allow a party, such as a musician, to dynamically control experiential elements for nodes of a social network or components of the system 300.
- the musician can dynamically control the color of lights or intensity on vaporizers, phones or other devices within the social network.
- the system itself would become part of the show as waves of light wash over segments of the audience in which the nodes are present.
- nodes/users themselves initiate a shared collective experience such as a geospatially manifested output, such as, for example, a wave of light spreading across a crowd or emanating like a ripple from themselves, much like people standing and then sitting at a football game generating an engaging illusion.
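The ripple-of-light effect described above can be sketched by giving each device an activation delay proportional to its distance from the originating node, so the output spreads outward like a wave. The wave speed and all names below are illustrative assumptions.

```python
# Sketch of the ripple effect described above: each device's light fires
# after a delay proportional to its distance from the origin node, so the
# activation spreads outward like a wave. Constants are illustrative.
import math


def activation_delay_s(device_xy, origin_xy, wave_speed_m_s=20.0):
    """Seconds after the trigger at which this device should light up."""
    return math.dist(device_xy, origin_xy) / wave_speed_m_s
```

In a deployment, the geolocations established by the guidance system above would supply the device coordinates, and a coordinating server (or mesh broadcast) would distribute the trigger time.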
- shared experiential data can be collectively experienced across a group of people, such as, for example, sound, music, or system driven vaporization.
- collections of nodes can consume a substance in-person together (e.g., by using their respective vaporizers).
- collections of nodes can consume the substance at a same time but at different locations via an electronic network.
- the nodes can share data and other common experiential elements such as dynamic information, games, entertainment, messaging, video, audio and other forms of communication.
- geospatial node density maps can be used to drive people to gather at a single location or multiple locations for a spontaneous social event (e.g., involving consumption of substances via vaporizers).
- vendors of experience such as, for example, restaurants, could guide nodes to a geospatial and temporal convergence.
- a single node or a group of nodes can guide the collective nodes to a geospatial and temporal convergence for any reason.
- affective or other data types derived from a node, or modelled from a collection of nodes, can be used for the modulation of devices within a relevant geospatial and temporal parameter, such as lights, temperature, noise, and sound in a venue.

Hemp and Cannabis Production
- FIG. 4 illustrates a system 400 for optimizing industrial hemp and cannabis production, according to some embodiments of the present disclosure.
- the system 400 includes a standard array of agronomic sensors 402 and data collection elements, such as, but not limited to, grow light data collection elements, nutrient data collection elements, watering data collection elements, ambient air data collection elements, as well as non-standard data collection devices in the form of optical sensors, and in some embodiments electronic nose chemo-sensors. These optical sensors can collect light from both visible and non-visible spectra.
- these optical sensors can be used to collect data for generation of growth and plant health metrics such as normalized difference vegetation index (NDVI). Further, these optical sensors can be trained on specific parts of the hemp or cannabis plant, for example, the optical sensors can be trained on inflorescence of the plant.
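As a point of reference, NDVI is conventionally computed from near-infrared (NIR) and red reflectance as (NIR − red) / (NIR + red). A minimal sketch in Python; the function name and the guard against a zero denominator are illustrative, not part of this disclosure:

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized difference vegetation index from two band reflectances.

    Values near +1 indicate dense healthy vegetation; values near 0 or
    below indicate stressed vegetation or non-vegetated surfaces.
    """
    if nir + red == 0:
        # avoid division by zero for fully dark pixels (illustrative choice)
        return 0.0
    return (nir - red) / (nir + red)
```

A healthy canopy reflecting strongly in NIR (e.g., 0.6) and weakly in red (e.g., 0.2) yields an NDVI of 0.5, while equal reflectance in both bands yields 0.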
- the agronomic sensors 402 can collect data for further processing by the computing device 406
- the data collected by the agronomic sensors 402 can be used to determine the chemical composition of the inflorescence of the plant and the resins that the inflorescence produces.
- lean spectrographic sampling with infrared or ultraviolet light can be used to capture the nature of chemical bonds indicative of the chemical composition of the inflorescence and resins. It can also indicate the concentration of phytochemicals and provide guidance for harvest timing or necessary change of environmental variables.
- the actuators 404 in the system 400 can automate watering, lighting, fertilizer, etc., of the plants based on computations by the computing device 406
- Every cannabis plant cultivar, often referred to as a "strain" by industry professionals, is different. As the strains grow and pass through different phenological states, they exhibit different sensitivities to inputs and exhibit materially different traits. For example, as a given cultivar matures to the point of flowering, the inflorescence and associated resins can differ wildly from another cultivar with respect to color, texture, shape, and scent. One cultivar may consistently develop a purple hue in the early stages of flowering, while another may develop a white hue. Furthermore, as they continue to mature phenologically, the characteristics of the inflorescence and resin continue to change in a material way.
- the cultivar exhibiting a white hue to the inflorescence and resin may begin to exhibit a golden hue instead.
- the color change may correspond with the THC oxidizing into CBN.
- the array of sensors (e.g., the agronomic sensors 402) within the system 400 can record the changes collected by optical sensors and store this data in a database. This data would also be associated with other data identifying the particular cultivar and the phenological state.
- olfactory chemo-sensors would also record and store the data quantitatively describing the scent.
- the scent data would be stored in a database. This data would also be associated with other data identifying the particular cultivar and phenological state.
- phytochemicals include, but are not limited to, terpenoids and cannabinoids.
- other observable phytochemicals correspond to changes in terpenes and cannabinoids.
- the system 400 uses electromagnetic radiation, optical sensors, or optical sensors and chemo-sensors, to create time series records of each plant's inflorescence and resins, as well as the standard set of agronomic sensors 402 tracking each plant's growth, and storing data on the plant's growth in a database that associates the plant with its cultivar identity. Further, the system 400 contains a reference database with phytochemical composition confirmed by additional high-resolution chemical testing. After extracting the relevant features from each sensor data type, the system 400 applies machine learning to determine the current and future phytochemical composition of each plant available for extraction and distillation. The system then aggregates the level of distillates across all of the plants in production.
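The feature-extraction and reference-lookup flow described above might be sketched as follows. This is a deliberately simplified stand-in: summary statistics replace real signal decomposition, and a nearest-neighbour lookup against lab-confirmed profiles stands in for the machine learning model; all function names and record fields are hypothetical.

```python
from statistics import mean, pstdev

def extract_features(series):
    # Reduce one sensor's time series to a few summary features
    # (illustrative choices: mean, population std dev, range).
    return [mean(series), pstdev(series), max(series) - min(series)]

def predict_composition(features, reference_db):
    # Nearest-neighbour lookup against reference records whose
    # phytochemical composition was confirmed by high-resolution testing.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(reference_db, key=lambda rec: dist(rec["features"], features))["composition"]
```

In practice the reference database would hold many lab-tested profiles per cultivar and phenological state, and a trained model would replace the distance lookup.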
- the system 400 also calculates the potential range in the levels of each distillate, depending upon harvest time and other factors. In some embodiments, the system performs optimization of harvest time and other factors for plants and groups of plants, for example, factoring in market pricing or projected pricing of individual distillates, to optimize production profits.
- the system can optimize the production of distillates for any number of factors, or combinations of factors, including, but not limited to, the assurance of yield of a level of a given distillate or distillates and maximal profitability from the remaining distillate yield.
- the system would also reduce the need for labor and increase standardization and quality control across multiple facilities.
- cannabis production companies rely on lead cultivation personnel to observe plant growth and subjectively determine when to commence harvesting. The subordinate cultivation personnel then begin the harvest. If a given cannabis production company were to open a second cultivation facility in an area far away, the lead cultivator would then need to travel to the new facility and begin the process of observation and assessment, ideally training the staff before returning to oversee the original facility.
- the system would be able to provide cultivation and harvest decision support across 10,000 facilities as readily as it could one.
- the system 400 would optimize the production facility for distillates and robotic harvesters would harvest the plants.
- the system would optimize the yield for a combination of distillates and "flower."
- the system would incorporate additional exogenous supply and demand factors, such as satellite data of outdoor cannabis production, to determine the optimal cultivation and harvest path.
- the system 400 would make recommendations for the planting or removal of specific cultivars in order to optimize the production facility or groups of production facilities (new or existing). For example, the system could recommend the planting or removal of sets of cultivars in certain numbers and/or ratios to optimize a given facility for maximal flexibility with respect to changes, whether real, projected, or potential, in distillate pricing while maintaining maximal profitability of distillate sales. Similarly, the system could recommend the planting or removal of sets of cultivars in certain numbers and/or ratios to optimize a given facility for maximal modelled potential with respect to changes, whether real, projected, or potential, in distillate pricing while maintaining maximal profitability of distillate sales.
- the system 400 would also be able to make the above recommendations with constraints, such as, but not limited to, base production levels of given distillates, base production levels of flower, combinations of base levels of flower and given distillates, availability of cultivars, input costs, timing, and legal and geopolitical constraints.
- the system 400 could optimize the production of phytochemicals, such as, but not limited to, cannabinoids and terpenes, that remain within the inflorescence and resin and are sold as flower instead of being extracted as distillate.
- planting recommendations can be tailored to a new production facility in isolation; to a facility, or set of facilities, undergoing expansion or contraction; or to incrementally added facilities.
- the system could make these recommendations with respect to the replacement of existing plants that are aging or maturing, performing sub-optimally or likely to perform sub-optimally along a given dimension of performance.
- the system could also tailor the recommendations to global, local, or the optimized intersection of global and local supply and demand dynamics.
- the system could optimize the production of distillates in a conjoined manner to yield optimal production of any number of conjoined units defined along any dimension.
- groups of cannabinoids and terpenes in certain ratios could be defined by units of affective and/or medical response by consumers, which the system could optimize.
- any reference signs placed between parentheses shall not be construed as limiting the claim.
- the word "comprising" does not exclude the presence of elements or steps other than those listed in a claim.
- the word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements.
Abstract
According to some implementations of the present disclosure, a human-machine interface system includes a display device, a camera, a communication module, and one or more storage mediums. The camera is configured to capture an image. The communication module is configured to transmit and receive data between a first remote user device and a second remote user device. The one or more storage mediums have, individually or in combination, code sections stored thereon. When executed by one or more processors, the code sections cause a computing device to detect a face in the image captured by the camera, and to extract a plurality of physical features from the face detected in the image. Based at least in part on the extracted physical features, the code sections cause the computing device to generate a first avatar.
Description
SYSTEMS AND METHODS FOR EVALUATING AFFECTIVE RESPONSE IN A USER VIA HUMAN GENERATED OUTPUT DATA
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to and benefit of U.S. Provisional Patent Application No. 62/760,731, filed November 13, 2018, U.S. Provisional Patent Application No. 62/760,773, filed November 13, 2018, and U.S. Provisional Patent Application No. 62/819,294, filed March 15, 2019, each of which is incorporated by reference in its entirety.
TECHNICAL FIELD
[0002] The present disclosure relates generally to systems and methods for efficiency improvements in networks comprising humans and machines and for optimizing industrial hemp and cannabis production.
BACKGROUND
[0003] Creative use and development of complex tools distinguishes humans from other animals. Once developed, these tools can become tightly and inextricably linked with their users. The strength of the linkage between humans and tools has been fundamental to the success of homo sapiens as a species. For example, fire, writing, glasses, cellphones, etc. have become entwined in human culture and are part of individual identities.
[0004] Humans and their technology co-evolve, but at radically different rates. For example, machines running deep learning algorithms produce outputs from insights that humans are unable to apprehend. As the machines continue to evolve quickly, the strength of a native organic interface will become increasingly important. The present disclosure presents methods and systems to bridge humans and machines in a manner that addresses the strains associated with the growing evolutionary gap. Additionally, the present disclosure presents method of monitoring human affective response with machines especially in connection with chemical substances like cannabis.
[0005] The cannabis industry continues to emerge globally after an extended period of prohibition, prompting new investment in efficient production of cannabis crops. During cannabis prohibition, cannabis producers worked in small teams using artisanal techniques. Later on, cannabis production became more sophisticated, and producers began applying modern agronomic techniques used for other types of crops. These techniques include, for example, managing spectra, intensity, and duration of light that cannabis plants are exposed to;
improving watering and ambient climate of the cannabis plants; and cannabis plant fertilization. These approaches improved yield as defined in terms of energy input to cannabis inflorescence mass. The yield approach also served as the best accepted approach for capturing economic value, by maximizing the spread between producers' input costs and the units of output from which they derived revenue. The present disclosure provides a system and method to further increase yield of cannabis plants.
SUMMARY
[0006] According to some implementations of the present disclosure, a human-machine interface system includes a display device, a camera, a communication module, and one or more storage mediums. The camera is configured to capture an image. The communication module is configured to transmit and receive data between a first remote user device and a second remote user device. The one or more storage mediums have, individually or in combination, code sections stored thereon. When executed by one or more processors, the code sections cause a computing device to detect a face in the image captured by the camera, and to extract a plurality of physical features from the face detected in the image. Based at least in part on the extracted physical features, the code sections cause the computing device to generate a first avatar.
[0007] According to some implementations of the present disclosure, a computer-readable medium stores a computer program executable by a computing device to process facial landmark data. The computer program includes code sections for causing the computing device to receive images from a camera. The computing device is caused to detect a face of a user in an image captured by the camera, extract a plurality of physical features from the face detected in the image, and determine an affective state of the user. Based at least in part on the plurality of physical features, the computing device is caused to generate an avatar representative of the determined affective state of the user and display the avatar.
[0008] According to some implementations of the present disclosure, a hemp and cannabis production system includes a hemp and cannabis plant, an agronomic sensor, one or more processors, and a communication module. The agronomic sensor is configured to generate agronomic data associated with the plant field. The one or more processors are configured to analyze the generated agronomic data, to determine a chemical composition of the plant. The communication module is coupled to the one or more processors and configured to transmit at least a portion of the determined chemical composition of the plant.
[0009] According to some implementations of the present disclosure, a system for evaluating compound intoxication and affective response in a user is provided. The system includes a processor and a non-transitory computer readable medium storing instructions thereon such that executing the instructions causes the system to perform the steps including:
(a) receiving, from a set of sensors, physiological measures describing a first state of the user;
(b) monitoring the physiological measures to determine a baseline associated with the user; (c) receiving, from the set of sensors, physiological measures describing a second state of the user, wherein the second state of the user is a state after a dosage of the compound is consumed by the user and the first state is a state before the dosage of the compound is consumed by the user; and (d) determining features of the affective response in the user based on the baseline associated with the user, the physiological measures describing the first state, and the physiological measures describing the second state.
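A minimal sketch of steps (a) through (d), assuming each physiological measure reduces to a single scalar reading; the function names and the choice of a simple mean baseline are illustrative assumptions, not the claimed implementation:

```python
def baseline(history):
    # Step (b): derive a per-user baseline from prior physiological readings
    # (a running mean is an illustrative choice).
    return sum(history) / len(history)

def affective_response(base, first_state, second_state):
    # Step (d): features of the affective response, referenced to the user's
    # own baseline, comparing pre-dose (first) and post-dose (second) states.
    return {
        "pre_dose_offset": first_state - base,
        "post_dose_offset": second_state - base,
        "dose_effect": second_state - first_state,
    }
```

Referencing both states to the user's own baseline is what lets the system distinguish a user-specific response from population-level norms.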
[0010] According to some implementations of the present disclosure, a system for de- virtualizing a social network is provided. The system includes a first vaporizer and a mobile device. The first vaporizer is configured to: deliver a compound via a vapor to a user, determine a physical location of the first vaporizer, determine locations of a group of vaporizers, and send a meetup signal for initiating a convergence of a subset of vaporizers in the group of vaporizers and the first vaporizer. The mobile device is configured to: receive the meetup signal, and broadcast a meetup message to the group of vaporizers.
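The meetup flow in this implementation might be sketched as follows, assuming planar coordinates and a simple radius test for selecting the subset of vaporizers to converge; the identifiers and message fields are hypothetical:

```python
def nearby_vaporizers(origin, fleet, radius):
    # fleet maps device id -> (x, y); return ids of the subset of the
    # group within the given radius of the first vaporizer's location.
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return sorted(vid for vid, pos in fleet.items()
                  if dist2(origin, pos) <= radius ** 2)

def meetup_message(sender_id, location, recipients):
    # Payload the mobile device could broadcast to the selected subset.
    return {"type": "meetup", "from": sender_id, "at": location, "to": recipients}
```

A real system would use geodesic distance and an actual transport (e.g., push notifications) rather than this in-memory sketch.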
[0011] Additional aspects of the present disclosure will be apparent to those of ordinary skill in the art in view of the detailed description of various embodiments, which is made with reference to the drawings, a brief description of which is provided below.
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] FIG. 1A is a block diagram of a generic system involving an interaction between a biological machine and a computing device, according to some implementations of the present disclosure;
[0013] FIG. 1B is a block diagram of a system involving an interaction between a human and a server, according to some implementations of the present disclosure;
[0014] FIG. 2 illustrates an example vaporizing device according to some implementations of the present disclosure;
[0015] FIG. 3 is a block diagram of a system for determining physiological and affective change associated with a substance, according to some implementations of the present disclosure; and
[0016] FIG. 4 is a block diagram of a system for optimizing hemp and cannabis production, according to some implementations of the present disclosure.
DETAILED DESCRIPTION
[0017] While the present disclosure is susceptible of embodiment in many different forms, there is shown in the drawings and will herein be described in detail preferred embodiments of the present disclosure with the understanding that the present disclosure is to be considered as an exemplification of the principles of the present disclosure and is not intended to limit the broad aspect of the present disclosure to the embodiments illustrated. For purposes of the present detailed description, the singular includes the plural and vice versa (unless specifically disclaimed); the word "or" shall be both conjunctive and disjunctive; the word "all" means "any and all"; the word "any" means "any and all"; and the word "including" means "including without limitation."
[0018] When designing interfaces between a network of machines for the transmission of data, a fewer number of processing or conversion layers usually results in better performance. When data is transformed from one form to another, processing cycles, energy, and time are required. Furthermore, transforming data from one form to another introduces potential for error. For example, a machine that processes and generates data in binary form ("Machine 1") that needs to transmit said data to another networked machine ("Machine 2") for further processing in binary form may do so in multiple ways. Machine 1 can convert the binary to comma separated values (CSV) form and transmit the data to Machine 2. A human managing the two machines and the transmission process can take comfort in being able to open the transmitted CSV file, a readable textual medium more native to the human. As the textual expression creates a representation closer to "natural language," for which the human has robust processing systems, and can thus be thought of as more native than binary, the human can process the data with fewer cognitive processing cycles, less energy, and less time. However, from the standpoint of the two machines, the conversion from binary to CSV by Machine 1 and the reverse conversion (from CSV back to binary) by Machine 2 introduce significant additional processing cycles, energy, and time. When possible, maintaining and transmitting data in native formats can therefore lead to greater efficiency.
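The binary-versus-CSV trade-off can be illustrated with Python's standard `struct` module: packing integers in native binary form is more compact than round-tripping through CSV text, and skips a parse step on the receiving end (the helper names are illustrative):

```python
import struct

def to_csv(values):
    # Human-readable, but requires string formatting on send...
    return ",".join(str(v) for v in values)

def from_csv(text):
    # ...and string parsing on receive.
    return [int(v) for v in text.split(",")]

def to_binary(values):
    # Native fixed-width little-endian 32-bit integers: 4 bytes per value.
    return struct.pack(f"<{len(values)}i", *values)

def from_binary(blob):
    return list(struct.unpack(f"<{len(blob) // 4}i", blob))
```

Two seven-digit integers occupy 8 bytes in binary form but 15 bytes as CSV text, before counting the formatting and parsing cycles on each side.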
[0019] The previous example contrasted native mediums of two machines to that of a human, showing that communication in native formats can be more efficient and save
processing cycles, energy, and time. Biological animals (or biological machines) communicate information in various native formats. Embodiments of the present disclosure provide a model for assessing physiological state, affective state, and other notions of wellness and qualitative experience associated with biological machines. Physiological state of a biological machine includes sensory data obtained from the biological machine. Affective state of a biological machine describes an emotional experience of the biological machine. For example, two humans can consume alcohol with both the first human and the second human having a blood alcohol level of 0.08. Even though the blood alcohol level of both humans provide a physiological state of a 0.08 blood alcohol level, both humans can experience the 0.08 blood alcohol level differently. The first human can be quite unaffected cognitively and/or emotionally, thus showing no outward indication of intoxication. The second human on the other hand can be very garrulous, smiling, and using multiple facial expressions. Although the physiological response to alcohol is similar in both humans, their affective response (or state) is very different.
[0020] Affective states or responses of biological machines communicate information that may not be captured in physiological states of the biological machines. Affective states are easily communicated in native formats of biological machines and are understandable by other biological machines but can be difficult to decipher with computing devices. FIG. 1A is a block diagram of a generic system 100 involving an interaction between a biological machine 102 and a computing device 106, according to some implementations of the present disclosure. The system 100 includes an interface 104 that allows information capture from the biological machine 102.
[0021] Optimized use of native data formats on machines can also yield superior capabilities. For example, when developing an app for a phone, a developer can use a host of potential tools and languages. At the moment, the two dominant formats are iOS and Android. In order to reduce software development costs, developers can use tools such as React Native, which creates a single programming layer in which the developers can develop the app. React Native then translates the data into a form that can simulate and interact with the native iOS or Android language on the phone. However, the savings associated with reduced developer hours can come at a cost beyond the expected increase in processing cycles, energy, and time. These non-native layers often inhibit the refinement and control of the phone, making lower level access and the phone's full feature set less accessible. The non-native layer reduces functional potential and capabilities. Likewise, interactions with humans, which as biological systems are in a sense biological machines, through less native channels not only increase the processing cycles, energy consumption, and time, but also reduce the functional potential and capabilities of the human interaction.
[0022] While the present disclosure presents systems and methods for creating processing cycle, energy, and time efficiencies for data that describes the physiological state, affective state, and other notions of wellness and qualitative experience, these methods could be used for a broader array of data types. Further, as non-biological machines continue to evolutionarily advance in a manner outpacing biological machines both in terms of their capabilities and their efficiency, the described systems and methods will derive an even greater efficiency through time by placing the processing burden on the non-biological machine, particularly when translation of data occurs.
[0023] FIG. 1B illustrates a block diagram of a system 101 involving an interaction between a human 103 and a server 107, according to some implementations of the present disclosure. FIG. 1B is a subset of FIG. 1A, where the human 103 is an example of the biological machine 102, the sensor 105 is an example of the interface 104, and the server 107 is an example of the computing device 106. For example, the sensor 105 can include a camera that captures movements or facial expressions of the human 103. The server 107 can apply algorithms to analyze the movements or facial expressions of the human 103 to determine an affective state of the human 103. As such, the human 103 is able to communicate the affective state of the human 103 to the server 107, via the sensor 105. A biological machine is thus able to be understood by a computing device. This dynamic should be thought of as extensible to other native layers of human interface and transmission protocols, such as, but not limited to, the acoustical qualities of vocal production as well.
[0024] While those skilled in the art commonly recognize the native data formats, languages, and transmission protocols of non-biological machines, those of biological machines are less recognized. Within the physiological and affective domains, humans generate output data, which will be referred to as human generated output data (HGOD). HGOD is generated in an array of raw forms that can be processed into or translated into forms more native to non-biological machines (i.e., the computing device 106 or the server 107). HGOD can be collected via the sensor 105. Examples of data that the sensor 105 can collect include electrodermal activity (EDA), cardiac time series such as electrocardiogram (ECG) and heart rate variability (HRV), electroencephalogram (EEG), electromyogram (EMG), photoplethysmography (PPG), pupillometry, facial landmark data, etc. HGOD collected by the sensor 105 can be further processed by non-biological machines (e.g., the server 107) for signal decomposition and
feature extraction. The resulting data sets, in isolation or in a conjoined manner, can be used for objective classification of affective state of the human 103.
[0025] While affective state can be mapped in a lower dimensional manner, such as the two-dimensional valence-by-arousal continuum, it can be described by any number of dimensions. In some embodiments, the system 101 uses more than two dimensions in describing affective state. The server 107 can then map the processed HGOD, either in isolation or conjoined, to the dimensions of affective state to generate an objective and quantitative description of what the human 103 is communicating via the HGOD.
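The mapping from processed HGOD features to affective dimensions might be sketched as a linear projection; in practice the weights would come from a trained model, and both the weight and feature values below are illustrative assumptions:

```python
def map_to_affective_space(features, axis_weights):
    # Project extracted HGOD features onto each affective dimension
    # (e.g., valence and arousal, or more dimensions if configured).
    return {
        axis: sum(w * f for w, f in zip(weights, features))
        for axis, weights in axis_weights.items()
    }
```

Because the mapping is keyed by axis name, extending the model beyond two dimensions only requires adding another weight vector.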
[0026] In an embodiment, facial landmark data can contain rich yet descriptive affective and physiological data. Many of these data types appear stable not only cross-culturally but also across species, implying an evolutionarily stable, hardwired rooting. Unfortunately, some of the observed changes in facial landmark observation can vary depending upon context. As such, having the server 107 perform conjoined analysis and incorporate contextual data improves the accuracy of conclusions drawn from facial landmark data.
[0027] As such, in some implementations, the system 101 collects contextual data along with HGOD. Contextual data can include data describing an aggregate setting of the human 103. For example, this can include whether the human 103 is around other humans, whether the human 103 is consuming a substance, whether the human 103 recently exercised, etc.
[0028] Contextual data can also lead to determining that the human 103 is communicating subtle information. For example, in addition to conveying affective and physiological information, HGOD including facial landmarks can be hijacked by a higher layer of processing to convey intention and to deceive for social signaling. Thus, while micro-expressions from the lower processing layers of the human 103 can briefly emerge for milliseconds, the higher layer that sits atop the lower layers can mask the affective and physiological data at times. Anecdotally, one can note the facial transformation that occurs when people pose for photographs, compared to the richer emotional content of candid photographs in which subjects do not realize that they are being photographed. Facial landmark data is provided as an example of HGOD that can mask affective state, but other forms of HGOD can be relatively more or relatively less susceptible to signal interference.
[0029] In some implementations, the system 101 leverages facial landmark data format and communication channel as the native format to communicate information from the non- biological machine (i.e., the computing device 106 or the server 107) to the biological machine 102 or the human 103. Unlike affective information data derived from EDA, HGOD from facial landmarks can be readily transmitted and processed from a distance and with minimal
translation, much like the example above where Machine 1 and Machine 2 operate most efficiently by maintaining communication in binary. Transmitting facial landmark data without translation reduces the processing cycles, energy, and time required when communicating from the server 107 to the human 103. This dynamic should be thought of as extensible to other native layers of human interface and transmission protocols, such as, but not limited to, the acoustical qualities of vocal production as well.
[0030] Humans can process facial expressions (or faces) in about 40 ms, approximately one-fifth the time for a human to process a conscious thought. While the number and types of facial landmarks can vary, in some implementations of the present disclosure, the server 107 can use 128 facial landmarks and can generate a three dimensional model of their relative positions, including capturing textural elements (such as, but not limited to, wrinkles), as well as movement. In some implementations, in addition to potentially using facial landmark data (facial HGOD), the system 101 can construct a model of an affective space of the human 103 using other affective HGODs, such as EDA and HRV. The other affective HGODs can be used to render an estimated facial expression or facial landmark data such that interference from higher layer processing is eliminated. For example, when an individual is posing for a picture, the facial HGOD data can be ignored, and other HGOD data can be used to construct an estimated facial HGOD that conveys the individual's actual affective state, as if the individual were not forcing a smile.
[0031] In some implementations, the system 101 can be used by a doctor to monitor a large number of patients efficiently and effectively, or to readily observe material changes in a given patient's status immediately without significant mental processing cycles or delays. When the system 101 is applied to multiple patients, each of the humans 103 can be coupled to multiple sensors 105. The multiple sensors 105 can generate HGOD data provided to the server 107 for feature extraction. The server 107 can run the extracted features through a machine learning (ML) model, which would map the features into a multidimensional affective space. The server 107 can then map the affective state or affective space migration onto a model of facial HGOD. The server 107 can then render the model on a screen of the server 107 or on a remote display. In some implementations, the server 107 can render the model by adjusting 128 landmark features on a three dimensional output of a facial avatar.
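The pipeline in paragraph [0031] — raw HGOD samples to extracted features, features to a point in a multidimensional affective space, and affective coordinates to per-landmark adjustments — can be sketched as below. This is a minimal illustration, not the disclosed implementation: the linear weight matrix stands in for the ML model, and the feature names, weights, and landmark-offset rule are all hypothetical.

```python
# Illustrative sketch of the feature -> affective space -> landmark pipeline.
# Weights, features, and the offset rule are assumptions for demonstration.

def extract_features(hgod_samples):
    """Reduce raw HGOD samples (e.g., EDA readings) to summary features."""
    n = len(hgod_samples)
    mean = sum(hgod_samples) / n
    var = sum((x - mean) ** 2 for x in hgod_samples) / n
    return [mean, var]

def map_to_affective_space(features, weights):
    """Linear stand-in for the ML model: features -> (valence, arousal)."""
    return [sum(w * f for w, f in zip(row, features)) for row in weights]

def affect_to_landmarks(affect, n_landmarks=128):
    """Map affective coordinates onto per-landmark displacement magnitudes."""
    valence, arousal = affect
    # Hypothetical rule: displacement scales with arousal, signed by valence.
    return [(1 if valence >= 0 else -1) * arousal * (i % 3) * 0.01
            for i in range(n_landmarks)]

eda = [0.2, 0.25, 0.3, 0.28]
affect = map_to_affective_space(extract_features(eda), [[1.0, 0.0], [0.0, 5.0]])
offsets = affect_to_landmarks(affect)
```

In a deployed system the linear map would be replaced by the trained model the disclosure describes; the sketch only shows the data flow from sensors to the 128-landmark avatar adjustment.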
[0032] Using embodiments of the present disclosure, a doctor can visually scan a number of facial avatars rapidly, for example, at rates of up to 25 faces per second, to look for any anomalies that the doctor might want to address. Having the doctor quickly scan facial avatars
allows the doctor to process information much more quickly than by painstakingly reading patient data records one at a time.
[0033] In some implementations, a doctor evaluating a patient who is attempting to maintain a stoic demeanor might otherwise be fooled by the patient's efforts to control their outward appearance. Using different HGOD data, including facial HGOD, the doctor can note any divergent affect and address the patient's needs more effectively.
[0034] In some implementations, a series of avatars can be rendered, each one reflecting the patient’s affective state in a moment in time. These renderings can be linked, creating a time series of affective states. The doctor can then view the time series either in an animated fashion, or avatar by avatar, to quickly apprehend the progression of the patient’s status over time.
[0035] In some implementations, a user can efficiently review their own affective status without technical knowledge. The user would capture HGOD, from a wearable for example, which would be passed via Bluetooth or other means of transmission to a mobile device (e.g., a cell phone). The mobile device would conduct feature extraction and machine learning on the HGOD and display the facial avatar on the screen of the mobile device. As with the doctor example above, the user without technical knowledge can review a time series of facial avatars to efficiently understand the patterns and trends in their own affective state.
[0036] In some implementations, the user without technical knowledge can share their facial avatar or facial avatar time series with another user via text or other means of transmission, allowing the recipient to efficiently apprehend the affective state of the sender. In some embodiments, the shared avatar can be contextualized, with reference points such as, but not limited to, events, activities, locations, and other users referenced in the avatar or time series avatar.
[0037] In some implementations, groups of people, grouped along any dimension, can have their facial avatars compared, contrasted, or aggregated, efficiently providing the recipient of the avatar or time series avatar with insight into a collective. For example, the server 107 in the system 101 can group facial avatars of people of a same group to determine similar or dissimilar features within the people of the same group. A collective facial avatar can then be developed from the feature analysis to represent the people of the same group. The server 107 can send this collective facial avatar, which can be a time series avatar, to a recipient. Thus, the recipient can efficiently apprehend an affective state or an affective state time series, synchronously or asynchronously, of the people of the same group. Examples of people of a same group include a group of people within a room, a series of riders in the same seat of a roller coaster, a group of people sipping from their cups of coffee or inhaling from their vaporizers, etc. The facial avatars or visualizations generated can convey affective response from stimuli within these contexts for rapid apprehension by a biological machine (e.g., the biological machine 102 or the human 103) with fewer processing cycles and less energy and time.
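One simple way to aggregate a group's avatars, as paragraph [0037] describes, is to average landmark positions across users and use per-landmark variance to locate where the group is similar or dissimilar. The sketch below assumes each avatar is a flat list of landmark coordinates; the data and the averaging rule are illustrative, not taken from the disclosure.

```python
# Hedged sketch: build a collective avatar by averaging landmark values
# across users; variance flags dissimilar features within the group.

def collective_avatar(avatars):
    """avatars: list of equal-length landmark coordinate lists."""
    n = len(avatars)
    dims = len(avatars[0])
    mean = [sum(a[i] for a in avatars) / n for i in range(dims)]
    var = [sum((a[i] - mean[i]) ** 2 for a in avatars) / n
           for i in range(dims)]
    return mean, var

# Three users' (toy, 3-value) landmark vectors.
group = [[0.1, 0.5, 0.9], [0.2, 0.5, 0.7], [0.3, 0.5, 0.8]]
mean, var = collective_avatar(group)
# Landmark 1 has zero variance: the group is identical on that feature.
```

The same aggregation can be applied frame-by-frame to produce a collective time series avatar.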
[0038] In some implementations, affective visualization data (i.e., the facial avatar described above in connection to some implementations of the present disclosure) can be used to map social cascades that form and spread within groups of humans. Like other networked machines, humans often solve individual problems collectively with great success. At times, information processing is performed on a single biological machine and is then aggregated, as illustrated through "the wisdom of the crowds." At other times, the information processing is performed in a localized manner on one or more biological machines and then later spread through other machines. An example of this information processing can be observed in herding responses that can prevent a biological machine from meeting its end at the hands of a predator. Yet at other times, networked biological machines solve problems in an iterative manner through cascades of machine-influence-weighted information spread across a network of machines. Solving individual problems collectively not only determines a solution to discrete problems, but also serves to generate consensus affective states of a network of biological machines.
[0039] In some implementations, in addition to efficiently transmitting affective information, human machines use facial HGOD to send social signals for deception and negotiation. Thus, by measuring and displaying the difference between the facial avatars and the actual facial state and dynamics, a new facial avatar can be generated flagging the divergence. The new avatar could be used, for example in an augmented reality application on a mobile device (e.g., a phone or a pair of smart glasses), to flag the points of divergence. The new avatar can be displayed in a manner where a user can immediately apprehend the points of divergence. Similarly, time series or live representations of these divergences and their resulting cascades can be used in an augmented reality application that would allow a user to observe influence and her impact across a network of biological machines in real time. Identification of thought or mood leaders with augmented reality through a native format can enable a user to navigate an evolving social context more adeptly, because when the user is flashed a facial avatar image for 40 ms, their brain would mark the significant point before they become consciously aware of it. In addition, this system can be used with augmented reality (AR) to determine whether a person's attempts to influence or persuade are being met with resonance. This could be done in person or virtually, such as over video conferencing or a like medium. These dynamics or specific elements of these dynamics could be extracted or presented in a number of ways, such as, but not limited to, rudimentary displays such as bar graphs.
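The divergence flagging in paragraph [0039] reduces to comparing the facial state estimated from other HGOD (e.g., EDA, HRV) against the measured facial landmarks and marking where they disagree. A minimal sketch, in which the landmark values and the threshold are hypothetical:

```python
# Illustrative sketch: flag landmarks where the affect estimated from
# non-facial HGOD diverges from the face the user actually presents.
# The 0.1 threshold is an assumption, not a disclosed parameter.

def flag_divergence(estimated, measured, threshold=0.1):
    """Return indices of landmarks where |estimated - measured| > threshold."""
    return [i for i, (e, m) in enumerate(zip(estimated, measured))
            if abs(e - m) > threshold]

estimated = [0.0, 0.2, 0.5, 0.9]   # affect inferred from EDA/HRV
measured  = [0.0, 0.6, 0.5, 0.7]   # the outward-facing expression
flags = flag_divergence(estimated, measured)
```

The flagged indices could then drive highlighting in an AR overlay so the divergence is apprehended preattentively.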
[0040] In some implementations, a facial avatar can be generated that reflects a particular affective state and flashed to a human to prompt a faster response. For example, if a machine mounted on a car were to detect ice below the car with LIDAR and wanted to send a signal to a human driver, the machine can flash a facial avatar that conveys an affective state of significant concern or alertness to the human driver for a faster driver response. The driver would not have to be consciously aware of the facial avatar to respond appropriately.
[0041] In some implementations, a cascade of facial avatars of affective states can be generated by a machine and inserted into media such as a movie or a videogame to elicit a cascading response from a biological machine watching the movie or playing the videogame, thereby guiding the affective state of the biological machine in a precise and calculated manner. Guiding the biological machine in this manner can also mitigate the impact of the uncanny valley by changing the immersiveness of the experience, connecting to the biological machine watching the movie or playing the videogame preattentively through a native channel. Further, the system could observe and dynamically respond to the potentially changing affective state of the biological machine in an iterative or guiding manner.
[0042] The facial avatar can take on an array of realistic human forms, including representations of the human users themselves. Alternatively, the facial avatar can take on the images of notable or imaginary persons, or even non-persons such as a creature with a highly detailed anthropomorphic face. The facial avatar can use a specially constructed neutral face as a basal avatar in order to mitigate impact of biases associated with perceptions of trust or status.
Evaluating Nature and Intensity of Substance Intoxication and Affective Response
[0043] As described above in connection with some implementations of the present disclosure, facial avatars can be generated by the server 107 based on facial HGOD obtained by the sensors 105. The facial avatars can be provided to humans via their electronic devices (e.g., smartphones, laptops, desktops, televisions, etc.). The facial avatars are chosen as a native format to communicate information relating to affective state quickly to the humans. The humans can potentially absorb and process the communicated information within 40 ms. The fast information processing indicates that humans can subconsciously process such information. Although described in connection with presenting facial avatars to humans, computing devices 106 can analyze features of facial HGODs, or in some cases can analyze
generated facial avatars, to determine affective states of humans. Embodiments of the present disclosure further provide systems and methods for analyzing HGOD data produced in response to one or more humans consuming a substance that can alter the one or more humans’ affective states.
[0044] Affective state altering substances are of interest because many of these substances are consumed by humans without specific understanding of how impairment can occur. For example, while the general mechanism and nature of impairment from alcohol consumption are well studied and understood, the mechanisms and nature of impairment from other compounds, such as but not limited to cannabinoids and terpenes, are less well understood. While tools and techniques for objectively identifying alcohol intoxication, such as breathalyzers or blood tests that determine blood alcohol level, are readily available and used by law enforcement amongst other groups, no such effective analogue exists for cannabis. This is in part due to the complexity of the chemical composition of cannabis, but also due to the persistence of these compounds for a period of time post-intoxication. Further, combinations of cannabinoids, such as tetrahydrocannabinol (THC) and cannabidiol (CBD), yield different physiological and intoxication responses than single cannabinoids in isolation. Other phytochemicals, such as terpenes, can add to the complexity. Beyond intoxication, how these compounds impact a user's affective state, and the nature of the user's affective migration, are even less well understood.
[0045] FIG. 2 illustrates an example vaporizing device 200 for delivering a substance to a user. The user inserts a cartridge 230 containing a substance 240 into the vaporizing device 200. The vaporizing device 200 uses an array of onboard cartridge sensors to collect data on one or more of the cartridge, substance, and label specification. The cartridge sensors quantify the attributes of the substance 240 along multiple dimensions, including, but not limited to, any one or more of direct and indirect measures of turbidity, color, chemical composition, viscosity, and flavor. For example, the vaporizing device 200 can be used with cannabis.
[0046] In one embodiment of the present disclosure, the substance turbidity is measured using one or more optical sensors emitting light and measuring refraction. In another embodiment of the present disclosure, the substance color is quantified using a spectrometer sensor, detecting absorption. In another embodiment of the present disclosure, the chemical composition of the substance 240 is measured using a nondispersive infrared sensor to identify specific compounds, such as, but not limited to, cannabinoids, terpenes, terpenoids, and flavonoids, by their resonance frequency. In another embodiment of the present disclosure, the
substance capacitance is measured using capacitive sensors. In another embodiment, the sensing can be supplemented with a reference data set (associated with, for example, an RFID tag).
[0047] The one or more sensors can be activated by a number of triggers. In an embodiment of the present disclosure, the one or more sensors are triggered by a pressure-sensitive electrical or mechanical switch that is activated through the process of cartridge insertion. In another embodiment of the present disclosure, the sensors are triggered by an on/off switch 220 on the vaporizing device 200. In another embodiment of the present disclosure, the sensors are triggered by a computing device, e.g., a cellphone, a virtual assistant, a laptop, a desktop, a server, etc. In another embodiment of the present disclosure, the cartridge sensors are triggered by the activation of an onboard accelerometer within the vaporizing device 200.
[0048] Upon activation, the cartridge sensors generate data describing the physical attributes of the substance 240 contained within the cartridge 230 along multiple dimensions, such as those described above. The data from each of these dimensions generates a distinctive pattern of data for a given substance 240, which can be analyzed.
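The "distinctive pattern of data" in paragraph [0048] can be thought of as a fingerprint vector assembled from the per-dimension sensor readings, which an unknown sample can then be matched against. The sketch below uses Euclidean distance over three toy dimensions; the dimension names, values, and reference labels are illustrative assumptions, not disclosed specifications.

```python
# Hypothetical sketch: concatenate normalized sensor readings into a
# fingerprint and match a sample against reference label specifications.

import math

def fingerprint(readings):
    """readings: dict of dimension name -> normalized value.
    Values are ordered by sorted key so fingerprints are comparable."""
    return [readings[k] for k in sorted(readings)]

def closest_match(sample, references):
    """Return the reference label whose fingerprint is nearest the sample."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(references, key=lambda label: dist(sample, references[label]))

# Reference fingerprints in sorted-key order: color, turbidity, viscosity.
refs = {
    "label_spec_A": [0.70, 0.30, 0.10],
    "label_spec_B": [0.20, 0.80, 0.60],
}
sample = fingerprint({"color": 0.68, "turbidity": 0.32, "viscosity": 0.12})
match = closest_match(sample, refs)
```

A match against the label specification could then confirm that the cartridge contains what its brand data claims.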
[0049] Cartridge and substance data generated by the cartridge sensors can be recorded. Brand data and label specifications for the cartridge 230 and substance 240 can be recorded via the vaporizing device 200. In one embodiment of the present disclosure, the vaporizing device 200 uses a sensor tag, such as, but not limited to, an RFID chip embedded in the cartridge 230. In another embodiment of the present disclosure, an app running on a smartphone of the user enables a drop-down menu for selection. In another embodiment of the present disclosure, package or sales transaction receipt information embedded in a machine-readable optical label is captured with sensors, such as a camera or scanner, linking the information to the label specifications. In another embodiment of the present disclosure, brand and product information is captured by obtaining an image of the packaging or cartridge 230, and communication sensors 264 then transmit the image data to a server for further processing.
[0050] The cartridge sensors also, directly or indirectly, measure the volume of the substance 240 within the cartridge 230. In one embodiment of the present disclosure, an ultrasonic emitter generates a sonic chirp into the cartridge 230; an ultrasonic receiver captures the sonic response as the waves travel through the air bubble within the cartridge 230. As the substance 240 within the cartridge 230 decreases, the air within the cartridge 230 increases. The sensors record the changing nature of the sonic chirp as it passes through the air bubble, thereby collecting data for subsequent processing by a server to determine an amount of remaining substance 240. In another embodiment of the present disclosure, a light sensor can measure the growing size of the air bubble. In another embodiment of the present disclosure, a light sensor can measure the substance 240 directly. In another embodiment of the present disclosure, an ultrasonic sensor can measure the substance 240 directly. In another embodiment, the quantity can be determined by an estimate developed by a trained machine learning (ML) model using visual input.
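The ultrasonic approach in paragraph [0050] amounts to converting the chirp's round-trip time through the air bubble into a bubble depth, and hence into a remaining-volume fraction. The sketch below treats the cartridge as a simple 40 mm column with sound travelling at 343 m/s; both numbers, and the uniform-column geometry, are assumptions for illustration rather than device specifications.

```python
# Hedged sketch: estimate remaining substance from the round-trip time
# of a sonic chirp through the growing air bubble above the substance.

SPEED_OF_SOUND_AIR_MM_S = 343_000.0  # speed of sound in air, mm/s (assumed)
CARTRIDGE_HEIGHT_MM = 40.0           # hypothetical column height

def remaining_fraction(echo_time_s):
    """Fraction of substance left, from echo round-trip time in seconds."""
    # One-way distance through the air bubble.
    bubble_mm = SPEED_OF_SOUND_AIR_MM_S * echo_time_s / 2.0
    bubble_mm = min(max(bubble_mm, 0.0), CARTRIDGE_HEIGHT_MM)
    return 1.0 - bubble_mm / CARTRIDGE_HEIGHT_MM
```

A full cartridge returns an immediate echo (fraction 1.0); as the bubble deepens, the fraction falls toward 0.0. Temperature compensation of the speed of sound is omitted here but would matter in practice.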
[0051] Upon insertion of the cartridge 230 into the vaporizing device 200, the user may begin vaping. The user activates the vaporizing device 200, which turns on a vaporizing element 280 and vapor sensors 210b, through a number of potential triggers. A thermal blanket 270 can be provided to protect sensors and other sensitive electronics from heat produced by the vaporizing element 280. In an embodiment of the present disclosure, the vaporizing device 200 can house a cartridge warmer 250 that can dynamically adjust substance viscosity to a desired level depending upon factors such as substance fingerprint, ambient temperature, or other factors.
[0052] In one embodiment of the present disclosure, the vaporizing element 280 and vapor sensors 210b are triggered by a sensor on a mouthpiece 210a of the vaporizing device 200. The sensor can be a pH sensor, an airflow sensor, a pressure sensor, or another sensor. In another embodiment of the present disclosure, the vaporizing element 280 and vapor sensors 210b are triggered by an on/off switch 220 on the vaporizing device 200. In another embodiment of the present disclosure, the sensors are triggered by a computing device, e.g., a smartphone or some other device. In another embodiment of the present disclosure, the vaporizing element 280 and vapor sensors 210b are triggered by the activation of an onboard accelerometer sensor within the vaporizing device 200. The vapor sensors 210b can record a continuous description of the inhalation of vapor 290 from the vaporizing device 200 into the user's respiratory apparatus or lungs.
[0053] In addition, activation of the vaporizing device 200 can trigger the activation of cartridge sensors, vapor sensors 210b, physiological sensors 263, environmental sensors 262, performance sensors 261, affectivity sensors, and/or communication sensors 264, the activation of which may in turn prompt data transfer and storage between system components.
[0054] In addition, a specific request generated by another component of the system can trigger the activation of cartridge sensors, vapor sensors 210b, physiological sensors 263, environmental sensors 262, performance sensors 261, affectivity sensors, and/or communication sensors 264, the activation of which may in turn prompt data transfer and storage between system components.
[0055] The vaporizing device 200 of FIG. 2 can be incorporated into an overall system 300 depicted in FIG. 3 for detecting, classifying, and reporting intoxication and affective change
associated with a substance ingested by a user 340. The system 300 includes the user 340 who consumes a substance 240. The substance 240 can get into a body of the user 340 via contact with a solid/liquid form of the substance 240 or by inhalation of the vapor 290 of the substance 240. The substance 240 can be a combination of multiple chemical substances or a combination of different chemical substances delivered via different cartridges.
[0056] The user 340 can provide physiological data and/or affective data to a logic circuitry 350. The logic circuitry 350 is a computing device, e.g., a server, an application specific integrated circuit, a laptop computer, a cloud server, etc. The logic circuitry 350 can collect other information, e.g., contextual data, to further facilitate classifying intoxication or affective change of the user 340. The other information collected can include data from social media accounts and other internet-available information 351 of the user 340, environmental characteristics 352, personal characteristics 353, user subjective responses 354, and data from other systems 360. User subjective responses 354 can include user subjective ratings, user response times, user survey data, etc. The logic circuitry 350 can collect information from disparate sources to contextualize affective data and physiological data obtained from sensors measuring HGOD data of the user 340.
[0057] In some implementations, the logic circuitry 350 can be used to adjust heat energy 310 of the vaporizing device 200 such that an affective state of the user 340 can be maintained at a certain level. In some implementations, the logic circuitry 350 can adjust a composition of the solid/liquid substance 240 being delivered to the user 340 to maintain the affective state of the user 340 at the certain level.
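The adjustment described in paragraph [0057] — tuning heat energy 310 so that the user's measured affective state holds at a set level — is, in essence, a feedback loop. The sketch below uses a simple proportional controller; the gain, heat bounds, and affect units are hypothetical stand-ins, and a real system would presumably use the logic circuitry 350's learned models rather than a fixed gain.

```python
# Illustrative sketch: a proportional controller nudges the vaporizer's
# heat setting so a measured affective level tracks a target level.
# Gain, bounds, and units are assumptions, not disclosed parameters.

def adjust_heat(current_heat, measured_affect, target_affect,
                gain=0.5, min_heat=0.0, max_heat=10.0):
    """Return a new heat setting moving measured_affect toward target_affect."""
    error = target_affect - measured_affect
    new_heat = current_heat + gain * error
    return min(max(new_heat, min_heat), max_heat)

heat = 5.0
# Successive affect readings drifting toward the 0.6 target.
for measured in (0.2, 0.4, 0.55):
    heat = adjust_heat(heat, measured, target_affect=0.6)
```

The same loop structure applies to the composition-adjustment variant: replace the heat setting with a mixture ratio and keep the error term unchanged.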
[0058] Embodiments of the present disclosure provide a system and method for detecting, classifying, and reporting intoxication and affective change associated with cannabis consumption. The system 300 can include one or more sensors that provide physiological data and/or affective data to the logic circuitry 350. The logic circuitry 350 can ingest the data from the one or more sensors, extract features from the data, and apply one or more machine learning algorithms to obtain an output for informing the user 340 of her intoxication level or for maintaining and controlling the intoxication level of the user 340 by adjusting delivery of cannabis to the user 340. Although described in the context of one user, each component of the system 300 can be one or more components such that the logic circuitry 350 can monitor multiple users using multiple substance delivery devices or vaporizing devices 200.
[0059] In some implementations, the sensors can initially be used to build data sets that record a number of physiological and affective measures such as, but not limited to, ECG, PPG, EMG, EDA, HRV, pupilometry, facial landmark data, facial texture data, and the movement
and dynamics of each. The data can be obtained by affixing or training the sensors on human subjects in a laboratory-like setting, or by gathering them in an unobtrusive manner in a naturalistic setting where subjects are less cognizant of the sensing.
[0060] A given human subject can be monitored to gain baseline physiological and affect data. The subject is then given cannabis, with a specific phytochemical profile, including but not limited to specific cannabinoids and terpenes in known concentrations and ratios, and new readings are taken. This process is repeated over a larger number of subjects. Next, the phytochemical profile is changed, reflecting new concentrations and ratios, and a group of human subjects consumes the compound after generating baseline data. The new response data is recorded. The process is then repeated with another specific phytochemical profile.
[0061] The logic circuitry 350 ingests the response data from multiple recordings and extracts and stores relevant features within the data. The logic circuitry 350 can then apply machine learning to analyze the recorded data. The machine learning applied can involve using a neural network to analyze the data sets. The logic circuitry 350 can then identify patterns of physiological and affective response which it associates with affective and intoxication states. These data sets can supplement reference data sets which can include subject surveys, questionnaires, or less obtrusive observation.
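Paragraph [0061] describes training on labelled recordings and then associating feature patterns with affective and intoxication states. As a hedged stand-in for the neural network the text names, the sketch below uses a nearest-centroid classifier: it preserves the train-then-classify flow while keeping the example self-contained. The state labels and feature vectors are illustrative, not real data.

```python
# Hedged stand-in for the trained network: a nearest-centroid classifier
# mapping extracted physiological features to labelled states.

import math

def train_centroids(labelled_features):
    """labelled_features: dict of state label -> list of feature vectors."""
    centroids = {}
    for label, vectors in labelled_features.items():
        n = len(vectors)
        centroids[label] = [sum(v[i] for v in vectors) / n
                            for i in range(len(vectors[0]))]
    return centroids

def classify(features, centroids):
    """Return the state label whose centroid is nearest the feature vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(features, centroids[label]))

# Toy training data: (EDA, HRV-derived) features per labelled state.
data = {
    "baseline":    [[0.1, 0.2], [0.2, 0.1]],
    "intoxicated": [[0.8, 0.9], [0.9, 0.8]],
}
centroids = train_centroids(data)
state = classify([0.85, 0.9], centroids)
```

The disclosed system would substitute the trained neural network for `classify`, but the surrounding ingest/extract/label flow is the same.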
[0062] In some implementations of the present disclosure, the subjects (or users) are handed a vaporizer (e.g., the vaporizing device 200) with onboard physiological and affective sensors such as, but not limited to, an EDA sensor, a PPG sensor, and a camera. The vaporizer can record phytochemicals, concentrations and ratios of the phytochemicals, and consumption behavior of each subject. Consumption behavior includes a schedule of consumption, amount consumed per consumption event, etc. Referring to FIG. 3, as the user 340 holds and uses the vaporizing device 200, baseline physiological and affective measures are taken.
[0063] In some embodiments, as the consumer prepares to consume the phytochemicals, bringing the vaporizing device 200 toward her face, an onboard camera system captures facial landmark data. The facial landmark data can be used for biometric identification, assigning the captured data to the appropriate subject, and enabling personalized features and mode of system access. In some embodiments, the captured facial data can be used in a manner that classifies the subject to prevent unauthorized usage, for example, because the system determines that it is highly probable that the user 340 is underage for consumption.
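The gating logic of paragraph [0063] can be sketched as a two-step check: match the captured landmark vector against enrolled users, then refuse activation when the match is unknown or classified as probably underage. The matching threshold, enrolled records, and the `probably_underage` flag below are hypothetical illustrations; real age classification would come from the system's trained models.

```python
# Hypothetical sketch: biometric match plus age gate before enabling use.

import math

def best_match(landmarks, enrolled, max_dist=0.5):
    """Return (user_id, record) for the closest enrolled face, else None."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    uid = min(enrolled, key=lambda u: dist(landmarks, enrolled[u]["landmarks"]))
    if dist(landmarks, enrolled[uid]["landmarks"]) > max_dist:
        return None  # no enrolled user is close enough
    return uid, enrolled[uid]

def authorize(landmarks, enrolled):
    """Allow consumption only for a recognized, of-age user."""
    match = best_match(landmarks, enrolled)
    return match is not None and not match[1]["probably_underage"]

enrolled = {
    "user_340": {"landmarks": [0.1, 0.4, 0.7], "probably_underage": False},
}
allowed = authorize([0.12, 0.41, 0.69], enrolled)  # close match, of age
```

A positive match also tells the system which account the subsequent consumption data should be assigned to.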
[0064] In some embodiments, the same facial image data can be used to establish a baseline facial affect. That is, at the beginning of a consumption event, the facial image data captured by the onboard camera system can be used to determine a baseline prior to consuming the
substance 240. After consumption, the sensors can collect, in a time series, additional facial image data for the logic circuitry 350 to establish a transformation of an affective state of the subject. In another embodiment, the camera or cameras can be on another device or a multitude of devices.
[0065] For example, a pattern in individual measures such as HRV and skin conductance, or facial landmark migration, can be classified as reflecting a common pattern of response to a phytochemical compound. The pattern in the union of individual measures of subject response can also be classified, providing additional affective and intoxication insights. The nature of these responses and insights can be trained to specific phytocannabinoid compounds and to particular subject cohorts that exhibit patterns of distinct response.
[0066] In some embodiments, the subject can have their data collected from the sensor-laden vaporizer while consuming, for example, from the camera collecting facial data, and then have subsequent facial data collected from another device, such as a smartphone of the subject. The smartphone can obtain subsequent facial data at an opportune time, when the subject checks the smartphone to inquire how the affective state has changed. In some embodiments, the physiological sensors could be embedded in wearables, a phone, or other peripheral devices or networked sensors.
[0067] In some embodiments, the onboard camera could be used as a novel means to facilitate the establishment of a data account. The owner or possessor of a sensor-laden vaporizer, e.g., the vaporizing device 200, can pair the vaporizer to an app running on a smartphone of the owner. In-app offerings from a data analytics service can collect and analyze data collected by the vaporizer, allowing the owner to share data from the vaporizer with a friend and also potentially share the vaporizer with the friend. Rather than requiring the friend to download the app beforehand, thereby disrupting the consumption ritual, the vaporizer and app can enable the owner to consume and record her facial data as well as separately allow the friend to consume and record facial data. Facial data can be recorded in, for example, a 128-landmark three dimensional representation using a 640x480 pixel image, amongst other means.
[0068] The owner of the vaporizer can subsequently be prompted in-app with an image of her friend, amongst other means, and a question asking whether to text a link to her friend enabling the automatic setup of an account. The friend, who has already enjoyed the vaporizer experience, can then easily download the app populated with their own data, at their convenience. This would improve the convenience and experience for all parties and increase the virality of adoption.
[0069] After collecting, preprocessing, and applying machine learning, such as but not limited to a neural network approach, to the physiological and affective data, the logic circuitry 350 can be used for descriptive purposes. That is, the logic circuitry 350 can take physiological and affective data and classify a state of the user 340. The logic circuitry 350 can describe a nature and intensity of cannabis intoxication and the affective state of the user 340. The description can be provided to the user 340 on a real-time basis. The description can be used by the user 340 or another party (e.g., a friend of the user 340 sharing the vaporizing device 200 with the user 340) to understand a nature and intensity of a person's intoxication for a multitude of purposes. In some embodiments, the logic circuitry 350 can make predictions on future affective states of the user 340, as well as a future nature and magnitude of an intoxication level of the user 340. The prediction can be used to guide consumers of the substance 240 such that the consumers make informed decisions prior to consumption of the substance 240.
[0070] While collection of multiple data types may be preferred, a nature and magnitude of cannabis intoxication can be determined from a single data type. For example, patterns in the facial landmark and texture data, the facial landmark migration, movement, and kinesthetic patterns can be used to characterize the nature and intensity of cannabis intoxication. In some implementations, classification of the nature and magnitude of intoxication can be accomplished on an app on a smartphone, or by an Internet of Things (IoT) device. In some implementations, the classification of the nature and magnitude of intoxication can be realized via a camera within a facility, such as but not limited to a security camera. In some implementations, classification of the nature and magnitude of intoxication can be determined via EDA conductance sensors in a steering wheel, or a combination of EDA conductance sensors and a camera or other sensors within a car. In another embodiment, the system can be used as the basis for an alternative to a breathalyzer or blood test to determine cannabis intoxication.
[0071] By receiving individual data associated with cannabis intoxication, the logic circuitry 350 can also classify or refine a classification of cannabis intoxication through the collection of one or more subjects' observations of affective stimuli. For example, the logic circuitry 350 shows the user 340 facial affect data in the form of a face, either real or manufactured. The logic circuitry 350 can then observe and record responses of the user 340 made post-consumption to the stimuli (the shown facial affect data). Although described in the context of cannabis, some embodiments of the present disclosure can be applied to other compounds or chemicals, such as coffee.
Dynamic Modulation of Vapor to Change an Affective State of a User
[0072] Embodiments of the present disclosure provide systems and methods for dynamic modulation of affective state and/or intoxication/sobriety of the user 340. The system 300 of FIG. 3 can include one or more drug delivery systems (e.g., the vaporizing device 200), one or more sensors for gathering physiological and affective data, the logic circuitry 350 including one or more data storage devices and processors, and output components. The output components can be in a single form or distributed. For example, the output devices can include lights, visual displays, speakers, haptic components, or any combination thereof.
[0073] In some implementations, the system 300 further includes a mobile device of the user 340. The mobile device can execute an app. The system 300 can further include wearable devices (e.g., a wearable smart watch, a wearable smart headband, wearable smart jewelry like rings, bracelets, etc., or any combination thereof). The system 300 can include any other IoT device, including those owned and/or managed by others apart from the user 340.
[0074] In some such implementations, the system 300 includes the vaporizing device 200 with multiple cartridges (e.g., two, three, five, ten, fifty, etc.) that each contain one of a variety of compounds such as, for example, cannabinoids, terpenes, nicotine, other phytochemicals and mycochemicals, and/or other compounds, or any combination thereof. That is, each cartridge includes a single substance therein. Alternatively, one or more of the cartridges can include a mixture of two or more substances.
[0075] The system 300 can dynamically modulate affective state and/or the degree or nature of intoxication/sobriety across a set of changing needs of the user 340. For example, the changing needs of the user 340 can include a diurnal cycle, a need for sleep or wakefulness, management of work, a sudden need to parent or to address other responsibilities, and dynamic shifts in interests, such as a desire to work out or socialize while maintaining a desired affective state or level or nature of intoxication/sobriety.
[0076] In some implementations, the system 300 can automatically vary ratios of substances in a vapor (e.g., the vapor 290) based on the environmental characteristics 352 which can include a time of day, a geographical location (e.g., at a home location of the user 340, at a work location of the user 340, at a home location of a parent of the user 340, at a home location of a friend of the user 340, at a mall, at a movie theater, etc.).
[0077] The system 300 can automatically vary ratios of substances in the vapor based on other factors such as patterns of past use, with or without user input or with or without user modifications. For example, the logic circuitry 350 can set a vapor mixture in the morning to
have a ratio of THC to CBD of 1 to 1, whereas the logic circuitry 350 can set a vapor mixture in the evening to have a ratio of THC to CBD of 10 to 1. In another example, the logic circuitry 350 can set a vapor mixture when the logic circuitry 350 determines that the user 340 is at the home location of the user 340. The vapor mixture at the home location of the user 340 can be set to a ratio of THC to CBD of 20 to 1. The logic circuitry 350 can set the vapor mixture when it determines that the user 340 is at the home location of the parents of the user 340. The vapor mixture at the home location of the parents of the user 340 can be set to a ratio of THC to CBD of 5 to 1, etc. That way, the system 300 can dynamically adjust a compound that the user 340 consumes based on the environmental characteristics 352 obtained. For example, environmental stressors on the affective state of the consumer can become determining factors for a model.
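The context-based ratio selection described above can be sketched as a simple rule lookup. This is a hypothetical illustration, assuming the logic circuitry 350 can reduce the environmental characteristics 352 to an hour of day and a coarse location label; the function name, location labels, and ratio values are illustrative, drawn from the example ratios in this paragraph.

```python
# Hypothetical sketch: rule-based selection of a THC:CBD vapor ratio from
# environmental characteristics. All names and values are illustrative.

def select_thc_cbd_ratio(hour_of_day, location):
    """Return a (THC parts, CBD parts) ratio for the current context."""
    # Location-based rules take precedence over time-of-day rules here.
    location_rules = {
        "home": (20, 1),          # user's own home
        "parents_home": (5, 1),   # home of the user's parents
    }
    if location in location_rules:
        return location_rules[location]
    # Time-of-day fallback: 1:1 in the morning, 10:1 otherwise (evening).
    if 5 <= hour_of_day < 12:
        return (1, 1)
    return (10, 1)
```

In a fuller implementation, such rules would be learned from the model of the user rather than hard-coded.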
[0078] In some implementations, a vaporizer (e.g., the vaporizing device 200) uses two or more cartridges together to create an infinite variability of ratios of compounds that can be delivered by the vaporizer. For example, two cartridges, one containing THC, the other containing CBD, can be coupled in the vaporizing device 200. When the user 340 inhales a given ratio of THC to CBD, such as 10 to 1, the vaporizing device 200 drives two vaporizing elements associated with respective ones of the cartridges. The vaporizing device 200 can drive the vaporizing elements using pulse width modulation, with distinct duty cycles for each, generating different rate levels of vaporization for each of the two cartridges.
[0079] Subsequently, if the user 340 were then to decide to invert the ratio of THC to CBD consumption to 1 to 10, the vaporizing device 200 can adjust the duty cycles to establish the new vapor mixture. The system 300 can allow the user 340 to consume an infinite number of different ratios of compounds without having to buy or insert additional cartridges (i.e., cartridges in addition to the two already in the vaporizing device 200). In some implementations, the level or granularity of adjustment of the ratio of the two cartridges can be continuous. In other implementations, the level or granularity of adjustment can be discontinuous, at discrete intervals, or even binary, or any combination thereof. The multi-cartridge setup for the vaporizing device 200 that supports infinite-ratio vapor mixing enables the system 300 to precisely, efficiently, and conveniently deliver vapor to meet a modelled need of the user 340.
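The two-cartridge duty-cycle arithmetic can be sketched as follows. This is an assumed model in which vapor output scales linearly with duty cycle; the rate parameters are hypothetical calibration constants, not part of the original description.

```python
# Hypothetical sketch: derive per-cartridge PWM duty cycles from a desired
# compound ratio, assuming vapor output is linear in duty cycle.

def duty_cycles_for_ratio(thc_parts, cbd_parts, max_duty=1.0,
                          thc_rate=1.0, cbd_rate=1.0):
    """Return (thc_duty, cbd_duty) producing the requested THC:CBD ratio.

    thc_rate/cbd_rate model each cartridge's vapor output per unit duty
    cycle (assumed calibration constants).
    """
    raw_thc = thc_parts / thc_rate
    raw_cbd = cbd_parts / cbd_rate
    # Scale so the busier channel runs at max_duty for fastest delivery.
    scale = max_duty / max(raw_thc, raw_cbd)
    return raw_thc * scale, raw_cbd * scale
```

Inverting the requested ratio simply swaps the two duty cycles, matching the 10-to-1 versus 1-to-10 example above.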
[0080] Some cannabinoids may have agonist or antagonist effects on the user 340. Thus, generation of one vapor mixture versus another vapor mixture can have an inverse impact on the user 340. In some embodiments, the system 300 can include substances or compounds that
mitigate and/or reverse impacts of intoxicants. For example, the user 340 first consumes a first vapor mixture with a relatively high ratio of the intoxicating cannabinoid THC and subsequently becomes intoxicated. Shortly thereafter, the user 340 desires sobriety and/or a lesser level of intoxication. The user 340 then consumes a second vapor mixture with a relatively high level of THC antagonistic compound, thereby speeding up or accelerating sobriety. That is, in some implementations, antagonistic compound consumption can be used to mitigate and/or reduce an affective state of a user of THC, thereby, for example, resulting in a sobering impact/effect. Other compounds or combinations of compounds can also be used to mitigate and/or reduce an affective state of a consumer/user of THC. Examples of such other compounds include cannabinoids, terpenoids, and other compounds.
[0081] Benefits of being able to accelerate sobriety are immeasurable. For example, a parent can enjoy a high THC vapor socially at a party. When the parent returns home, he is surprised by a sick child that needs care. The parent can use the vaporizing device 200, according to some implementations of the present disclosure, to increase a ratio of a sobering substance so as to consume a second vapor with a different ratio of substances to aid in sobering the parent.
[0082] The adjustment in vapor mixture ratios to aid a consumer/user in switching from insobriety to sobriety can be automatically or manually initiated and supported with guidance from the system 300 via an app executing on a mobile device of the consumer/user and/or via the vaporizing device itself. For example, the system 300, via the logic circuitry 350, which operates as an integrated distributed classification, prediction, and response system, can leverage affective, physiological, environmental, and consumption data collected by sensors, analyzed with machine learning techniques, to provide personalized decision support.
[0083] The system 300 can generate a modelled prediction of an amount of time needed for the user 340 to return to or achieve a given level of sobriety. The system 300 can provide live behavioral guidance based, at least in part, on vapor mixture(s) consumed by the user 340 and the vapor mixture to be consumed by the consumer to achieve sobriety. In some implementations, the user 340 can input a level of intoxication and adjust the desired sobriety levels without modelled assistance. In some implementations, such adjustments can be implemented via the vaporizing device 200. In some implementations, the system 300 can use machine learning to classify the user 340 into a like cohort of users whose data informs personalized predictions for dosing, impact, and timing. The system 300 can obtain current affective, physiological, and environmental data of the user 340 to generate modelled predictions of timing of intoxication, timing of sobriety, etc.
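One way such a timing prediction might be sketched is with first-order (exponential) decay of effective dose using a per-cohort half-life. The threshold and constants below are hypothetical illustrations, not pharmacological guidance or the disclosure's actual model.

```python
import math

# Hypothetical sketch: predict minutes until a user returns to sobriety,
# assuming exponential decay of effective dose with a cohort half-life.
SOBRIETY_THRESHOLD_MG = 1.0  # assumed effective-dose threshold

def predict_minutes_to_sobriety(dose_mg, minutes_since_dose, cohort_half_life_min):
    """Estimate minutes until the residual dose falls below the threshold."""
    decay_rate = math.log(2) / cohort_half_life_min
    residual = dose_mg * math.exp(-decay_rate * minutes_since_dose)
    if residual <= SOBRIETY_THRESHOLD_MG:
        return 0.0  # already at or below the sobriety threshold
    return math.log(residual / SOBRIETY_THRESHOLD_MG) / decay_rate
```

A real system would replace the half-life with a value learned from the user's cohort, and consumption of an antagonist compound would effectively shorten it.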
[0084] In addition to mixing vapor in ratios, the system 300 can use pulse width modulation or like techniques to modify an intensity of a vapor mixture and the time duration and inhalation amount required to consume a desired level of a compound or compounds. Pulse width modulation is a technique used for electrical signal manipulation in which continuously increasing or decreasing levels of output can be generated from a binary signal input in proportion to the duty cycle. Thus, the pulse width modulation approach is able to generate a continuum of variable vaporization. The system 300 can apply independent/different pulse width modulation to each of the cartridges in the vaporizing device 200 such that the output vaporization from each cartridge can be set independently or set in coordination with any other cartridge.
De-Virtualization of a Social Network
[0085] Embodiments of the present disclosure provide a system and method for facilitating navigation of a complex social space with an array of constraints and opportunities. Most social networks are largely virtual: participants engage remotely, often from their own homes, using an electronic medium (e.g., mobile device, tablet, computer, etc.). There is a need for in-person direct connection aided by the technologies that have powered traditional electronic social networks. Social media interactions typically require a reason or rationalization for connection and purpose. For example, Facebook “friends” may be connections with whom one went to school and wants to share life experiences via images, comments, and other posts or likes. Instagram connections tend to be a broad collection of people with whom one shares common aesthetic interests, passions, or hobbies, and with whom one wants to share one's curated collection of aesthetically appealing or otherwise interesting images and videos. Embodiments of the present disclosure provide an electronic social network that facilitates de-virtualization of social connections.
[0086] As reported by the Partnership for Drug Free America, 38.8% of outdoor concert-goers admit to inhaling cannabinoids at concerts. Further, inhaling cannabinoids at or around an event often tends to be a social experience, involving sharing of cannabinoids and delivery systems with other members of one's social network. However, finding an effective means to draw these social network nodes together, synchronizing them in the same physical space and time, can be challenging. The challenges can be exacerbated by the difficulty of seeing through dense crowds of people, the constant physically churning movements of a crowd, the desire of the managers of a venue to inhibit loitering, and challenging environmental variables, such as excessive heat, light, or darkness, as well as noise. Embodiments of the present disclosure
bring people who are nodes on a shared social network together geospatially to foster a live de-virtualized social experience.
[0087] In some implementations, the vaporizing device 200 includes assisted GPS to approximately place the user 340 in a mapped physical region geospatially. The vaporizing device 200 can use sensors to refine the approximate placement into a very precise location (e.g., sub-decimeter location) on a three dimensional geospatial map, relative to other objects, landmarks, people, and/or others who are nodes of the social network of the user 340. Node information can be determined using the social media accounts 351 of the user 340. By determining the three dimensional mapping, the system 300 via the logic circuitry 350 can enable nodes of the social network to efficiently converge.
[0088] For example, using vaporizers according to some embodiments of the present disclosure, one node (e.g., a user of a vaporizer) of the social network can invite another member to meet. The meeting location can be, for example, at a concert. The system 300 can provide instructions and/or guidance for the two nodes to converge (e.g., by providing directions using an app executing on a mobile device). This convergence and de-virtualization of the social network can be used to enjoy a cannabis sharing ritual in person or for another purpose, for example, meeting for lunch in a park.
[0089] The system 300 can use a number of sensing methods to determine the geolocation of a node/user. In an example, the system 300 can use a Bluetooth sensor (or any wireless sensor technology) in one location to gauge signal strength to another Bluetooth sensor at another node. As the distance between the Bluetooth sensors changes, the system 300 can obtain the signal strength between the Bluetooth sensors over time. As such, a time-staggered triangulation is accomplished using the Bluetooth sensors. Additionally, or alternatively, multiple antennas on a single Bluetooth device, multiple Bluetooth sensors on a single vaporizer, or the union of multiple Bluetooth sensors across multiple devices (e.g., mobile devices, vaporizers, etc., or any combination thereof) can be used. This would allow the nodes to efficiently converge with precision at distances of more than, for example, 100 meters. The assisted GPS would guide the nodes to within the 100-meter range, and the sub-decimeter geolocation on the three dimensional map would enable the system 300 to guide the nodes to convergence.
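As a sketch of the signal-strength ranging step, the standard log-distance path-loss model can convert an RSSI reading into an approximate distance. The calibration constants below (RSSI at one meter, path-loss exponent) are assumptions that would be calibrated per device in practice.

```python
# Hypothetical sketch: estimate distance between two Bluetooth sensors from
# received signal strength using the log-distance path-loss model.

def estimate_distance_m(rssi_dbm, rssi_at_1m_dbm=-59.0, path_loss_exponent=2.0):
    """Return an approximate distance in meters for a given RSSI reading."""
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10 * path_loss_exponent))
```

Repeating this estimate over time and across several sensors yields the time-staggered triangulation described above.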
[0090] The nodes can experience the guidance from the system 300 through any number of modalities. For example, as the nodes converge on a common location, a haptic element (e.g., a vibrating motor) in the vaporizer can signal a reduction in distance or which direction a user should head. Similarly, light or sound emitted from the vaporizer can aid in guiding the
nodes/users to converge together. Alternatively, a smart phone or a wearable device can aid in providing guidance.
[0091] All of the calculations required to provide guidance could be done remotely in the cloud, e.g., at the logic circuitry 350. Under conditions in which high throughput data streaming becomes challenging, calculations can be performed on the vaporizer (e.g., the vaporizing device 200). The vaporizing device 200 can include, for example, a tensor processing unit (TPU) for performing calculations. Alternatively, the calculations can be performed by another computing device, such as, for example, a mobile device, a tablet, etc. In some implementations, the system 300 calculates the guidance using a distributed approach, leveraging the collective processing power of all devices associated with the system 300 and the social network. That is, the social network and the system 300 can also function as a mesh network thereby optimizing data flow and communication among nodes within the social network. The social network used in this context includes computing devices like mobile phones, tablets, vaporizers of other users associated with the user 340 via the social media account 351 of the user 340.
[0092] Nodes of the social network, both direct (e.g., friends) and indirect nodes (e.g., friend of friends), can be guided to meet at a particular space at a precise time, by guiding nodes to a location using other devices or beacons as reference points. As such, meeting a friend (or a friend of a friend that the user 340 has not yet met) can be more convenient.
[0093] In some implementations, sensors, such as, for example, optical sensors and RFID sensors, onboard the vaporizer, mobile phone, or other device can allow users to authenticate the identity of another node to make meeting safer. Verification of the identity of the other party can be performed, for example, by confirming that one or more biometric measurements (e.g., fingerprint, face scan, iris scan, etc., or any combination thereof) of the other party match a corresponding biometric measurement that is stored in the system 300 and is associated with the node. Such verification can aid in reducing the probability that a third party has hijacked the other node, potentially with malign intentions.
[0094] In some implementations, the system 300 is configured to authenticate users to aid in enabling safe purchases of restricted goods (e.g., cannabis, alcohol, etc.) as well as enabling financial transactions (e.g., using credit cards or the like). In some implementations, the system 300 authenticates the user 340 by using the vaporizer, mobile device, or other component of the system 300 to capture an image of a credit card, identification (ID) card, or other official document of the user 340. This image is fed through a machine learning model with optical character recognition as well as other visual pattern recognition to digitally transcribe and
ingest the requisite features and information for comparison against reference datasets. This process authenticates the person, documents, credit card, and status on the system 300.
[0095] In some implementations, a node of the system can select a social identifier, such as, for example, a personalized music snippet or song or other audio output. In some such implementations, the social identifier of the node can be displayed/played upon a given trigger, such as, for example, upon vapor inhalation by the node using a connected vaporizer. In such an example, the node is able to express its social identity through “walk-on music” when consuming the vapor. Such a social identifier can be used to help other nodes identify and/or authenticate the node/user.
[0096] While vaporizers deliver a smoother inhalation experience than smoking, it can often be hard to know how much the user 340 has consumed because of the physical ease of consumption. In some implementations, the vaporizer (e.g., the vaporizing device 200) is associated with the user 340 such that the vaporizer is able to communicate with the user 340 as the user 340 consumes vapor. The vaporizer can signal an amount of vapor that the user 340 has consumed through a live output element, such as a haptic component or light. The output element provides sensory guidance on the vaporizer and/or a computing device (e.g., a mobile device, tablet, etc.). An app executing on a mobile device can also reflect consumption data, but with less sensory immediacy than a live vaporizer feedback provided by the vaporizer.
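The live consumption feedback might be sketched as a simple monitor that fires an output-element signal each time cumulative consumption crosses a threshold. The class name and threshold values below are hypothetical, not part of the original description.

```python
# Hypothetical sketch: track cumulative vapor consumed per session and
# trigger haptic/light feedback when assumed thresholds are crossed.

class ConsumptionMonitor:
    def __init__(self, thresholds_mg=(2.5, 5.0, 10.0)):
        self.thresholds = sorted(thresholds_mg)
        self.total_mg = 0.0
        self.signals = []  # feedback events sent to the output element

    def record_puff(self, mg):
        """Add one inhalation and emit a signal for each threshold crossed."""
        before = self.total_mg
        self.total_mg += mg
        for t in self.thresholds:
            if before < t <= self.total_mg:
                self.signals.append(f"haptic pulse at {t} mg")
```

A real device would drive its haptic motor or light directly instead of recording strings, and the thresholds would come from the model of the user.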
[0097] In some implementations, complementary sensor data from the devices of other users can help improve accuracy. The system 300 can also use complementary sensor types, such as but not limited to accelerometer and gyroscopic data, to further refine a user’s geolocation as the user moves. Amongst other things, this would provide precise triangulation even in dynamic contexts.
[0098] Further, in addition to bringing nodes together at a given place and time, the system 300 can use sensor data to estimate the progression of the other node(s) toward convergence and indicate such estimate to at least some of the nodes (e.g., via an app, the vaporizer, etc., or a combination thereof). For example, if the accelerometer data (e.g., from a vaporizer) indicates a particular set of features and the convergence slows, a model could indicate that the other node has slowed or stopped (e.g., the other node/user slipped on the ice and needs help, the other node ran into another friend on the way to converge/meet, etc.). The system 300 can provide support by suggesting a status of the other node (e.g., fallen, meeting with another node, etc.) or by providing a current location of the other node (e.g., the site of the modelled fall).
[0099] In some implementations, the system is able to bring together nodes of a social network in a synchronized collective experience through coordinated communications. For example, a collection of nodes at a concert can experience a collective response to a stimulus, such as musical rhythm or drum beat, causing all vaporizers, phones, or ancillary device to light up, vibrate or generate another output in unison.
[00100] In some implementations, an application programming interface (API) can allow a party, such as a musician, to dynamically control experiential elements for nodes of a social network or components of the system 300. For example, the musician can dynamically control the color or intensity of lights on vaporizers, phones, or other devices within the social network. Thus, the system itself would become part of the show as waves of light wash over segments of the audience in which the nodes are present.
[00101] In some implementations, nodes/users themselves initiate a shared collective experience such as a geospatially manifested output, such as, for example, a wave of light spreading across a crowd or emanating like a ripple from themselves, much like people standing and then sitting at a football game generating an engaging illusion.
[00102] In some implementations, shared experiential data can be collectively experienced across a group of people, such as, for example, sound, music, or system driven vaporization. In some implementations, collections of nodes can consume a substance in-person together (e.g., by using their respective vaporizers). In some implementations, collections of nodes can consume the substance at a same time but at different locations via an electronic network. The nodes can share data and other common experiential elements such as dynamic information, games, entertainment, messaging, video, audio and other forms of communication.
[00103] In some implementations, geospatial node density maps can be used to drive people to gather at a single location or multiple locations for a spontaneous social event (e.g., involving consumption of substances via vaporizers). In some implementations, vendors of experience, such as, for example, restaurants, could guide nodes to a geospatial and temporal convergence.
[00104] In some implementations, a single node or a group of nodes can guide the collective nodes to a geospatial and temporal convergence for any reason. In some implementations, affective or other data types derived from a node, or modelled from a collection of nodes, can be used for the modulation of devices within a relevant geospatial and temporal parameter, such as lights, temperature, noise, and sound in a venue.
Hemp and Cannabis Production
[00105] Embodiments of the present disclosure provide a system and method for dynamically optimizing industrial hemp and cannabis production. Optimizing production can significantly increase cannabis crop efficiency and economic yield. FIG. 4 illustrates a system 400 for optimizing industrial hemp and cannabis production, according to some embodiments of the present disclosure. The system 400 includes a standard array of agronomic sensors 402 and data collection elements, such as, but not limited to, grow light data collection elements, nutrient data collection elements, watering data collection elements, and ambient air data collection elements, as well as non-standard data collection devices in the form of optical sensors and, in some embodiments, electronic nose chemo-sensors. These optical sensors can collect light from both visible and non-visible spectra. In the non-visible spectra, these optical sensors can be used to collect data for generation of growth and plant health metrics such as the normalized difference vegetation index (NDVI). Further, these optical sensors can be trained on specific parts of the hemp or cannabis plant; for example, the optical sensors can be trained on the inflorescence of the plant. The agronomic sensors 402 can collect data for further processing by the computing device 406. The data collected by the agronomic sensors 402 can be used to determine the chemical composition of the inflorescence of the plant and the resins that the inflorescence produces. In an example, lean spectrographic sampling with infrared or ultraviolet light can be used to capture the nature of chemical bonds indicative of the chemical composition of the inflorescence and resins. It can also indicate the concentration of phytochemicals and provide guidance for harvest timing or necessary changes of environmental variables. The actuators 404 in the system 400 can automate watering, lighting, fertilizer, etc., of the plants based on computations by the computing device 406.
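NDVI, mentioned above, is computed directly from red and near-infrared reflectance. The sketch below shows the standard formula; the zero-denominator guard is a defensive assumption rather than part of the index's definition.

```python
# Standard NDVI formula: (NIR - red) / (NIR + red), ranging from -1 to 1.
# Healthy vegetation reflects strongly in the near-infrared band and so
# tends to score high.

def ndvi(nir_reflectance, red_reflectance):
    denom = nir_reflectance + red_reflectance
    if denom == 0:
        return 0.0  # assumed convention for a degenerate reading
    return (nir_reflectance - red_reflectance) / denom
```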
[00106] Every cannabis plant cultivar, often referred to as a “strain” by industry professionals, is different. As the strains grow and pass through different phenological states, they exhibit different sensitivities to inputs and exhibit materially different traits. For example, as a given cultivar matures to the point of flowering, the inflorescence and associated resins can differ wildly from those of another cultivar with respect to color, texture, shape, and scent. One cultivar may consistently develop a purple hue in the early stages of flowering, while another may develop a white hue. Furthermore, as they continue to mature phenologically, the characteristics of the inflorescence and resin continue to change in a material way. For example, the cultivar exhibiting a white hue to the inflorescence and resin may begin to exhibit a golden hue instead. In this instance, the color change may correspond with THC oxidizing into CBN.
[00107] The array of sensors (e.g., the agronomic sensors 402) within the system 400 can record the changes collected by optical sensors and store this data in a database. This data would also be associated with other data identifying the particular cultivar and the phenological state. In another embodiment, olfactory chemo-sensors would also record and store data quantitatively describing the scent. Along with the optical data, the scent data would be stored in a database and likewise associated with data identifying the particular cultivar and phenological state.
[00108] The differences in observed characteristics of the inflorescence and the resins from one cultivar to the next reflect the differences in phytochemicals, which include, but are not limited to, terpenoids and cannabinoids. Further, within a cultivar, other observable characteristics correspond to changes in terpenes and cannabinoids. Thus, using optical sensor observation data, alone or in combination with chemo-sensor data, transformed with feature extraction and processed with machine learning, the types and quantities of terpenes, cannabinoids, and other phytochemicals can be determined with a high degree of accuracy.
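As a toy illustration of that final step, a nearest-neighbor lookup against a lab-confirmed reference database can map extracted sensor features to a phytochemical composition. The feature encoding, reference entries, and 1-NN choice are all hypothetical stand-ins for the machine learning pipeline described.

```python
# Hypothetical sketch: 1-nearest-neighbor prediction of phytochemical
# composition from extracted optical/chemo-sensor features. A production
# system would use a trained model; this only illustrates the mapping.

def predict_composition(features, reference_db):
    """Return the composition of the closest reference feature vector."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(reference_db, key=lambda f: sq_dist(f, features))
    return reference_db[nearest]

# Illustrative reference entries: (hue score, scent intensity) -> composition.
reference = {
    (0.8, 0.3): {"THC": 0.18, "CBD": 0.01},  # purple-hued cultivar
    (0.1, 0.7): {"THC": 0.05, "CBD": 0.12},  # white-hued cultivar
}
```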
[00109] Industrial cannabis production has focused on the yield and sale of whole inflorescence, often referred to as “flower” or “buds” by industry professionals. However, in an alternative approach, the inflorescence can be transformed by extracting the constituent phytochemicals, for example through extraction of terpenes and cannabinoids with solvents and distillation. These distillates can then be sold individually or recombined and sold. Each of these distillates is distinctive, having different utility, organoleptic properties, and applications, and the distillates are present in different concentrations within the inflorescence. Thus, with differing levels of supply and demand for the distillates, the prices of the distillates vary considerably. Further, both the relative supply and the relative demand for the distillates, and thus the relative pricing of the distillates, vary over time.
[00110] Within a given cannabis cultivation and production facility, multiple cultivars are commonly grown. The inflorescence and resins from each cultivar have a tendency, ceteris paribus, to contain a distinctive set of phytochemicals, including, but not limited to, terpenes and cannabinoids. Further, the harvest timing and degree of maturity within the phenological state of the inflorescence and resins of a given cultivar can materially change the types and quantities of phytochemicals.
[00111] In some embodiments, the system 400 uses electromagnetic radiation, optical sensors, or optical sensors and chemo-sensors, to create time series records of each plant’s inflorescence and resins, as well as the standard set of agronomic sensors 402 tracking each
plant’s growth, and stores data on each plant’s growth in a database that associates the plant with its cultivar identity. Further, the system 400 contains a reference database with phytochemical compositions confirmed with additional high-resolution chemical testing. After extracting the relevant features from each sensor data type, the system 400 applies machine learning to determine the current and future phytochemical composition of each plant available for extraction and distillation. The system then aggregates the levels of distillates across all of the plants in production.
[00112] In some embodiments, the system 400 also calculates the potential range in the levels of each distillate, depending upon harvest time and other factors. In some embodiments, the system performs optimization of harvest time and other factors for plants and groups of plants, for example, factoring in market pricing or projected pricing of individual distillates, to optimize production profits. The system can optimize the production of distillates for any number of factors, or combinations of factors, including but not limited to the assurance of yield of a level of a given distillate or distillates and maximal profitability from the remaining distillate yield.
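A minimal sketch of the harvest-time optimization follows, assuming the upstream models supply predicted distillate yields per candidate harvest time together with per-gram market prices. All names and data values here are illustrative.

```python
# Hypothetical sketch: pick the harvest time that maximizes projected
# distillate revenue, given predicted yields and market prices.

def best_harvest_time(yield_by_time, prices_per_gram):
    """Return the harvest time with the highest projected revenue."""
    def revenue(t):
        return sum(grams * prices_per_gram[name]
                   for name, grams in yield_by_time[t].items())
    return max(yield_by_time, key=revenue)

# Illustrative predicted yields (grams per plant) and prices (per gram).
yields = {
    "week 8": {"THC": 10.0, "CBD": 2.0},
    "week 9": {"THC": 8.0, "CBD": 6.0},  # composition shifts as plants mature
}
prices = {"THC": 5.0, "CBD": 4.0}
```

Constraints such as assured base yields of a given distillate could be added by filtering the candidate times before taking the maximum.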
[00113] In addition to enhancing predictability and profitability in the yield of distillates, the system would also reduce the need for labor and increase standardization and quality control across multiple facilities. Currently, cannabis production companies rely on lead cultivation personnel to observe plant growth and subjectively determine when to commence harvesting. The subordinate cultivation personnel then begin the harvest. If a given cannabis production company were to open a second cultivation facility in an area far away, the lead cultivator would need to travel to the new facility and begin the process of observation and assessment, ideally training the staff before returning to oversee the original facility. In the preferred embodiment, the system would be able to provide cultivation and harvest decision support across 10,000 facilities as readily as it could across one. In another embodiment, the system 400 would optimize the production facility for distillates, and robotic harvesters would harvest the plants. In another embodiment, the system would optimize the yield for a combination of distillates and “flower.” In another embodiment, the system would incorporate additional exogenous supply and demand factors, such as satellite data of outdoor cannabis production, to determine the optimal cultivation and harvest path.
[00114] In another embodiment, the system 400 would make recommendations for the planting or removal of specific cultivars, in order to optimize the production facility or groups of production facilities (new or existing). For example, the system could recommend the planting or removal of sets of cultivars in certain numbers and/or ratios to optimize a given
facility for maximal flexibility with respect to changes, either real or projected or potential, in distillate pricing while maintaining maximal profitability of distillate sales. Similarly, the system could recommend the planting or removal of sets of cultivars in certain numbers and/or ratios to optimize a given facility for maximal modelled potential with respect to changes, either real or projected or potential, in distillate pricing while maintaining maximal profitability of distillate sales. The system 400 would also be able to make the above recommendations with constraints, such as, but not limited to, base production levels of given distillates, base production levels of flower, combinations of base levels of flower and given distillates, availability of cultivars, input costs, timing, and legal and geopolitical constraints. In an alternative embodiment, the system 400 could optimize the production of phytochemicals, such as, but not limited to, cannabinoids and terpenes, that remain within the inflorescence and resin and are sold as flower instead of being extracted as distillate.
[00115] These planting recommendations can be tailored to a new production facility in isolation; to a facility, or set of facilities, undergoing expansion or contraction; or to incrementally added facilities. In addition, the system could make these recommendations with respect to the replacement of existing plants that are aging or maturing, performing sub-optimally, or likely to perform sub-optimally along a given dimension of performance. The system could also tailor the recommendations to global, local, or the optimized intersection of global and local supply and demand dynamics.
[00116] In addition to the optimization of production of individual distillates for economic profitability as described above, the system could optimize the production of distillates in a conjoined manner to yield optimal production of any number of conjoined units defined along any dimension. For example, groups of cannabinoids and terpenes in certain ratios could be defined by units of affective and/or medical response by consumers, which the system could optimize.
[00117] While this disclosure is susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. It should be understood, however, that the present disclosure is not intended to be limited to the particular forms disclosed. Rather, the present disclosure is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the disclosure.
[00118] In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word “comprising” does not exclude the presence of elements or steps other than those listed in a claim. The word “a” or “an” preceding an element does not exclude the presence of a plurality of such elements.
[00119] The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Claims
1. A human-machine interface system, comprising:
a display device;
a camera configured to capture an image;
a communication module configured to transmit and receive data between a first remote user device and a second remote user device; and
one or more storage mediums having stored thereon, individually or in combination, code sections for causing a computing device to, when executed by one or more processors:
detect a face in the image captured by the camera;
extract a plurality of physical features from the face detected in the image;
based at least in part on the extracted physical features, generate a first avatar;
determine an affective state of the face captured by the camera;
based at least in part on the determined affective state, generate a set of micro-expression avatar parameters;
convert the first avatar to a second avatar representative of the generated set of micro-expression avatar parameters; and
display the second avatar on the display device.
2. A computer-readable medium having stored thereon a computer program executable by a computing device to process facial landmark data, the computer program comprising code sections for causing the computing device to:
receive, from a camera, images;
detect a face of a user in an image captured by the camera;
extract a plurality of physical features from the face detected in the image;
determine an affective state of the user;
based at least in part on the plurality of physical features, generate an avatar representative of the determined affective state of the user; and
cause a remote device to display the avatar.
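The detect → extract → classify → avatar pipeline of claims 1 and 2 can be sketched with toy stand-ins. The landmark keys, thresholds, and rule-based classifier below are assumptions for illustration; the claims do not specify them, and a real system would use a trained model rather than rules.

```python
def extract_features(landmarks):
    """Derive simple geometric features from (x, y) landmark points.
    The landmark names used here are illustrative only."""
    mouth_w = landmarks["mouth_right"][0] - landmarks["mouth_left"][0]
    mouth_open = landmarks["mouth_bottom"][1] - landmarks["mouth_top"][1]
    brow_raise = landmarks["eye_top"][1] - landmarks["brow"][1]
    return {"mouth_w": mouth_w, "mouth_open": mouth_open,
            "brow_raise": brow_raise}

def affective_state(features):
    """Toy rule-based classifier standing in for a trained model."""
    if features["mouth_open"] > 10 and features["brow_raise"] > 8:
        return "surprised"
    if features["mouth_w"] > 40:
        return "happy"
    return "neutral"

def avatar_parameters(state):
    """Map an affective state to micro-expression avatar parameters
    (blend-shape weights in [0, 1]); values are invented."""
    return {
        "happy":     {"smile": 0.9, "brow": 0.2, "jaw": 0.1},
        "surprised": {"smile": 0.2, "brow": 0.9, "jaw": 0.8},
        "neutral":   {"smile": 0.1, "brow": 0.1, "jaw": 0.0},
    }[state]
```

The second avatar of claim 1 would be rendered by applying these parameters to the first avatar's mesh or sprite.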
3. A hemp and cannabis production system, comprising:
a hemp and cannabis plant;
an agronomic sensor for monitoring the plant, the agronomic sensor configured to generate agronomic data associated with the plant;
one or more processors configured to analyze the generated agronomic data, to determine a chemical composition of the plant; and
a communication module coupled to the one or more processors and configured to transmit at least a portion of the determined chemical composition of the plant.
4. The hemp and cannabis production system of claim 3, wherein the generated agronomic data include grow light data, nutrient data, watering data, ambient air data, optical data, electric nose chemo data, or any combination thereof.
5. The hemp and cannabis production system of claim 3, further comprising one or more memory devices coupled to the communication module and configured to store the received chemical composition of the plant.
6. The hemp and cannabis production system of claim 3, further comprising a computerized machine learning system configured to compare the generated agronomic data of the plant to a reference database associated with different cultivar identities, thereby determining a current and a future phytochemical composition of the plant.
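The comparison against a reference database of cultivar identities in claim 6 can be illustrated with a nearest-neighbor lookup over agronomic feature vectors. The cultivar names, feature dimensions, and values below are invented; the claimed system would use a learned model over far richer data.

```python
import math

# Hypothetical reference database: cultivar id -> agronomic feature vector
# (e.g. normalized optical reflectance, e-nose reading, nutrient uptake).
REFERENCE = {
    "cultivar_x": [0.8, 0.1, 0.5],
    "cultivar_y": [0.2, 0.9, 0.4],
}

def match_cultivar(agronomic_vector):
    """Nearest-neighbor match standing in for the machine-learning
    comparison against the reference database."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(REFERENCE, key=lambda k: dist(REFERENCE[k], agronomic_vector))
```

The matched cultivar identity then indexes known current and projected phytochemical profiles.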
7. A system for evaluating compound intoxication and affective response in a user, the system including a processor and a non-transitory computer readable medium storing instructions thereon such that executing the instructions causes the system to perform the steps comprising:
receiving, from a set of sensors, physiological measures describing a first state of the user;
monitoring the physiological measures to determine a baseline associated with the user;
receiving, from the set of sensors, physiological measures describing a second state of the user,
wherein the second state of the user is a state after a dosage of the compound is consumed by the user and the first state is a state before the dosage of the compound is consumed by the user; and
determining features of the affective response in the user based on the baseline associated with the user, the physiological measures describing the first state, and the physiological measures describing the second state.
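The baseline-and-comparison structure of claim 7 can be sketched as follows: summarize the pre-dose samples as a per-user baseline, then express post-dose samples relative to it. Using mean/standard deviation and z-scores is an assumption for illustration, not the claimed method.

```python
from statistics import mean, stdev

def baseline(samples):
    """Baseline as mean and spread of pre-dose physiological samples
    (e.g. heart rate in bpm) for one user."""
    return {"mean": mean(samples), "sd": stdev(samples)}

def response_features(base, post_samples):
    """Express post-dose samples as z-scores against the user's own
    baseline, a simple stand-in for the claimed feature determination."""
    return [(s - base["mean"]) / base["sd"] for s in post_samples]
```

Normalizing against each user's own baseline is what lets the same absolute reading mean different things for different users.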
8. The system according to claim 7, wherein executing the instructions causes the system to further perform the steps comprising:
receiving, from the set of sensors, successive physiological measures during the second state of the user; and
determining, from the successive physiological measures, a trend in the affective response in the user.
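The trend determination of claim 8 could be as simple as a least-squares slope over the successive measures; this choice of estimator is an assumption for illustration.

```python
def trend(samples):
    """Least-squares slope over equally spaced successive samples;
    positive means the measured response is still rising, negative
    that it is subsiding."""
    n = len(samples)
    mx = (n - 1) / 2                      # mean of sample indices 0..n-1
    my = sum(samples) / n
    num = sum((x - mx) * (y - my) for x, y in zip(range(n), samples))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den
```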
9. The system according to claim 7, wherein executing the instructions causes the system to further perform the steps comprising:
receiving, from a second set of sensors, physiological measures describing a first state of a second user;
monitoring the physiological measures describing the first state of the second user to determine a baseline associated with the second user; and
receiving, from the second set of sensors, physiological measures describing a second state of the second user,
wherein the second state of the second user is a state after the dosage of the compound is consumed by the second user and the first state of the second user is a state before the dosage of the compound is consumed by the second user,
wherein the features of the affective response in the user is further determined based on the baseline associated with the second user and the physiological measures describing the second state of the second user.
10. The system according to claim 7, wherein executing the instructions causes the system to further perform the steps comprising:
receiving, from the set of sensors, dosage information including historical consumption of the compound, a concentration of the compound, or chemical composition of the compound; and
determining the dosage of the compound and the first and second states based on the received dosage information.
11. The system according to claim 7, wherein executing the instructions causes the system to further perform the steps comprising:
receiving photographic data from a camera in a second set of sensors;
receiving physiological measures from the second set of sensors;
determining whether the photographic data pertains to the user or to a second user; and
in response to the photographic data pertaining to the user,
associating the physiological measures from the second set of sensors with a profile of the user, wherein the profile of the user includes the determined features of the affective response of the user, and determining the features of the affective response in the user based on the profile of the user and the physiological measures from the second set of sensors.
12. The system according to claim 7, wherein the physiological measures comprise one or more selected from the group consisting of: electrocardiogram (ECG), photoplethysmography (PPG),
electromyography (EMG), electrodermal activity (EDA), heart-rate variability (HRV), pupillometry, facial landmark data, and facial texture data.
13. The system according to claim 7, wherein the compound comprises one or more selected from the group consisting of: cannabinoids, terpenes, coffee, tetrahydrocannabinol (THC), and cannabidiol (CBD).
14. The system according to claim 7, wherein executing the instructions causes the system to further perform the steps comprising:
determining a recommended dosage of the compound based on the baseline associated with the user; and
sending, to a mobile device of the user, the recommended dosage.
15. The system according to claim 14, wherein the recommended dosage is further determined based on a location of the mobile device of the user, a time of day, a calendar event on the mobile device of the user, and historical consumption of the compound.
16. The system according to claim 14, wherein the recommended dosage is further determined based on an amount of time within which the user should achieve a given level of sobriety.
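One way to relate a recommended dose to a required time-to-sobriety, as in claim 16, is a first-order (exponential) elimination model. This model, and the half-life and threshold values, are illustrative assumptions only, not the claimed method or pharmacological advice.

```python
import math

def max_dose(hours_to_sobriety, half_life_h=2.0, sober_threshold=1.0):
    """Largest dose (in the threshold's units) whose modelled residual
    level decays below the sobriety threshold within the requested time,
    assuming first-order elimination: c(t) = dose * exp(-k * t)."""
    k = math.log(2) / half_life_h          # elimination rate constant
    return sober_threshold * math.exp(k * hours_to_sobriety)
```

Doubling the available time doubles the permissible dose for each elapsed half-life, which is the characteristic behavior of first-order kinetics.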
17. A system for de-virtualizing a social network, the system comprising:
a first vaporizer configured to:
deliver a compound via a vapor to a user,
determine a physical location of the first vaporizer,
determine locations of a group of vaporizers, and
send a meetup signal for initiating a convergence of a subset of vaporizers in the group of vaporizers and the first vaporizer; and a mobile device configured to:
receive the meetup signal, and
broadcast a meetup message to the group of vaporizers.
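The location-based convergence of claim 17 can be illustrated with a haversine distance and a radius filter for selecting the subset of devices to invite. The radius criterion and device identifiers are assumptions for the example; the claim does not specify how the subset is chosen.

```python
import math

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))  # Earth radius ~6371 km

def meetup_subset(origin, others, radius_km):
    """Select the devices close enough to converge with the originating
    device; `others` maps device id -> (lat, lon)."""
    return [d for d, pos in others.items()
            if haversine_km(origin, pos) <= radius_km]
```

The meetup message of claim 18 would then carry directions from each selected device's position toward `origin`.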
18. The system according to claim 17, wherein:
the first vaporizer determines the physical location of the first vaporizer via assisted GPS, geospatially placing the first vaporizer in a mapped physical region; and
the meetup message includes directions in the mapped physical region for each vaporizer in the group of vaporizers to converge with the first vaporizer.
19. The system according to claim 17, wherein the locations of the group of vaporizers are determined based on short-range communication protocols.
20. The system according to claim 17, wherein the first vaporizer comprises a tensor processing unit (TPU) for determining the locations of the group of vaporizers.
21. The system according to claim 17, further comprising:
a remote server configured to determine the meetup message broadcasted to the group of vaporizers and track the change in the physical location of the first vaporizer or the change in the locations of the group of vaporizers.
22. The system according to claim 21, wherein the first vaporizer provides haptic or visual feedback as a function of a distance between a location of the subset of vaporizers and the physical location of the first vaporizer.
23. The system according to claim 21, wherein the remote server is further configured to authenticate each vaporizer in the group of vaporizers and only track the change in locations for authenticated vaporizers in the group of vaporizers.
24. The system according to claim 17, wherein the first vaporizer is further configured to track a total amount of the compound delivered during a timeframe and dynamically adjust a next dosage or a composition of the compound.
25. The system according to claim 17, wherein the compound comprises THC and CBD, and the first vaporizer is further configured to dynamically adjust a ratio of THC to CBD in the compound based on the physical location of the first vaporizer.
26. The system according to claim 17, wherein the first vaporizer comprises one or more sensors including a camera, EDA sensors, and PPG sensors.
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201862760773P | 2018-11-13 | 2018-11-13 | |
US201862760731P | 2018-11-13 | 2018-11-13 | |
US62/760,773 | 2018-11-13 | ||
US62/760,731 | 2018-11-13 | ||
US201962819294P | 2019-03-15 | 2019-03-15 | |
US62/819,294 | 2019-03-15 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020102459A1 true WO2020102459A1 (en) | 2020-05-22 |
Family
ID=70731173
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2019/061331 WO2020102459A1 (en) | 2018-11-13 | 2019-11-13 | Systems and methods for evaluating affective response in a user via human generated output data |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2020102459A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024189369A1 (en) * | 2023-03-15 | 2024-09-19 | Nicoventures Trading Limited | Mood state |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140267544A1 (en) * | 2013-03-15 | 2014-09-18 | Intel Corporation | Scalable avatar messaging |
KR101743763B1 * | 2015-06-29 | 2017-06-05 | (주)참빛솔루션 | Method for providing smart learning education based on sensitivity avatar emoticon, and smart learning education device for the same |
WO2017152673A1 (en) * | 2016-03-10 | 2017-09-14 | 腾讯科技(深圳)有限公司 | Expression animation generation method and apparatus for human face model |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11743527B2 (en) | System and method for enhancing content using brain-state data | |
US11587272B2 (en) | Intelligent interactive and augmented reality cloud platform | |
US20220084055A1 (en) | Software agents and smart contracts to control disclosure of crowd-based results calculated based on measurements of affective response | |
US11200964B2 (en) | Short imagery task (SIT) research method | |
US20220147535A1 (en) | Software agents facilitating affective computing applications | |
US11494390B2 (en) | Crowd-based scores for hotels from measurements of affective response | |
CN109564706B (en) | User interaction platform based on intelligent interactive augmented reality | |
US9805381B2 (en) | Crowd-based scores for food from measurements of affective response | |
US20210248656A1 (en) | Method and system for an interface for personalization or recommendation of products | |
US10198505B2 (en) | Personalized experience scores based on measurements of affective response | |
US10365716B2 (en) | Wearable computing apparatus and method | |
US20230034337A1 (en) | Animal data prediction system | |
US20160224803A1 (en) | Privacy-guided disclosure of crowd-based scores computed based on measurements of affective response | |
US11483618B2 (en) | Methods and systems for improving user experience | |
US20180115802A1 (en) | Methods and systems for generating media viewing behavioral data | |
KR20190020513A (en) | Pet care method and system using the same | |
US20180109828A1 (en) | Methods and systems for media experience data exchange | |
CA3189350A1 (en) | Method and system for an interface for personalization or recommendation of products | |
WO2020102459A1 (en) | Systems and methods for evaluating affective response in a user via human generated output data | |
WO2022181080A1 (en) | Tendency determination device, display device with reflective body, tendency display system device, tendency determination method, display processing method, tendency display method, program, and recording medium | |
KR20240085646A (en) | Apparatus and method for providing customized content |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19885127 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 11/10/2021) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19885127 Country of ref document: EP Kind code of ref document: A1 |