US20210398412A9 - Multi-sensor device for environment state estimation and prediction by sampling its own sensors and other devices - Google Patents
- Publication number: US20210398412A9 (application US 16/719,828)
- Authority: US (United States)
- Prior art keywords: primary device, devices, environment, agent, sensors
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G08B21/0453—Sensor means for detecting worn on the body to detect health condition by physiological monitoring, e.g. electrocardiogram, temperature, breathing
- G08G1/0112—Measuring and analyzing of parameters relative to traffic conditions based on the source of data from the vehicle, e.g. floating car data [FCD]
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G06N20/00—Machine learning
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
- G06N5/01—Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
- G06N5/022—Knowledge engineering; Knowledge acquisition
- G06N5/045—Explanation of inference; Explainable artificial intelligence [XAI]; Interpretable artificial intelligence
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
- G08B21/0469—Presence detectors to detect unsafe condition, e.g. infrared sensor, microphone
- G08B21/0476—Cameras to detect unsafe condition, e.g. video cameras
- G08G1/0116—Measuring and analyzing of parameters relative to traffic conditions based on the source of data from roadside infrastructure, e.g. beacons
- G08G1/0129—Traffic data processing for creating historical data or processing based on historical data
- G08G1/0133—Traffic data processing for classifying traffic situation
Definitions
- This invention relates to a device, system and related methods for estimating, predicting and determining the state of an environment. More particularly, this invention relates to a multi-sensor device for estimating, predicting and determining the state of an environment through sampling its own sensors and other devices.
- A myriad of static and dynamic sensors are used to monitor the environment. Each sensor sends its sensed signals/data to a central entity, where algorithms and humans process all the data to infer the state of the environment and decide on actions.
- An example of this central decision-maker system is Google Maps traffic state estimation. It uses a central decision-maker (server) that receives data from three kinds of sources: static sensors in the streets; GPS data from Android smartphones; and crowdsourcing data through the Waze mobile device application.
- The estimated traffic state information is sent back to individual Google Maps users by coloring the roads in the map display (red, yellow, green, and the like) and notifying users of delays.
- This centralized decision-maker has a presumably complete view of the space spanned by the sensors, hence it can predict the state at any location and time.
- Drawbacks include the consumption of significant network resources due to the transfer of large volumes of data, and the fact that the central decision-maker acts as a bottleneck for processing this data (which is received frequently, around the clock). For example, more than 50% of a typical university's internet bandwidth is used only for transferring video data from several hundred surveillance cameras on a main campus to the central server, which is accessible to law enforcement.
- Moreover, extreme computational resources are required to process the video feed from this number of cameras and make decisions in near real-time.
- The present invention comprises a system with multiple agents or sensors (including, but not limited to, devices, appliances, or other “things” that can provide data relevant to environmental state), where each agent or sensor estimates and predicts the state of its environment by, among other things, communicating with other agents or sensors in the system.
- Embodiments of the present invention determine what, when, how, and with what/whom to communicate, which allows predictive, proactive action before any unintended situation occurs in the environment.
- Each sensor or agent in the system is, and is modeled as, an autonomous agent with the ability to (1) sense its environment, (2) infer the causes of the sensed data (a.k.a. “explanation”), (3) perform at least two kinds of actions: selectively sampling the environment and communicating with other agents, and (4) learn from the data and its explanation.
- FIG. 1 shows a diagram of an agent's general interactions with its environment.
- FIG. 2 shows a diagram of an exemplary architecture of an agent observing its environment using one sensory modality.
- FIG. 3 shows a diagram of an exemplary architecture of an agent observing its environment using multiple sensory modalities.
- FIG. 4 shows a diagram of the SELP (Surprise-Explain-Learn-Predict) predictive operation of an agent for input data varying in space and time.
- FIG. 5 shows a diagram of the Explanation cycle of FIG. 4 .
- An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
- A percept is the agent's perceptual inputs at any given instant.
- An agent's percept sequence is the complete history of everything the agent has ever perceived.
- An agent function maps any given percept sequence to an action, thereby mathematically describing the agent's behavior.
- The agent function for an artificial agent is internally implemented by an agent program.
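The percept-sequence-to-action mapping can be sketched in a few lines. The thermostat agent below is a hypothetical illustration (its names and threshold are assumptions, echoing the thermostat example later in this document):

```python
def thermostat_agent(percept_sequence):
    """Agent function: map the complete percept sequence (temperature
    readings over time) to an action."""
    latest = percept_sequence[-1]
    baseline = sum(percept_sequence) / len(percept_sequence)
    if latest > baseline + 10:        # abnormal rise in temperature
        return "alert_fire_department"
    return "no_op"

percepts = [20.0, 21.0, 20.5, 45.0]   # degrees Celsius, oldest to newest
action = thermostat_agent(percepts)   # the final spike triggers an alert
```

The key point of the abstraction is that the action may depend on the entire history, not just the latest percept.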
- Predicting the state of an agent's (or sensor's) partially-observable environment is a problem of interest in many domains.
- A real-world environment comprises multiple agents, not necessarily working towards a common goal. Though the goal and sensory observations of each agent are unique, one agent might have acquired knowledge that may benefit another. In essence, the knowledge base regarding the environment is distributed among the agents.
- An agent can sample this distributed knowledge base by communicating with other agents. Since an agent is not storing the entire knowledge base, its model can be small and its inference can be efficient and fault-tolerant. However, the agent needs to learn: (1) when, with whom and what to communicate in different situations, and (2) how to use its own body to accomplish such communication.
- Sensors or agents with these capabilities may be realized by embedding or incorporating in a sensor a microprocessor or processor with WiFi and/or Bluetooth (or other near-field or wireless communications) capability, and installing and operating a system agent program on the microprocessor or processor.
- An agent 10 may be implemented as a device with multiple sensors 12 and actuators 14 .
- Each agent may have a unique set of sensors and/or actuators, a unique environment, and a unique goal or goals.
- The agent's “body” refers to the parameters for controlling the sensors and actuators.
- The present invention addresses how an agent may optimally communicate with other agents to predict the state of its environment, and how the agent learns and executes communication and other policies in a localized manner (i.e., the agent communicates neither with a central or global controller or decision-maker, nor with all other agents, all of the time).
- A “policy” of an agent is a function or mapping from states to actions.
- A particular agent 10 (referred to as the primary agent) has an environment 20 that includes its own body and other agents (which may be some or all other agents).
- The primary agent learns how to use its body and other agents to reach its goal state.
- The primary agent is predictive in nature: it has expectations about its sensory observations generated from the environment. Since the environment includes its own body, the agent has expectations 22 regarding how different parts of its body (e.g., actuators) will activate. These activations are fed back to the agent via proprioception 24 . Since the environment includes other agents, the agent has expectations regarding how the other agents will behave or act when presented with an observation. An observation can be generated by the natural environment or by one or more agents in the environment. The other agents' behaviors are fed back as observations to the agent via perception.
- FIG. 2 shows an exemplary embodiment of the architecture of an agent that observes its environment using only one sensory modality (in this case, visual).
- The sensory modality comprises a perceptual 32 and a proprioceptive 34 pathway implementing a perception-action loop and an action-perception loop.
- The perceptual 42 and proprioceptive 44 patterns are completed after each observation.
- The perceptual prediction error provides the observation for proprioception, thereby allowing the agent to learn a policy without any reinforcement (reward/punishment) signal.
- The problem of pattern completion for this predictive agent is defined as follows: at any time t, compute the probability distribution p(e_{t+1}, e_{t+2}, …, e_T | e_1, e_2, …, e_t) over the remainder of the observation sequence, given the observations received up to time t.
- The objective function optimized by this agent is its sensory prediction error.
- The objective can be stated in many ways. In non-probabilistic form, the objective is to minimize Σ_t ( ||e_{t+1} − ê_{t+1}||² + λ g(a_{t+1}) ), where:
- ||·||² denotes the squared l2 norm;
- ê_{t+1} = f(e_1, e_2, …, e_t) is the predicted observation for time t+1;
- f is the prediction or pattern completion function, with parameters θ and latent (or hidden) variables h;
- a_{t+1} is the representation of e_{t+1} in terms of the latent variables h;
- g is a regularization function that imposes a sparsity constraint on a for better generalization (i.e., g penalizes model complexity such that less complex models are preferred over more complex ones); and
- λ is used to adjust the relative importance of the prediction error and the regularizer.
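As a toy sketch of this objective: scalar observations, a hypothetical "last value" prediction function f, and an l1 sparsity penalty for g are all illustrative assumptions, not the patent's choices:

```python
def objective(e, predict, a, lam=0.1):
    """sum_t (e_{t+1} - e_hat_{t+1})^2 + lam * g(a), where g(a) = sum|a_i|
    is an l1 sparsity penalty on the latent representation a."""
    err = sum((e[t] - predict(e[:t])) ** 2 for t in range(1, len(e)))
    return err + lam * sum(abs(x) for x in a)

# Illustrative prediction function f: predict the next observation
# as the most recent one seen (a "last value" predictor).
f = lambda history: history[-1]

e = [1.0, 1.0, 2.0]        # toy scalar observation sequence
a = [0.5, 0.0, 0.0]        # toy sparse latent representation
loss = objective(e, f, a)  # (1-1)^2 + (2-1)^2 + 0.1*0.5 = 1.05
```

Any differentiable predictor f could replace the last-value rule; the trade-off between fit and sparsity is controlled by lam (λ).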
- In probabilistic form, the goal is to learn a model distribution p_model that approximates the true, but unknown, data distribution p_data.
- A widely-used objective is to maximize the log-likelihood, log p_model(e), which corresponds to minimizing a divergence D(p_data ‖ p_model) between the data and model distributions, such as the Jensen-Shannon (JS) divergence.
- The prediction/pattern completion can be accomplished recursively as follows: at each time t, compute the one-step distribution p(e_{t+1} | e_1, e_2, …, e_t), assuming a specific parametric form for the distribution (e.g., Gaussian).
- This has two benefits: (1) variable-length sequences can be handled (there is no need to assume a maximum length of the sequences); and (2) the prediction function f will be simpler because it is always predicting the distribution for only one time instant in the future.
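A minimal sketch of this one-step scheme, assuming the Gaussian's mean and variance are simply fitted to the observation history (an illustrative estimator, not the patent's):

```python
import math

def one_step_gaussian(history):
    """Predict p(e_{t+1} | e_1..e_t) as a Gaussian fitted to the history.
    Any sequence length works -- no maximum length is assumed."""
    n = len(history)
    mean = sum(history) / n
    var = sum((x - mean) ** 2 for x in history) / n or 1e-6  # guard zero variance
    return mean, var

def surprise(history, observed):
    """Negative log-likelihood of a new observation under the prediction;
    a large value signals a large prediction error (a 'surprise')."""
    mean, var = one_step_gaussian(history)
    return 0.5 * math.log(2 * math.pi * var) + (observed - mean) ** 2 / (2 * var)

temps = [20.0, 20.5, 19.5, 20.0]          # observation history e_1..e_t
s_expected = surprise(temps, 20.0)        # near the prediction: low surprise
s_abnormal = surprise(temps, 45.0)        # abnormal rise: high surprise
```

Because only one future instant is ever predicted, the same two functions serve sequences of any length.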
- FIG. 3 shows an exemplary embodiment of an agent that observes its environment using multiple (n) sensory modalities.
- Err indicates the error calculation module (see FIG. 2 ).
- This architecture is a straightforward generalization of the agent architecture from FIG. 2 to multiple modalities, so all the unique properties of the agent architecture in FIG. 2 are present here as well.
- Each sensory modality comprises a perceptual and a proprioceptive pathway implementing perception-action and action-perception loops.
- The agent completes the perceptual and proprioceptive patterns jointly in all modalities after each observation.
- The errors from all modalities are jointly minimized via learning.
- The objective of this multimodal architecture is to jointly minimize the prediction error from all modalities.
- Each modality has its own set of latent variables.
- Let h_i be the set of latent variables for the i-th modality.
- The problem of jointly completing the pattern in n modalities requires learning the joint distribution p(h_1, h_2, …, h_n), which is an intractable problem.
- A number of approximations have been used in the literature, such as factorization and assuming specific classes of distributions (e.g., Gaussian). Such approximations can be used here as long as they are consistent with the data.
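For instance, a fully factorized approximation treats the modalities' latent variables as independent, replacing the intractable joint with a product of per-modality marginals; the discrete sketch below is illustrative:

```python
import math

def factorized_joint(marginals, assignment):
    """Probability of a joint latent assignment under the factorized
    approximation p(h_1, ..., h_n) ~ prod_i q_i(h_i)."""
    return math.prod(q[h] for q, h in zip(marginals, assignment))

q1 = [0.7, 0.3]          # modality 1: distribution over two latent states
q2 = [0.2, 0.5, 0.3]     # modality 2: distribution over three latent states
p = factorized_joint([q1, q2], (0, 1))   # 0.7 * 0.5 = 0.35
```

The factorization trades away cross-modality correlations for tractability, which is acceptable only when it is consistent with the data, as the text notes.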
- The agent is predictive, i.e., its goal is to learn an internal model of its environment such that it can accurately predict the environment at any time and location. Making inferences (predictive and causal), acting, and learning are achieved by minimizing prediction errors.
- This can be conceptually understood as the SELP cycle whereby an agent interacts with its environment by relentlessly executing four functions cyclically: Surprise 110 (observe and compute prediction errors), Explain 120 (infer causes of surprise), Learn 130 (update internal model using the surprise and inferred causes), and Predict 140 (predict the next observation or expected input) (see FIG. 4 ).
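One SELP iteration can be sketched as a loop over observations; the scalar model and its update rule below are illustrative assumptions, not the patent's implementation:

```python
def selp_step(model, observation, lr=0.5):
    """One Surprise-Explain-Learn-Predict iteration over a scalar stream.
    'model' holds the agent's current prediction of the next observation."""
    prediction = model["next"]                 # Predict (carried from last cycle)
    surprise = observation - prediction        # Surprise: the prediction error
    # Explain: attribute the surprise to an external cause or to noise.
    cause = "external" if abs(surprise) > model["tol"] else "noise"
    if cause == "external":                    # Learn: update the internal model
        model["next"] = prediction + lr * surprise
    return surprise, cause

model = {"next": 20.0, "tol": 1.0}
for obs in [20.2, 20.1, 30.0]:                 # the final observation is surprising
    surprise, cause = selp_step(model, obs)
```

Small surprises are explained away as noise; a large surprise is attributed to an external cause and drives learning, after which the updated model supplies the next prediction.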
- Each agent learns a causal model of its environment and of the interaction of its neighboring agents with the environment. This causal model allows it to predict the environment and the behavior of other agents within its field of view.
- To explain a surprise, the agent initiates a communication.
- The agent will communicate with the other agent that generated the highest prediction error. This is a greedy approach to minimizing total prediction error.
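This greedy partner selection amounts to an argmax over per-neighbor prediction errors; a minimal sketch (the agent names are hypothetical):

```python
def choose_partner(prediction_errors):
    """Greedy rule: communicate with the neighbor whose behavior generated
    the highest prediction error from the primary agent's perspective."""
    return max(prediction_errors, key=prediction_errors.get)

# Primary agent's current prediction error for each neighboring agent:
errors = {"agent_A": 0.12, "agent_B": 0.87, "agent_C": 0.34}
partner = choose_partner(errors)   # "agent_B": the most surprising neighbor
```

Greedily targeting the largest error gives the biggest single-message reduction in total prediction error, at the cost of ignoring combinations of smaller errors.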
- An agent initiates communication to minimize its own surprise and to maximize the other agent's surprise (otherwise the other agent might not respond).
- An agent communicates to the other agent the part of its internal model that is related to, but maximally different from, the content of the incoming communication; i.e., it points out where the other agent's prediction is most incorrect instead of explaining everything. This is a greedy approach to minimizing total message size and hence communication bandwidth.
- The internal model is generative. It is implemented as a probabilistic graphical model that represents the joint distribution of the observable and hidden variables. At any time, the values of the observable variables constitute the data or observation. During communication, a partial observation (i.e., values of a subset of observable variables) is passed from one agent to another. The receiving agent then has to figure out how this observation can be generated using its own internal model without creating a conflict with any prior observations. If it can, it updates its internal model (a.k.a. learning).
- FIG. 4 shows an exemplary embodiment of the SELP predictive operation of the agent for input data 100 varying in space and time. It assumes that real-world data generally is observed in parts and seldom in its entirety. Data varying only in space (e.g., an image) is observed using a sequence of glimpses; hence the observations arrive in a sequence in the same way as time-varying data. Thus, this prediction cycle applies both to data that varies with time and to data that does not.
- Pattern completion has been formulated as an optimization problem.
- The goal of pattern completion is to compute the probability distribution defined above.
- The prediction cycle searches the space of observable and latent variables efficiently.
- Any relatively complex real-world observation is composed of simpler and smaller observations, each of which varies in space and/or time.
- Many of the smaller observations need to be inferred first.
- The time-varying observations are inferred by prediction, while the stationary observations are inferred by explanation.
- Prediction is useful to infer invariance to different transformations.
- One embodiment employs a multilayered neural network model with neuronal receptive fields (RFs) such that observations over space and time are stationary to neurons in a layer while the same observations are non-stationary (i.e., vary over space and time) to neurons in its lower layer. Since efficiency is key, lower layers are recruited opportunistically to infer smaller observations by explanation and/or prediction.
- The RFs of neurons in lower layers are such that the same observations are non-stationary, i.e., they vary over space and time. It is the task of these lower layers to explain and predict the objects and actions such that the neurons in a higher layer L_i can make their inference in the most efficient manner.
- For example, it is important for L_i to have the information of whether dancing is one of the actions in the observed environment, since dancing is a discriminative feature between dining and partying.
- L_i dictates a lower layer L_j, j < i, to make that inference. Since efficiency is key, the complex sublayer in L_j, along with lateral connections in its simple sublayer, will predict every instance of a person's movement only until the action is inferred, at which point it is reported to L_i. This operation requires L_i's explanation cycle to employ L_j's prediction cycle within it; L_j's prediction cycle runs on a faster time scale than L_i's explanation cycle.
- Similarly, L_i's prediction cycle may have to employ a lower layer L_k's (k < i) explanation cycle in order to infer a set of light intensities as a human.
- L_k's explanation cycle runs on a faster time scale than L_i's prediction cycle.
- This opportunistic recruitment is referred to as an action, which is not limited to recruiting lower layers but also extends to recruiting sensors (using appropriate actuators) and other agents (via communication).
- Each sensor has a model that is learned by the agent using the objectives discussed above. At any time, the sensor (or modality) that maximizes the information content in the signal is chosen. This is achieved using the agent model described above (see FIGS. 2 and 3), whereby the agent samples the location (in this case, a sensor or modality) generating the highest prediction error.
- Similarly, each of the other agents has a model that is learned by the primary agent using the objectives discussed above. At any time, the other agent that maximizes the information content in the signal is chosen. This is achieved using the agent model described above (see FIGS. 2 and 3), whereby the primary agent samples the location (in this case, another agent) whose behavior for a particular observation is generating the highest prediction error from the perspective of the primary agent.
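Under the same greedy principle, sensor/modality sampling can be sketched as follows; the last-value predictor and the sensor names are illustrative assumptions:

```python
def select_modality(histories, latest):
    """Greedy sensor sampling: choose the modality whose newest reading
    deviates most from a last-value prediction (highest prediction error)."""
    errors = {name: (latest[name] - histories[name][-1]) ** 2
              for name in histories}
    return max(errors, key=errors.get)

histories = {"camera": [0.50, 0.52],      # per-sensor normalized signal history
             "microphone": [0.10, 0.11],
             "gas": [0.01, 0.01]}
latest = {"camera": 0.53, "microphone": 0.70, "gas": 0.01}
chosen = select_modality(histories, latest)   # the microphone is most surprising
```

The same argmax applies whether the candidates are the agent's own sensors or neighboring agents; only the source of the prediction error differs.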
- FIG. 5 shows a detail of the explanation cycle 200 from the SELP cycle of FIG. 4 .
- Communication between an agent A_i and other agents {E, A_2, …, A_n} is shown, where E is the environment (treated as an agent), partially observable to all communicating agents.
- The internal architecture of A_i is shown.
- The dotted/broken lines with arrows indicate directed communication links.
- The black (darker) lines indicate the chosen path for the flow of information, while the grey (lighter) lines are the alternative paths.
- Each agent's knowledge comprises models of all agents, including its own model. For ease of depiction, each agent A_j's model in A_i is shown separately, denoted by A′_j.
- Agents may be represented efficiently using a single model, which can be implemented as a hierarchical probabilistic graphical model. All agents have a similar architecture, except that their sensors for observing E and/or their actuators may be unique. At any time instant, an agent predicts the environment as well as the behavior of all agents within its perceptual field. The prediction error is used to update the belief of the state of the environment and all the agents. Based on this belief, the agent acts by sampling the environment or another agent's internal model to further minimize prediction error. This cycle continues until all surprises (e.g., due to prediction errors) have been explained; then the agent learns.
- The present invention possesses significant advantages over the prior art, including, but not limited to, the following:
- Each sensor in the system is an autonomous agent with the ability to sense, make causal inferences, act, and learn.
- The sensors can independently contact concerned authorities when certain events occur.
- For example, a thermostat sensor/agent can contact the appropriate fire department in the event of a fire (e.g., upon detection of an abnormal rise in temperature) when no one is at home or everyone is asleep.
- Each sensor is predictive, so proactive action can be taken before any unintended situation occurs in the environment.
- The present invention can be used in a wide variety of systems or applications, including but not limited to the following:
- Law enforcement/police departments, e.g., for monitoring an officer's own personal environment for safety when in a potentially unsafe location or situation.
- Safety monitoring of public or private areas (e.g., schools, offices, theaters, airports, shopping malls, or other locations with the possibility of crime, mass shootings, or the like).
- In one embodiment, an individually-worn device comprises a novel “SmartCap”: a cap, hat, or other form of headgear with multiple sensors as described herein for monitoring the individual and the individual's environment for safety or other reasons.
- The SmartCap comprises a processor or microprocessor with Bluetooth, Wi-Fi, and/or cellular network communications, and various sensors (e.g., cameras, microphones, gas sensors, temperature/pressure/humidity sensors, and smart health sensors such as heart rate, body temperature, and blood pressure).
- The SmartCap may also communicate with individual health sensors that are located elsewhere on (or in) the individual (e.g., a band or smartwatch that detects blood pressure and pulse rate, a pacemaker with communications capability, and so on).
- SmartCaps can communicate with each other, as well as with smartphones, mobile computing devices, computer networks, and other computing devices.
- data can be communicated by a SmartCap to appropriate individuals or persons (e.g., security personnel, the user's family members, friends, or designated recipients or contacts).
- a SmartCap may be used for monitoring individuals with mental illness who intermittently manifest aggressive behavior, along with their environment.
- an individual under court-ordered house arrest or limited mobility, and that individual's environment, can be monitored by law enforcement or family members if the individual is required by law to wear a SmartCap (or other wearable device with SmartCap elements).
- because SmartCaps can communicate with each other, they can be used for crowdsourcing the state of traffic (replacing the role of humans in the Waze application) at current locations and times, which will help to predict traffic state and avoid congestion. SmartCaps can also crowdsource other data, such as the price of gas at a location and time (thus replacing the role of humans in GasBuddy.com).
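The crowdsourcing described above reduces to aggregating reports from many devices by location. A hedged sketch follows; the `(segment_id, speed)` message format and all names are assumptions, as the patent does not specify how SmartCaps would encode such reports:

```python
from collections import defaultdict

def aggregate_reports(reports):
    """Average the speed reported by many SmartCaps per road segment.
    `reports` is a list of (segment_id, speed_kmh) tuples -- a hypothetical
    message format for illustration only."""
    totals = defaultdict(lambda: [0.0, 0])   # segment -> [sum of speeds, count]
    for segment, speed in reports:
        totals[segment][0] += speed
        totals[segment][1] += 1
    # Mean reported speed per segment; a low mean suggests congestion.
    return {seg: s / n for seg, (s, n) in totals.items()}

reports = [("I-40-exit12", 30.0), ("I-40-exit12", 20.0), ("US-64-mile5", 90.0)]
print(aggregate_reports(reports))  # → {'I-40-exit12': 25.0, 'US-64-mile5': 90.0}
```

The same aggregation pattern applies to other crowdsourced quantities, such as gas prices keyed by station rather than speeds keyed by road segment.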
- a computing system environment is one example of a suitable computing environment, but is not intended to suggest any limitation as to the scope of use or functionality of the invention.
- a computing environment may contain any one or combination of components discussed below, and may contain additional components, or some of the illustrated components may be absent.
- Various embodiments of the invention are operational with numerous general purpose or special purpose computing systems, environments or configurations.
- Examples of computing systems, environments, or configurations that may be suitable for use with various embodiments of the invention include, but are not limited to, personal computers, laptop computers, computer servers, computer notebooks, hand-held devices, microprocessor-based systems, multiprocessor systems, TV set-top boxes and devices, programmable consumer electronics, cell phones, personal digital assistants (PDAs), tablets, smart phones, touch screen devices, smart TV, internet enabled appliances, internet enabled security systems, internet enabled gaming systems, internet enabled watches; internet enabled cars (or transportation), network PCs, minicomputers, mainframe computers, embedded systems, virtual systems, distributed computing environments, streaming environments, volatile environments, and the like.
- Embodiments of the invention may be implemented in the form of computer-executable instructions, such as program code or program modules, being executed by a computer, virtual computer, or computing device.
- Program code or modules may include programs, objects, components, data elements and structures, routines, subroutines, functions and the like. These are used to perform or implement particular tasks or functions.
- Embodiments of the invention also may be implemented in distributed computing environments. In such environments, tasks are performed by remote processing devices linked via a communications network or other data transmission medium, and data and program code or modules may be located in both local and remote computer storage media including memory storage devices such as, but not limited to, hard drives, solid state drives (SSD), flash drives, USB drives, optical drives, and internet-based storage (e.g., “cloud” storage).
- a computer system comprises multiple client devices in communication with one or more server devices through or over a network, although in some cases no server device is used.
- the network may comprise the Internet, an intranet, Wide Area Network (WAN), or Local Area Network (LAN). It should be noted that many of the methods of the present invention are operable within a single computing device.
- a client device may be any type of processor-based platform that is connected to a network and that interacts with one or more application programs.
- the client devices each comprise a computer-readable medium in the form of volatile and/or nonvolatile memory such as read only memory (ROM) and random access memory (RAM) in communication with a processor.
- the processor executes computer-executable program instructions stored in memory. Examples of such processors include, but are not limited to, microprocessors, ASICs, and the like.
- Client devices may further comprise computer-readable media in communication with the processor, said media storing program code, modules and instructions that, when executed by the processor, cause the processor to execute the program and perform the steps described herein.
- Computer readable media can be any available media that can be accessed by computer or computing device and includes both volatile and nonvolatile media, and removable and non-removable media. Computer-readable media may further comprise computer storage media and communication media. Computer storage media comprises media for storage of information, such as computer readable instructions, data, data structures, or program code or modules.
- Examples of computer-readable media include, but are not limited to, any electronic, optical, magnetic, or other storage or transmission device: a floppy disk, hard disk drive, CD-ROM, DVD, or other optical disk storage; magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices; a memory chip, ROM, RAM, EEPROM, flash memory, or other memory technology; an ASIC; a configured processor; or any other medium from which a computer processor can read instructions or that can store desired information.
- Communication media comprises media that may transmit or carry instructions to a computer, including, but not limited to, a router, private or public network, wired network, direct wired connection, wireless network, other wireless media (such as acoustic, RF, infrared, or the like) or other transmission device or channel.
- This may include computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism. Said transmission may be wired, wireless, or both. Combinations of any of the above should also be included within the scope of computer readable media.
- the instructions may comprise code from any computer-programming language, including, for example, C, C++, C#, Visual Basic, Java, and the like.
- Components of a general purpose client or computing device may further include a system bus that connects various system components, including the memory and processor.
- a system bus may be any of several types of bus structures, including, but not limited to, a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
- Such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
- Computing and client devices also may include a basic input/output system (BIOS), which contains the basic routines that help to transfer information between elements within a computer, such as during start-up.
- BIOS typically is stored in ROM.
- RAM typically contains data or program code or modules that are accessible to, or presently being operated on by, the processor, such as, but not limited to, the operating system, application programs, and data.
- Client devices also may comprise a variety of other internal or external components, such as a monitor or display, a keyboard, a mouse, a trackball, a pointing device, touch pad, microphone, joystick, satellite dish, scanner, a disk drive, a CD-ROM or DVD drive, or other input or output devices.
- a monitor or other type of display device is typically connected to the system bus via a video interface.
- client devices may also include other peripheral output devices such as speakers and printer, which may be connected through an output peripheral interface.
- Client devices may operate on any operating system capable of supporting an application of the type disclosed herein. Client devices also may support a browser or browser-enabled application. Examples of client devices include, but are not limited to, personal computers, laptop computers, personal digital assistants, computer notebooks, hand-held devices, cellular phones, mobile phones, smart phones, pagers, digital tablets, Internet appliances, and other processor-based devices. Users may communicate with each other, and with other systems, networks, and devices, over the network through the respective client devices.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Evolutionary Computation (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Physics (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Medical Informatics (AREA)
- Human Computer Interaction (AREA)
- Gerontology & Geriatric Medicine (AREA)
- Business, Economics & Management (AREA)
- Emergency Management (AREA)
- Analytical Chemistry (AREA)
- Chemical & Material Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Biophysics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Heart & Thoracic Surgery (AREA)
- Pulmonology (AREA)
- Physiology (AREA)
- Physical Education & Sports Medicine (AREA)
- Cardiology (AREA)
- Computational Mathematics (AREA)
- Probability & Statistics with Applications (AREA)
- Algebra (AREA)
- Mathematical Optimization (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Mathematical Analysis (AREA)
- Pure & Applied Mathematics (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/719,828 US12094315B2 (en) | 2018-12-18 | 2019-12-18 | Multi-sensor device for environment state estimation and prediction by sampling its own sensors and other devices |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201862780973P | 2018-12-18 | 2018-12-18 | |
US201962933538P | 2019-11-11 | 2019-11-11 | |
US16/719,828 US12094315B2 (en) | 2018-12-18 | 2019-12-18 | Multi-sensor device for environment state estimation and prediction by sampling its own sensors and other devices |
Publications (3)
Publication Number | Publication Date |
---|---|
US20200193793A1 US20200193793A1 (en) | 2020-06-18 |
US20210398412A9 true US20210398412A9 (en) | 2021-12-23 |
US12094315B2 US12094315B2 (en) | 2024-09-17 |
Family
ID=71101942
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/719,828 Active US12094315B2 (en) | 2018-12-18 | 2019-12-18 | Multi-sensor device for environment state estimation and prediction by sampling its own sensors and other devices |
Country Status (2)
Country | Link |
---|---|
US (1) | US12094315B2 (fr) |
WO (1) | WO2020132134A1 (fr) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117392614B (zh) * | 2023-12-11 | 2024-03-29 | 广州泛美实验室系统科技股份有限公司 | 实验室安全风险智能检测方法、装置以及应急安全柜 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160180222A1 (en) * | 2014-12-23 | 2016-06-23 | Ejenta, Inc. | Intelligent Personal Agent Platform and System and Methods for Using Same |
US20170173262A1 (en) * | 2017-03-01 | 2017-06-22 | François Paul VELTZ | Medical systems, devices and methods |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011160036A2 (fr) * | 2010-06-17 | 2011-12-22 | The Regents Of The University Of California | Gestion de capteurs tenant compte des aspects énergétiques pour l'optimisation de systèmes médicaux pouvant être portés |
EP2790165A1 (fr) | 2013-04-09 | 2014-10-15 | SWARCO Traffic Systems GmbH | Détermination de qualité d'acquisition de données |
JP6313730B2 (ja) | 2015-04-10 | 2018-04-18 | タタ コンサルタンシー サービシズ リミテッドTATA Consultancy Services Limited | 異常検出システムおよび方法 |
WO2017079354A1 (fr) * | 2015-11-02 | 2017-05-11 | Rapidsos, Inc. | Procédé et système de perception situationnelle pour une réponse d'urgence |
US10910106B2 (en) | 2015-11-23 | 2021-02-02 | The Regents Of The University Of Colorado | Personalized health care wearable sensor system |
2019
- 2019-12-18 US US16/719,828 patent/US12094315B2/en active Active
- 2019-12-18 WO PCT/US2019/067275 patent/WO2020132134A1/fr active Application Filing
Also Published As
Publication number | Publication date |
---|---|
US20200193793A1 (en) | 2020-06-18 |
US12094315B2 (en) | 2024-09-17 |
WO2020132134A1 (fr) | 2020-06-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10043591B1 (en) | System, server and method for preventing suicide | |
Sharma et al. | DeTrAs: deep learning-based healthcare framework for IoT-based assistance of Alzheimer patients | |
Sanchez et al. | A review of smart house analysis methods for assisting older people living alone | |
US20170069216A1 (en) | Methods and apparatus to determine developmental progress with artificial intelligence and user input | |
US20180197094A1 (en) | Apparatus and method for processing content | |
Pandey et al. | Artificial intelligence and machine learning for EDGE computing | |
US20230138557A1 (en) | System, server and method for preventing suicide cross-reference to related applications | |
Ghosh et al. | Feel: Federated learning framework for elderly healthcare using edge-iomt | |
KR20190088128A (ko) | 전자 장치 및 그의 제어 방법 | |
Sardar et al. | Mobile sensors based platform of Human Physical Activities Recognition for COVID-19 spread minimization | |
Tammemäe et al. | Self-aware fog computing in private and secure spheres | |
US12094315B2 (en) | Multi-sensor device for environment state estimation and prediction by sampling its own sensors and other devices | |
Choi et al. | Human behavioral pattern analysis-based anomaly detection system in residential space | |
Rezazadeh et al. | Computer-aided methods for combating Covid-19 in prevention, detection, and service provision approaches | |
Lundström et al. | Halmstad intelligent home-capabilities and opportunities | |
Nandi et al. | Model selection approach for distributed fault detection in wireless sensor networks | |
Kumar et al. | Design of cuckoo search optimization with deep belief network for human activity recognition and classification | |
Ganesan et al. | Sensor-based fog-cloud integrated human fall detection system using regression-based gait pattern recognition | |
CHAMASEMANI et al. | IMPACT OF MOBILE CONTEXT-AWARE APPLICATIONS ON HUMAN COMPUTER INTERACTION. | |
Karthick et al. | Architecting IoT based healthcare systems using machine learning algorithms: cloud-oriented healthcare model, streaming data analytics architecture, and case study | |
Kareem et al. | Hybrid Approach for Fall Detection Based on Machine Learning | |
US11416734B2 (en) | Integrated sensing system | |
Chen et al. | Federated multi-task hierarchical attention model for sensor analytics | |
Lupión et al. | Detection of unconsciousness in falls using thermal vision sensors | |
US20230306238A1 (en) | Multi-level coordinated internet of things artificial intelligence |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: MICROENTITY |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO MICRO (ORIGINAL EVENT CODE: MICR); ENTITY STATUS OF PATENT OWNER: MICROENTITY |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
FEPP | Fee payment procedure |
Free format text: PETITION RELATED TO MAINTENANCE FEES GRANTED (ORIGINAL EVENT CODE: PTGR); ENTITY STATUS OF PATENT OWNER: MICROENTITY |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
AS | Assignment |
Owner name: THE UNIVERSITY OF MEMPHIS, TENNESSEE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BANERJEE, BONNY;REEL/FRAME:063235/0248 Effective date: 20220926 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
ZAAB | Notice of allowance mailed |
Free format text: ORIGINAL CODE: MN/=. |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
ZAAB | Notice of allowance mailed |
Free format text: ORIGINAL CODE: MN/=. |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |