US20190251138A1 - Detecting events from features derived from multiple ingested signals - Google Patents

Detecting events from features derived from multiple ingested signals Download PDF

Info

Publication number
US20190251138A1
Authority
US
United States
Prior art keywords
signal
features
event
normalized
normalized signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/029,481
Inventor
Damien Patton
Rish Mehta
Tilmann Bruckhaus
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Banjo Inc
Original Assignee
Banjo Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US16/029,481 priority Critical patent/US20190251138A1/en
Application filed by Banjo Inc filed Critical Banjo Inc
Assigned to Banjo, Inc. reassignment Banjo, Inc. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BRUCKHAUS, TILMANN, MEHTA, RISH, PATTON, DAMIEN
Priority to US16/203,792 priority patent/US10311129B1/en
Priority to PCT/US2019/025982 priority patent/WO2019195674A1/en
Priority to US16/379,401 priority patent/US10474733B2/en
Priority to US16/516,684 priority patent/US10628601B2/en
Publication of US20190251138A1 publication Critical patent/US20190251138A1/en
Priority to US16/784,897 priority patent/US10839095B2/en
Priority to US16/806,423 priority patent/US20200265061A1/en
Priority to US16/838,031 priority patent/US10970184B2/en
Priority to US16/867,285 priority patent/US20200265236A1/en
Priority to US17/074,563 priority patent/US20210081559A1/en
Priority to US16/950,073 priority patent/US20210081556A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/90Services for handling of emergency or hazardous situations, e.g. earthquake and tsunami warning systems [ETWS]
    • G06F15/18
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28Databases characterised by their database models, e.g. relational or object models
    • G06F16/284Relational databases
    • G06F16/285Clustering or classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29Geographical information databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9537Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/18Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • G06F17/30241
    • G06F17/30598
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/04Inference or reasoning models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00Computing arrangements based on specific mathematical models
    • G06N7/01Probabilistic graphical models, e.g. probabilistic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/10Machine learning using kernel methods, e.g. support vector machines [SVM]

Definitions

  • Entities may desire to be made aware of relevant events (e.g., fires, accidents, police presence, shootings, etc.) as close as possible to the events' occurrence.
  • entities typically are not made aware of an event until after a person observes the event (or the event aftermath) and calls authorities.
  • Some techniques use textual comparisons to compare textual content (e.g., keywords) in a data stream to event templates in a database. If text in a data stream matches keywords in an event template, the data stream is labeled as indicating an event.
  • Additional techniques use event-specific sensors to detect specified types of events.
  • earthquake detectors can be used to detect earthquakes.
  • Examples extend to methods, systems, and computer program products for detecting events from features derived from multiple signals.
  • signal ingestion modules ingest different types of raw structured and unstructured signals on an ongoing basis.
  • the signal ingestion modules normalize raw signals into normalized signals having a Time, Location, Context (or “TLC”) format.
  • Time can be a time of origin or “event time” of a signal.
  • Location can be anywhere across a geographic area, such as, a country (e.g., the United States), a State, a defined area, an impacted area, an area defined by a geo cell, an address, etc.
  • Context indicates circumstances surrounding formation/origination of a raw signal in terms that facilitate understanding and assessment of the raw signal.
  • the context of a raw signal can be derived from express as well as inferred signal features of the raw signal.
  • Signal ingestion modules can include one or more single source classifiers.
  • a single source classifier can compute a single source probability for a raw signal from (inferred and/or express) signal features of the raw signal.
  • a single source probability can reflect a mathematical probability or approximation of a mathematical probability of an event (e.g., fire, accident, weather, police presence, etc.) actually occurring.
  • a single source classifier can be configured to compute a single source probability for a single event type or to compute a single source probability for each of a plurality of different event types.
  • Probability details can indicate (e.g., can include a hash field indicating) a probability version and (express and/or inferred) signal features considered in a signal source probability calculation.
  • an event detection infrastructure considers features of different combinations of normalized signals to attempt to identify events of interest to various parties. For example, the event detection infrastructure can determine that features of multiple different signals collectively indicate an event of interest to one or more parties. Alternately, the event detection infrastructure can determine that features of one or more signals indicate a possible event of interest to one or more parties. The event detection infrastructure then determines that features of one or more other signals validate the possible event as an actual event of interest to the one or more parties.
  • Signal features can include: signal type, signal source, signal content, signal time (T), signal location (L), signal context (C), other circumstances of signal creation, etc.
  • the event detection infrastructure can group signals having sufficient temporal similarity and sufficient spatial similarity to one another in a signal sequence.
  • any signal having sufficient temporal and spatial similarity to another signal can be added to a signal sequence.
  • a single source probability for a signal is computed from features of the signal.
  • the single source probability can reflect a mathematical probability or approximation of a mathematical probability of an event actually occurring.
  • a signal having a signal source probability above a threshold can be indicated as an “elevated” signal. Elevated signals can be used to initiate and/or can be added to a signal sequence. On the other hand, non-elevated signals may not be added to a signal sequence.
  • a multi-source probability can be computed from features of multiple normalized signals, including normalized signals in a signal sequence.
  • Features used to compute a multi-source probability can include multiple single source probabilities as well as other features derived from multiple signals.
  • the multi-source probability can reflect a mathematical probability or approximation of a mathematical probability of an event actually occurring based on multiple normalized signals (e.g., a signal sequence).
  • a multi-source probability can change over time as normalized signals age or when a new normalized signal is received (e.g., added to a signal sequence).
  • FIG. 1 illustrates an example computer architecture that facilitates ingesting signals.
  • FIG. 2 illustrates an example computer architecture that facilitates detecting an event from features derived from multiple signals.
  • FIG. 3 illustrates a flow chart of an example method for detecting an event from features derived from multiple signals.
  • FIG. 4 illustrates an example computer architecture that facilitates detecting an event from features derived from multiple signals.
  • FIG. 5 illustrates a flow chart of an example method for detecting an event from features derived from multiple signals
  • FIG. 6A illustrates an example computer architecture that facilitates forming a signal sequence.
  • FIG. 6B illustrates an example computer architecture that facilitates detecting an event from features of a signal sequence.
  • FIG. 6C illustrates an example computer architecture that facilitates detecting an event from features of a signal sequence.
  • FIG. 6D illustrates an example computer architecture that facilitates detecting an event from a multisource probability.
  • FIG. 6E illustrates an example computer architecture that facilitates detecting an event from a multisource probability.
  • FIG. 7 illustrates a flow chart of an example method for forming a signal sequence.
  • FIG. 8 illustrates a flow of an example method for detecting an event from a signal sequence.
  • FIG. 9 illustrates an example computer architecture that facilitates detecting events.
  • Examples extend to methods, systems, and computer program products for detecting events from features derived from multiple signals.
  • Per signal type, signal ingestion modules identify and/or infer a time, a location, and a context associated with a signal, normalizing the signal into a Time, Location, and Context (or “TLC”) format. Different ingestion modules can be utilized/tailored to identify time, location, and context for different signal types.
  • Time (T) can be a time of origin or “event time” of a signal.
  • Location (L) can be anywhere across a geographic area, such as, a country (e.g., the United States), a State, a defined area, an impacted area, an area defined by a geo cell, an address, etc.
  • Context indicates circumstances surrounding formation/origination of a raw signal in terms that facilitate understanding and assessment of the raw signal.
  • the context of a raw signal can be derived from express as well as inferred signal features of the raw signal.
  • Signal ingestion modules can include one or more single source classifiers.
  • a single source classifier can compute a single source probability for a raw signal from features of the raw signal.
  • a single source probability can reflect a mathematical probability or approximation of a mathematical probability (e.g., a percentage between 0%-100%) of an event actually occurring.
  • a single source classifier can be configured to compute a single source probability for a single event type or to compute a single source probability for each of a plurality of different event types.
  • a single source classifier can compute a single source probability using artificial intelligence, machine learning, neural networks, logic, heuristics, etc.
  • Probability details can indicate (e.g., can include a hash field indicating) a probability version and (express and/or inferred) signal features considered in a signal source probability calculation.
  • Concurrently with signal ingestion, the event detection infrastructure considers features of different combinations of normalized signals to attempt to identify events of interest to various parties.
  • Features can be derived from an individual signal and/or from a group of signals.
  • the event detection infrastructure can derive first features of a first normalized signal and can derive second features of a second normalized signal.
  • Individual signal features can include: signal type, signal source, signal content, signal time (T), signal location (L), signal context (C), other circumstances of signal creation, etc.
  • the event detection infrastructure can detect an event of interest to one or more parties from the first features and the second features collectively.
  • the event detection infrastructure can derive first features of each normalized signal included in a first one or more normalized individual signals.
  • the event detection infrastructure can detect a possible event of interest to one or more parties from the first features.
  • the event detection infrastructure can derive second features of each normalized signal included in a second one or more individual signals.
  • the event detection infrastructure can validate the possible event of interest as an actual event of interest to the one or more parties from the second features.
  • the event detection infrastructure can use single source probabilities to detect and/or validate events.
  • the event detection infrastructure can detect an event of interest to one or more parties based on a single source probability of a first signal and a single source probability of a second signal collectively.
  • the event detection infrastructure can detect a possible event of interest to one or more parties based on single source probabilities of a first one or more signals.
  • the event detection infrastructure can validate the possible event as an actual event of interest to one or more parties based on single source probabilities of a second one or more signals.
  • the event detection infrastructure can group normalized signals having sufficient temporal similarity and/or sufficient spatial similarity to one another in a signal sequence.
  • Temporal similarity of normalized signals can be determined by comparing Time (T) of the normalized signals.
  • temporal similarity of a normalized signal and another normalized signal is sufficient when the Time (T) of the normalized signal is within a specified time of the Time (T) of the other normalized signal.
  • a specified time can be virtually any time value, such as, for example, ten seconds, 30 seconds, one minute, two minutes, five minutes, ten minutes, 30 minutes, one hour, two hours, four hours, etc.
  • a specified time can vary by detection type. For example, some event types (e.g., a fire) inherently last longer than other types of events (e.g., a shooting). Specified times can be tailored per detection type.
  • Spatial similarity of normalized signals can be determined by comparing Location (L) of the normalized signals.
  • spatial similarity of a normalized signal and another normalized signal is sufficient when the Location (L) of the normalized signal is within a specified distance of the Location (L) of the other normalized signal.
  • a specified distance can be virtually any distance value, such as, for example, a linear distance or radius (a number of feet, meters, miles, kilometers, etc.), within a specified number of geo cells of specified precision, etc.
  • any normalized signal having sufficient temporal and spatial similarity to another normalized signal can be added to a signal sequence.
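
As a rough illustration of the temporal and spatial similarity tests described in the preceding bullets, consider the following Python sketch. It is not from the patent: the function names, the 10-minute window, the 1 km radius, and the use of haversine distance (rather than geo cells) are all illustrative assumptions.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two latitude/longitude points, in km.
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    h = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def sufficiently_similar(t1, lat1, lon1, t2, lat2, lon2,
                         max_gap_s=600.0, max_km=1.0):
    """True when two normalized signals have sufficient temporal similarity
    (Times within max_gap_s) and sufficient spatial similarity (Locations
    within max_km), and so can be grouped in the same signal sequence."""
    return abs(t1 - t2) <= max_gap_s and haversine_km(lat1, lon1, lat2, lon2) <= max_km
```
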
  • a single source probability for a signal is computed from features of the signal.
  • the single source probability can reflect a mathematical probability or approximation of a mathematical probability of an event actually occurring.
  • a normalized signal having a signal source probability above a threshold (e.g., greater than 4%) is indicated as an “elevated” signal. Elevated signals can be used to initiate and/or can be added to a signal sequence. On the other hand, non-elevated signals may not be added to a signal sequence.
  • a first threshold is considered for signal sequence initiation and a second threshold is considered for adding additional signals to an existing signal sequence.
  • a normalized signal having a single source probability above the first threshold can be used to initiate a signal sequence. After a signal sequence is initiated, any normalized signal having a single source probability above the second threshold can be added to the signal sequence.
  • the first threshold can be greater than the second threshold.
  • the first threshold can be 4% or 5% and the second threshold can be 2% or 3%.
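
A minimal sketch of the two-threshold routing just described, using the example values from the preceding bullet (5% to initiate a sequence, 2% to extend one); the function name and return labels are hypothetical:

```python
# Example threshold values from the bullet above (both hypothetical):
INITIATE_THRESHOLD = 0.05   # first threshold: start a new signal sequence
EXTEND_THRESHOLD = 0.02     # second threshold: join an existing sequence

def route_signal(single_source_probability: float,
                 matching_sequence: list | None) -> str:
    """Route a normalized signal given its single source probability and any
    existing temporally/spatially similar signal sequence (None if no match)."""
    if matching_sequence is not None and single_source_probability >= EXTEND_THRESHOLD:
        return "add to existing sequence"
    if matching_sequence is None and single_source_probability >= INITIATE_THRESHOLD:
        return "initiate new sequence"
    return "not elevated; do not add"
```
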
  • the event detection infrastructure can derive features of a signal grouping, such as, a signal sequence.
  • Features of a signal sequence can include features of signals in the signal sequence, including single source probabilities.
  • Features of a signal sequence can also include percentages, histograms, counts, durations, etc. derived from features of the signals included in the signal sequence.
  • the event detection infrastructure can detect an event of interest to one or more parties from signal sequence features.
  • the event detection infrastructure can include one or more multi-source classifiers.
  • a multi-source classifier can compute a multi-source probability for a signal sequence from features of the signal sequence.
  • the multi-source probability can reflect a mathematical probability or approximation of a mathematical probability of an event (e.g., fire, accident, weather, police presence, etc.) actually occurring based on multiple normalized signals (e.g., the signal sequence).
  • the multi-source probability can be assigned as an additional signal sequence feature.
  • a multi-source classifier can be configured to compute a multi-source probability for a single event type or to compute a multi-source probability for each of a plurality of different event types.
  • a multi-source classifier can compute a multi-source probability using artificial intelligence, machine learning, neural networks, etc.
  • a multi-source probability can change over time as a signal sequence ages or when a new signal is added to a signal sequence. For example, a multi-source probability for a signal sequence can decay over time. A multi-source probability for a signal sequence can also be recomputed when a new normalized signal is added to the signal sequence.
  • Multi-source probability decay can start after a specified period of time (e.g., 3 minutes) and decay can occur in accordance with a defined decay equation.
  • a decay equation defines exponential decay of multi-source probabilities. Different decay rates can be used for different classes. Decay can be similar to radioactive decay, with different tau (i.e., mean lifetime) values used to calculate the “half life” of multi-source probability for different event types.
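
One possible decay implementation consistent with the bullets above, sketched in Python: the multi-source probability is held for a 3-minute delay and then decays exponentially with an event-type-specific tau (mean lifetime). The tau values here are invented for illustration.

```python
from math import exp, log

TAU_SECONDS = {"shooting": 600.0, "fire": 3600.0}  # hypothetical per-event-type taus
DECAY_DELAY_SECONDS = 180.0  # decay starts 3 minutes after computation

def decayed_probability(p0: float, age_seconds: float, event_type: str) -> float:
    """Exponentially decay a multi-source probability after a fixed delay."""
    if age_seconds <= DECAY_DELAY_SECONDS:
        return p0
    return p0 * exp(-(age_seconds - DECAY_DELAY_SECONDS) / TAU_SECONDS[event_type])

# The probability halves every tau * ln(2) seconds; for "fire" that is
# 3600 * ln(2) ≈ 2495 seconds, i.e., a "half life" of roughly 42 minutes.
half_life = TAU_SECONDS["fire"] * log(2)
```
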
  • Implementations can comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more computer and/or hardware processors (including any of Central Processing Units (CPUs), and/or Graphical Processing Units (GPUs), general-purpose GPUs (GPGPUs), Field Programmable Gate Arrays (FPGAs), application specific integrated circuits (ASICs), Tensor Processing Units (TPUs)) and system memory, as discussed in greater detail below. Implementations also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are computer storage media (devices). Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, implementations can comprise at least two distinctly different kinds of computer-readable media: computer storage media (devices) and transmission media.
  • Computer storage media includes RAM, ROM, EEPROM, CD-ROM, Solid State Drives (“SSDs”) (e.g., RAM-based or Flash-based), Shingled Magnetic Recording (“SMR”) devices, Flash memory, phase-change memory (“PCM”), other types of memory, other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
  • one or more processors are configured to execute instructions (e.g., computer-readable instructions, computer-executable instructions, etc.) to perform any of a plurality of described operations.
  • the one or more processors can access information from system memory and/or store information in system memory.
  • the one or more processors can (e.g., automatically) transform information between different formats, such as, for example, between any of: raw signals, normalized signals, signal features, single source probabilities, possible events, events, signal sequences, signal sequence features, multisource probabilities, etc.
  • System memory can be coupled to the one or more processors and can store instructions (e.g., computer-readable instructions, computer-executable instructions, etc.) executed by the one or more processors.
  • the system memory can also be configured to store any of a plurality of other types of data generated and/or transformed by the described components, such as, for example, raw signals, normalized signals, signal features, single source probabilities, possible events, events, signal sequences, signal sequence features, multisource probabilities, etc.
  • a “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices.
  • a network or another communications connection can include a network and/or data links which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.
  • program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to computer storage media (devices) (or vice versa).
  • computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a “NIC”), and then eventually transferred to computer system RAM and/or to less volatile computer storage media (devices) at a computer system.
  • computer storage media (devices) can be included in computer system components that also (or even primarily) utilize transmission media.
  • Computer-executable instructions comprise, for example, instructions and data which, in response to execution at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions.
  • the computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code.
  • the described aspects may be practiced in network computing environments with many types of computer system configurations, including, personal computers, desktop computers, laptop computers, message processors, hand-held devices, wearable devices, multicore processor systems, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, routers, switches, and the like.
  • the described aspects may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks.
  • program modules may be located in both local and remote memory storage devices.
  • functions described herein can be performed in one or more of: hardware, software, firmware, digital components, or analog components.
  • Hardware, software, firmware, digital components, or analog components can be specifically tailored/designed for higher-speed detection or for artificial intelligence that enables signal processing.
  • computer code is configured for execution in one or more processors, and may include hardware logic/electrical circuitry controlled by the computer code.
  • cloud computing is defined as a model for enabling on-demand network access to a shared pool of configurable computing resources.
  • cloud computing can be employed in the marketplace to offer ubiquitous and convenient on-demand access to the shared pool of configurable computing resources (e.g., compute resources, networking resources, and storage resources).
  • the shared pool of configurable computing resources can be provisioned via virtualization and released with low effort or service provider interaction, and then scaled accordingly.
  • a cloud computing model can be composed of various characteristics such as, for example, on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, and so forth.
  • a cloud computing model can also expose various service models, such as, for example, Software as a Service (“SaaS”), Platform as a Service (“PaaS”), and Infrastructure as a Service (“IaaS”).
  • a cloud computing model can also be deployed using different deployment models such as private cloud, community cloud, public cloud, hybrid cloud, and so forth.
  • a “cloud computing environment” is an environment in which cloud computing is employed.
  • a “geo cell” is defined as a cell (a piece) of a grid in any form.
  • geo cells are arranged in a hierarchical structure. Cells of different geometries can be used.
  • a “geohash” is an example of a “geo cell”.
  • Geohash is defined as a geocoding system which encodes a geographic location into a short string of letters and digits. Geohash is a hierarchical spatial data structure which subdivides space into buckets of grid shape (e.g., a square). Geohashes offer properties like arbitrary precision and the possibility of gradually removing characters from the end of the code to reduce its size (and gradually lose precision). As a consequence of the gradual precision degradation, nearby places will often (but not always) present similar prefixes. The longer a shared prefix is, the closer the two places are. Geohashes can be used as unique identifiers and to represent point data (e.g., in databases).
  • a “geohash” is used to refer to a string encoding of an area or point on the Earth.
  • the area or point on the Earth may be represented (among other possible coordinate systems) as a latitude/longitude or Easting/Northing—the choice of which is dependent on the coordinate system chosen to represent an area or point on the Earth.
  • a geo cell can refer to an encoding of this area or point, where the geo cell may be a binary string comprised of 0s and 1s corresponding to the area or point, or a string comprised of 0s, 1s, and a ternary character (such as X), which is used to refer to a don't-care character (0 or 1).
  • a geo cell can also be represented as a string encoding of the area or point, for example, one possible encoding is base-32, where every 5 binary characters are encoded as an ASCII character.
  • the size of an area defined at a specified geo cell precision can vary.
  • the areas defined at various geo cell precisions range from roughly 5,000 km on a side at precision 1 down to a few centimeters on a side at precision 12 (for geohash-style cells).
  • the H3 geospatial indexing system is a multi-precision hexagonal tiling of a sphere (such as the Earth) indexed with hierarchical linear indexes.
  • geo cells are a hierarchical decomposition of a sphere (such as the Earth) into representations of regions or points based on a Hilbert curve (e.g., the S2 hierarchy or other hierarchies). Regions/points of the sphere can be projected into a cube, and each face of the cube includes a quad-tree into which the sphere points are projected. After that, transformations can be applied and the space discretized. The geo cells are then enumerated on a Hilbert curve (a space-filling curve that converts multiple dimensions into one dimension and preserves locality).
  • any signal, event, entity, etc., associated with a geo cell of a specified precision is by default associated with any less precise geo cells that contain the geo cell. For example, if a signal is associated with a geo cell of precision 9, the signal is by default also associated with corresponding geo cells of precisions 1, 2, 3, 4, 5, 6, 7, and 8. Similar mechanisms are applicable to other tiling and geo cell arrangements.
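
The prefix containment property described in the preceding bullet is easy to illustrate for geohash-style geo cells; the sample geohash below is arbitrary:

```python
def containing_geo_cells(geohash: str) -> list[str]:
    """All less precise geo cells that contain the given geohash; a signal
    associated with the full geohash is by default associated with each."""
    return [geohash[:i] for i in range(1, len(geohash))]

# A precision-9 geohash is contained by its precision 1-8 prefixes:
print(containing_geo_cells("9q8yyk8yt"))
# ['9', '9q', '9q8', '9q8y', '9q8yy', '9q8yyk', '9q8yyk8', '9q8yyk8y']
```
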
  • S2 has a cell level hierarchy ranging from level zero (85,011,012 km²) to level 30 (between 0.48 cm² and 0.96 cm²).
  • Raw signals can include social posts, live broadcasts, traffic camera feeds, other camera feeds (e.g., from other public cameras or from CCTV cameras), listening device feeds, 911 calls, weather data, planned events, IoT device data, crowd sourced traffic and road information, satellite data, air quality sensor data, smart city sensor data, public radio communication (e.g., among first responders and/or dispatchers, between air traffic controllers and pilots), etc.
  • the content of raw signals can include images, video, audio, text, etc.
  • the signal ingestion modules normalize raw signals into normalized signals, for example, having a Time, Location, Context (or “TLC”) format.
  • Different types of ingested signals can be used to identify events.
  • Different types of signals can include different data types and different data formats.
  • Data types can include audio, video, image, and text.
  • Different formats can include text in XML, text in JavaScript Object Notation (JSON), text in RSS feed, plain text, video stream in Dynamic Adaptive Streaming over HTTP (DASH), video stream in HTTP Live Streaming (HLS), video stream in Real-Time Messaging Protocol (RTMP), etc.
  • Time (T) can be a time of origin or “event time” of a signal.
  • a raw signal includes a time stamp and the time stamp is used to calculate Time (T).
  • Location (L) can be anywhere across a geographic area, such as, a country (e.g., the United States), a State, a defined area, an impacted area, an area defined by a geo cell, an address, etc.
  • Context indicates circumstances surrounding formation/origination of a raw signal in terms that facilitate understanding and assessment of the raw signal.
  • the context of a raw signal can be derived from express as well as inferred signal features of the raw signal.
  • Signal ingestion modules can include one or more single source classifiers.
  • a single source classifier can compute a single source probability for a raw signal from features of the raw signal.
  • a single source probability can reflect a mathematical probability or approximation of a mathematical probability (e.g., a percentage between 0%-100%) of an event (e.g., fire, accident, weather, police presence, shooting, etc.) actually occurring.
  • a single source classifier can be configured to compute a single source probability for a single event type or to compute a single source probability for each of a plurality of different event types.
  • a single source classifier can compute a single source probability using artificial intelligence, machine learning, neural networks, logic, heuristics, etc.
  • Probability details can indicate (e.g., can include a hash field indicating) a probability version and (express and/or inferred) signal features considered in a signal source probability calculation.
  • normalization modules can be used to extract, derive, infer, etc. time, location, and context from/for a raw signal.
  • one set of normalization modules can be configured to extract/derive/infer time, location and context from/for social signals.
  • Another set of normalization modules can be configured to extract/derive/infer time, location and context from/for Web signals.
  • a further set of normalization modules can be configured to extract/derive/infer time, location and context from/for streaming signals.
  • Normalization modules for extracting/deriving/inferring time, location, and context can include text processing modules, NLP modules, image processing modules, video processing modules, etc.
  • the modules can be used to extract/derive/infer data representative of time, location, and context for a signal.
  • Time, Location, and Context for a signal can be extracted/derived/inferred from metadata and/or content of the signal.
  • NLP modules can analyze metadata and content of a sound clip to identify a time, location, and keywords (e.g., fire, shooter, etc.).
  • An acoustic listener can also interpret the meaning of sounds in a sound clip (e.g., a gunshot, vehicle collision, etc.) and convert to relevant context.
  • Live acoustic listeners can determine the distance and direction of a sound.
  • image processing modules can analyze metadata and pixels in an image to identify a time, location and keywords (e.g., fire, shooter, etc.).
  • Image processing modules can also interpret the meaning of parts of an image (e.g., a person holding a gun, flames, a store logo, etc.) and convert to relevant context.
  • Other modules can perform similar operations for other types of content including text and video.
  • each set of normalization modules can differ but may include at least some similar modules or may share some common modules.
  • similar (or the same) image analysis modules can be used to extract named entities from social signal images and public camera feeds.
  • similar (or the same) NLP modules can be used to extract named entities from social signal text and web text.
  • an ingested signal includes expressly defined Time, Location, and Context upon ingestion.
  • an ingested signal lacks an expressly defined Location and/or an expressly defined Context upon ingestion.
  • Location and/or Context can be inferred from features of an ingested signal and/or through reference to other data sources.
  • Time may not be included, or an included time may not be given with high precision, in which case Time is inferred.
  • a user may post an image to a social network which had been taken some indeterminate time earlier.
  • Normalization modules can use named entity recognition and reference to a geo cell database to infer location.
  • Named entities can be recognized in text, images, video, audio, or sensor data.
  • the recognized named entities can be compared to named entities in geo cell entries. Matches indicate possible signal origination in a geographic area defined by a geo cell.
  • a normalized signal can include a Time, a Location, a Context (e.g., single source probabilities and probability details), a signal type, a signal source, and content.
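
Purely as an illustration of the normalized signal structure listed in the preceding bullet, a TLC signal might be modeled as follows; the field names and types are assumptions, not the patent's schema:

```python
from dataclasses import dataclass

@dataclass
class NormalizedSignal:
    time: str          # Time (T): time of origin / "event time"
    location: str      # Location (L): e.g., a geo cell, region, or address
    context: dict      # Context (C): e.g., single source probabilities and
                       # probability details (probability version, features)
    signal_type: str   # e.g., "social", "web", "streaming"
    source: str        # e.g., "twitter", "traffic camera"
    content: str       # content, or a URL/identifier to cached content
```
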
  • a frequentist inference technique is used to determine a single source probability.
  • a database maintains mappings between different combinations of signal properties and ratios of signals turning into events (a probability) for that combination of signal properties.
  • the database is queried with the combination of signal properties.
  • the database returns a ratio of signals having the signal properties turning into events. The ratio is assigned to the signal.
  • a combination of signal properties can include: (1) event class (e.g., fire, accident, weather, etc.), (2) media type (e.g., text, image, audio, etc.), (3) source (e.g., twitter, traffic camera, first responder radio traffic, etc.), and (4) geo type (e.g., geo cell, region, or non-geo).
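
A minimal sketch of this frequentist lookup, keyed by the four signal properties listed above. The stored ratios are invented; the ("accident", "image", "twitter", "region") query mirrors the lookup-table example given later in this document:

```python
# Hypothetical ratios of signals (with the given properties) that turned
# into actual events, keyed by (event class, media type, source, geo type).
EVENT_RATIOS = {
    ("accident", "image", "twitter", "region"): 0.04,
    ("fire", "text", "first responder radio traffic", "geo cell"): 0.21,
}

def single_source_probability(event_class, media_type, source, geo_type) -> float:
    """Frequentist single source probability: the historical ratio of
    signals with these properties that turned into events."""
    return EVENT_RATIOS.get((event_class, media_type, source, geo_type), 0.0)

print(single_source_probability("accident", "image", "twitter", "region"))  # 0.04
```
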
  • a single source probability is calculated by single source classifiers (e.g., machine learning models, artificial intelligence, neural networks, etc.) that consider hundreds, thousands, or even more signal features of a signal.
  • Single source classifiers can be based on binary models and/or multi-class models.
  • Output from a single source classifier can be adjusted to more accurately represent a probability that a signal is a “true positive”. For example, 1,000 signals with classifier output of 0.9 may include only 80% true positives. Thus, the single source probability can be adjusted to 0.8 to more accurately reflect the probability of the signal being a true positive. “Calibration” can be done in such a way that, for any “calibrated score”, the score reflects the true probability of a true positive outcome. One possible implementation is sketched below.
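
The sketch below shows one standard way such calibration could be implemented, histogram binning: each raw score is mapped to the empirical true-positive rate observed for scores in the same bin. This is an assumed technique, not necessarily the one used here.

```python
import numpy as np

def fit_calibrator(raw_scores: np.ndarray, labels: np.ndarray, n_bins: int = 10):
    """Histogram-binning calibration: map each raw classifier score to the
    empirical true-positive rate observed for scores in the same bin.
    `labels` is a 0/1 array marking which signals were true positives."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_ids = np.clip(np.digitize(raw_scores, edges) - 1, 0, n_bins - 1)
    rates = np.array([labels[bin_ids == b].mean() if np.any(bin_ids == b) else 0.0
                      for b in range(n_bins)])

    def calibrated(score: float) -> float:
        return float(rates[min(int(score * n_bins), n_bins - 1)])

    return calibrated

# e.g., if 1,000 signals scored ~0.9 but only 80% were true positives,
# calibrated(0.9) returns ~0.8.
```
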
  • FIG. 1 depicts computer architecture 100 that facilitates ingesting and normalizing signals.
  • computer architecture 100 includes signal ingestion modules 101 , social signals 171 , Web signals 172 , and streaming signals 173 .
  • Signal ingestion modules 101 , social signals 171 , Web signals 172 , and streaming signals 173 can be connected to (or be part of) a network, such as, for example, a system bus, a Local Area Network (“LAN”), a Wide Area Network (“WAN”), and even the Internet.
  • signal ingestion modules 101 can create and exchange message related data (e.g., Internet Protocol (“IP”) datagrams and other higher layer protocols that utilize IP datagrams, such as, Transmission Control Protocol (“TCP”), Hypertext Transfer Protocol (“HTTP”), Simple Mail Transfer Protocol (“SMTP”), Simple Object Access Protocol (SOAP), etc. or using other non-datagram protocols) over the network.
  • Signal ingestion module(s) 101 can ingest raw signals 121 , including social signals 171 , web signals 172 , and streaming signals 173 (e.g., social posts, traffic camera feeds, other camera feeds, listening device feeds, 911 calls, weather data, planned events, IoT device data, crowd sourced traffic and road information, satellite data, air quality sensor data, smart city sensor data, public radio communication, etc.) on an ongoing basis and in essentially real-time.
  • Signal ingestion module(s) 101 include social content ingestion modules 174 , web content ingestion modules 176 , stream content ingestion modules 177 , and signal formatter 180 .
  • Signal formatter 180 further includes social signal processing module 181 , web signal processing module 182 , and stream signal processing modules 183 .
  • a corresponding ingestion module and signal processing module can interoperate to normalize the signal into a Time, Location, Context (TLC) format.
  • social content ingestion modules 174 and social signal processing module 181 can interoperate to normalize social signals 171 into the TLC format.
  • web content ingestion modules 176 and web signal processing module 182 can interoperate to normalize web signals 172 into the TLC format.
  • stream content ingestion modules 177 and stream signal processing modules 183 can interoperate to normalize streaming signals 173 into the TLC format.
  • signal content exceeding specified size requirements is cached upon ingestion.
  • Signal ingestion modules 101 include a URL or other identifier to the cached content within the context for the signal.
  • Signal formatter 180 can include one or more single signal classifiers classifying ingested signals.
  • the one or more single signal classifiers can assign one or more signal source probabilities (e.g., between 0%-100%) to each ingested signal.
  • signal formatter 180 includes modules for determining a single source probability as a ratio of signals turning into events based on the following signal properties: (1) event class (e.g., fire, accident, weather, etc.), (2) media type (e.g., text, image, audio, etc.), (3) source (e.g., twitter, traffic camera, first responder radio traffic, etc.), and (4) geo type (e.g., geo cell, region, or non-geo). Probabilities can be stored in a lookup table for different combinations of the signal properties. Features of a signal can be derived and used to query the lookup table. For example, the lookup table can be queried with terms (“accident”, “image”, “twitter”, “region”). The corresponding ratio (probability) can be returned from the table.
  • signal formatter 180 includes a plurality of single source classifiers (e.g., artificial intelligence, machine learning modules, neural networks, etc.). Each single source classifier can consider hundreds, thousands, or even more signal features of a signal. Signal features of a signal can be derived and submitted to a single source classifier. The single source classifier can return a probability that a signal indicates a type of event. Single source classifiers can be binary classifiers or multi-class classifiers.
  • Raw classifier output can be adjusted to more accurately represent a probability that a signal is a “true positive”. For example, 1,000 signals whose raw classifier output is 0.9 may include only 80% true positives. Thus, the probability can be adjusted to 0.8 to reflect the true probability of the signal being a true positive. “Calibration” can be done in such a way that, for any “calibrated score”, this score reflects the true probability of a true positive outcome.
  • Signal ingestion modules 101 can include one or more single source probabilities and corresponding probability details in the context of a normalized signal. Probability details can indicate a probability version and features used to calculate the probability. In one aspect, a probability version and signal feature are contained in a hash field.
  • any of the received raw signals can be normalized into normalized signals including Time, Location, Context, signal source, signal type, and content.
  • Signal ingestion modules 101 can send normalized signals 122 to event detection infrastructure 103 .
  • signal ingestion modules 101 can send normalized signal 122 A, including time 123 A, location 124 A, context 126 A, content 127 A, type 128 A, and source 129 A to event detection infrastructure 103 .
  • signal ingestion modules 101 can send normalized signal 122 B, including time 123 B, location 124 B, context 126 B, content 127 B, type 128 B, and source 129 B to event detection infrastructure 103 .
  • Signal ingestion modules 101 can also send normalized signal 122 C (depicted in FIG. 6 ), including time 123 C, location 124 C, context 126 C, content 127 C, type 128 C, and source 129 C to event detection infrastructure 103 .
  • FIG. 2 illustrates an example computer architecture 200 that facilitates detecting an event from features derived from multiple signals.
  • computer architecture 200 further includes event detection infrastructure 103 .
  • Event infrastructure 103 can be connected to (or be part of) a network with signal ingestion modules 101 .
  • signal ingestion modules 101 and event detection infrastructure 103 can create and exchange message related data over the network.
  • event detection infrastructure 103 further includes evaluation module 206 .
  • Evaluation module 206 is configured to determine if features of a plurality of normalized signals collectively indicate an event. Evaluation module 206 can detect (or not detect) an event based on one or more features of one normalized signal in combination with one or more features of another normalized signal.
  • FIG. 3 illustrates a flow chart of an example method 300 for detecting an event from features derived from multiple signals. Method 300 will be described with respect to the components and data in computer architecture 200 .
  • Method 300 includes receiving a first signal ( 301 ).
  • event detection infrastructure 103 can receive normalized signal 122 B.
  • Method 300 includes deriving first one or more features of the first signal ( 302 ).
  • event detection infrastructure 103 can derive features 201 of normalized signal 122 B.
  • Features 201 can include and/or be derived from time 123 B, location 124 B, context 126 B, content 127 B, type 128 B, and source 129 B.
  • Event detection infrastructure 103 can also derive features 201 from one or more single source probabilities assigned to normalized signal 122 B.
  • Method 300 includes determining that the first one or more features do not satisfy conditions to be identified as an event ( 303 ).
  • evaluation module 206 can determine that features 201 do not satisfy conditions to be identified as an event. That is, the one or more features of normalized signal 122 B do not alone provide sufficient evidence of an event.
  • one or more single source probabilities assigned to normalized signal 122 B do not satisfy probability thresholds in thresholds 226 .
  • Method 300 includes receiving a second signal ( 304 ).
  • event detection infrastructure 103 can receive normalized signal 122 A.
  • Method 300 includes deriving second one or more features of the second signal ( 305 ).
  • event detection infrastructure 103 can derive features 202 of normalized signal 122 A.
  • Features 202 can include and/or be derived from time 123 A, location 124 A, context 126 A, content 127 A, type 128 A, and source 129 A.
  • Event detection infrastructure 103 can also derive features 202 from one or more single source probabilities assigned to normalized signal 122 A.
  • Method 300 includes aggregating the first one or more features with the second one or more features into aggregated features ( 306 ).
  • evaluation module 206 can aggregate features 201 with features 202 into aggregated features 203 .
  • Evaluation module 206 can include an algorithm that defines and aggregates individual contributions of different signal features into aggregated features.
  • Aggregating features 201 and 202 can include aggregating a single source probability assigned to normalized signal 122 B for an event type with a signal source probability assigned to normalized signal 122 A for the event type into a multisource probability for the event type.
  • Method 300 includes detecting an event from the aggregated features ( 307 ). For example, evaluation module 206 can determine that aggregated features 203 satisfy conditions to be detected as an event. Evaluation module 206 can detect event 224 , such as, for example, a fire, an accident, a shooting, a protest, etc. based on satisfaction of the conditions.
  • conditions for event identification can be included in thresholds 226 .
  • Conditions can include threshold probabilities per event type.
  • evaluation module 206 can detect an event.
  • a probability can be a single signal probability or a multisource (aggregated) probability.
  • evaluation module 206 can detect an event based on a multisource probability exceeding a probability threshold in thresholds 226 .
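
A sketch of the aggregation and detection steps ( 306 and 307 ) under one assumed aggregation rule: a noisy-OR over single source probabilities (i.e., the probability that at least one signal is a true positive, assuming independence). The patent does not specify the aggregation algorithm, and the thresholds below are invented:

```python
THRESHOLDS = {"fire": 0.35, "accident": 0.30}  # hypothetical per-event-type values

def detect_event(single_source_probs: list[float], event_type: str) -> bool:
    """Aggregate single source probabilities into a multisource probability
    (noisy-OR: probability that at least one signal is a true positive,
    assuming independence) and compare to the event-type threshold."""
    p_none = 1.0
    for p in single_source_probs:
        p_none *= 1.0 - p
    return 1.0 - p_none >= THRESHOLDS[event_type]

# Neither 0.20 nor 0.25 alone exceeds 0.35, but together they do:
print(detect_event([0.20, 0.25], "fire"))  # True: 1 - 0.8 * 0.75 = 0.40
```
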
  • FIG. 4 illustrates an example computer architecture 400 that facilitates detecting an event from features derived from multiple signals.
  • event detection infrastructure 103 further includes evaluation module 206 and validator 204 .
  • Evaluation module 206 is configured to determine if features of a plurality of normalized signals indicate a possible event.
  • Evaluation module 206 can detect (or not detect) a possible event based on one or more features of a normalized signal.
  • Validator 204 is configured to validate (or not validate) a possible event as an actual event based on one or more features of another normalized signal.
  • FIG. 5 illustrates a flow chart of an example method 500 for detecting an event from features derived from multiple signals. Method 500 will be described with respect to the components and data in computer architecture 400 .
  • Method 500 includes receiving a first signal ( 501 ).
  • event detection infrastructure 103 can receive normalized signal 122 B.
  • Method 500 includes deriving first one or more features of the first signal ( 502 ).
  • event detection infrastructure 103 can derive features 401 of normalized signal 122 B.
  • Features 401 can include and/or be derived from time 123 B, location 124 B, context 126 B, content 127 B, type 128 B, and source 129 B.
  • Event detection infrastructure 103 can also derive features 401 from one or more single source probabilities assigned to normalized signal 122 B.
  • Method 500 includes detecting a possible event from the first one or more features ( 503 ).
  • evaluation module 206 can detect possible event 423 from features 401 .
  • event detection infrastructure 103 can determine that the evidence in features 401 is not confirming of an event but is sufficient to warrant further investigation of an event type.
  • a single source probability assigned to normalized signal 122 B for an event type does not satisfy a probability threshold for full event detection but does satisfy a probability threshold for further investigation.
  • Method 500 includes receiving a second signal ( 504 ).
  • event detection infrastructure 103 can receive normalized signal 122 A.
  • Method 500 includes deriving second one or more features of the second signal ( 505 ).
  • event detection infrastructure 103 can derive features 402 of normalized signal 122 A.
  • Features 402 can include and/or be derived from time 123 A, location 124 A, context 126 A, content 127 A, type 128 A, and source 129 A.
  • Event detection infrastructure 103 can also derive features 402 from one or more single source probabilities assigned to normalized signal 122 A.
  • Method 500 includes validating the possible event as an actual event based on the second one or more features ( 506 ).
  • validator 204 can determine that possible event 423 in combination with features 402 provide sufficient evidence of an actual event.
  • Validator 204 can validate possible event 423 as event 424 based on features 402 .
  • validator 204 considers a single source probability assigned to normalized signal 122 B in view of a single source probability assigned to normalized signal 122 A.
  • Validator 204 determines that the signal source probabilities, when considered collectively, satisfy a probability threshold for detecting an event. A compact sketch of this detect-then-validate flow follows.
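
A compact sketch of method 500 's detect-then-validate flow, reusing the noisy-OR assumption from the previous sketch; both thresholds are hypothetical:

```python
DETECT_THRESHOLD = 0.35       # full event detection (hypothetical)
INVESTIGATE_THRESHOLD = 0.10  # enough to flag a possible event (hypothetical)

def evaluate(first_prob: float) -> str:
    """Evaluation module: detect an event, a possible event, or nothing."""
    if first_prob >= DETECT_THRESHOLD:
        return "event"
    return "possible event" if first_prob >= INVESTIGATE_THRESHOLD else "no event"

def validate(first_prob: float, second_prob: float) -> bool:
    """Validator: the two single source probabilities, considered
    collectively (noisy-OR), satisfy the detection threshold."""
    return 1.0 - (1.0 - first_prob) * (1.0 - second_prob) >= DETECT_THRESHOLD

print(evaluate(0.20))        # "possible event"
print(validate(0.20, 0.25))  # True: 1 - 0.8 * 0.75 = 0.40 >= 0.35
```
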
  • a plurality of normalized (e.g., TLC) signals can be grouped together in a signal group based on spatial similarity and/or temporal similarity among the plurality of normalized signals and/or corresponding raw (non-normalized) signals.
  • a feature extractor can derive features (e.g., percentages, counts, durations, histograms, etc.) of the signal group from the plurality of normalized signals.
  • An event detector can attempt to detect events from signal group features.
  • event detection infrastructure 103 can include sequence manager 604 , feature extractor 609 , and sequence storage 613 .
  • Sequence manager 604 further includes time comparator 606 , location comparator 607 , and deduplicator 608 .
  • Time comparator 606 is configured to determine temporal similarity between a normalized signal and a signal sequence.
  • Time comparator 606 can compare a signal time of a received normalized signal to a time associated with existing signal sequences (e.g., the time of the first signal in the signal sequence).
  • Temporal similarity can be defined by a specified time period, such as, for example, 5 minutes, 10 minutes, 20 minutes, 30 minutes, etc.
  • if the signal time of a received normalized signal is within the specified time period of a time associated with a signal sequence, the normalized signal can be considered temporally similar to the signal sequence.
  • location comparator 607 is configured to determine spatial similarity between a normalized signal and a signal sequence.
  • Location comparator 607 can compare a signal location of a received normalized signal to a location associated with existing signal sequences (e.g., the location of the first signal in the signal sequence).
  • Spatial similarity can be defined by a geographic area, such as, for example, a distance radius (e.g., meters, miles, etc.), a number of geo cells of a specified precision, an Area of Interest (AoI), etc.
  • Deduplicator 608 is configured to determine if a signal is a duplicate of a previously received signal. Deduplicator 608 can detect a duplicate when a normalized signal includes content (e.g., text, image, etc.) that is essentially identical to previously received content (previously received text, a previously received image, etc.). Deduplicator 608 can also detect a duplicate when a normalized signal is a repost or rebroadcast of a previously received normalized signal. Sequence manager 604 can ignore duplicate normalized signals.
  • Sequence manager 604 can include a signal having sufficient temporal and spatial similarity to a signal sequence (and that is not a duplicate) in that signal sequence. Sequence manager 604 can include a signal that lacks sufficient temporal and/or spatial similarity to any signal sequence (and that is not a duplicate) in a new signal sequence.
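
A minimal deduplicator sketch consistent with the description above. Exact content hashing (as below) only catches byte-identical reposts/rebroadcasts; judging content "essentially identical" would require fuzzier matching (e.g., text shingling or perceptual image hashes):

```python
import hashlib

class Deduplicator:
    """Ignore signals whose normalized content was already seen (e.g.,
    reposts or rebroadcasts of a previously received normalized signal)."""

    def __init__(self):
        self._seen: set[str] = set()

    def is_duplicate(self, content: bytes) -> bool:
        digest = hashlib.sha256(content).hexdigest()
        if digest in self._seen:
            return True
        self._seen.add(digest)
        return False
```
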
  • a signal can be encoded into a signal sequence as a vector using any of a variety of algorithms including recurrent neural networks (RNN) (Long Short Term Memory (LSTM) networks and Gated Recurrent Units (GRUs)), convolutional neural networks, or other algorithms.
  • Feature extractor 609 is configured to derive features of a signal sequence from signal data contained in the signal sequence. Derived features can include a percentage of normalized signals per geohash, a count of signals per time of day (hours:minutes), a signal gap histogram indicating a history of signal gap lengths (e.g., with bins for 1 s, 5 s, 10 s, 1 m, 5 m, 10 m, 30 m), a count of signals per signal source, model output histograms indicating model scores, a sequence duration, a count of signals per signal type, a number of unique users that posted social content, etc.
  • feature extractor 609 can derive a variety of other features as well. Additionally, the described features can be of different shapes to include more or less information, such as, for example, gap lengths, provider signal counts, histogram bins, sequence durations, category counts, etc.
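  • For illustration, features of this kind could be derived as in the following sketch; the field names and histogram bin edges are assumptions:

```python
# Illustrative derivation of signal-group features; field names and bin
# edges are assumptions.
from collections import Counter

def extract_sequence_features(signals: list) -> dict:
    times = sorted(s["time"] for s in signals)        # epoch seconds
    gaps = [b - a for a, b in zip(times, times[1:])]  # inter-signal gaps
    bins = [1, 5, 10, 60, 300, 600, 1800]             # 1 s ... 30 m
    gap_histogram = Counter(
        min((b for b in bins if gap <= b), default=bins[-1]) for gap in gaps)
    return {
        "count_per_source": dict(Counter(s["source"] for s in signals)),
        "count_per_type": dict(Counter(s["type"] for s in signals)),
        "sequence_duration": (times[-1] - times[0]) if times else 0,
        "gap_histogram": dict(gap_histogram),
        "unique_users": len({s.get("user") for s in signals if s.get("user")}),
    }
```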
  • FIG. 7 illustrates a flow chart of an example method 700 for forming a signal sequence. Method 700 will be described with respect to the components and data in computer architecture 600 .
  • Method 700 includes receiving a normalized signal including time, location, context, and content ( 701 ).
  • sequence manager 604 can receive normalized signal 122 A.
  • Method 700 includes forming a signal sequence including the normalized signal ( 702 ).
  • time comparator 606 can compare time 123 A to times associated with existing signal sequences.
  • location comparator 607 can compare location 124 A to locations associated with existing signal sequences.
  • Time comparator 606 and/or location comparator 607 can determine that normalized signal 122 A lacks sufficient temporal similarity and/or sufficient spatial similarity, respectively, to existing signal sequences.
  • Deduplicator 608 can determine that normalized signal 122 A is not a duplicate normalized signal.
  • sequence manager 604 can form signal sequence 631 , include normalized signal 122 A in signal sequence 631 , and store signal sequence 631 in sequence storage 613 .
  • Method 700 includes receiving another normalized signal including another time, another location, another context, and other content ( 703 ).
  • sequence manager 604 can receive normalized signal 122 B.
  • Method 700 includes determining that there is sufficient temporal similarity between the time and the other time ( 704 ). For example, time comparator 606 can compare time 123 B to time 123 A. Time comparator 606 can determine that time 123 B is sufficiently similar to time 123 A. Method 700 includes determining that there is sufficient spatial similarity between the location and the other location ( 705 ). For example, location comparator 607 can compare location 124 B to location 124 A. Location comparator 607 can determine that location 124 B has sufficient similarity to location 124 A.
  • Method 700 includes including the other normalized signal in the signal sequence based on the sufficient temporal similarity and the sufficient spatial similarity ( 706 ).
  • sequence manager 604 can include normalized signal 122 B in signal sequence 631 and update signal sequence 631 in sequence storage 613 .
  • sequence manager 604 can receive normalized signal 122 C.
  • Time comparator 606 can compare time 123 C to time 123 A and location comparator 607 can compare location 124 C to location 124 A. If there is sufficient temporal and spatial similarity between normalized signal 122 C and normalized signal 122 A, sequence manager 604 can include normalized signal 122 C in signal sequence 631 . On the other hand, if there is insufficient temporal similarity and/or insufficient spatial similarity between normalized signal 122 C and normalized signal 122 A, sequence manager 604 can form signal sequence 632 . Sequence manager 604 can include normalized signal 122 C in signal sequence 632 and store signal sequence 632 in sequence storage 613 .
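  • Pulling the preceding steps together, a hypothetical routing policy for a sequence manager might look like the sketch below; the similarity and duplicate checks are injected callables (their signatures are assumptions), and comparison is made against a sequence's first signal, per the description above:

```python
# Hypothetical routing policy; predicates are injected callables.
def route_signal(signal, sequences, temporally_similar, spatially_similar,
                 is_duplicate):
    if is_duplicate(signal):
        return None                      # duplicates are ignored
    for sequence in sequences:
        first = sequence[0]              # compare against the first signal
        if temporally_similar(signal, first) and spatially_similar(signal, first):
            sequence.append(signal)      # sufficient similarity: extend
            return sequence
    new_sequence = [signal]              # otherwise form a new signal sequence
    sequences.append(new_sequence)
    return new_sequence
```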
  • event detection infrastructure 103 further includes event detector 611 .
  • Event detector 611 is configured to determine if features extracted from a signal sequence are indicative of an event.
  • FIG. 8 illustrates a flow chart of an example method 800 for detecting an event. Method 800 will be described with respect to the components and data in computer architecture 600 .
  • Method 800 includes accessing a signal sequence ( 801 ).
  • feature extractor 609 can access signal sequence 631 .
  • Method 800 includes extracting features from the signal sequence ( 802 ).
  • feature extractor 609 can extract features 633 from signal sequence 631 .
  • Method 800 includes detecting an event based on the extracted features ( 803 ).
  • event detector 611 can attempt to detect an event from features 633 .
  • event detector 611 detects event 636 from features 633 .
  • event detector 611 does not detect an event from features 633 .
  • sequence manager 604 can subsequently add normalized signal 122 C to signal sequence 631 , changing the signal data contained in signal sequence 631 .
  • Feature extractor 609 can again access signal sequence 631 .
  • Feature extractor 609 can derive features 634 (which differ from features 633 at least due to inclusion of normalized signal 122 C) from signal sequence 631 .
  • Event detector 611 can attempt to detect an event from features 634 . In one aspect, event detector 611 detects event 636 from features 634 . In another aspect, event detector 611 does not detect an event from features 634 .
  • event detector 611 does not detect an event from features 633 . Subsequently, event detector 611 detects event 636 from features 634 .
  • An event detection can include one or more of a detection identifier, a sequence identifier, and an event type (e.g., accident, hazard, fire, traffic, weather, etc.).
  • a detection identifier can include a description and features.
  • the description can be a hash of the signal with the earliest timestamp in a signal sequence.
  • Features can include features of the signal sequence. Including features provides understanding of how a multisource detection evolves over time as normalized signals are added.
  • a detection identifier can be shared by multiple detections derived from the same signal sequence.
  • a sequence identifier can include a description and features.
  • the description can be a hash of all the signals included in the signal sequence.
  • Features can include features of the signal sequence. Including features permits multisource detections to be linked to human event curations.
  • a sequence identifier can be unique to a group of signals included in a signal sequence. When signals in a signal sequence change (e.g., when a new normalized signal is added), the sequence identifier is changed.
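  • A minimal sketch of how such identifiers could be computed; the use of SHA-256 and the serialization of signals as sorted key/value pairs are assumptions:

```python
# Illustrative identifier computation; SHA-256 and the serialization are
# assumptions.
import hashlib

def _digest(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def detection_id(signal_sequence: list) -> str:
    # Hash of the signal with the earliest timestamp; stable across
    # detections derived from the same (growing) signal sequence.
    earliest = min(signal_sequence, key=lambda s: s["time"])
    return _digest(repr(sorted(earliest.items())))

def sequence_id(signal_sequence: list) -> str:
    # Hash over all signals in the sequence; changes whenever a new
    # normalized signal is added.
    return _digest("".join(sorted(repr(sorted(s.items()))
                                  for s in signal_sequence)))
```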
  • event detection infrastructure 103 also includes one or more multisource classifiers.
  • Feature extractor 609 can send extracted features to the one or more multisource classifiers.
  • Per event type, the one or more multisource classifiers compute a probability (e.g., using artificial intelligence, machine learning, neural networks, etc.) that the extracted features indicate the type of event.
  • Event detector 611 can detect (or not detect) an event from the computed probabilities.
  • multi-source classifier 612 is configured to assign a probability that a signal sequence is a type of event.
  • Multi-source classifier 612 formulates a detection from signal sequence features.
  • Multi-source classifier 612 can implement any of a variety of algorithms including: logistic regression, random forest (RF), support vector machines (SVM), gradient boosting (GBDT), linear regression, etc.
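  • As a hedged illustration, such a classifier could be trained with scikit-learn as sketched below; the library, labels, and parameters are assumptions, not a prescribed implementation:

```python
# Illustrative training sketch; scikit-learn, the labels, and the
# parameters are assumptions.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

def train_multisource_classifier(X, y, algorithm: str = "logistic"):
    # X: rows of signal-sequence feature vectors; y: 1 where the sequence
    # corresponded to a confirmed event of the target type, else 0.
    model = (LogisticRegression(max_iter=1000) if algorithm == "logistic"
             else GradientBoostingClassifier())
    model.fit(X, y)
    return model

# probability = model.predict_proba([sequence_features])[0][1]
```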
  • multi-source classifier 612 can formulate detection 641 from features 633 .
  • detection 641 includes detection ID 642 , sequence ID 643 , category 644 , and probability 646 .
  • Detection 641 can be forwarded to event detector 611 .
  • Event detector 611 can determine that probability 646 does not satisfy a detection threshold for category 644 to be indicated as an event.
  • Detection 641 can also be stored in sequence storage 613 .
  • multi-source classifier 612 can formulate detection 651 from features 634 .
  • detection 651 includes detection ID 642 , sequence ID 647 , category 644 , and probability 648 .
  • Detection 651 can be forwarded to event detector 611 .
  • Event detector 611 can determine that probability 648 does satisfy a detection threshold for category 644 to be indicated as an event.
  • Detection 651 can also be stored in sequence storage 613 .
  • Event detector 611 can output event 636 .
  • a multi-source probability for a signal sequence, up to the last available signal, can be decayed over time.
  • the signal sequence can be extended by the new signal.
  • the multi-source probability is recalculated for the new, extended signal sequence, and decay begins again.
  • decay can also be calculated “ahead of time” when a detection is created and a probability assigned.
  • By pre-calculating decay for future points in time, downstream systems do not have to perform calculations to update decayed probabilities.
  • different event classes can decay at different rates. For example, a fire detection can decay more slowly than a crash detection because these types of events tend to resolve at different speeds. If a new signal is added to update a sequence, the pre-calculated decay values may be discarded. A multi-source probability can be re-calculated for the updated sequence and new pre-calculated decay values can be assigned.
  • Multi-source probability decay can start after a specified period of time (e.g., 3 minutes) and decay can occur in accordance with a defined decay equation.
  • modeling multi-source probability decay can include an initial static phase, a decay phase, and a final static phase.
  • decay is initially more pronounced and then weakens.
  • As a newer detection begins to age (e.g., by one minute), it is more indicative of a possible “false positive” relative to an older event that ages by an additional minute.
  • a decay equation defines exponential decay of multi-source probabilities. Different decay rates can be used for different classes. Decay can be similar to radioactive decay, with different tau values used to calculate the “half life” of multi-source probability for a class. Tau values can vary by event type.
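  • A sketch of exponential decay with per-class tau values and an initial static phase follows; the tau values, the 3-minute static period, and the function names are illustrative assumptions:

```python
# Illustrative decay sketch; tau values, the 3-minute static period, and
# names are assumptions.
import math

TAU_SECONDS = {"fire": 3600.0, "crash": 900.0}  # fire decays more slowly

def decayed_probability(p0: float, age_seconds: float, event_class: str,
                        static_period: float = 180.0) -> float:
    if age_seconds <= static_period:
        return p0  # initial static phase: no decay yet
    elapsed = age_seconds - static_period
    return p0 * math.exp(-elapsed / TAU_SECONDS[event_class])

# Decay can also be pre-calculated "ahead of time" for downstream systems:
# schedule = [(t, decayed_probability(0.8, t, "crash"))
#             for t in range(0, 1800, 60)]
```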
  • decay for signal sequence 631 can be defined in decay parameters 614 .
  • Sequence manager 604 can decay multisource probabilities computed for signal sequence 631 in accordance with decay parameters 614 .
  • evaluation module 206 and/or validator 204 can include and/or interoperate with one or more of: a sequence manager, a feature extractor, multi-source classifiers, or an event detector.
  • FIG. 9 illustrates an example computer architecture 900 that facilitates detecting events.
  • the components and data described with respect to FIGS. 1-8 can also be integrated with and/or can interoperate with the data and components of computer architecture 900 to detect events.
  • computer architecture 900 includes geo cell database 911 and event notification 916 .
  • Geo cell database 911 and event notification 916 can be connected to (or be part of) a network with signal ingestion modules 101 and event detection infrastructure 103 . As such, geo cell database 911 and event notification 916 can create and exchange message-related data over the network.
  • event detection infrastructure 103 detects different categories of (planned and unplanned) events (e.g., fire, police response, mass shooting, traffic accident, natural disaster, storm, active shooter, concerts, protests, etc.) in different locations (e.g., anywhere across a geographic area, such as, the United States, a State, a defined area, an impacted area, an area defined by a geo cell, an address, etc.) and at different times, from the time, location, and context included in normalized signals.
  • Event detection infrastructure 103 can also determine an event truthfulness, an event severity, and an associated geo cell.
  • context information in a normalized signal increases the efficiency of determining truthfulness, severity, and an associated geo cell.
  • an event truthfulness indicates how likely it is that a detected event is an actual event (vs. a hoax, a fake, a misinterpretation, etc.).
  • Truthfulness can range from less likely to be true to more likely to be true.
  • truthfulness is represented as a numerical value, such as, for example, from 1 (less truthful) to 10 (more truthful) or as percentage value in a percentage range, such as, for example, from 0% (less truthful) to 100% (more truthful).
  • Other truthfulness representations are also possible.
  • an event severity indicates how severe an event is (e.g., what degree of badness, what degree of damage, etc. is associated with the event). Severity can range from less severe (e.g., a single-vehicle accident without injuries) to more severe (e.g., a multi-vehicle accident with multiple injuries and a possible fatality). As another example, a shooting event can also range from less severe (e.g., one victim without life-threatening injuries) to more severe (e.g., multiple injuries and multiple fatalities). In one aspect, severity is represented as a numerical value, such as, for example, from 1 (less severe) to 5 (more severe). Other severity representations are also possible.
  • event detection infrastructure 103 can include a geo determination module including modules for processing different kinds of content including location, time, context, text, images, audio, and video into search terms.
  • the geo determination module can query a geo cell database with search terms formulated from normalized signal content.
  • the geo cell database can return any geo cells having matching supplemental information. For example, if a search term includes a street name, a subset of one or more geo cells including the street name in supplemental information can be returned to the event detection infrastructure.
  • Event detection infrastructure 103 can use the subset of geo cells to determine a geo cell associated with an event location. Events associated with a geo cell can be stored back into an entry for the geo cell in the geo cell database. Thus, over time an historical progression of events within a geo cell can be accumulated.
  • event detection infrastructure 103 can assign an event ID, an event time, an event location, an event category, an event description, an event truthfulness, and an event severity to each detected event.
  • Detected events can be sent to relevant entities, including to mobile devices, to computer systems, to APIs, to data storage, etc.
  • event detection infrastructure 103 detects events from information contained in normalized signals 122 .
  • Event detection infrastructure 103 can detect an event from a single normalized signal 122 or from multiple normalized signals 122 .
  • event detection infrastructure 103 detects an event based on information contained in one or more normalized signals 122 .
  • event detection infrastructure 103 detects a possible event based on information contained in one or more normalized signals 122 .
  • Event detection infrastructure 103 then validates the potential event as an event based on information contained in one or more other normalized signals 122 .
  • event detection infrastructure 103 includes geo determination module 904 , categorization module 906 , truthfulness determination module 907 , and severity determination module 908 .
  • Geo determination module 904 can include NLP modules, image analysis modules, etc. for identifying location information from a normalized signal. Geo determination module 904 can formulate (e.g., location) search terms 941 by using NLP modules to process audio, using image analysis modules to process images, etc. Search terms can include street addresses, building names, landmark names, location names, school names, image fingerprints, etc. Event detection infrastructure 103 can use a URL or identifier to access cached content when appropriate.
  • Categorization module 906 can categorize a detected event into one of a plurality of different categories (e.g., fire, police response, mass shooting, traffic accident, natural disaster, storm, active shooter, concerts, protests, etc.) based on the content of normalized signals used to detect and/or otherwise related to an event.
  • Truthfulness determination module 907 can determine the truthfulness of a detected event based on one or more of: source, type, age, and content of normalized signals used to detect and/or otherwise related to the event.
  • Some signal types may be inherently more reliable than other signal types. For example, video from a live traffic camera feed may be more reliable than text in a social media post.
  • Some signal sources may be inherently more reliable than others. For example, a social media account of a government agency may be more reliable than a social media account of an individual. The reliability of a signal can decay over time.
  • Severity determination module 908 can determine the severity of a detected event based on one or more of: location, content (e.g., dispatch codes, keywords, etc.), and volume of normalized signals used to detect and/or otherwise related to an event. Events at some locations may be inherently more severe than events at other locations. For example, an event at a hospital is potentially more severe than the same event at an abandoned warehouse. Event category can also be considered when determining severity. For example, an event categorized as a “Shooting” may be inherently more severe than an event categorized as “Police Presence” since a shooting implies that someone has been injured.
  • Geo cell database 911 includes a plurality of geo cell entries. Each geo cell entry includes a geo cell defining an area and corresponding supplemental information about things included in the defined area.
  • the corresponding supplemental information can include latitude/longitude, street names in the area defined by the geo cell, businesses in the area defined by the geo cell, other Areas of Interest (AOIs) (e.g., event venues, such as, arenas, stadiums, theaters, concert halls, etc.) in the area defined by the geo cell, image fingerprints derived from images captured in the area defined by the geo cell, and prior events that have occurred in the area defined by the geo cell.
  • geo cell entry 951 includes geo cell 952 , lat/lon 953 , streets 954 , businesses 955 , AOIs 956 , and prior events 957 .
  • Each event in prior events 957 can include a location (e.g., a street address), a time (event occurrence time), an event category, an event truthfulness, an event severity, and an event description.
  • geo cell entry 961 includes geo cell 962 , lat/lon 963 , streets 964 , businesses 965 , AOIs 966 , and prior events 967 .
  • Each event in prior events 967 can include a location (e.g., a street address), a time (event occurrence time), an event category, an event truthfulness, an event severity, and an event description.
  • geo cell entries can include the same or different (more or less) supplemental information, for example, depending on infrastructure density in an area.
  • a geo cell entry for an urban area can contain more diverse supplemental information than a geo cell entry for an agricultural area (e.g., in an empty field).
  • Geo cell database 911 can store geo cell entries in a hierarchical arrangement based on geo cell precision. As such, geo cell information of more precise geo cells is included in the geo cell information for any less precise geo cells that include the more precise geo cell.
  • Geo determination module 904 can query geo cell database 911 with search terms 941 .
  • Geo cell database 911 can identify any geo cells having supplemental information that matches search terms 941 . For example, if search terms 941 include a street address and a business name, geo cell database 911 can identify geo cells having the street name and business name in the area defined by the geo cell. Geo cell database 911 can return any identified geo cells to geo determination module 904 in geo cell subset 942 .
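  • A hypothetical sketch of this supplemental-information matching; the entry fields and the matching rule are assumptions:

```python
# Hypothetical geo cell lookup; entry fields and matching rule are
# assumptions.
def query_geo_cells(entries: list, search_terms: list) -> list:
    matches = []
    for entry in entries:
        supplemental = {term.lower() for term in
                        entry["streets"] + entry["businesses"] + entry["aois"]}
        if any(term.lower() in supplemental for term in search_terms):
            matches.append(entry)  # supplemental information matched
    return matches

# geo_cell_subset = query_geo_cells(geo_cell_db, ["main street", "city arena"])
```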
  • Geo determination module 904 can use geo cell subset 942 to determine the location of event 935 and/or a geo cell associated with event 935 .
  • event 935 includes event ID 932 , time 933 , location 934 , description 936 , category 937 , truthfulness 938 , and severity 939 .
  • Event detection infrastructure 103 can also determine that event 935 occurred in an area defined by geo cell 962 (e.g., a geohash having precision of level 7 or level 9). For example, event detection infrastructure 103 can determine that location 934 is in the area defined by geo cell 962 . As such, event detection infrastructure 103 can store event 935 in prior events 967 (i.e., historical events that have occurred in the area defined by geo cell 962 ).
  • Event detection infrastructure 103 can also send event 935 to event notification module 916 .
  • Event notification module 916 can notify one or more entities about event 935 .

Abstract

The present invention extends to methods, systems, and computer program products for detecting events from features derived from multiple signals. In one aspect, an event detection infrastructure determines that characteristics of multiple signals, when considered collectively, indicate an event of interest to one or more parties. In another aspect, an evaluation module determines that characteristics of one or more signals indicate a possible event of interest to one or more parties. A validator then determines that characteristics of one or more other signals validate the possible event as an actual event of interest to the one or more parties. Signal features can be used to compute probabilities of events occurring.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of U.S. Provisional Patent Application Ser. No. 62/628,866, entitled “Multi Source Validation”, filed Feb. 9, 2018 which is incorporated herein in its entirety. This application claims the benefit of U.S. Provisional Patent Application Ser. No. 62/654,274, entitled “Detecting Events From Multiple Signals”, filed Apr. 6, 2018 which is incorporated herein in its entirety. This application claims the benefit of U.S. Provisional Patent Application Ser. No. 62/654,277 entitled, “Validating Possible Events With Additional Signals”, filed Apr. 6, 2018 which is incorporated herein in its entirety. This application claims the benefit of U.S. Provisional Patent Application Ser. No. 62/664,001, entitled, “Normalizing Different Types Of Ingested Signals Into A Common Format”, filed Apr. 27, 2018. This application claims the benefit of U.S. Provisional Patent Application Ser. No. 62/682,176 entitled “Detecting An Event From Multiple Sources”, filed Jun. 8, 2018 which is incorporated herein in its entirety. This application claims the benefit of U.S. Provisional Patent Application Ser. No. 62/682,177 entitled “Detecting An Event From Multi-Source Event Probability”, filed Jun. 8, 2018 which is incorporated herein in its entirety.
  • BACKGROUND
  • 1. Background and Relevant Art
  • Entities (e.g., parents, guardians, friends, relatives, teachers, social workers, first responders, hospitals, delivery services, media outlets, government entities, etc.) may desire to be made aware of relevant events (e.g., fires, accidents, police presence, shootings, etc.) as close as possible to the events' occurrence. However, entities typically are not made aware of an event until after a person observes the event (or the event aftermath) and calls authorities.
  • In general, techniques that attempt to automate event detection are unreliable. Some techniques have attempted to mine social media data to detect the planning of events and forecast when events might occur. However, events can occur without prior planning and/or may not be detectable using social media data. Further, these techniques are not capable of meaningfully processing available data, nor are they capable of differentiating false data (e.g., hoax social media posts).
  • Other techniques use textual comparisons to compare textual content (e.g., keywords) in a data stream to event templates in a database. If text in a data stream matches keywords in an event template, the data stream is labeled as indicating an event.
  • Additional techniques use event specific sensors to detect specified types of events. For example, earthquake detectors can be used to detect earthquakes.
  • BRIEF SUMMARY
  • Examples extend to methods, systems, and computer program products for detecting events from features derived from multiple signals.
  • In general, signal ingestion modules ingest different types of raw structured and unstructured signals on an ongoing basis. The signal ingestion modules normalize raw signals into normalized signals having a Time, Location, Context (or “TLC”) format. Time can be a time of origin or “event time” of a signal. Location can be anywhere across a geographic area, such as, a country (e.g., the United States), a State, a defined area, an impacted area, an area defined by a geo cell, an address, etc.
  • Context indicates circumstances surrounding formation/origination of a raw signal in terms that facilitate understanding and assessment of the raw signal. The context of a raw signal can be derived from express as well as inferred signal features of the raw signal.
  • Signal ingestion modules can include one or more single source classifiers. A single source classifier can compute a single source probability for a raw signal from (inferred and/or express) signal features of the raw signal. A single source probability can reflect a mathematical probability or approximation of a mathematical probability of an event (e.g., fire, accident, weather, police presence, etc.) actually occurring. A single source classifier can be configured to compute a single source probability for a single event type or to compute a single source probability for each of a plurality of different event types.
  • As such, single source probabilities and corresponding probability details can represent Context. Probability details can indicate (e.g., can include a hash field indicating) a probability version and (express and/or inferred) signal features considered in a signal source probability calculation.
  • Concurrently with signal ingestion, an event detection infrastructure considers features of different combinations of normalized signals to attempt to identify events of interest to various parties. For example, the event detection infrastructure can determine that features of multiple different signals collectively indicate an event of interest to one or more parties. Alternately, the event detection infrastructure can determine that features of one or more signals indicate a possible event of interest to one or more parties. The event detection infrastructure then determines that features of one or more other signals validate the possible event as an actual event of interest to the one or more parties. Signal features can include: signal type, signal source, signal content, signal time (T), signal location (L), signal context (C), other circumstances of signal creation, etc.
  • The event detection infrastructure can group signals having sufficient temporal similarity and sufficient spatial similarity to one another in a signal sequence. In one aspect, any signal having sufficient temporal and spatial similarity to another signal can be added to a signal sequence.
  • In another aspect, a single source probability for a signal is computed from features of the signal. The single source probability can reflect a mathematical probability or approximation of a mathematical probability of an event actually occurring. A signal having a signal source probability above a threshold can be indicated as an “elevated” signal. Elevated signals can be used to initiate and/or can be added to a signal sequence. On the other hand, non-elevated signals may not be added to a signal sequence.
  • A multi-source probability can be computed from features of multiple normalized signals, including normalized signals in a signal sequence. Features used to compute a multi-source probability can include multiple single source probabilities as well as other features derived from multiple signals. The multi-source probability can reflect a mathematical probability or approximation of a mathematical probability of an event actually occurring based on multiple normalized signals (e.g., a signal sequence). A multi-source probability can change over time as normalized signals age or when a new normalized signal is received (e.g., added to a signal sequence).
  • This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
  • Additional features and advantages will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice. The features and advantages may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features and advantages will become more fully apparent from the following description and appended claims, or may be learned by practice as set forth hereinafter.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • In order to describe the manner in which the above-recited and other advantages and features can be obtained, a more particular description will be rendered by reference to specific implementations thereof which are illustrated in the appended drawings. Understanding that these drawings depict only some implementations and are not therefore to be considered to be limiting of its scope, implementations will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
  • FIG. 1 illustrates an example computer architecture that facilitates ingesting signals.
  • FIG. 2 illustrates an example computer architecture that facilitates detecting an event from features derived from multiple signals.
  • FIG. 3 illustrates a flow chart of an example method for detecting an event from features derived from multiple signals.
  • FIG. 4 illustrates an example computer architecture that facilitates detecting an event from features derived from multiple signals.
  • FIG. 5 illustrates a flow chart of an example method for detecting an event from features derived from multiple signals.
  • FIG. 6A illustrates an example computer architecture that facilitates forming a signal sequence.
  • FIG. 6B illustrates an example computer architecture that facilitates detecting an event from features of a signal sequence.
  • FIG. 6C illustrates an example computer architecture that facilitates detecting an event from features of a signal sequence.
  • FIG. 6D illustrates an example computer architecture that facilitates detecting an event from a multisource probability.
  • FIG. 6E illustrates an example computer architecture that facilitates detecting an event from a multisource probability.
  • FIG. 7 illustrates a flow chart of an example method for forming a signal sequence.
  • FIG. 8 illustrates a flow chart of an example method for detecting an event from a signal sequence.
  • FIG. 9 illustrates an example computer architecture that facilitates detecting events.
  • DETAILED DESCRIPTION
  • Examples extend to methods, systems, and computer program products for detecting events from features derived from multiple signals.
  • Aspects of the invention normalize raw signals into a common format that includes Time, Location, and Context (or “TLC”). Per signal type, signal ingestion modules identify and/or infer a time, a location, and a context associated with a signal. Different ingestion modules can be utilized/tailored to identify time, location, and context for different signal types. Time (T) can be a time of origin or “event time” of a signal. Location (L) can be anywhere across a geographic area, such as, a country (e.g., the United States), a State, a defined area, an impacted area, an area defined by a geo cell, an address, etc.
  • Context (C) indicates circumstances surrounding formation/origination of a raw signal in terms that facilitate understanding and assessment of the raw signal. The context of a raw signal can be derived from express as well as inferred signal features of the raw signal.
  • Signal ingestion modules can include one or more single source classifiers. A single source classifier can compute a single source probability for a raw signal from features of the raw signal. A single source probability can reflect a mathematical probability or approximation of a mathematical probability (e.g., a percentage between 0%-100%) of an event actually occurring. A single source classifier can be configured to compute a single source probability for a single event type or to compute a single source probability for each of a plurality of different event types. A single source classifier can compute a single source probability using artificial intelligence, machine learning, neural networks, logic, heuristics, etc.
  • As such, single source probabilities and corresponding probability details can represent Context. Probability details can indicate (e.g., can include a hash field indicating) a probability version and (express and/or inferred) signal features considered in a signal source probability calculation.
  • Concurrently with signal ingestion, the event detection infrastructure considers features of different combinations of normalized signals to attempt to identify events of interest to various parties. Features can be derived from an individual signal and/or from a group of signals.
  • For example, the event detection infrastructure can derive first features of a first normalized signal and can derive second features of a second normalized signal. Individual signal features can include: signal type, signal source, signal content, signal time (T), signal location (L), signal context (C), other circumstances of signal creation, etc. The event detection infrastructure can detect an event of interest to one or more parties from the first features and the second features collectively.
  • Alternately, the event detection infrastructure can derive first features of each normalized signal included in a first one or more normalized individual signals. The event detection infrastructure can detect a possible event of interest to one or more parties from the first features. The event detection infrastructure can derive second features of each normalized signal included in a second one or more individual signals. The event detection infrastructure can validate the possible event of interest as an actual event of interest to the one or more parties from the second features.
  • More specifically, the event detection infrastructure can use single source probabilities to detect and/or validate events. For example, the event detection infrastructure can detect an event of interest to one or more parties based on a single source probability of a first signal and a single source probability of second signal collectively. Alternately, the event detection infrastructure can detect a possible event of interest to one or more parties based on single source probabilities of a first one or more signals. The event detection infrastructure can validate the possible event as an actual event of interest to one or more parties based on single source probabilities of a second one or more signals.
  • The event detection infrastructure can group normalized signals having sufficient temporal similarity and/or sufficient spatial similarity to one another in a signal sequence. Temporal similarity of normalized signals can be determined by comparing Time (T) of the normalized signals. In one aspect, temporal similarity of a normalized signal and another normalized signal is sufficient when the Time (T) of the normalized signal is within a specified time of the Time (T) of the other normalized signal. A specified time can be virtually any time value, such as, for example, ten seconds, 30 seconds, one minute, two minutes, five minutes, ten minutes, 30 minutes, one hour, two hours, four hours, etc. A specified time can vary by detection type. For example, some event types (e.g., a fire) inherently last longer than other types of events (e.g., a shooting). Specified times can be tailored per detection type.
  • Spatial similarity of normalized signals can be determined by comparing Location (L) of the normalized signals. In one aspect, spatial similarity of a normalized signal and another normalized signal is sufficient when the Location (L) of the normalized signal is within a specified distance of the Location (L) of the other normalized signal. A specified distance can be virtually any distance value, such as, for example, a linear distance or radius (a number of feet, meters, miles, kilometers, etc.), within a specified number of geo cells of specified precision, etc.
  • In one aspect, any normalized signal having sufficient temporal and spatial similarity to another normalized signal can be added to a signal sequence.
  • In another aspect, a single source probability for a signal is computed from features of the signal. The single source probability can reflect a mathematical probability or approximation of a mathematical probability of an event actually occurring. A normalized signal having a signal source probability above a threshold (e.g., greater than 4%) is indicated as an “elevated” signal. Elevated signals can be used to initiate and/or can be added to a signal sequence. On the other hand, non-elevated signals may not be added to a signal sequence.
  • In one aspect, a first threshold is considered for signal sequence initiation and a second threshold is considered for adding additional signals to an existing signal sequence. A normalized signal having a single source probability above the first threshold can be used to initiate a signal sequence. After a signal sequence is initiated, any normalized signal having a single source probability above the second threshold can be added to the signal sequence.
  • The first threshold can be greater than the second threshold. For example, the first threshold can be 4% or 5% and the second threshold can be 2% or 3%. Thus, signals that are not necessarily reliable enough to initiate a signal sequence for an event can be considered for validating a possible event.
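  • A minimal sketch of this two-threshold policy, using the example threshold values above (the exact values remain assumptions):

```python
# Illustrative two-threshold policy using the example values above.
INITIATE_THRESHOLD = 0.04  # e.g., 4% or 5%
EXTEND_THRESHOLD = 0.02    # e.g., 2% or 3%

def sequence_action(single_source_probability: float) -> str:
    if single_source_probability > INITIATE_THRESHOLD:
        return "elevated: may initiate or extend a signal sequence"
    if single_source_probability > EXTEND_THRESHOLD:
        return "may be added to an existing signal sequence only"
    return "not added to a signal sequence"
```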
  • The event detection infrastructure can derive features of a signal grouping, such as, a signal sequence. Features of a signal sequence can include features of signals in the signal sequence, including single source probabilities. Features of a signal sequence can also include percentages, histograms, counts, durations, etc. derived from features of the signals included in the signal sequence. The event detection infrastructure can detect an event of interest to one or more parties from signal sequence features.
  • The event detection infrastructure can include one or more multi-source classifiers. A multi-source classifier can compute a multi-source probability for a signal sequence from features of the signal sequence. The multi-source probability can reflect a mathematical probability or approximation of a mathematical probability of an event (e.g., fire, accident, weather, police presence, etc.) actually occurring based on multiple normalized signals (e.g., the signal sequence). The multi-source probability can be assigned as an additional signal sequence feature. A multi-source classifier can be configured to compute a multi-source probability for a single event type or to compute a multi-source probability for each of a plurality of different event types. A multi-source classifier can compute a multi-source probability using artificial intelligence, machine learning, neural networks, etc.
  • A multi-source probability can change over time as a signal sequence ages or when a new signal is added to a signal sequence. For example, a multi-source probability for a signal sequence can decay over time. A multi-source probability for a signal sequence can also be recomputed when a new normalized signal is added to the signal sequence.
  • Multi-source probability decay can start after a specified period of time (e.g., 3 minutes) and decay can occur in accordance with a defined decay equation. In one aspect, a decay equation defines exponential decay of multi-source probabilities. Different decay rates can be used for different classes. Decay can be similar to radioactive decay, with different tau (i.e., mean lifetime) values used to calculate the “half life” of multi-source probability for different event types.
  • Implementations can comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more computer and/or hardware processors (including any of Central Processing Units (CPUs), and/or Graphical Processing Units (GPUs), general-purpose GPUs (GPGPUs), Field Programmable Gate Arrays (FPGAs), application specific integrated circuits (ASICs), Tensor Processing Units (TPUs)) and system memory, as discussed in greater detail below. Implementations also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are computer storage media (devices). Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, implementations can comprise at least two distinctly different kinds of computer-readable media: computer storage media (devices) and transmission media.
  • Computer storage media (devices) includes RAM, ROM, EEPROM, CD-ROM, Solid State Drives (“SSDs”) (e.g., RAM-based or Flash-based), Shingled Magnetic Recording (“SMR”) devices, Flash memory, phase-change memory (“PCM”), other types of memory, other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
  • In one aspect, one or more processors are configured to execute instructions (e.g., computer-readable instructions, computer-executable instructions, etc.) to perform any of a plurality of described operations. The one or more processors can access information from system memory and/or store information in system memory. The one or more processors can (e.g., automatically) transform information between different formats, such as, for example, between any of: raw signals, normalized signals, signal features, single source probabilities, possible events, events, signal sequences, signal sequence features, multisource probabilities, etc.
  • System memory can be coupled to the one or more processors and can store instructions (e.g., computer-readable instructions, computer-executable instructions, etc.) executed by the one or more processors. The system memory can also be configured to store any of a plurality of other types of data generated and/or transformed by the described components, such as, for example, raw signals, normalized signals, signal features, single source probabilities, possible events, events, signal sequences, signal sequence features, multisource probabilities, etc.
  • A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmissions media can include a network and/or data links which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.
  • Further, upon reaching various computer system components, program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to computer storage media (devices) (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a “NIC”), and then eventually transferred to computer system RAM and/or to less volatile computer storage media (devices) at a computer system. Thus, it should be understood that computer storage media (devices) can be included in computer system components that also (or even primarily) utilize transmission media.
  • Computer-executable instructions comprise, for example, instructions and data which, in response to execution at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.
  • Those skilled in the art will appreciate that the described aspects may be practiced in network computing environments with many types of computer system configurations, including, personal computers, desktop computers, laptop computers, message processors, hand-held devices, wearable devices, multicore processor systems, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, routers, switches, and the like. The described aspects may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.
  • Further, where appropriate, functions described herein can be performed in one or more of: hardware, software, firmware, digital components, or analog components. For example, one or more Field Programmable Gate Arrays (FPGAs) and/or one or more application specific integrated circuits (ASICs) and/or one or more Tensor Processing Units (TPUs) can be programmed to carry out one or more of the systems and procedures described herein. Hardware, software, firmware, digital components, or analog components can be specifically tailor-designed for higher speed detection or artificial intelligence that can enable signal processing. In another example, computer code is configured for execution in one or more processors, and may include hardware logic/electrical circuitry controlled by the computer code. These example devices are provided herein for purposes of illustration, and are not intended to be limiting. Embodiments of the present disclosure may be implemented in further types of devices.
  • The described aspects can also be implemented in cloud computing environments. In this description and the following claims, “cloud computing” is defined as a model for enabling on-demand network access to a shared pool of configurable computing resources. For example, cloud computing can be employed in the marketplace to offer ubiquitous and convenient on-demand access to the shared pool of configurable computing resources (e.g., compute resources, networking resources, and storage resources). The shared pool of configurable computing resources can be provisioned via virtualization and released with low effort or service provider interaction, and then scaled accordingly.
  • A cloud computing model can be composed of various characteristics such as, for example, on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, and so forth. A cloud computing model can also expose various service models, such as, for example, Software as a Service (“SaaS”), Platform as a Service (“PaaS”), and Infrastructure as a Service (“IaaS”). A cloud computing model can also be deployed using different deployment models such as private cloud, community cloud, public cloud, hybrid cloud, and so forth. In this description and in the following claims, a “cloud computing environment” is an environment in which cloud computing is employed.
  • In this description and the following claims, a “geo cell” is defined as a cell in a grid of any form. In one aspect, geo cells are arranged in a hierarchical structure. Cells of different geometries can be used.
  • A “geohash” is an example of a “geo cell”.
  • In this description and the following claims, “geohash” is defined as a geocoding system which encodes a geographic location into a short string of letters and digits. Geohash is a hierarchical spatial data structure which subdivides space into buckets of grid shape (e.g., a square). Geohashes offer properties like arbitrary precision and the possibility of gradually removing characters from the end of the code to reduce its size (and gradually lose precision). As a consequence of the gradual precision degradation, nearby places will often (but not always) present similar prefixes. The longer a shared prefix is, the closer the two places are. Geo cells can be used as unique identifiers and to represent point data (e.g., in databases).
  • In one aspect, a “geohash” is used to refer to a string encoding of an area or point on the Earth. The area or point on the Earth may be represented (among other possible coordinate systems) as a latitude/longitude or Easting/Northing, the choice of which depends on the coordinate system chosen to represent an area or point on the Earth. A geo cell can refer to an encoding of this area or point, where the geo cell may be a binary string comprised of 0s and 1s corresponding to the area or point, or a string comprised of 0s, 1s, and a ternary character (such as X), which is used to refer to a “don't care” character (0 or 1). A geo cell can also be represented as a string encoding of the area or point; for example, one possible encoding is base-32, where every 5 binary characters are encoded as an ASCII character.
  • Depending on latitude, the size of an area defined at a specified geo cell precision can vary. In one aspect, the areas defined at various geo cell precisions are approximately:
  • GeoHash length (precision) and approximate cell dimensions (width × height):
      1: 5,009.4 km × 4,992.6 km
      2: 1,252.3 km × 624.1 km
      3: 156.5 km × 156 km
      4: 39.1 km × 19.5 km
      5: 4.9 km × 4.9 km
      6: 1.2 km × 609.4 m
      7: 152.9 m × 152.4 m
      8: 38.2 m × 19 m
      9: 4.8 m × 4.8 m
      10: 1.2 m × 59.5 cm
      11: 14.9 cm × 14.9 cm
      12: 3.7 cm × 1.9 cm
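  • For clarity, the base-32, interleaved-bit encoding behind these precisions can be sketched with a minimal geohash encoder (a well-known public algorithm, shown purely as an illustration):

```python
# Minimal geohash encoder illustrating the interleaved-bit, base-32
# encoding described above.
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # geohash alphabet

def geohash_encode(lat: float, lon: float, precision: int = 9) -> str:
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    code, bits, ch = [], 0, 0
    even = True  # even bit positions refine longitude, odd refine latitude
    while len(code) < precision:
        rng, val = (lon_range, lon) if even else (lat_range, lat)
        mid = (rng[0] + rng[1]) / 2
        ch = (ch << 1) | (val >= mid)   # one bit of precision per iteration
        if val >= mid:
            rng[0] = mid
        else:
            rng[1] = mid
        even = not even
        bits += 1
        if bits == 5:                   # every 5 bits -> one base-32 character
            code.append(BASE32[ch])
            bits, ch = 0, 0
    return "".join(code)

# geohash_encode(40.7128, -74.0060, 7)  # -> 'dr5regw', a ~152 m x 152 m cell
```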
  • Other geo cell geometries, such as, hexagonal tiling, triangular tiling, etc. are also possible. For example, the H3 geospatial indexing system is a multi-precision hexagonal tiling of a sphere (such as the Earth) indexed with hierarchical linear indexes.
  • In another aspect, geo cells are a hierarchical decomposition of a sphere (such as the Earth) into representations of regions or points based on a Hilbert curve (e.g., the S2 hierarchy or other hierarchies). Regions/points of the sphere can be projected into a cube, and each face of the cube includes a quad-tree into which points on the sphere are projected. After that, transformations can be applied and the space discretized. The geo cells are then enumerated on a Hilbert curve (a space-filling curve that converts multiple dimensions into one dimension and preserves locality).
  • Due to the hierarchical nature of geo cells, any signal, event, entity, etc., associated with a geo cell of a specified precision is by default associated with any less precise geo cells that contain the geo cell. For example, if a signal is associated with a geo cell of precision 9, the signal is by default also associated with corresponding geo cells of precisions 1, 2, 3, 4, 5, 6, 7, and 8. Similar mechanisms are applicable to other tiling and geo cell arrangements. For example, S2 has a cell level hierarchy ranging from level zero (85,011,012 km²) to level 30 (between 0.48 cm² and 0.96 cm²).
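  • This containment relationship can be sketched directly for geohashes, where an area's containing (less precise) geo cells are simply its prefixes:

```python
# A geohash's containing (less precise) geo cells are its prefixes.
def ancestor_geo_cells(geohash: str) -> list:
    return [geohash[:i] for i in range(1, len(geohash))]

# A precision-9 signal is associated with precisions 1-8 by default:
# ancestor_geo_cells("9q8yyk8yt") -> ['9', '9q', '9q8', ..., '9q8yyk8y']
```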
  • Signal Ingestion and Normalization
  • Signal ingestion modules ingest a variety of raw structured and/or unstructured signals on an ongoing basis and in essentially real-time. Raw signals can include social posts, live broadcasts, traffic camera feeds, other camera feeds (e.g., from other public cameras or from CCTV cameras), listening device feeds, 911 calls, weather data, planned events, IoT device data, crowd sourced traffic and road information, satellite data, air quality sensor data, smart city sensor data, public radio communication (e.g., among first responders and/or dispatchers, between air traffic controllers and pilots), etc. The content of raw signals can include images, video, audio, text, etc. Generally, the signal ingestion modules normalize raw signals into normalized signals, for example, having a Time, Location, Context (or “TLC”) format.
  • Different types of ingested signals (e.g., social media signals, web signals, and streaming signals) can be used to identify events. Different types of signals can include different data types and different data formats. Data types can include audio, video, image, and text. Different formats can include text in XML, text in JavaScript Object Notation (JSON), text in RSS feed, plain text, video stream in Dynamic Adaptive Streaming over HTTP (DASH), video stream in HTTP Live Streaming (HLS), video stream in Real-Time Messaging Protocol (RTMP), etc.
  • Time (T) can be a time of origin or “event time” of a signal. In one aspect, a raw signal includes a time stamp and the time stamp is used to calculate Time (T). Location (L) can be anywhere across a geographic area, such as, a country (e.g., the United States), a State, a defined area, an impacted area, an area defined by a geo cell, an address, etc.
  • Context indicates circumstances surrounding formation/origination of a raw signal in terms that facilitate understanding and assessment of the raw signal. The context of a raw signal can be derived from express as well as inferred signal features of the raw signal.
  • Signal ingestion modules can include one or more single source classifiers. A single source classifier can compute a single source probability for a raw signal from features of the raw signal. A single source probability can reflect a mathematical probability or approximation of a mathematical probability (e.g., a percentage between 0%-100%) of an event (e.g., fire, accident, weather, police presence, shooting, etc.) actually occurring. A single source classifier can be configured to compute a single source probability for a single event type or to compute a single source probability for each of a plurality of different event types. A single source classifier can compute a single source probability using artificial intelligence, machine learning, neural networks, logic, heuristics, etc.
  • As such, single source probabilities and corresponding probability details can represent Context (C). Probability details can indicate (e.g., can include a hash field indicating) a probability version and (express and/or inferred) signal features considered in a single source probability calculation.
  • Per signal type and signal content, different normalization modules can be used to extract, derive, infer, etc. time, location, and context from/for a raw signal. For example, one set of normalization modules can be configured to extract/derive/infer time, location and context from/for social signals. Another set of normalization modules can be configured to extract/derive/infer time, location and context from/for Web signals. A further set of normalization modules can be configured to extract/derive/infer time, location and context from/for streaming signals.
  • Normalization modules for extracting/deriving/inferring time, location, and context can include text processing modules, NLP modules, image processing modules, video processing modules, etc. The modules can be used to extract/derive/infer data representative of time, location, and context for a signal. Time, Location, and Context for a signal can be extracted/derived/inferred from metadata and/or content of the signal. For example, NLP modules can analyze metadata and content of a sound clip to identify a time, location, and keywords (e.g., fire, shooter, etc.). An acoustic listener can also interpret the meaning of sounds in a sound clip (e.g., a gunshot, vehicle collision, etc.) and convert it to relevant context. Live acoustic listeners can determine the distance and direction of a sound. Similarly, image processing modules can analyze metadata and pixels in an image to identify a time, location, and keywords (e.g., fire, shooter, etc.). Image processing modules can also interpret the meaning of parts of an image (e.g., a person holding a gun, flames, a store logo, etc.) and convert it to relevant context. Other modules can perform similar operations for other types of content, including text and video.
  • Per signal type, each set of normalization modules can differ but may include at least some similar modules or may share some common modules. For example, similar (or the same) image analysis modules can be used to extract named entities from social signal images and public camera feeds. Likewise, similar (or the same) NLP modules can be used to extract named entities from social signal text and web text.
  • In some aspects, an ingested signal includes expressly defined Time, Location, and Context upon ingestion. In other aspects, an ingested signal lacks an expressly defined Location and/or an expressly defined Context upon ingestion. In these other aspects, Location and/or Context can be inferred from features of an ingested signal and/or through reference to other data sources.
  • In further aspects, Time may not be included, or an included time may lack precision, in which case Time is inferred. For example, a user may post an image to a social network that was taken some indeterminate time earlier.
  • Normalization modules can use named entity recognition and reference to a geo cell database to infer location. Named entities can be recognized in text, images, video, audio, or sensor data. The recognized named entities can be compared to named entities in geo cell entries. Matches indicate possible signal origination in a geographic area defined by a geo cell.
  • As such, a normalized signal can include a Time, a Location, a Context (e.g., single source probabilities and probability details), a signal type, a signal source, and content.
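  • A minimal sketch of such a normalized record, with hypothetical field names (the description above does not fix a concrete schema), might look like:

```python
# Hedged sketch of a normalized Time/Location/Context (TLC) signal record.
# Field names and types are illustrative assumptions, not a defined schema.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class NormalizedSignal:
    time: datetime                 # Time (T): time of origin / event time
    location: str                  # Location (L): e.g., a geo cell or an address
    context: dict = field(default_factory=dict)  # Context (C): single source
                                   # probabilities and probability details
    signal_type: str = "text"      # e.g., text, image, audio, video
    source: str = "social"         # e.g., social, web, streaming
    content: Optional[str] = None  # content or a reference to cached content

signal = NormalizedSignal(
    time=datetime(2018, 7, 6, 14, 30),
    location="9q8yyk8",
    context={"fire": 0.12, "probability_version": "v3"},
)
```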
  • In one aspect, a frequentist inference technique is used to determine a single source probability. A database maintains mappings between different combinations of signal properties and ratios of signals turning into events (a probability) for that combination of signal properties. The database is queried with the combination of signal properties. The database returns a ratio of signals having the signal properties turning into events. The ratio is assigned to the signal. A combination of signal properties can include: (1) event class (e.g., fire, accident, weather, etc.), (2) media type (e.g., text, image, audio, etc.), (3) source (e.g., twitter, traffic camera, first responder radio traffic, etc.), and (4) geo type (e.g., geo cell, region, or non-geo).
  • In another aspect, a single source probability is calculated by single source classifiers (e.g., machine learning models, artificial intelligence, neural networks, etc.) that consider hundreds, thousands, or even more signal features of a signal. Single source classifiers can be based on binary models and/or multi-class models.
  • Output from a single source classifier can be adjusted to more accurately represent a probability that a signal is a “true positive”. For example, 1,000 signals with classifier output of 0.9 may include only 80% true positives. Thus, the single source probability can be adjusted to 0.8 to more accurately reflect the probability of the signal being a true event. “Calibration” can be done in such a way that any “calibrated score” reflects the true probability of a true positive outcome.
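  • A binned calibration of this kind can be sketched as follows; the bin count and the held-out labeled signals are assumptions, and a production system might instead use isotonic regression or Platt scaling:

```python
# Hedged sketch of score calibration: for each raw-score bin, the
# calibrated score is the observed true-positive rate in that bin.
from collections import defaultdict

def fit_binned_calibration(raw_scores, labels, n_bins=10):
    """Map each raw-score bin to its empirical true-positive rate."""
    totals, positives = defaultdict(int), defaultdict(int)
    for score, label in zip(raw_scores, labels):
        b = min(int(score * n_bins), n_bins - 1)
        totals[b] += 1
        positives[b] += int(label)
    return {b: positives[b] / totals[b] for b in totals}

def calibrate(raw_score, calibration_map, n_bins=10):
    b = min(int(raw_score * n_bins), n_bins - 1)
    return calibration_map.get(b, raw_score)  # fall back to the raw score

# If signals scoring ~0.9 historically pan out 80% of the time,
# calibrate(0.9, ...) returns ~0.8.
```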
  • FIG. 1 depicts computer architecture 100 that facilitates ingesting and normalizing signals. As depicted, computer architecture 100 includes signal ingestion modules 101, social signals 171, Web signals 172, and streaming signals 173. Signal ingestion modules 101, social signals 171, Web signals 172, and streaming signals 173 can be connected to (or be part of) a network, such as, for example, a system bus, a Local Area Network (“LAN”), a Wide Area Network (“WAN”), and even the Internet. Accordingly, signal ingestion modules 101, social signals 171, Web signals 172, and streaming signals 173 as well as any other connected computer systems and their components can create and exchange message related data (e.g., Internet Protocol (“IP”) datagrams and other higher layer protocols that utilize IP datagrams, such as, Transmission Control Protocol (“TCP”), Hypertext Transfer Protocol (“HTTP”), Simple Mail Transfer Protocol (“SMTP”), Simple Object Access Protocol (SOAP), etc. or using other non-datagram protocols) over the network.
  • Signal ingestion module(s) 101 can ingest raw signals 121, including social signals 171, web signals 172, and streaming signals 173 (e.g., social posts, traffic camera feeds, other camera feeds, listening device feeds, 911 calls, weather data, planned events, IoT device data, crowd sourced traffic and road information, satellite data, air quality sensor data, smart city sensor data, public radio communication, etc.) on an ongoing basis and in essentially real-time. Signal ingestion module(s) 101 include social content ingestion modules 174, web content ingestion modules 176, stream content ingestion modules 177, and signal formatter 180. Signal formatter 180 further includes social signal processing module 181, web signal processing module 182, and stream signal processing modules 183.
  • For each type of signal, a corresponding ingestion module and signal processing module can interoperate to normalize the signal into a Time, Location, Context (TLC) format. For example, social content ingestion modules 174 and social signal processing module 181 can interoperate to normalize social signals 171 into the TLC format. Similarly, web content ingestion modules 176 and web signal processing module 182 can interoperate to normalize web signals 172 into the TLC format. Likewise, stream content ingestion modules 177 and stream signal processing modules 183 can interoperate to normalize streaming signals 173 into the TLC format.
  • In one aspect, signal content exceeding specified size requirements (e.g., audio or video) is cached upon ingestion. Signal ingestion modules 101 include a URL or other identifier to the cached content within the context for the signal.
  • Signal formatter 180 can include one or more single source classifiers classifying ingested signals. The one or more single source classifiers can assign one or more single source probabilities (e.g., between 0%-100%) to each ingested signal. Each single source probability is a probability of the ingested signal being a particular category of event (e.g., fire, weather, medical, accident, police presence, etc.). Ingested signals with a sufficient single source probability (e.g., greater than or equal to 4%) are considered “elevated” signals.
  • In one aspect, signal formatter 180 includes modules for determining a single source probability as a ratio of signals turning into events based on the following signal properties: (1) event class (e.g., fire, accident, weather, etc.), (2) media type (e.g., text, image, audio, etc.), (3) source (e.g., twitter, traffic camera, first responder radio traffic, etc.), and (4) geo type (e.g., geo cell, region, or non-geo). Probabilities can be stored in a lookup table for different combinations of the signal properties. Features of a signal can be derived and used to query the lookup table. For example, the lookup table can be queried with terms (“accident”, “image”, “twitter”, “region”). The corresponding ratio (probability) can be returned from the table.
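  • A toy version of this lookup, with made-up ratios, might be:

```python
# Sketch of the frequentist lookup table: (event class, media type, source,
# geo type) -> historical ratio of such signals that became actual events.
# All ratios below are invented for illustration.
SINGLE_SOURCE_RATIOS = {
    ("accident", "image", "twitter", "region"): 0.04,
    ("fire", "text", "twitter", "geo cell"): 0.09,
    ("weather", "audio", "first responder radio", "non-geo"): 0.02,
}

def single_source_probability(event_class, media_type, source, geo_type):
    """Return the ratio of matching signals that turned into events (0.0 if unseen)."""
    return SINGLE_SOURCE_RATIOS.get((event_class, media_type, source, geo_type), 0.0)

print(single_source_probability("accident", "image", "twitter", "region"))  # 0.04
```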
  • In another aspect, signal formatter 180 includes a plurality of single source classifiers (e.g., artificial intelligence, machine learning modules, neural networks, etc.). Each single source classifier can consider hundreds, thousands, or even more signal features of a signal. Signal features of a signal can be derived and submitted to a single source classifier. The single source classifier can return a probability that a signal indicates a type of event. Single source classifiers can be binary classifiers or multi-class classifiers.
  • Raw classifier output can be adjusted to more accurately represent a probability that a signal is a “true positive”. For example, 1,000 signals whose raw classifier output is 0.9 may include only 80% true positives. Thus, the probability can be adjusted to 0.8 to reflect the true probability of the signal being a true positive. “Calibration” can be done in such a way that any “calibrated score” reflects the true probability of a true positive outcome.
  • Signal ingestion modules 101 can include one or more single source probabilities and corresponding probability details in the context of a normalized signal. Probability details can indicate a probability version and features used to calculate the probability. In one aspect, a probability version and signal feature are contained in a hash field.
  • Thus, in general, any of the received raw signals can be normalized into normalized signals including Time, Location, Context, signal source, signal type, and content. Signal ingestion modules 101 can send normalized signals 122 to event detection infrastructure 103. For example, signal ingestion modules 101 can send normalized signal 122A, including time 123A, location 124A, context 126A, content 127A, type 128A, and source 129A to event detection infrastructure 103. Similarly, signal ingestion modules 101 can send normalized signal 122B, including time 123B, location 124B, context 126B, content 127B, type 128B, and source 129B to event detection infrastructure 103. Signal ingestion modules 101 can also send normalized signal 122C (depicted in FIG. 6), including time 123C, location 124C, context 126C, content 127C, type 128C, and source 129C to event detection infrastructure 103.
  • Multi-Signal Detection
  • FIG. 2 illustrates an example computer architecture 200 that facilitates detecting an event from features derived from multiple signals. As depicted, computer architecture 200 further includes event detection infrastructure 103. Event detection infrastructure 103 can be connected to (or be part of) a network with signal ingestion modules 101. As such, signal ingestion modules 101 and event detection infrastructure 103 can create and exchange message related data over the network.
  • As depicted, event detection infrastructure 103 further includes evaluation module 206. Evaluation module 206 is configured to determine if features of a plurality of normalized signals collectively indicate an event. Evaluation module 206 can detect (or not detect) an event based on one or more features of one normalized signal in combination with one or more features of another normalized signal.
  • FIG. 3 illustrates a flow chart of an example method 300 for detecting an event from features derived from multiple signals. Method 300 will be described with respect to the components and data in computer architecture 200.
  • Method 300 includes receiving a first signal (301). For example, event detection infrastructure 103 can receive normalized signal 122B. Method 300 includes deriving first one or more features of the first signal (302). For example, event detection infrastructure 103 can derive features 201 of normalized signal 122B. Features 201 can include and/or be derived from time 123B, location 124B, context 126B, content 127B, type 128B, and source 129B. Event detection infrastructure 103 can also derive features 201 from one or more single source probabilities assigned to normalized signal 122B.
  • Method 300 includes determining that the first one or more features do not satisfy conditions to be identified as an event (303). For example, evaluation module 206 can determine that features 201 do not satisfy conditions to be identified as an event. That is, the one or more features of normalized signal 122B do not alone provide sufficient evidence of an event. In one aspect, one or more single source probabilities assigned to normalized signal 122B do not satisfy probability thresholds in thresholds 226.
  • Method 300 includes receiving a second signal (304). For example, event detection infrastructure 103 can receive normalized signal 122A. Method 300 includes deriving second one or more features of the second signal (305). For example, event detection infrastructure 103 can derive features 202 of normalized signal 122A. Features 202 can include and/or be derived from time 123A, location 124A, context 126A, content 127A, type 128A, and source 129A. Event detection infrastructure 103 can also derive features 202 from one or more single source probabilities assigned to normalized signal 122A.
  • Method 300 includes aggregating the first one or more features with the second one or more features into aggregated features (306). For example, evaluation module 206 can aggregate features 201 with features 202 into aggregated features 203. Evaluation module 206 can include an algorithm that defines and aggregates individual contributions of different signal features into aggregated features. Aggregating features 201 and 202 can include aggregating a single source probability assigned to normalized signal 122B for an event type with a signal source probability assigned to normalized signal 122A for the event type into a multisource probability for the event type.
  • Method 300 includes detecting an event from the aggregated features (307). For example, evaluation module 206 can determine that aggregated features 203 satisfy conditions to be detected as an event. Evaluation module 206 can detect event 224, such as, for example, a fire, an accident, a shooting, a protest, etc. based on satisfaction of the conditions.
  • In one aspect, conditions for event identification can be included in thresholds 226. Conditions can include threshold probabilities per event type. When a probability exceeds a threshold probability, evaluation module 206 can detect an event. A probability can be a single signal probability or a multisource (aggregated) probability. As such, evaluation module 206 can detect an event based on a multisource probability exceeding a probability threshold in thresholds 226.
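  • One plausible sketch of steps 306-307 follows; the noisy-OR combination rule and the threshold values are assumptions, since the description leaves the aggregation algorithm open:

```python
# Aggregate per-signal single source probabilities into a multisource
# probability and compare against a per-event-type threshold (cf. thresholds
# 226). The noisy-OR rule and the threshold values are assumed examples.
from math import prod

THRESHOLDS = {"fire": 0.85, "accident": 0.80}  # hypothetical

def multisource_probability(single_source_probs):
    """Noisy-OR: probability that at least one signal reflects a real event."""
    return 1.0 - prod(1.0 - p for p in single_source_probs)

def detect_event(event_type, single_source_probs):
    return multisource_probability(single_source_probs) >= THRESHOLDS[event_type]

# Two individually insufficient signals can jointly cross the threshold:
print(detect_event("fire", [0.7, 0.6]))  # 1 - (0.3 * 0.4) = 0.88 -> True
```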
  • FIG. 4 illustrates an example computer architecture 400 that facilitates detecting an event from features derived from multiple signals. As depicted, event detection infrastructure 103 further includes evaluation module 206 and validator 204. Evaluation module 206 is configured to determine if features of a plurality of normalized signals indicate a possible event. Evaluation module 206 can detect (or not detect) a possible event based on one or more features of a normalized signal. Validator 204 is configured to validate (or not validate) a possible event as an actual event based on one or more features of another normalized signal.
  • FIG. 5 illustrates a flow chart of an example method 500 for detecting an event from features derived from multiple signals. Method 500 will be described with respect to the components and data in computer architecture 400.
  • Method 500 includes receiving a first signal (501). For example, event detection infrastructure 103 can receive normalized signal 122B. Method 500 includes deriving first one or more features of the first signal (502). For example, event detection infrastructure 103 can derive features 401 of normalized signal 122B. Features 401 can include and/or be derived from time 123B, location 124B, context 126B, content 127B, type 128B, and source 129B. Event detection infrastructure 103 can also derive features 401 from one or more single source probabilities assigned to normalized signal 122B.
  • Method 500 includes detecting a possible event from the first one or more features (503). For example, evaluation module 206 can detect possible event 423 from features 401. Based on features 401, event detection infrastructure 103 can determine that the evidence in features 401 is not confirming of an event but is sufficient to warrant further investigation of an event type. In one aspect, a single source probability assigned to normalized signal 122B for an event type does not satisfy a probability threshold for full event detection but does satisfy a probability threshold for further investigation.
  • Method 500 includes receiving a second signal (504). For example, event detection infrastructure 103 can receive normalized signal 122A. Method 500 includes deriving second one or more features of the second signal (505). For example, event detection infrastructure 103 can derive features 402 of normalized signal 122A. Features 402 can include and/or be derived from time 123A, location 124A, context 126A, content 127A, type 128A, and source 129A. Event detection infrastructure 103 can also derive features 402 from one or more single source probabilities assigned to normalized signal 122A.
  • Method 500 includes validating the possible event as an actual event based on the second one or more features (506). For example, validator 204 can determine that possible event 423 in combination with features 402 provide sufficient evidence of an actual event. Validator 204 can validate possible event 423 as event 424 based on features 402. In one aspect, validator 204 considers a single source probability assigned to normalized signal 122B in view of a single source probability assigned to normalized signal 122A. Validator 204 determines that the single source probabilities, when considered collectively, satisfy a probability threshold for detecting an event.
  • Forming and Detecting Events from Signal Groupings
  • In general, a plurality of normalized (e.g., TLC) signals can be grouped together in a signal group based on spatial similarity and/or temporal similarity among the plurality of normalized signals and/or corresponding raw (non-normalized) signals. A feature extractor can derive features (e.g., percentages, counts, durations, histograms, etc.) of the signal group from the plurality of normalized signals. An event detector can attempt to detect events from signal group features.
  • In one aspect, a plurality of normalized (e.g., TLC) signals are included in a signal sequence. Turning to FIG. 6A, event detection infrastructure 103 can include sequence manager 604, feature extractor 609, and sequence storage 613. Sequence manager 604 further includes time comparator 606, location comparator 607, and deduplicator 608.
  • Time comparator 606 is configured to determine temporal similarity between a normalized signal and a signal sequence. Time comparator 606 can compare a signal time of a received normalized signal to a time associated with existing signal sequences (e.g., the time of the first signal in the signal sequence). Temporal similarity can be defined by a specified time period, such as, for example, 5 minutes, 10 minutes, 20 minutes, 30 minutes, etc. When a normalized signal is received within the specified time period of a time associated with a signal sequence, the normalized signal can be considered temporally similar to the signal sequence.
  • Likewise, location comparator 607 is configured to determine spatial similarity between a normalized signal and a signal sequence. Location comparator 607 can compare a signal location of a received normalized signal to a location associated with existing signal sequences (e.g., the location of the first signal in the signal sequence). Spatial similarity can be defined by a geographic area, such as, for example, a distance radius (e.g., meters, miles, etc.), a number of geo cells of a specified precision, an Area of Interest (AoI), etc. When a normalized signal is received within the geographic area associated with a signal sequence, the normalized signal can be considered spatially similar to the signal sequence.
  • Deduplicator 608 is configured to determine if a signal is a duplicate of a previously received signal. Deduplicator 608 can detect a duplicate when a normalized signal includes content (e.g., text, image, etc.) that is essentially identical to previously received content (previously received text, a previously received image, etc.). Deduplicator 608 can also detect a duplicate when a normalized signal is a repost or rebroadcast of a previously received normalized signal. Sequence manager 604 can ignore duplicate normalized signals.
  • Sequence manager 604 can include a signal having sufficient temporal and spatial similarity to a signal sequence (and that is not a duplicate) in that signal sequence. Sequence manager 604 can include a signal that lacks sufficient temporal and/or spatial similarity to any signal sequence (and that is not a duplicate) in a new signal sequence. A signal can be encoded into a signal sequence as a vector using any of a variety of algorithms, including recurrent neural networks (RNNs) such as Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs), convolutional neural networks, or other algorithms.
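  • By way of illustration, the grouping and deduplication tests might be sketched as follows, assuming temporal similarity is a fixed time window and spatial similarity a distance radius (two of the options named above); the signal attributes (time, latlon, content) and the exact-hash deduplication are hypothetical simplifications:

```python
# Hedged sketch of sequence manager grouping: a signal joins a sequence when
# it is within TIME_WINDOW and RADIUS_KM of the sequence's first signal.
import hashlib
from datetime import timedelta
from math import asin, cos, radians, sin, sqrt

TIME_WINDOW = timedelta(minutes=10)
RADIUS_KM = 2.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def belongs_to_sequence(signal, sequence) -> bool:
    first = sequence[0]  # sequence time/location anchored on its first signal
    temporally_similar = abs(signal.time - first.time) <= TIME_WINDOW
    spatially_similar = haversine_km(*signal.latlon, *first.latlon) <= RADIUS_KM
    return temporally_similar and spatially_similar

def is_duplicate(signal, seen_hashes: set) -> bool:
    """Exact-content dedup; real deduplication may be fuzzier."""
    digest = hashlib.sha256(signal.content.encode()).hexdigest()
    if digest in seen_hashes:
        return True
    seen_hashes.add(digest)
    return False
```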
  • Feature extractor 609 is configured to derive features of a signal sequence from signal data contained in the signal sequence. Derived features can include a percentage of normalized signals per geohash, a count of signals per time of day (hours:minutes), a signal gap histogram indicating a history of signal gap lengths (e.g., with bins for 1 s, 5 s, 10 s, 1 m, 5 m, 10 m, 30 m), a count of signals per signal source, model output histograms indicating model scores, a sequence duration, a count of signals per signal type, a number of unique users that posted social content, etc. However, feature extractor 609 can derive a variety of other features as well. Additionally, the described features can be of different shapes to include more or less information, such as, for example, gap lengths, provider signal counts, histogram bins, sequence durations, category counts, etc.
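  • A few of these features can be sketched directly; the bin edges and feature names below are illustrative assumptions:

```python
# Sketch of a feature extractor over a signal sequence: per-source and
# per-type counts, a signal-gap histogram over fixed bins, and duration.
from collections import Counter

GAP_BIN_EDGES_S = [1, 5, 10, 60, 300, 600, 1800]  # 1s, 5s, 10s, 1m, 5m, 10m, 30m

def extract_features(sequence):
    times = sorted(s.time for s in sequence)
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    gap_histogram = Counter(
        next((edge for edge in GAP_BIN_EDGES_S if gap <= edge), "30m+")
        for gap in gaps
    )
    return {
        "count_per_source": Counter(s.source for s in sequence),
        "count_per_type": Counter(s.signal_type for s in sequence),
        "gap_histogram": dict(gap_histogram),
        "duration_s": (times[-1] - times[0]).total_seconds() if times else 0.0,
    }
```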
  • FIG. 7 illustrates a flow chart of an example method 700 for forming a signal sequence. Method 700 will be described with respect to the components and data in computer architecture 600.
  • Method 700 includes receiving a normalized signal including time, location, context, and content (701). For example, sequence manager 604 can receive normalized signal 122A. Method 700 includes forming a signal sequence including the normalized signal (702). For example, time comparator 606 can compare time 123A to times associated with existing signal sequences. Similarly, location comparator 607 can compare location 124A to locations associated with existing signal sequences. Time comparator 606 and/or location comparator 607 can determine that normalized signal 122A lacks sufficient temporal similarity and/or sufficient spatial similarity, respectively, to existing signal sequences. Deduplicator 608 can determine that normalized signal 122A is not a duplicate normalized signal. As such, sequence manager 604 can form signal sequence 631, include normalized signal 122A in signal sequence 631, and store signal sequence 631 in sequence storage 613.
  • Method 700 includes receiving another normalized signal including another time, another location, another context, and other content (703). For example, sequence manager 604 can receive normalized signal 122B.
  • Method 700 includes determining that there is sufficient temporal similarity between the time and the other time (704). For example, time comparator 606 can compare time 123B to time 123A. Time comparator 606 can determine that time 123B is sufficiently similar to time 123A. Method 700 includes determining that there is sufficient spatial similarity between the location and the other location (705). For example, location comparator 607 can compare location 124B to location 124A. Location comparator 607 can determine that location 124B has sufficient similarity to location 124A.
  • Method 700 includes including the other normalized signal in the signal sequence based on the sufficient temporal similarity and the sufficient spatial similarity (706). For example, sequence manager 604 can include normalized signal 122B in signal sequence 631 and update signal sequence 631 in sequence storage 613.
  • Subsequently, sequence manager 604 can receive normalized signal 122C. Time comparator 606 can compare time 123C to time 123A and location comparator 607 can compare location 124C to location 124A. If there is sufficient temporal and spatial similarity between normalized signal 122C and normalized signal 122A, sequence manager 604 can include normalized signal 122C in signal sequence 631. On the other hand, if there is insufficient temporal similarity and/or insufficient spatial similarity between normalized signal 122C and normalized signal 122A, sequence manager 604 can form signal sequence 632. Sequence manager 604 can include normalized signal 122C in signal sequence 632 and store signal sequence 632 in sequence storage 613.
  • Turning to FIG. 6B, event detection infrastructure 103 further includes event detector 611. Event detector 611 is configured to determine if features extracted from a signal sequence are indicative of an event.
  • FIG. 8 illustrates a flow chart of an example method 800 for detecting an event. Method 800 will be described with respect to the components and data in computer architecture 600.
  • Method 800 includes accessing a signal sequence (801). For example, feature extractor 609 can access signal sequence 631. Method 800 includes extracting features from the signal sequence (802). For example, feature extractor 609 can extract features 633 from signal sequence 631. Method 800 includes detecting an event based on the extracted features (803). For example, event detector 611 can attempt to detect an event from features 633. In one aspect, event detector 611 detects event 636 from features 633. In another aspect, event detector 611 does not detect an event from features 633.
  • Turning to FIG. 6C, sequence manager 604 can subsequently add normalized signal 122C to signal sequence 631 changing the signal data contained in signal sequence 631. Feature extractor 609 can again access signal sequence 631. Feature extractor 609 can derive features 634 (which differ from features 633 at least due to inclusion of normalized signal 122C) from signal sequence 631. Event detector 611 can attempt to detect an event from features 634. In one aspect, event detector 611 detects event 636 from features 634. In another aspect, event detector 611 does not detect an event from features 634.
  • In a more specific aspect, event detector 611 does not detect an event from features 633. Subsequently, event detector 611 detects event 636 from features 634.
  • An event detection can include one or more of a detection identifier, a sequence identifier, and an event type (e.g., accident, hazard, fire, traffic, weather, etc.).
  • A detection identifier can include a description and features. The description can be a hash of the signal with the earliest timestamp in a signal sequence. Features can include features of the signal sequence. Including features provides understanding of how a multisource detection evolves over time as normalized signals are added. A detection identifier can be shared by multiple detections derived from the same signal sequence.
  • A sequence identifier can include a description and features. The description can be a hash of all the signals included in the signal sequence. Features can include features of the signal sequence. Including features permits multisource detections to be linked to human event curations. A sequence identifier can be unique to a group of signals included in a signal sequence. When signals in a signal sequence change (e.g., when a new normalized signal is added), the sequence identifier is changed.
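  • A hedged sketch of this identifier scheme, assuming signals carry time, location, and content fields:

```python
# Detection identifier: hash of the earliest signal (stable as the sequence
# grows). Sequence identifier: hash over all signals (changes on each add).
import hashlib
import json

def _hash(payload) -> str:
    return hashlib.sha256(
        json.dumps(payload, sort_keys=True, default=str).encode()
    ).hexdigest()

def detection_id(sequence) -> str:
    earliest = min(sequence, key=lambda s: s.time)
    return _hash({"time": earliest.time, "location": earliest.location,
                  "content": earliest.content})

def sequence_id(sequence) -> str:
    return _hash([{"time": s.time, "location": s.location, "content": s.content}
                  for s in sorted(sequence, key=lambda s: s.time)])
```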
  • In one aspect, event detection infrastructure 103 also includes one or more multisource classifiers. Feature extractor 609 can send extracted features to the one or more multisource classifiers. Per event type, the one or more multisource classifiers compute a probability (e.g., using artificial intelligence, machine learning, neural networks, etc.) that the extracted features indicate the type of event. Event detector 611 can detect (or not detect) an event from the computed probabilities.
  • For example, turning to FIG. 6D, multi-source classifier 612 is configured to assign a probability that a signal sequence is a type of event. Multi-source classifier 612 formulates a detection from signal sequence features. Multi-source classifier 612 can implement any of a variety of algorithms, including: logistic regression, random forest (RF), support vector machines (SVM), gradient boosted decision trees (GBDT), linear regression, etc.
  • For example, multi-source classifier 612 (e.g., using machine learning, artificial intelligence, neural networks, etc.) can formulate detection 641 from features 633. As depicted, detection 641 includes detection ID 642, sequence ID 643, category 644, and probability 646. Detection 641 can be forwarded to event detector 611. Event detector 611 can determine that probability 646 does not satisfy a detection threshold for category 644 to be indicated as an event. Detection 641 can also be stored in sequence storage 613.
  • Subsequently, turning to FIG. 6E, multi-source classifier 612 (e.g., using machine learning, artificial intelligence, neural networks, etc.) can formulate detection 651 from features 634. As depicted, detection 651 includes detection ID 642, sequence ID 647, category 644, and probability 648. Detection 651 can be forwarded to event detector 611. Event detector 611 can determine that probability 648 does satisfy a detection threshold for category 644 to be indicated as an event. Detection 651 can also be stored in sequence storage 613. Event detector 611 can output event 636.
  • As detections age and are not determined to be accurate (i.e., are not True Positives), the probability declines that signals are “True Positive” detections of actual events. As such, a multi-source probability for a signal sequence, up to the last available signal, can be decayed over time. When a new signal comes in, the signal sequence can be extended by the new signal. The multi-source probability is recalculated for the new, extended signal sequence, and decay begins again.
  • In general, decay can also be calculated “ahead of time” when a detection is created and a probability assigned. By pre-calculating decay for future points in time, downstream systems do not have to perform calculations to update decayed probabilities. Further, different event classes can decay at different rates. For example, a fire detection can decay more slowly than a crash detection because these types of events tend to resolve at different speeds. If a new signal is added to update a sequence, the pre-calculated decay values may be discarded. A multi-source probability can be re-calculated for the updated sequence and new pre-calculated decay values can be assigned.
  • Multi-source probability decay can start after a specified period of time (e.g., 3 minutes) and decay can occur in accordance with a defined decay equation. Thus, modeling multi-source probability decay can include an initial static phase, a decay phase, and a final static phase. In one aspect, decay is initially more pronounced and then weakens. Thus, as a newer detection begins to age (e.g., by one minute) it is more indicative of a possible “false positive” relative to an older event that ages by an additional minute.
  • In one aspect, a decay equation defines exponential decay of multi-source probabilities. Different decay rates can be used for different classes. Decay can be similar to radioactive decay, with different tau values used to calculate the “half life” of multi-source probability for a class. Tau values can vary by event type.
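  • One way to sketch this decay model follows; the tau values, the 3-minute static phase, and the probability floor are all assumed parameters:

```python
# Sketch of multi-source probability decay: an initial static phase, then
# exponential decay with a per-event-class tau, then a final static floor.
from math import exp

STATIC_PHASE_S = 180.0                    # no decay for the first 3 minutes
TAU_S = {"fire": 3600.0, "crash": 900.0}  # fire decays more slowly than crash
FLOOR = 0.05                              # final static phase (assumed)

def decayed_probability(p0: float, age_s: float, event_class: str) -> float:
    if age_s <= STATIC_PHASE_S:
        return p0
    return max(p0 * exp(-(age_s - STATIC_PHASE_S) / TAU_S[event_class]), FLOOR)

# Decay can be pre-calculated "ahead of time" for downstream consumers:
schedule = [(t, decayed_probability(0.9, t, "crash")) for t in range(0, 3600, 300)]
```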
  • In FIGS. 6D and 6E, decay for signal sequence 631 can be defined in decay parameters 614. Sequence manager 604 can decay multisource probabilities computed for signal sequence 631 in accordance with decay parameters 614.
  • The components and data depicted in FIGS. 1-8 can be integrated with and/or can interoperate with one another to detect events. For example, evaluation module 206 and/or validator 204 can include and/or interoperate with one or more of: a sequence manager, a feature extractor, multi-source classifiers, or an event detector.
  • FIG. 9 illustrates an example computer architecture 900 that facilitates detecting events. The components and data described with respect to FIGS. 1-8 can also be integrated with and/or can interoperate with the data and components of computer architecture 900 to detect events.
  • As depicted, computer architecture 900 includes geo cell database 911 and event notification 916. Geo cell database 911 and event notification 916 can be connected to (or be part of) a network with signal ingestion modules 101 and event detection infrastructure 103. As such, geo cell database 911 and event notification 916 can create and exchange message related data over the network.
  • As described, in general, on an ongoing basis, concurrently with signal ingestion (and also essentially in real-time), event detection infrastructure 103 detects different categories of (planned and unplanned) events (e.g., fire, police response, mass shooting, traffic accident, natural disaster, storm, active shooter, concerts, protests, etc.) in different locations (e.g., anywhere across a geographic area, such as, the United States, a State, a defined area, an impacted area, an area defined by a geo cell, an address, etc.), at different times, from the time, location, and context included in normalized signals.
  • Event detection infrastructure 103 can also determine an event truthfulness, event severity, and an associated geo cell. In one aspect, context information in a normalized signal increases the efficiency of determining truthfulness, severity, and an associated geo cell.
  • Generally, an event truthfulness indicates how likely a detected event is actually an event (vs. a hoax, fake, misinterpretation, etc.). Truthfulness can range from less likely to be true to more likely to be true. In one aspect, truthfulness is represented as a numerical value, such as, for example, from 1 (less truthful) to 10 (more truthful), or as a percentage value in a percentage range, such as, for example, from 0% (less truthful) to 100% (more truthful). Other truthfulness representations are also possible.
  • Generally, an event severity indicates how severe an event is (e.g., what degree of badness, what degree of damage, etc. is associated with the event). Severity can range from less severe (e.g., a single vehicle accident without injuries) to more severe (e.g., multi vehicle accident with multiple injuries and a possible fatality). As another example, a shooting event can also range from less severe (e.g., one victim without life threatening injuries) to more severe (e.g., multiple injuries and multiple fatalities). In one aspect, severity is represented as a numerical value, such as, for example, from 1 (less severe) to 5 (more severe). Other severity representations are also possible.
  • In general, event detection infrastructure 103 can include a geo determination module with modules for processing different kinds of content (location, time, context, text, images, audio, and video) into search terms. The geo determination module can query a geo cell database with search terms formulated from normalized signal content. The geo cell database can return any geo cells having matching supplemental information. For example, if a search term includes a street name, a subset of one or more geo cells including the street name in supplemental information can be returned to the event detection infrastructure.
  • Event detection infrastructure 103 can use the subset of geo cells to determine a geo cell associated with an event location. Events associated with a geo cell can be stored back into an entry for the geo cell in the geo cell database. Thus, over time an historical progression of events within a geo cell can be accumulated.
  • As such, event detection infrastructure 103 can assign an event ID, an event time, an event location, an event category, an event description, an event truthfulness, and an event severity to each detected event. Detected events can be sent to relevant entities, including to mobile devices, to computer systems, to APIs, to data storage, etc.
  • As depicted in computer architecture 900, event detection infrastructure 103 detects events from information contained in normalized signals 122. Event detection infrastructure 103 can detect an event from a single normalized signal 122 or from multiple normalized signals 122. In one aspect, event detection infrastructure 103 detects an event based on information contained in one or more normalized signals 122. In another aspect, event detection infrastructure 103 detects a possible event based on information contained in one or more normalized signals 122. Event detection infrastructure 103 then validates the potential event as an event based on information contained in one or more other normalized signals 122.
  • As depicted, event detection infrastructure 103 includes geo determination module 904, categorization module 906, truthfulness determination module 907, and severity determination module 908.
  • Geo determination module 904 can include NLP modules, image analysis modules, etc. for identifying location information from a normalized signal. Geo determination module 904 can formulate (e.g., location) search terms 941 by using NLP modules to process audio, using image analysis modules to process images, etc. Search terms can include street addresses, building names, landmark names, location names, school names, image fingerprints, etc. Event detection infrastructure 103 can use a URL or identifier to access cached content when appropriate.
  • Categorization module 906 can categorize a detected event into one of a plurality of different categories (e.g., fire, police response, mass shooting, traffic accident, natural disaster, storm, active shooter, concerts, protests, etc.) based on the content of normalized signals used to detect and/or otherwise related to an event.
  • Truthfulness determination module 907 can determine the truthfulness of a detected event based on one or more of: source, type, age, and content of normalized signals used to detect and/or otherwise related to the event. Some signal types may be inherently more reliable than other signal types. For example, video from a live traffic camera feed may be more reliable than text in a social media post. Some signal sources may be inherently more reliable than others. For example, a social media account of a government agency may be more reliable than a social media account of an individual. The reliability of a signal can decay over time.
  • Severity determination module 908 can determine the severity of a detected event based on one or more of: location, content (e.g., dispatch codes, keywords, etc.), and volume of normalized signals used to detect and/or otherwise related to an event. Events at some locations may be inherently more severe than events at other locations. For example, an event at a hospital is potentially more severe than the same event at an abandoned warehouse. Event category can also be considered when determining severity. For example, an event categorized as a “Shooting” may be inherently more severe than an event categorized as “Police Presence” since a shooting implies that someone has been injured.
  • Geo cell database 911 includes a plurality of geo cell entries. Each geo cell entry includes a geo cell defining an area and corresponding supplemental information about things included in the defined area. The corresponding supplemental information can include latitude/longitude, street names in the area defined by the geo cell, businesses in the area defined by the geo cell, other Areas of Interest (AOIs) (e.g., event venues, such as, arenas, stadiums, theaters, concert halls, etc.) in the area defined by the geo cell, image fingerprints derived from images captured in the area defined by the geo cell, and prior events that have occurred in the area defined by the geo cell. For example, geo cell entry 951 includes geo cell 952, lat/lon 953, streets 954, businesses 955, AOIs 956, and prior events 957. Each event in prior events 957 can include a location (e.g., a street address), a time (event occurrence time), an event category, an event truthfulness, an event severity, and an event description. Similarly, geo cell entry 961 includes geo cell 962, lat/lon 963, streets 964, businesses 965, AOIs 966, and prior events 967. Each event in prior events 967 can include a location (e.g., a street address), a time (event occurrence time), an event category, an event truthfulness, an event severity, and an event description.
  • Other geo cell entries can include the same or different (more or less) supplemental information, for example, depending on infrastructure density in an area. For example, a geo cell entry for an urban area can contain more diverse supplemental information than a geo cell entry for an agricultural area (e.g., in an empty field).
  • Geo cell database 911 can store geo cell entries in a hierarchical arrangement based on geo cell precision. As such, geo cell information of more precise geo cells is included in the geo cell information for any less precise geo cells that include the more precise geo cell.
  • Geo determination module 904 can query geo cell database 911 with search terms 941. Geo cell database 911 can identify any geo cells having supplemental information that matches search terms 941. For example, if search terms 941 include a street address and a business name, geo cell database 911 can identify geo cells having the street name and business name in the area defined by the geo cell. Geo cell database 911 can return any identified geo cells to geo determination module 904 in geo cell subset 942.
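  • A toy version of this query, with invented entries, might be:

```python
# Sketch of the geo cell database lookup: return the subset of geo cells
# whose supplemental information matches any search term. Entries invented.
GEO_CELL_DB = {
    "9q8yyk8": {"streets": {"market st"}, "businesses": {"acme coffee"}},
    "9q8yym2": {"streets": {"mission st"}, "businesses": {"city hardware"}},
}

def query_geo_cells(search_terms: set) -> list[str]:
    """Return geo cells whose supplemental information matches a search term."""
    matches = []
    for cell, info in GEO_CELL_DB.items():
        supplemental = set().union(*info.values())
        if supplemental & search_terms:
            matches.append(cell)
    return matches

print(query_geo_cells({"market st", "acme coffee"}))  # ['9q8yyk8']
```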
  • Geo determination module 904 can use geo cell subset 942 to determine the location of event 935 and/or a geo cell associated with event 935. As depicted, event 935 includes event ID 932, time 933, location 934, description 936, category 937, truthfulness 938, and severity 939.
  • Event detection infrastructure 103 can also determine that event 935 occurred in an area defined by geo cell 962 (e.g., a geohash having precision of level 7 or level 9). For example, event detection infrastructure 103 can determine that location 934 is in the area defined by geo cell 962. As such, event detection infrastructure 103 can store event 935 in prior events 967 (i.e., historical events that have occurred in the area defined by geo cell 962).
  • Event detection infrastructure 103 can also send event 935 to event notification module 916. Event notification module 916 can notify one or more entities about event 935.
  • The present described aspects may be implemented in other specific forms without departing from its spirit or essential characteristics. The described aspects are to be considered in all respects only as illustrative and not restrictive. The scope is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims (20)

What is claimed:
1. A method comprising:
receiving a first normalized signal;
deriving first one or more features of the first normalized signal;
determining that the first one or more features do not satisfy conditions to be identified as an event;
receiving a second normalized signal;
deriving second one or more features of the second normalized signal;
aggregating the first one or more features with the second one or more features into aggregated features; and
detecting an event from the aggregated features.
2. The method of claim 1, wherein aggregating the first one or more features with the second one or more features into aggregated features comprises:
detecting a possible event from the first one or more features; and
validating the possible event as an actual event based on the second one or more features.
3. The method of claim 1, further comprising:
including the first normalized signal in a signal sequence;
determining that the second normalized signal has sufficient temporal similarity to the first normalized signal;
determining that the second normalized signal has sufficient spatial similarity to the first normalized signal; and
including the second normalized signal in the signal sequence that contains the first normalized signal.
4. The method of claim 3, wherein aggregating the first one or more features with the second one or more features into aggregated features comprises deriving features of the signal sequence from the first one or more features and the second one or more features.
5. The method of claim 4, wherein deriving features of the signal sequence comprises deriving one or more of: a percentage, a count, a histogram, or a duration.
6. The method of claim 1, wherein the first normalized signal is one of: a social post with geographic content, a social post without geographic content, an image from a camera feed, a 911 call, weather data, IoT device data, satellite data, satellite imagery, a sound clip from a listening device, data from air quality sensors, a sound clip from radio communication, crowd sourced traffic information, or crowd sourced road information.
7. The method of claim 6, wherein the second normalized signal is a different one of: a social post with geographic content, a social post without geographic content, an image from a traffic camera feed, a 911 call, weather data, IoT device data, satellite data, satellite imagery, a sound clip from a listening device, data from air quality sensors, a sound clip from radio communication, crowd sourced traffic information, or crowd sourced road information.
8. The method of claim 1, wherein deriving first one or more features of the first normalized signal comprises deriving the first one or more features from a first single source probability assigned to the first normalized signal;
wherein deriving second one or more features of the second normalized signal comprises deriving the second one or more features from a second single source probability assigned to the second normalized signal;
wherein aggregating the first one or more features with the second one or more features into aggregated features comprises aggregating the first single source probability and the second single source probability into a multisource probability;
wherein detecting an event from the aggregated features comprises detecting an event from the multisource probability.
9. A method, the method comprising:
receiving a normalized signal including time, location, context, and content;
forming a signal sequence including the normalized signal;
receiving another normalized signal including another time, another location, another context, and other content;
determining that there is sufficient temporal similarity between the normalized signal and the other normalized signal;
determining that there is sufficient spatial similarity between the normalized signal and the other normalized signal; and
including the other normalized signal in the signal sequence based on the sufficient temporal similarity and the sufficient spatial similarity.
10. The method of claim 9, wherein determining that there is sufficient temporal similarity between the normalized signal and the other normalized signal comprises determining that the time and the other time are within a specified time of one another.
11. The method of claim 9, wherein determining that there is sufficient spatial similarity between the normalized signal and the other normalized signal comprises determining that the location and the other location are within a specified distance of one another.
12. The method of claim 9, wherein determining that there is sufficient spatial similarity between the normalized signal and the other normalized signal comprises determining that the location and the other location are within a specified number of geo cells of one another.
13. The method of claim 9, further comprising determining that the other normalized signal is not a duplicate of the normalized signal prior to including the other normalized signal in the signal sequence.
14. The method of claim 9, further comprising:
deriving one or more features of the signal sequence based on the normalized signal and the other normalized signal;
detecting an event from the derived one or more features.
15. The method of claim 14, wherein deriving one or more features of the signal sequence comprises deriving a multisource probability for the signal sequence.
16. The method of claim 15, wherein deriving a multisource probability comprises deriving a probability that the normalized signals in the signal sequence indicate a specified type of event.
17. A method, the method comprising:
accessing a signal sequence of normalized signals, normalized signals included in the signal sequence having a sufficient temporal similarity to one another and having a sufficient spatial similarity to one another;
extracting features from the signal sequence; and
detecting an event based on the extracted features.
18. The method of claim 17, further comprising prior to detecting the event:
detecting that the extracted features do not indicate the event;
adding an additional normalized signal to the signal sequence;
extracting further features from the signal sequence based on the additional normalized signal; and
wherein detecting an event based on the extracted features comprises detecting an event based on the further extracted features.
19. The method of claim 17, further comprising deriving a multisource probability from the extracted features; and
wherein detecting an event based on the extracted features comprises detecting an event based on the multisource probability.
20. The method of claim 17, wherein extracting features of the signal sequence comprises deriving one or more of: a percentage, a count, a histogram, or a duration.

Priority Applications (11)

Application Number Priority Date Filing Date Title
US16/029,481 US20190251138A1 (en) 2018-02-09 2018-07-06 Detecting events from features derived from multiple ingested signals
US16/203,792 US10311129B1 (en) 2018-02-09 2018-11-29 Detecting events from features derived from multiple ingested signals
PCT/US2019/025982 WO2019195674A1 (en) 2018-04-06 2019-04-05 Detecting events from features derived from multiple ingested signals
US16/379,401 US10474733B2 (en) 2018-02-09 2019-04-09 Detecting events from features derived from multiple ingested signals
US16/516,684 US10628601B2 (en) 2018-02-09 2019-07-19 Detecting events from features derived from ingested signals
US16/784,897 US10839095B2 (en) 2018-02-09 2020-02-07 Detecting events from features derived from ingested signals
US16/806,423 US20200265061A1 (en) 2017-08-28 2020-03-02 Signal normalization, event detection, and event notification using agency codes
US16/838,031 US10970184B2 (en) 2018-02-09 2020-04-02 Event detection removing private information
US16/867,285 US20200265236A1 (en) 2018-02-09 2020-05-05 Detecting events from a signal features matrix
US17/074,563 US20210081559A1 (en) 2018-02-09 2020-10-19 Managing roadway incidents
US16/950,073 US20210081556A1 (en) 2018-02-09 2020-11-17 Detecting events from features derived from ingested signals

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US201862628866P 2018-02-09 2018-02-09
US201862654274P 2018-04-06 2018-04-06
US201862654277P 2018-04-06 2018-04-06
US201862664001P 2018-04-27 2018-04-27
US201862682176P 2018-06-08 2018-06-08
US201862682177P 2018-06-08 2018-06-08
US16/029,481 US20190251138A1 (en) 2018-02-09 2018-07-06 Detecting events from features derived from multiple ingested signals

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
US16/396,454 Continuation-In-Part US20190332607A1 (en) 2017-08-28 2019-04-26 Normalizing ingested signals
US16/784,897 Continuation-In-Part US10839095B2 (en) 2018-02-09 2020-02-07 Detecting events from features derived from ingested signals

Related Child Applications (7)

Application Number Title Priority Date Filing Date
US16/203,792 Continuation US10311129B1 (en) 2018-02-09 2018-11-29 Detecting events from features derived from multiple ingested signals
US16/379,401 Continuation US10474733B2 (en) 2018-02-09 2019-04-09 Detecting events from features derived from multiple ingested signals
US16/516,684 Continuation-In-Part US10628601B2 (en) 2018-02-09 2019-07-19 Detecting events from features derived from ingested signals
US16/806,423 Continuation-In-Part US20200265061A1 (en) 2017-08-28 2020-03-02 Signal normalization, event detection, and event notification using agency codes
US16/838,031 Continuation-In-Part US10970184B2 (en) 2018-02-09 2020-04-02 Event detection removing private information
US16/867,285 Continuation-In-Part US20200265236A1 (en) 2018-02-09 2020-05-05 Detecting events from a signal features matrix
US17/074,563 Continuation-In-Part US20210081559A1 (en) 2018-02-09 2020-10-19 Managing roadway incidents

Publications (1)

Publication Number Publication Date
US20190251138A1 (en) 2019-08-15


Family Applications (3)

Application Number Title Priority Date Filing Date
US16/029,481 Abandoned US20190251138A1 (en) 2017-08-28 2018-07-06 Detecting events from features derived from multiple ingested signals
US16/203,792 Expired - Fee Related US10311129B1 (en) 2018-02-09 2018-11-29 Detecting events from features derived from multiple ingested signals
US16/379,401 Expired - Fee Related US10474733B2 (en) 2018-02-09 2019-04-09 Detecting events from features derived from multiple ingested signals



Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11005551B1 (en) * 2020-08-04 2021-05-11 Verizon Patent And Licensing Inc. Systems and methods for RF-based motion sensing and event detection
CN112822045A (en) * 2020-12-31 2021-05-18 天津大学 Content propagation hotspot prediction method based on multi-feature hybrid neural network

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11830519B2 (en) 2019-07-30 2023-11-28 Aselsan Elektronik Sanayi Ve Ticaret Anonim Sirketi Multi-channel acoustic event detection and classification method
CN110782083B * 2019-10-23 2020-10-27 Harbin Institute of Technology Aero-engine standby demand prediction method based on deep Croston method
CN111355733B * 2020-02-29 2021-01-29 Institute of Seismology, China Earthquake Administration Earthquake damage information intrusion detection system and detection method based on SVM algorithm

Family Cites Families (111)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9134398B2 (en) 1996-09-09 2015-09-15 Tracbeam Llc Wireless location using network centric location estimators
US7945141B2 (en) 2003-10-06 2011-05-17 Samsung Electronics Co., Ltd. Information storage medium including event occurrence information, and apparatus and method for reproducing the information storage medium
US20050183143A1 (en) 2004-02-13 2005-08-18 Anderholm Eric J. Methods and systems for monitoring user, application or device activity
US7084775B1 (en) 2004-07-12 2006-08-01 User-Centric Ip, L.P. Method and system for generating and sending user-centric weather alerts
US7702673B2 (en) 2004-10-01 2010-04-20 Ricoh Co., Ltd. System and methods for creation and use of a mixed media environment
US8515565B2 (en) 2005-04-19 2013-08-20 Airsage, Inc. Method and system for an integrated incident information and intelligence system
US8131352B2 (en) * 2007-06-20 2012-03-06 Neuropace, Inc. System and method for automatically adjusting detection thresholds in a feedback-controlled neurological event detector
US8583267B2 (en) 2007-08-17 2013-11-12 The Invention Science Fund I, Llc Selective invocation of playback content supplementation
EP2206114A4 (en) 2007-09-28 2012-07-11 Gracenote Inc Synthesizing a presentation of a multimedia event
GB2456129B (en) 2007-12-20 2010-05-12 Motorola Inc Apparatus and method for event detection
US8782041B1 (en) 2008-08-04 2014-07-15 The Weather Channel, Llc Text search for weather data
US8676841B2 (en) * 2008-08-29 2014-03-18 Oracle International Corporation Detection of recurring non-occurrences of events using pattern matching
US8161504B2 (en) 2009-03-20 2012-04-17 Nicholas Newell Systems and methods for memorializing a viewer's viewing experience with captured viewer images
WO2010109125A1 (en) 2009-03-24 2010-09-30 France Telecom Method and device for processing a piece of information indicative of a desire to be involved in at least one user application session
KR102068790B1 * 2009-07-16 2020-01-21 Bluefin Labs, Incorporated Estimating and displaying social interest in time-based media
US8380050B2 (en) 2010-02-09 2013-02-19 Echostar Technologies Llc Recording extension of delayed media content
US20110211737A1 (en) * 2010-03-01 2011-09-01 Microsoft Corporation Event Matching in Social Networks
US8595234B2 (en) * 2010-05-17 2013-11-26 Wal-Mart Stores, Inc. Processing data feeds
US8385593B2 (en) 2010-06-18 2013-02-26 Google Inc. Selecting representative images for establishments
CN103180002B * 2010-07-30 2016-10-19 ResMed Limited Leakage detection method and equipment
US9940508B2 (en) * 2010-08-26 2018-04-10 Blast Motion Inc. Event detection, confirmation and publication system that integrates sensor data and social media
US9418705B2 (en) * 2010-08-26 2016-08-16 Blast Motion Inc. Sensor and media event detection system
US20160220169A1 (en) * 2010-10-15 2016-08-04 Brain Sentinel, Inc. Method and Apparatus for Detecting Seizures Including Audio Characterization
US9324093B2 (en) 2010-10-28 2016-04-26 Yahoo! Inc. Measuring the effects of social sharing on online content and advertising
WO2012138994A2 (en) 2011-04-07 2012-10-11 Oman Stephen System and methods for targeted event detection and notification
US8856121B1 (en) 2011-04-28 2014-10-07 Adobe Systems Incorporated Event based metadata synthesis
US9262522B2 (en) 2011-06-30 2016-02-16 Rednote LLC Method and system for communicating between a sender and a recipient via a personalized message including an audio clip extracted from a pre-existing recording
EP2732424A4 (en) * 2011-07-13 2015-03-25 Bluefin Labs Inc Topic and time based media affinity estimation
US20130083036A1 (en) 2011-08-19 2013-04-04 Hall of Hands Limited Method of rendering a set of correlated events and computerized system thereof
WO2013033780A1 (en) 2011-09-09 2013-03-14 Hildick-Pytte Margaret Emergency services system and method
US11151617B2 (en) 2012-03-09 2021-10-19 Nara Logics, Inc. Systems and methods for providing recommendations based on collaborative and/or content-based nodal interrelationships
US8341106B1 (en) * 2011-12-07 2012-12-25 TaKaDu Ltd. System and method for identifying related events in a resource network monitoring system
US9087024B1 (en) 2012-01-26 2015-07-21 Amazon Technologies, Inc. Narration of network content
US8830054B2 (en) 2012-02-17 2014-09-09 Wavemarket, Inc. System and method for detecting and responding to an emergency
US20140328570A1 (en) 2013-01-09 2014-11-06 Sri International Identifying, describing, and sharing salient events in images and videos
US8938089B1 (en) 2012-06-26 2015-01-20 Google Inc. Detection of inactive broadcasts during live stream ingestion
US8812623B2 (en) 2012-07-17 2014-08-19 Nokia Siemens Networks Oy Techniques to support selective mobile content optimization
US9230250B1 (en) 2012-08-31 2016-01-05 Amazon Technologies, Inc. Selective high-resolution video monitoring in a materials handling facility
US9894169B2 (en) 2012-09-04 2018-02-13 Harmon.Ie R&D Ltd. System and method for displaying contextual activity streams
US9002069B2 (en) * 2012-09-24 2015-04-07 International Business Machines Corporation Social media event detection and content-based retrieval
US8892484B2 (en) 2012-09-28 2014-11-18 Sphere Of Influence, Inc. System and method for predicting events
US9558220B2 (en) 2013-03-04 2017-01-31 Fisher-Rosemount Systems, Inc. Big data in process control systems
US9159030B1 (en) * 2013-03-14 2015-10-13 Google Inc. Refining location detection from a query stream
CA3078018C (en) 2013-03-15 2023-08-22 Amazon Technologies, Inc. Scalable analysis platform for semi-structured data
US10268660B1 (en) * 2013-03-15 2019-04-23 Matan Arazi Real-time event transcription system and method
US9077956B1 (en) * 2013-03-22 2015-07-07 Amazon Technologies, Inc. Scene identification
CA2947936C (en) 2013-05-04 2023-02-21 Christopher Decharms Mobile security technology
US20140351046A1 * 2013-05-21 2014-11-27 IgnitionOne, Inc. System and Method for Predicting an Outcome By a User in a Single Score
US9727882B1 (en) * 2013-06-21 2017-08-08 Amazon Technologies, Inc. Predicting and classifying network activity events
US9514133B1 (en) 2013-06-25 2016-12-06 Jpmorgan Chase Bank, N.A. System and method for customized sentiment signal generation through machine learning based streaming text analytics
ITPI20130070A1 * 2013-07-15 2015-01-16 Alessandro Battistini Method for the creation of databases of events with media echo on the Internet.
US20170279957A1 (en) * 2013-08-23 2017-09-28 Cellepathy Inc. Transportation-related mobile device context inferences
WO2015047287A1 (en) * 2013-09-27 2015-04-02 Intel Corporation Methods and apparatus to identify privacy relevant correlations between data values
US9396253B2 (en) 2013-09-27 2016-07-19 International Business Machines Corporation Activity based analytics
US9858322B2 (en) 2013-11-11 2018-01-02 Amazon Technologies, Inc. Data stream ingestion and persistence techniques
US9703974B1 (en) * 2013-12-20 2017-07-11 Amazon Technologies, Inc. Coordinated file system security via rules
US9940679B2 (en) 2014-02-14 2018-04-10 Google Llc Systems, methods, and computer-readable media for event creation and notification
US9466196B2 (en) 2014-04-08 2016-10-11 Cubic Corporation Anomalous phenomena detector
US20150294233A1 (en) 2014-04-10 2015-10-15 Derek W. Aultman Systems and methods for automatic metadata tagging and cataloging of optimal actionable intelligence
US10325205B2 (en) 2014-06-09 2019-06-18 Cognitive Scale, Inc. Cognitive information processing system environment
US9449229B1 (en) 2014-07-07 2016-09-20 Google Inc. Systems and methods for categorizing motion event candidates
US9501915B1 (en) 2014-07-07 2016-11-22 Google Inc. Systems and methods for analyzing a video stream
US9158974B1 (en) 2014-07-07 2015-10-13 Google Inc. Method and system for motion vector-based video monitoring and event categorization
US9082018B1 (en) 2014-09-30 2015-07-14 Google Inc. Method and system for retroactively changing a display characteristic of event indicators on an event timeline
US9703827B2 (en) 2014-07-17 2017-07-11 Illumina Consulting Group, Inc. Methods and apparatus for performing real-time analytics based on multiple types of streamed data
US9680919B2 (en) 2014-08-13 2017-06-13 Software Ag Usa, Inc. Intelligent messaging grid for big data ingestion and/or associated methods
US9363280B1 (en) * 2014-08-22 2016-06-07 Fireeye, Inc. System and method of detecting delivery of malware using cross-customer data
US9699523B1 (en) 2014-09-08 2017-07-04 ProSports Technologies, LLC Automated clip creation
US20160078362A1 (en) * 2014-09-15 2016-03-17 Qualcomm Incorporated Methods and Systems of Dynamically Determining Feature Sets for the Efficient Classification of Mobile Device Behaviors
EP3216222A2 (en) 2014-11-07 2017-09-13 Mporium Group PLC Influencing content or access to content
US20160135706A1 (en) * 2014-11-14 2016-05-19 Zoll Medical Corporation Medical Premonitory Event Estimation
US20160210367A1 (en) * 2015-01-20 2016-07-21 Yahoo! Inc. Transition event detection
US9544726B2 (en) 2015-01-23 2017-01-10 Apple Inc. Adding location names using private frequent location data
US20160239713A1 (en) * 2015-02-16 2016-08-18 William Allen STONE In-vehicle monitoring
US20160267144A1 (en) 2015-03-12 2016-09-15 WeLink, Inc. Collecting and generating geo-tagged social media data through a network router interface
US20160283860A1 (en) 2015-03-25 2016-09-29 Microsoft Technology Licensing, Llc Machine Learning to Recognize Key Moments in Audio and Video Calls
US9880769B2 (en) 2015-06-05 2018-01-30 Microsoft Technology Licensing, Llc. Streaming joins in constrained memory environments
US9965685B2 (en) * 2015-06-12 2018-05-08 Google Llc Method and system for detecting an audio event for smart home devices
WO2016202890A1 (en) 2015-06-15 2016-12-22 Piksel, Inc Media streaming
AU2016204072B2 (en) 2015-06-17 2017-08-03 Accenture Global Services Limited Event anomaly analysis and prediction
US10043551B2 (en) 2015-06-25 2018-08-07 Intel Corporation Techniques to save or delete a video clip
US9443002B1 (en) * 2015-07-10 2016-09-13 Grand Rapids, Inc. Dynamic data analysis and selection for determining outcomes associated with domain specific probabilistic data sets
ES2874055T3 (en) 2015-07-10 2021-11-04 Whether or Knot LLC System and method for the distribution of electronic data
US10176336B2 (en) 2015-07-27 2019-01-08 Microsoft Technology Licensing, Llc Automated data transfer from mobile application silos to authorized third-party applications
WO2017035536A1 (en) 2015-08-27 2017-03-02 FogHorn Systems, Inc. Edge intelligence platform, and internet of things sensor streams system
US9699205B2 (en) * 2015-08-31 2017-07-04 Splunk Inc. Network security system
US10614364B2 (en) * 2015-09-16 2020-04-07 Microsoft Technology Licensing, Llc Localized anomaly detection using contextual signals
WO2017051063A1 (en) 2015-09-23 2017-03-30 Nokia Technologies Oy Video content selection
US10057349B2 (en) 2015-11-12 2018-08-21 Facebook, Inc. Data stream consolidation in a social networking system for near real-time analysis
US10955810B2 (en) 2015-11-13 2021-03-23 International Business Machines Corporation Monitoring communications flow in an industrial system to detect and mitigate hazardous conditions
US10122783B2 (en) 2015-11-18 2018-11-06 Microsoft Technology Licensing, Llc Dynamic data-ingestion pipeline
US9992248B2 (en) 2016-01-12 2018-06-05 International Business Machines Corporation Scalable event stream data processing using a messaging system
US11190821B2 (en) 2016-03-02 2021-11-30 International Business Machines Corporation Methods and apparatus for alerting users to media events of interest using social media analysis
US10237295B2 (en) * 2016-03-22 2019-03-19 Nec Corporation Automated event ID field analysis on heterogeneous logs
US9715508B1 (en) * 2016-03-28 2017-07-25 Cogniac, Corp. Dynamic adaptation of feature identification and annotation
US10051349B2 (en) * 2016-04-05 2018-08-14 Tyco Fire & Security Gmbh Sensor based system and method for premises safety and operational profiling based on drift analysis
US10045120B2 (en) 2016-06-20 2018-08-07 Gopro, Inc. Associating audio with three-dimensional objects in videos
US10078537B1 (en) 2016-06-29 2018-09-18 EMC IP Holding Company LLC Analytics platform and associated controller for automated deployment of analytics workspaces
US10516930B2 (en) * 2016-07-07 2019-12-24 Bragi GmbH Comparative analysis of sensors to control power status for wireless earpieces
US10209081B2 (en) 2016-08-09 2019-02-19 Nauto, Inc. System and method for precision localization and mapping
US10356027B2 (en) 2016-10-03 2019-07-16 HYP3R Inc Location resolution of social media posts
US20180101590A1 (en) 2016-10-10 2018-04-12 International Business Machines Corporation Message management in a social networking environment
US10602235B2 (en) 2016-12-29 2020-03-24 Arris Enterprises Llc Video segment detection and replacement
US10915911B2 (en) 2017-02-03 2021-02-09 Snap Inc. System to determine a price-schedule to distribute media content
US10310918B2 (en) 2017-03-22 2019-06-04 International Business Machines Corporation Information sharing among mobile apparatus
US10320566B2 (en) 2017-04-04 2019-06-11 International Business Machines Corporation Distributed logging of application events in a blockchain
KR102414024B1 * 2017-04-04 2022-06-29 SK hynix Inc. Image sensor having optical filter and operating method thereof
US10034029B1 (en) 2017-04-25 2018-07-24 Sprint Communications Company L.P. Systems and methods for audio object delivery based on audible frequency analysis
US10571444B2 (en) 2017-04-27 2020-02-25 International Business Machines Corporation Providing data to a distributed blockchain network
US10237393B1 (en) 2017-09-12 2019-03-19 Intel Corporation Safety systems and methods that use portable electronic devices to monitor the personal safety of a user
US10268642B1 (en) 2018-04-27 2019-04-23 Banjo, Inc. Normalizing insufficient signals based on additional information

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060287807A1 (en) * 2003-05-19 2006-12-21 Teffer Dean W Method for incorporating individual vehicle data collection, detection and recording of traffic violations in a traffic signal controller
US9507008B1 (en) * 2013-12-13 2016-11-29 Amazon Technologies, Inc. Location determination by correcting for antenna occlusion
US20170177722A1 (en) * 2015-12-22 2017-06-22 International Business Machines Corporation Segmenting social media users by means of life event detection and entity matching
US20170235820A1 (en) * 2016-01-29 2017-08-17 Jack G. Conrad System and engine for seeded clustering of news events
US20180004948A1 (en) * 2016-06-20 2018-01-04 Jask Labs Inc. Method for predicting and characterizing cyber attacks

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11005551B1 (en) * 2020-08-04 2021-05-11 Verizon Patent And Licensing Inc. Systems and methods for RF-based motion sensing and event detection
US11601180B2 (en) 2020-08-04 2023-03-07 Verizon Patent And Licensing Inc. Systems and methods for RF-based motion sensing and event detection
CN112822045A (en) * 2020-12-31 2021-05-18 Tianjin University Content propagation hotspot prediction method based on multi-feature hybrid neural network

Also Published As

Publication number Publication date
US10474733B2 (en) 2019-11-12
US20190251139A1 (en) 2019-08-15
US10311129B1 (en) 2019-06-04

Similar Documents

Publication Publication Date Title
US10803084B2 (en) Normalizing insufficient signals based on additional information
US10382938B1 (en) Detecting and validating planned event information
US10474733B2 (en) Detecting events from features derived from multiple ingested signals
US10838991B2 (en) Detecting an event from signals in a listening area
US10885068B2 (en) Consolidating information from different signals into an event
US10397757B1 (en) Deriving signal location from signal content
US10257058B1 (en) Ingesting streaming signals
US11062144B2 (en) Classifying video
US10324948B1 (en) Normalizing ingested signals
US10970184B2 (en) Event detection removing private information
US20200265236A1 (en) Detecting events from a signal features matrix
US20200265061A1 (en) Signal normalization, event detection, and event notification using agency codes
US20210081556A1 (en) Detecting events from features derived from ingested signals
US20210056345A1 (en) Creating signal sequences
US20210012114A1 (en) Segmenting video stream frames
US10671651B1 (en) Deriving signal location information
WO2019195674A1 (en) Detecting events from features derived from multiple ingested signals
US20210067596A1 (en) Detecting major events

Legal Events

Date Code Title Description
AS Assignment

Owner name: BANJO, INC., UTAH

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PATTON, DAMIEN;MEHTA, RISH;BRUCKHAUS, TILMANN;SIGNING DATES FROM 20180709 TO 20180712;REEL/FRAME:046389/0297

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION