WO2020132104A1 - Systems and methods for crowdsourced incident data distribution - Google Patents

Systems and methods for crowdsourced incident data distribution

Info

Publication number
WO2020132104A1
WO2020132104A1 (PCT/US2019/067234)
Authority
WO
WIPO (PCT)
Prior art keywords
video data
incident
electronic device
transmitting
platform
Application number
PCT/US2019/067234
Other languages
French (fr)
Inventor
Kenneth Liu
Original Assignee
Kenneth Liu
Application filed by Kenneth Liu
Publication of WO2020132104A1


Classifications

    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06Q40/08: Insurance
    • G06F16/787: Retrieval of video data characterised by using metadata, e.g. geographical or spatial information such as location
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06Q50/26: Government or public services
    • G06V20/10: Terrestrial scenes
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06N3/02: Neural networks
    • G06V20/44: Event detection in video content
    • G06V2201/10: Recognition assisted with metadata
    • H04N7/181: Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • H04N7/188: Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position

Definitions

  • the present disclosure is directed to techniques for selecting incident data for distribution. More specifically, the disclosure is directed to techniques for selecting, from multiple crowdsourced incident data sources, particular data of high relevance to the desired party.
  • One solution to this problem is to install fixed cameras at locations where incidents occur with high frequency.
  • An example may be a traffic camera located at an intersection which continuously records, such that footage may be retrieved for visual evidence when an incident (e.g., a traffic accident) takes place.
  • However, this solution fails to provide the relevant party (e.g., law enforcement, an insurance company, and/or a party involved in the incident) with multiple sources of visual evidence offering multiple perspectives on a single incident.
  • systems and methods are disclosed herein for crowdsourced incident data distribution.
  • systems and methods receive a report of an incident from one or more devices connected to the incident platform.
  • the platform identifies a number of active devices within a determined radius of the incident.
  • the identified devices then send a recording covering a specific time period to the platform for processing.
  • the platform receives the recordings and identifies the most relevant recordings by comparing the recordings to a pre-selected number of quality training sets.
  • the recordings are formatted and sent to the selected party.
  • the platform receiving the recordings with specified tags and descriptors for a specific incident. Upon receiving the recordings, the platform compares the video to a set of pre-determined visual identifiers to verify the accuracy of the specified tags and descriptors. If the recordings are verified, they are formatted and sent to the selected output party.
  • the disclosed platform provides techniques for determining enhanced accuracy through image recognition by combining specific techniques with auxiliary score computations to determine specific surveillance incidents.
  • FIG. 1 shows an illustrative diagram of a user device used in a vehicle recording in a first environment, in accordance with some embodiments of the disclosure
  • FIG. 2 shows an illustrative diagram of a user device used in a vehicle recording an incident in a second environment, in accordance with some embodiments of the disclosure
  • FIG. 3 shows an illustrative diagram of a law enforcement officer accessing the incident data platform to retrieve relevant recordings, in accordance with some embodiments of the disclosure
  • FIG. 4 shows an illustrative system diagram of the incident data platform, training set data structure, multiple devices, and multiple party servers, in accordance with some embodiments of the disclosure
  • FIG. 5 shows an illustrative block diagram of the incident data platform, in accordance with some embodiments of the disclosure
  • FIG. 6 is an illustrative flowchart of a process for crowdsourced incident data distribution, in accordance with some embodiments of the disclosure.
  • FIG. 7 is an illustrative flowchart of a process for transmitting reports to law enforcement devices for review, in accordance with some embodiments of the disclosure.
  • FIG. 1 shows an illustrative diagram 100 of a user device used in a vehicle recording in a first environment, in accordance with some embodiments of the disclosure.
  • a user is operating a vehicle while a smartphone with embedded camera is recording the environment from the vantage point of the vehicle by using an image capture technology (e.g., a camera operating on a smartphone).
  • the device used to capture the environment may use additional data capture techniques to supplement the visual capture such as locational data, telemetry, temperature, acoustic sound capture, and other measurable metrics related to the environment.
  • FIG. 1 illustrates a driver commuting along a street while the driver’s smartphone operates a software application which instructs the camera to passively capture the environment outside from the perspective of the front windshield based on the positioning of the smartphone.
  • FIG. 2 shows an illustrative diagram 200 of a user device used in a vehicle recording an incident in a second environment, in accordance with some embodiments of the disclosure.
  • the smartphone camera captures a vehicle making an illegal U-turn at an intersection.
  • the driver selects a record function on the software application on the driver’s smartphone to initiate the generation of a report for the incident data platform. Pressing the record function may cause the incident data platform to store the previous 30 seconds and the subsequent 30 seconds as a stored video clip (or a compilation of images for this duration).
  • the amount of time may be preconfigured. A video buffer may be implemented to retrieve previous video data from a specified timestamp.
  • the incident data platform may include additional video data which includes prior video data occurring at a predefined amount of time prior to the video data (e.g., 30 seconds before incident) and subsequent video data occurring at a predefined amount of time subsequent to the video data (e.g., 30 seconds after incident).
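A video buffer of this kind can be sketched as a rolling frame queue; the class name, field names, and 30-second default below are illustrative, not taken from the disclosure:

```python
from collections import deque


class FrameBuffer:
    """Rolling buffer keeping the most recent `window_s` seconds of frames.

    Illustrative sketch: a frame is modeled as a (timestamp, payload) tuple;
    the disclosure does not specify the buffer implementation.
    """

    def __init__(self, window_s=30):
        self.window_s = window_s
        self._frames = deque()

    def push(self, timestamp, payload):
        self._frames.append((timestamp, payload))
        # Evict frames older than the retention window.
        while self._frames and timestamp - self._frames[0][0] > self.window_s:
            self._frames.popleft()

    def clip(self, trigger_ts):
        """Return buffered frames from the window preceding the trigger,
        i.e. the 'previous 30 seconds' portion of the stored clip."""
        return [f for f in self._frames
                if trigger_ts - self.window_s <= f[0] <= trigger_ts]
```

On a record-button press the application would call `clip()` with the trigger timestamp and then continue appending frames for the subsequent 30 seconds.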
  • the incident data platform is configured to detect incidents based on image recognition. For example, the incident data platform detects the street sign symbol indicating that U-turns are prohibited. Secondly, the incident data platform detects from the captured images that the vehicle has performed the prohibited maneuver based on the one or more captured images from the smartphone when compared to a training set for prohibited maneuvers which contains a U-turn training set.
  • a training set may be a set of examples for a specific set of parameters used by the platform to fit the parameters from the received recordings against a predefined model. The training set may be used for image recognition, image quality, audio recognition, and/or any other comparative calculation.
  • Upon comparing the real-time image captures to the training set for prohibited maneuvers, the incident data platform automatically selects the record function to record the specific incident.
  • the incident data platform may determine whether the video data is similar to an incident profile.
  • the incident profile may be the training set, which contains specific images of specific types of incidents (e.g., accidents, illegal U-turns, entry into an intersection during a red light, etc.).
  • the incident data platform may retrieve additional video data.
  • the additional video data is contiguous to the video data (e.g., as described above recording the previous 30 seconds and future 30 seconds).
  • the incident data platform may transmit the video data and the additional video data to an electronic device such as an insurance server or police server.
  • the incident data platform may implement a convolutional network for image recognition (e.g., determining whether the video data is similar to the incident profile).
  • the convolutional neural network may include a plurality of layers. A first layer receives an input data and applies a weight/bias to the input before passing the data to the next layer.
  • Each subsequent layer applies a bias before passing on the data to the next layer.
  • the final layer provides the output of the data.
  • the convolutional network may be configured to use RGB image color for each pixel in the image against various layers in the convolutional network.
  • Each layer in the convolutional network may have a small receptive field (e.g., 3x3 pixels).
  • Each receptive field may comprise a neuron.
  • weights or bias are applied to the convolutional layer such that the input receives a bias as it passes through each layer.
  • Weights may be generated using a variety of known techniques such as mini-batch gradient descent, or other known statistical and/or mathematical models.
  • pooling layers may then be implemented, which may include max pooling: taking the maximum value from each cluster of neurons in a prior layer and using it as a single neuron in the next layer. Additional layers may also be used, such as fully-connected layers, which connect every neuron in one layer to every neuron in another layer.
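The max-pooling step described above can be illustrated with a minimal pure-Python sketch. The 2x2 pool with stride 2 is an assumed, conventional choice; the disclosure only describes taking the maximum of each cluster of neurons:

```python
def max_pool(matrix, size=2):
    """2-D max pooling: each output cell takes the maximum of a
    size x size cluster of inputs (stride equals the pool size).

    Illustrative sketch of the pooling operation, not the platform's
    actual implementation."""
    rows, cols = len(matrix), len(matrix[0])
    return [
        [
            max(
                matrix[r + dr][c + dc]
                for dr in range(size)
                for dc in range(size)
            )
            for c in range(0, cols - size + 1, size)
        ]
        for r in range(0, rows - size + 1, size)
    ]
```

For example, pooling a 4x4 activation map yields a 2x2 map where each value is the maximum of one quadrant, halving the spatial resolution passed to the next layer.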
  • a training set may be used to train the model for baseline configuration.
  • Various training sets are readily publicly available for initial training.
  • the platform identifies one or more active devices within a determined radius of the incident. Identification of the active devices may be determined by locational (or geographical) hardware/software within the devices interfacing with the incident data platform. For example, a device may utilize a software application which provides a Graphical User Interface (GUI) for interaction with the incident data platform.
  • the location of the incident itself may be determined from one or more of the received recordings, which may include metadata of the incident.
  • a video clip may have GPS coordinates, temperatures, telemetry (e.g., orientation, acceleration, etc.), and other metrics associated with the video adjusting in time as the recording plays. Location may also be determined from image recognition of the images from the recording.
  • the location may be determined from this data within the recording itself.
  • the incident data platform selects all devices within a predefined radius to retrieve recordings at the specified timestamp of the incident. In this way, the incident data platform receives multiple recordings at a specified timestamp of the incident, at a common location, from a variety of perspectives.
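Radius-based device selection of this kind might be sketched as follows; the haversine great-circle formula, the field names, and the 0.5 km default radius are illustrative assumptions, since the disclosure does not specify a distance computation:

```python
import math


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))


def devices_in_radius(incident, devices, radius_km=0.5):
    """Select active devices whose last reported position lies within the
    predefined radius of the incident (record layout is hypothetical)."""
    return [
        d for d in devices
        if d["active"]
        and haversine_km(incident["lat"], incident["lon"],
                         d["lat"], d["lon"]) <= radius_km
    ]
```

Each selected device would then be asked for its recording at the specified timestamp of the incident.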
  • the incident data platform compares the one or more recordings to a number of training sets to ensure a minimum threshold quality of recording is met.
  • a training set may require that the visual histogram measuring brightness/contrast remain within a certain average over time for the recording to meet the minimum threshold quality.
  • the training set may use image recognition to verify that the recordings capture the incident and implement a similar scheme as noted in the preceding paragraph.
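A minimum-threshold brightness check of the kind described could look like the following sketch. The luma coefficients are the standard BT.601 weights; the 40 to 220 acceptance band and the frame representation are assumed, illustrative choices:

```python
def mean_brightness(frame):
    """Average luma of one frame, where a frame is modeled as a list of
    (r, g, b) pixel tuples (an illustrative simplification)."""
    total = sum(0.299 * r + 0.587 * g + 0.114 * b for r, g, b in frame)
    return total / len(frame)


def passes_quality(frames, lo=40.0, hi=220.0):
    """Accept a recording only if its average brightness over time stays
    inside a usable band; thresholds here are illustrative, not from the
    disclosure."""
    avg = sum(mean_brightness(f) for f in frames) / len(frames)
    return lo <= avg <= hi
```

Recordings failing the check would be skipped in favor of the next identified recording.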
  • the incident data platform may determine the specific party for receipt of the recordings.
  • the user of the personal electronic device may specify on the software application interfacing with the incident data platform that the recording is to be forwarded to a specific party (e.g., law enforcement, insurance, and/or party involved in incident).
  • the incident data platform may, based on a specific training set when compared to the recording, determine the specific party which is to receive the recordings.
  • the incident data platform may automatically forward all recordings to a preconfigured party which is to receive the recordings.
  • the incident data platform may determine, by control circuitry implementing the machine learning model, a specific electronic device from a plurality of electronic devices for transmission of the additional video data.
  • the training information for the machine learning model may include at least one of locational information, identities of parties involved, license plate of a vehicle, and signage within proximate location.
  • the incident data platform may parse the recording for specific data and/or use embedded metadata of the recordings to store and characterize the data in specific groups. For example, if a recording is received from a red-light traffic violation in San Francisco, California, the incident data platform retrieves the locational metadata embedded in the recording and associates this recording with a group or dataset based on the San Francisco location.
  • Specific data retrieved from the recording may be utilized by the incident data platform for analysis, such as determining various metrics for group-specific data. For example, the platform may determine that many previously received recordings, from the same location, correspond to red-light traffic violations. This data can then be used for further analysis or be output to a specific server (e.g., law enforcement, insurance company, and/or party involved in the incident).
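Grouping recordings by embedded locational metadata might be sketched as below; the `city` and `violation` field names are hypothetical stand-ins for the metadata the platform would actually parse:

```python
from collections import defaultdict


def group_by_location(recordings):
    """Bucket recordings by the city in their embedded metadata
    (field names are illustrative)."""
    groups = defaultdict(list)
    for rec in recordings:
        groups[rec["metadata"]["city"]].append(rec)
    return groups


def violation_counts(group):
    """Count violation types within one location group, e.g. to surface
    locations with many red-light violations."""
    counts = defaultdict(int)
    for rec in group:
        counts[rec["violation"]] += 1
    return dict(counts)
```

A per-location tally like this is one way the platform could compute the group-specific metrics mentioned above before outputting them to a party's server.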
  • the characterization or group creation may be predefined by the incident data platform.
  • the characterization may be provided by an interested party such as law enforcement, insurance company, and/or party involved in the incident.
  • Based on the selected party which is to receive the recordings, the incident data platform formats the one or more recordings to be in compliance with a predetermined formatting standard of the selected party. For example, if the one or more recordings are to be sent to law enforcement, the one or more recordings may require specific encryption prior to transmission to the law enforcement server. In some embodiments, if encryption is required, the software application interfacing with the incident data platform may have all recordings encrypted initially. In some embodiments, the format may require a specific compression, length, audio profile, video profile, or metadata to be added or removed prior to transmission.
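Per-party output formatting could be dispatched from a table of formatting profiles, as in this sketch; the profile contents and party names are illustrative placeholders for the predetermined standards the disclosure refers to:

```python
def format_for_party(recording, party):
    """Apply the output party's formatting requirements before transmission.

    The profiles below are illustrative stand-ins for the per-party
    formatting standards the platform would actually be configured with."""
    profiles = {
        "law_enforcement": {"encrypt": True, "codec": "h264", "strip_audio": False},
        "insurance": {"encrypt": False, "codec": "h264", "strip_audio": False},
        "involved_party": {"encrypt": False, "codec": "h264", "strip_audio": True},
    }
    profile = profiles[party]
    formatted = dict(recording)  # leave the original recording untouched
    formatted["codec"] = profile["codec"]
    if profile["strip_audio"]:
        formatted.pop("audio", None)
    formatted["encrypted"] = profile["encrypt"]
    return formatted
```

Keeping the requirements in a lookup table means adding a new output party is a configuration change rather than new code.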
  • FIG. 3 shows an illustrative diagram 300 of a law enforcement officer accessing the incident data platform to retrieve relevant recordings, in accordance with some embodiments of the disclosure.
  • the officer is using a law enforcement server device (e.g., laptop) to access the incident data platform.
  • the incident data platform provides the laptop with the one or more recordings in the specific format for law enforcement.
  • the officer may view the recordings to find a variety of perspectives of the incident.
  • the incident data platform transmits to a specific device for the specified party.
  • the incident data platform may transmit to a central server of the specified party.
  • the devices may provide tags to the recordings to classify the incident.
  • a recording made by a smartphone in a vehicle may automatically generate tags for the recording based on the metadata of the environment.
  • the tags may include "night-time," "liquor-store," "night-club," or other environment-based tags.
  • the tags may be automatically generated based on image recognition, where specific detected objects or actions may be tagged, such as "gun," "collision," or "bank."
  • the tags may be manually input by the user of the device where the tags may be input through voice-input or keystroke.
  • the tags may be used to help classify the recordings such that specific classes of videos may be sent to particular devices within a specified party. Additionally, this may be useful for archival storage in databases.
  • the incident data platform may include auxiliary data in its determinations.
  • a device may also capture audio of the incident in the recording.
  • This audio may be analyzed for specific keywords.
  • the determined keyword may be compared to a database comprising specific keywords indicating distress, locational information, or specific dialogue between parties. This information may be classified based on these various determinations and assigned a score based on the specific incident classification. For example, an audio recording may include "The light was red!" captured by a witness while a vehicle runs a red light. This information may be afforded a higher weight based on an audio score computation. This would then be used in conjunction with the image recognition determination to generate a collaborative score, which may then be sent to the selected output party.
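Combining the audio keyword score with the image-recognition score into a collaborative score might look like the following sketch; the keyword list, per-keyword weights, and the 0.3 audio weighting are assumed values, not specified in the disclosure:

```python
# Illustrative keyword weights; a real deployment would draw these from
# the platform's keyword database.
DISTRESS_KEYWORDS = {"red": 1.0, "help": 1.5, "crash": 1.5}


def audio_score(transcript):
    """Score an audio transcript by its distress/relevance keywords."""
    words = transcript.lower().replace("!", "").split()
    return sum(DISTRESS_KEYWORDS.get(w, 0.0) for w in words)


def collaborative_score(image_score, transcript, audio_weight=0.3):
    """Blend the image-recognition score with the audio keyword score.
    The 0.3 weighting is an assumed value."""
    return (1 - audio_weight) * image_score + audio_weight * audio_score(transcript)
```

In the red-light example, a transcript containing "red" raises the blended score above the image score alone, reflecting the higher weight the audio evidence is afforded.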
  • auxiliary data may include audio captured by the devices, locational information by the devices, facial expressions of people captured on video, various landmarks, timing of upload of the recording relative to the incident, any indication of authenticity of recording, and/or similar auxiliary information.
  • FIG. 4 shows an illustrative system diagram 400 of the incident data platform, training set data structure, multiple devices, and multiple party servers, in accordance with some embodiments of the disclosure.
  • the incident data platform 402 may be any hardware that provides for the functionality of the disclosed techniques for crowdsourced incident data distribution.
  • the incident data platform may be communicatively coupled to multiple devices (e.g., device 1 (406), device 2 (408), device 3 (410), and/or device n (412)).
  • the incident data platform may be communicatively coupled to a training set data structure 404.
  • the incident data platform may also be communicatively coupled to one or more servers (e.g., law enforcement server 414, insurance company server 416, and server of party involved in incident 418).
  • FIG. 5 shows an illustrative block diagram of the incident data platform, in accordance with some embodiments of the disclosure.
  • the incident data platform may be embedded within a device having shared hardware of the device.
  • the incident data platform may be part of a digital camera, personal computer, smartphone, tablet, wearable technology product or other electronic device.
  • the incident data platform may be remote from the device where the platform resides in a cloud receiving information from multiple devices.
  • the incident data platform may be within one of the devices, 406, 408, 410, or 412.
  • Any of the system modules (e.g., incident data platform, training set data structure, devices, and servers) may be communicatively coupled to one another.
  • the devices interfacing with the incident data platform 402 (e.g., device 1 (406), device 2 (408), device 3 (410), and/or device n (412)) may be any devices which have send and/or receive functionality and image capture technology. These devices capture the incident and transmit the captured recordings to the incident data platform.
  • these devices can include, but are not limited to, network-connected devices (e.g., Internet-of-Things devices), wearable devices (e.g., glasses, smartwatches, smart-clothing), smartphones, dash-cameras, video cameras, digital cameras, tablets, personal computers, smart appliances, consumer electronics, and similar systems.
  • the devices may utilize a software application which provides a GUI for interaction with the incident data platform.
  • the training set data structure 404 may be any database, server, or computing device which contains memory for storing various types of information retrieved by the incident data platform.
  • the training set data structure contains training set information for neural networks, generative adversarial networks, machine learning, deep learning, and other computer learning techniques which require training sets to compare data.
  • a training set which provides images of street signs may be used by the neural network for the hidden layer nodes to iterate comparison analysis based on the provided training set.
  • training sets may include parameters for video or image quality.
  • training sets may provide parameters as to what constitutes an incident.
  • the training set may provide video examples of illegal traffic maneuvers for a traffic application of this current system.
  • the training set data structure may be communicatively coupled to the incident data platform through a communication means (e.g., network connection, Bluetooth, near-field communication, cellular network, Wi-Fi, or any other communicative means).
  • the servers interfacing with the incident data platform 402 (e.g., law enforcement server (414), insurance company server (416), and/or server of party involved in incident (418)) may be any devices which have send and/or receive functionality and storage functionality to store the one or more recordings. These servers receive the one or more recordings from the incident data platform and store them for access and/or archive. In various systems, these servers can include, but are not limited to, network-connected devices (e.g., Internet-of-Things devices), personal computers, servers, network-connected storage devices, smartphones, tablets, wearable devices (e.g., glasses, smartwatches, smart-clothing), smart appliances, consumer electronics, and similar systems.
  • the servers may be communicatively coupled to the incident data platform through a communication means (e.g., network connection, Bluetooth, near-field communication, cellular network, Wi-Fi, or any other communicative means).
  • FIG. 5 shows an illustrative block diagram 500 of the incident data platform 402, in accordance with some embodiments of the disclosure.
  • the incident data platform may be communicatively connected to a user interface.
  • the incident data platform may include processing circuitry, control circuitry, and storage (e.g., RAM, ROM, hard disk, removable disk, etc.).
  • the incident data platform may include an input/output path 506.
  • I/O path 506 may provide device information, or other data over a local area network (LAN) or wide area network (WAN), and/or other content and data to control circuitry 504, which includes processing circuitry 508 and storage 510.
  • Control circuitry 504 may be used to send and receive commands, requests, and other suitable data using I/O path 506.
  • I/O path 506 may connect control circuitry 504 (and specifically processing circuitry 508) to one or more communications paths.
  • Control circuitry 504 may be based on any suitable processing circuitry such as processing circuitry 508.
  • processing circuitry should be understood to mean circuitry based on one or more microprocessors, microcontrollers, digital signal processors, programmable logic devices, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), etc., and may include a multi-core processor (e.g., dual-core, quad-core, hexa-core, or any suitable number of cores) or supercomputer.
  • processing circuitry may be distributed across multiple separate processors or processing units, for example, multiple of the same type of processing units (e.g., two Intel Core i7 processors) or multiple different processors (e.g., an Intel Core i5 processor and an Intel Core i7 processor).
  • control circuitry 504 executes instructions for an incident data platform stored in memory (i.e., storage 510).
  • Memory may be an electronic storage device provided as storage 510 which is part of control circuitry 504.
  • the phrase "electronic storage device" or "storage device" should be understood to mean any device for storing electronic data, computer software, or firmware, such as random-access memory, read-only memory, hard drives, solid state devices, quantum storage devices, or any other suitable fixed or removable storage devices, and/or any combination of the same.
  • Nonvolatile memory may also be used (e.g., to launch a boot-up routine and other instructions).
  • the incident data platform 502 may be coupled to a communications network.
  • Communications network may be one or more networks including the Internet, a mobile phone network, a mobile voice or data network (e.g., a 4G or LTE network), a cable network, a public switched telephone network, or other types of communications networks or combinations of communications networks.
  • Paths may separately or together include one or more communications paths, such as a satellite path, a fiber-optic path, a cable path, a path that supports Internet communications, free-space connections (e.g., for broadcast or other wireless signals), or any other suitable wired or wireless communications path or combination of such paths.
  • FIG. 6 is an illustrative flowchart of a process for crowdsourced incident data distribution, in accordance with some embodiments of the disclosure.
  • Process 600 may be executed by control circuitry 504 (e.g., in a manner instructed to control circuitry 504 by the incident data platform).
  • Control circuitry 504 may be part of incident data platform 402, or of a remote server separated from the incident data platform by way of communication network, or distributed over a combination of both.
  • the incident data platform, by control circuitry 504, receives a report of an incident from one or more devices connected to the incident platform.
  • the incident data platform may receive the recording from any of device 1 (406), device 2 (408), device 3 (410), and/or device n (412). The reception of the report may use the I/O path 506 of the incident data platform 402. If, at 604, control circuitry 504 determines "No," the report of an incident from one or more devices connected to the incident platform was not received, and the process reverts to 602.
  • If, at 604, control circuitry 504 determines "Yes," the report of an incident from one or more devices connected to the incident platform was received, and the process advances to 606.
  • the incident data platform, by control circuitry 504, identifies one or more active devices within a determined radius of the incident.
  • the incident data platform, by control circuitry 504, compares the recordings to a pre-selected number of quality training sets. If, at 610, control circuitry 504 determines "No," the recording does not meet one or more comparison thresholds, and the process advances to 611. At 611, the incident data platform, by control circuitry 504, retrieves the next identified recording.
  • the incident data platform may receive the recording from any of devices may be any of device 1 (406), device 2 (408), device 3 (410), and/or device n (412). The reception of the report may use the I/O path 506 of the incident data platform 402.
  • control circuitry 504 determines“Yes,” the recording meets one or more comparison thresholds, the process advances to 612.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Data Mining & Analysis (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Engineering & Computer Science (AREA)
  • General Business, Economics & Management (AREA)
  • Databases & Information Systems (AREA)
  • Library & Information Science (AREA)
  • Tourism & Hospitality (AREA)
  • Technology Law (AREA)
  • Evolutionary Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Educational Administration (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Primary Health Care (AREA)
  • Traffic Control Systems (AREA)

Abstract

Systems and methods are disclosed herein for crowdsourced incident data distribution. In one embodiment of the disclosed technique for crowdsourced incident data distribution, systems and methods receive a report of an incident from one or more devices connected to the incident platform. Upon report of an incident, the platform identifies a number of active devices within a determined radius of the incident. The identified devices then send recordings covering a specific time period to the platform for processing. The platform receives the recordings and identifies the most relevant recordings by comparing them to a pre-selected number of quality training sets. Upon one or more of the recordings meeting comparison thresholds, the recordings are formatted and sent to the selected party.

Description

SYSTEMS AND METHODS FOR CROWDSOURCED INCIDENT DATA
DISTRIBUTION
Background
[0001] The present disclosure is directed to techniques for selecting incident data for distribution. More specifically, the disclosure is directed to techniques for selecting, from multiple sources of crowdsourced incident data, particular data of high relevance to the desired party.
Summary
[0002] An estimated 115 million cars are used by Americans to commute to work every day. But not all of them are driven safely or legally: the Insurance Institute for Highway Safety reports that 34,439 people died nationwide in 2016 as a result of fatal motor vehicle crashes. Traffic citations are often used as a deterrent to encourage drivers to be safe and efficient behind the wheel, but with approximately 2,600 drivers per single police officer, it is impossible for police to find and cite each individual traffic violation.
[0003] One solution to this problem is to install fixed cameras at locations where incidents occur with high frequency. An example may be a traffic camera located at an intersection which continuously records such that footage may be retrieved for visual evidence when an incident (e.g., a traffic accident) takes place. However, this solution fails to provide the relevant party (e.g., law enforcement, an insurance company, and/or a party involved in the incident) with multiple sources of visual evidence offering multiple perspectives on a single incident.
[0004] Accordingly, systems and methods are disclosed herein for crowdsourced incident data distribution. In one embodiment of the disclosed technique for crowdsourced incident data distribution, systems and methods receive a report of an incident from one or more devices connected to the incident platform. Upon report of an incident, the platform identifies a number of active devices within a determined radius of the incident. The identified devices then send recordings covering a specific time period to the platform for processing. The platform receives the recordings and identifies the most relevant recordings by comparing them to a pre-selected number of quality training sets. Upon one or more of the recordings meeting comparison thresholds, the recordings are formatted and sent to the selected party.
[0005] In another embodiment of the disclosed technique for selecting an output device for crowdsourced incident data distribution, the platform receives recordings with specified tags and descriptors for a specific incident. Upon receiving the recordings, the platform compares the video to a set of pre-determined visual identifiers to verify the accuracy of the specified tags and descriptors. If the recordings are verified, they are formatted and sent to the selected output party.
[0006] These systems and methods provide solutions in instances where a single recording instrument fails to provide a sufficient visual record to view an incident from a plurality of viewpoints during a common time period. This further provides for instruments to be available in areas of high population, as the recording instruments may include personal electronic devices which are readily available, solving a logistical and technical problem of surveillance at various locations within society. Moreover, the disclosed platform may provide information to a variety of interested parties based on tagging and other configurations, which provides a more efficient way to gather and distribute recorded information. The disclosed platform also provides techniques for enhanced accuracy in image recognition by combining specific techniques with auxiliary score computations to determine specific surveillance incidents.
Brief Description of the Drawings
[0007] The below and other objects and advantages of the disclosure will be apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings, in which like reference characters refer to like parts throughout, and in which:
[0008] FIG. 1 shows an illustrative diagram of a user device used in a vehicle recording in a first environment, in accordance with some embodiments of the disclosure;
[0009] FIG. 2 shows an illustrative diagram of a user device used in a vehicle recording an incident in a second environment, in accordance with some embodiments of the disclosure;
[0010] FIG. 3 shows an illustrative diagram of a law enforcement officer accessing the incident data platform to retrieve relevant recordings, in accordance with some embodiments of the disclosure;
[0011] FIG. 4 shows an illustrative system diagram of the incident data platform, training set data structure, multiple devices, and multiple party servers, in accordance with some embodiments of the disclosure;
[0012] FIG. 5 shows an illustrative block diagram of the incident data platform, in accordance with some embodiments of the disclosure;
[0013] FIG. 6 is an illustrative flowchart of a process for crowdsourced incident data distribution, in accordance with some embodiments of the disclosure; and
[0014] FIG. 7 is an illustrative flowchart of a process for transmitting reports to law enforcement devices for review, in accordance with some embodiments of the disclosure.
Detailed Description
[0015] FIG. 1 shows an illustrative diagram 100 of a user device used in a vehicle recording in a first environment, in accordance with some embodiments of the disclosure. In this example, a user is operating a vehicle while a smartphone with an embedded camera records the environment from the vantage point of the vehicle by using an image capture technology (e.g., a camera operating on a smartphone). In some variants, the device used to capture the environment may use additional data capture techniques to supplement the visual capture, such as locational data, telemetry, temperature, acoustic sound capture, and other measurable metrics related to the environment. Specifically, FIG. 1 illustrates a driver commuting along a street while the driver’s smartphone operates a software application which instructs the camera to passively capture the environment outside, from the perspective of the front windshield, based on the positioning of the smartphone.
[0016] FIG. 2 shows an illustrative diagram 200 of a user device used in a vehicle recording an incident in a second environment, in accordance with some embodiments of the disclosure. In this example, the smartphone camera captures a vehicle making an illegal U-turn at an intersection. In some embodiments, the driver selects a record function on the software application on the driver’s smartphone to initiate the generation of a report for the incident data platform. Pressing the record function may cause the incident data platform to record the previous 30 seconds and the subsequent 30 seconds as a stored video clip (or a compilation of images for this duration). In some embodiments, the amount of time may be preconfigured. A video buffer may be implemented to retrieve previous video data from a specified timestamp. The incident data platform may include additional video data comprising prior video data occurring a predefined amount of time before the video data (e.g., 30 seconds before the incident) and subsequent video data occurring a predefined amount of time after the video data (e.g., 30 seconds after the incident).
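The pre/post-incident capture described above can be sketched as a fixed-length ring buffer. The following is a minimal illustration; the class name, frame representation, and parameter values are assumptions for the sketch, not part of the disclosed platform.

```python
from collections import deque

class VideoBuffer:
    """Keep the last `pre` seconds of frames; on a report, capture `post` more."""
    def __init__(self, fps=30, pre=30, post=30):
        self.fps, self.post = fps, post
        self.frames = deque(maxlen=pre * fps)   # ring buffer of prior frames
        self.pending = 0
        self.clip = None

    def add_frame(self, frame):
        if self.pending > 0:                    # still collecting post-incident video
            self.clip.append(frame)
            self.pending -= 1
        self.frames.append(frame)

    def report_incident(self):
        """Snapshot the prior frames and start collecting the subsequent ones."""
        self.clip = list(self.frames)
        self.pending = self.post * self.fps

buf = VideoBuffer(fps=1, pre=3, post=2)         # tiny numbers for illustration
for t in range(5):
    buf.add_frame(t)
buf.report_incident()                           # user presses record at t=5
for t in range(5, 8):
    buf.add_frame(t)
print(buf.clip)  # [2, 3, 4, 5, 6] - 3 s before the press plus 2 s after
```

Because the `deque` discards its oldest entries automatically, frames older than the configured window never accumulate, which is why the recording at a button press can include video from before the press.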
[0017] In other embodiments, the incident data platform is configured to detect incidents based on image recognition. For example, the incident data platform detects the street sign symbol indicating that U-turns are prohibited. Secondly, the incident data platform detects from the captured images that the vehicle has performed the prohibited maneuver, based on the one or more captured images from the smartphone when compared to a training set for prohibited maneuvers which contains a U-turn training set. A training set may be a set of examples for a specific set of parameters used by the platform to fit the parameters from the received recordings to compare against a predefined model. The training set may be used for image recognition, image quality, audio recognition, and/or any other comparative calculation. Upon a comparison of the real-time image captures to the training set for prohibited maneuvers, the incident data platform automatically selects the record function to record the specific incident. The incident data platform may determine whether the video data is similar to an incident profile. The incident profile may be the training set, which contains specific images of specific types of incidents (e.g., accidents, illegal U-turns, entry into an intersection during a red light, etc.). In response to determining that the video data is similar to the incident profile, the incident data platform may retrieve additional video data. The additional video data is contiguous to the video data (e.g., as described above, the previous 30 seconds and the subsequent 30 seconds). The incident data platform may transmit the video data and the additional video data to an electronic device such as an insurance server or police server.
[0018] For example, the incident data platform may implement a convolutional neural network for image recognition (e.g., determining whether the video data is similar to the incident profile). The convolutional neural network may include a plurality of layers.
A first layer receives input data and applies weights and a bias before passing the result to the next layer. Each subsequent layer likewise applies its weights and bias before passing the data on, and the final layer provides the output.
[0019] The convolutional network may be configured to use the RGB color of each pixel in the image as input to the various layers in the convolutional network. Each layer in the convolutional network may have a small receptive field (e.g., 3x3 pixels), and each receptive field may feed a neuron. As activations are passed from one convolutional layer to the next, weights and biases are applied such that the input is transformed as it passes through each layer. Weights may be generated using a variety of known techniques such as mini-batch gradient descent, or other known statistical and/or mathematical models. In some embodiments, pooling layers are then implemented, which may include max pooling to take the maximum value from each cluster of neurons in a prior layer and use it as a single neuron in the next layer. Additional layers may also be used, such as fully-connected layers, which connect every neuron in one layer to every neuron in another layer.
[0020] Once the model is constructed, a training set may be used to train the model for baseline configuration. Various training sets are readily publicly available for initial training.
[0021] Various other neural networks, deep learning, machine learning and/or artificial intelligence techniques may be used to alter, or be used in place of, the convolutional network discussed above.
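As a concrete, deliberately tiny sketch of the convolution, bias, nonlinearity, and max-pooling steps described above, the following NumPy code runs a single 3x3 convolutional layer followed by ReLU and 2x2 max pooling. The input image and averaging kernel are illustrative only, and no trained weights are implied.

```python
import numpy as np

def conv2d(image, kernel, bias=0.0):
    """Valid 2-D convolution of a single-channel image with a small kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output value comes from one 3x3 receptive field.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel) + bias
    return out

def relu(x):
    """Elementwise nonlinearity applied after the convolution."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Max pooling: keep the maximum of each size-by-size block."""
    h, w = x.shape
    h, w = h - h % size, w - w % size          # trim to a multiple of the pool size
    x = x[:h, :w].reshape(h // size, size, w // size, size)
    return x.max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.ones((3, 3)) / 9.0                 # simple averaging filter, for illustration
features = max_pool(relu(conv2d(image, kernel)))
print(features.shape)  # (2, 2)
```

A real network would stack many such layers with learned kernels and finish with fully-connected layers, but the data flow through each layer matches this sketch.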
[0022] Upon the incident data platform receiving the report, the platform identifies one or more active devices within a determined radius of the incident. Active devices may be identified by locational (or geographical) hardware/software within the devices interfacing with the incident data platform. For example, a device may utilize a software application which provides a Graphical User Interface (GUI) for interaction with the incident data platform. The location of the incident itself may be determined from one or more of the received recordings, which may include metadata of the incident. For example, a video clip may have GPS coordinates, temperatures, telemetry (e.g., orientation, acceleration, etc.), and other metrics associated with the video, adjusting in time as the recording plays. Location may also be determined from image recognition of the images from the recording. For example, if street signs are within the images of the recording, the location may be determined from this data within the recording itself. Upon confirmation of the location, the incident data platform selects all devices within a predefined radius to retrieve recordings at the specified timestamp of the incident. In this way, the incident data platform receives multiple recordings at a specified timestamp of the incident, at a common location, from a variety of perspectives.
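One way to implement the radius selection above is to compare each device's reported GPS coordinates against the incident location using the haversine great-circle distance. In this sketch, the device records, their coordinates, and the 1 km radius are illustrative assumptions.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS coordinates, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def devices_in_radius(incident, devices, radius_km=1.0):
    """Select active devices within a predefined radius of the incident."""
    return [d for d in devices
            if d["active"]
            and haversine_km(incident["lat"], incident["lon"],
                             d["lat"], d["lon"]) <= radius_km]

devices = [
    {"id": "device 1", "lat": 37.7750, "lon": -122.4195, "active": True},
    {"id": "device 2", "lat": 37.8044, "lon": -122.2712, "active": True},   # too far away
    {"id": "device 3", "lat": 37.7751, "lon": -122.4180, "active": False},  # not recording
]
incident = {"lat": 37.7749, "lon": -122.4194}
print([d["id"] for d in devices_in_radius(incident, devices)])  # ['device 1']
```

The platform would then request recordings at the incident timestamp from each selected device.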
[0023] The incident data platform compares the one or more recordings to a number of training sets to ensure a minimum threshold quality of recording is met. For example, a training set may require that the visual histogram measuring brightness/contrast is within a certain average over time for the recording to meet the minimum threshold quality. In some embodiments, the training set may use image recognition to verify that the recordings capture the incident, implementing a scheme similar to the image recognition approach noted above.
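A minimal sketch of such a brightness-based quality threshold follows, assuming each frame is represented as a list of 0-255 grayscale pixel values; the cutoff values are illustrative assumptions rather than values from the disclosure.

```python
def mean_brightness(frames):
    """Average pixel brightness across a recording's frames (0-255 grayscale)."""
    total = sum(sum(frame) / len(frame) for frame in frames)
    return total / len(frames)

def meets_quality_threshold(frames, low=40.0, high=220.0):
    """Reject recordings that are, on average, too dark or too washed out."""
    return low <= mean_brightness(frames) <= high

night_clip = [[5, 10, 8], [12, 6, 9]]            # very dark frames
day_clip = [[120, 130, 110], [125, 118, 122]]    # well-exposed frames
print(meets_quality_threshold(night_clip))  # False
print(meets_quality_threshold(day_clip))    # True
```

A production check would work on full histograms per frame and track the average over time, but the thresholding logic is the same.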
[0024] The incident data platform may determine the specific party for receipt of the recordings. In some embodiments, the user of the personal electronic device may specify, on the software application interfacing with the incident data platform, that the recording is to be forwarded to a specific party (e.g., law enforcement, an insurance company, and/or a party involved in the incident). In other embodiments, the incident data platform may, based on a specific training set compared to the recording, determine the specific party which is to receive the recordings. In yet other embodiments, the incident data platform may automatically forward all recordings to a preconfigured party. The incident data platform may determine, by control circuitry implementing the machine learning model, a specific electronic device from a plurality of electronic devices for transmission of the additional video data. The training information for the machine learning model may include at least one of locational information, identities of parties involved, the license plate of a vehicle, and signage within the proximate location.
[0025] Upon receiving a recording from a device, the incident data platform may parse the recording for specific data and/or use embedded metadata of the recordings to store and characterize the data in specific groups. For example, if a recording is received from a red-light traffic violation in San Francisco, California, the incident data platform retrieves the locational metadata embedded in the recording and associates the recording with a group or dataset based on the San Francisco location.
[0026] Specific data retrieved from the recording may be utilized by the incident data platform for analysis, such as determining various metrics for group-specific data. For example, the platform may determine that the specific location has yielded many previous recordings corresponding to red-light traffic violations. This data can then be used for further analysis or be output to a specific server (e.g., law enforcement, an insurance company, and/or a party involved in the incident).
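Grouping recordings by their embedded location metadata might look like the following sketch; the recording dictionaries and field names are assumptions for illustration.

```python
from collections import defaultdict

def group_by_location(recordings):
    """Characterize recordings into groups keyed by their embedded location metadata."""
    groups = defaultdict(list)
    for rec in recordings:
        groups[rec["location"]].append(rec["id"])
    return dict(groups)

recordings = [
    {"id": "clip-1", "location": "San Francisco, CA", "type": "red-light"},
    {"id": "clip-2", "location": "San Francisco, CA", "type": "red-light"},
    {"id": "clip-3", "location": "Oakland, CA", "type": "illegal U-turn"},
]
print(group_by_location(recordings))
# {'San Francisco, CA': ['clip-1', 'clip-2'], 'Oakland, CA': ['clip-3']}
```

Group sizes over time then give the per-location metrics described above (e.g., how many red-light violations a given intersection has produced).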
[0027] The characterization or group creation may be predefined by the incident data platform. In some embodiments, the characterization may be provided by an interested party such as law enforcement, insurance company, and/or party involved in the incident.
[0028] Based on the selected party which is to receive the recordings, the incident data platform formats the one or more recordings to comply with a predetermined formatting standard of the selected party. For example, if the one or more recordings are to be sent to law enforcement, the recordings may require specific encryption prior to transmission to the law enforcement server. In some embodiments, if encryption is required, the software application interfacing with the incident data platform may have all recordings encrypted initially. In some embodiments, the format may require a specific compression, length, audio profile, video profile, or metadata to be added or removed prior to transmission.
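Party-specific formatting can be sketched as a lookup of per-party profiles. The profiles, field names, and values below are hypothetical, since in practice each receiving party would define its own formatting standard.

```python
# Hypothetical formatting profiles per output party; values are illustrative only.
FORMAT_PROFILES = {
    "law_enforcement": {"encrypt": True, "container": "mp4", "max_length_s": 120},
    "insurance": {"encrypt": False, "container": "mp4", "max_length_s": 60},
}

def format_for_party(recording, party):
    """Return a copy of the recording adjusted to the party's formatting standard."""
    profile = FORMAT_PROFILES[party]
    formatted = dict(recording)
    formatted["container"] = profile["container"]
    formatted["length_s"] = min(recording["length_s"], profile["max_length_s"])
    formatted["encrypted"] = profile["encrypt"]
    return formatted

clip = {"id": "clip-1", "container": "mov", "length_s": 300, "encrypted": False}
print(format_for_party(clip, "law_enforcement"))
# {'id': 'clip-1', 'container': 'mp4', 'length_s': 120, 'encrypted': True}
```

The same dispatch point is where a real implementation would invoke transcoding and encryption before transmission over the I/O path.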
[0029] FIG. 3 shows an illustrative diagram 300 of a law enforcement officer accessing the incident data platform to retrieve relevant recordings, in accordance with some embodiments of the disclosure. Specifically, the officer is using a law enforcement server device (e.g., laptop) to access the incident data platform. The incident data platform provides the laptop with the one or more recordings in the specific format for law enforcement. The officer may view the recordings to find a variety of perspectives of the incident. In some embodiments, the incident data platform transmits to a specific device for the specified party. In other embodiments, the incident data platform may transmit to a central server of the specified party.
[0030] In some variants of the disclosed systems and methods, the devices may provide tags for the recordings to classify the incident. For example, a recording made by a smartphone in a vehicle may automatically generate tags for the recording based on the metadata of the environment; such tags may include “night-time,” “liquor-store,” “night-club,” or other environment-based tags. In other embodiments, the tags may be automatically generated based on image recognition, where specific detected objects or actions may be tagged, such as “gun,” “collision,” or “bank.” In yet other embodiments, the tags may be manually input by the user of the device through voice input or keystrokes.
[0031] The tags may be used to help classify the recordings such that specific classes of videos may be sent to particular devices within a specified party. Additionally, the tags may be useful for archival storage in databases.
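Automatic tag generation from environment metadata might be sketched as follows; the metadata fields and the hour-based “night-time” rule are assumptions made for illustration.

```python
def generate_tags(metadata):
    """Derive classification tags for a recording from its environment metadata."""
    tags = []
    hour = metadata.get("hour", 12)
    if hour >= 20 or hour < 6:                        # illustrative night-time rule
        tags.append("night-time")
    tags.extend(metadata.get("nearby_places", []))    # e.g. from a places lookup
    tags.extend(metadata.get("detected_objects", []))  # e.g. from image recognition
    return tags

meta = {"hour": 23, "nearby_places": ["liquor-store"], "detected_objects": ["collision"]}
print(generate_tags(meta))  # ['night-time', 'liquor-store', 'collision']
```

The resulting tags can serve both routing (send “collision” clips to law enforcement) and archival indexing.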
[0032] In some embodiments, the incident data platform may include auxiliary computations to add further accuracy to incidents determined by image recognition techniques. A device may also capture audio of the incident in the recording. This audio may be analyzed for specific keywords. A determined keyword may be compared to a database comprising specific keywords indicating distress, locational information, or specific dialogue between parties. This information may be classified based on these various determinations and assigned a score based on the specific incident classification. For example, an audio track may include “The light was red!” captured by a witness during a vehicle running a red light. This information may be afforded a higher weight based on an audio score computation, which would then be used in conjunction with the image recognition determination to generate a collaborative score which may then be sent to the selected output party.
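The audio score and collaborative score described above can be sketched as a keyword-weighted blend. The keyword list, the weights, and the blending ratio below are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical keyword weights for distress/incident dialogue.
KEYWORD_WEIGHTS = {"red light": 0.9, "help": 0.8, "crash": 0.7}

def audio_score(transcript):
    """Score an audio transcript by its strongest matching incident keyword."""
    transcript = transcript.lower()
    return max((w for kw, w in KEYWORD_WEIGHTS.items() if kw in transcript),
               default=0.0)

def collaborative_score(image_score, transcript, audio_weight=0.3):
    """Blend the image-recognition score with the auxiliary audio score."""
    return (1 - audio_weight) * image_score + audio_weight * audio_score(transcript)

score = collaborative_score(0.75, "The light was red! He just ran the red light!")
print(round(score, 3))  # 0.795
```

A real system would use speech-to-text output and learned weights, but the combination step, an image score adjusted upward by corroborating audio, is the idea being illustrated.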
[0033] In some embodiments, auxiliary data may include audio captured by the devices, locational information by the devices, facial expressions of people captured on video, various landmarks, timing of upload of the recording relative to the incident, any indication of authenticity of recording, and/or similar auxiliary information.
[0034] FIG. 4 shows an illustrative system diagram 400 of the incident data platform, training set data structure, multiple devices, and multiple party servers, in accordance with some embodiments of the disclosure. The incident data platform 402 may be any hardware that provides for the functionality of the disclosed techniques for crowdsourced incident data distribution. The incident data platform may be communicatively coupled to multiple devices (e.g., device 1 (406), device 2 (408), device 3 (410), and/or device n (412)). The incident data platform may be communicatively coupled to a training set data structure 404. The incident data platform may also be communicatively coupled to one or more servers (e.g., law enforcement server 414, insurance company server 416, and server of party involved in incident 418). A further detailed disclosure on the incident data platform can be seen in FIG. 5, showing an illustrative block diagram of the incident data platform, in accordance with some embodiments of the disclosure.
[0035] In some embodiments, the incident data platform may be embedded within a device having shared hardware of the device. For example, the incident data platform may be part of a digital camera, personal computer, smartphone, tablet, wearable technology product or other electronic device. In other approaches, the incident data platform may be remote from the device where the platform resides in a cloud receiving information from multiple devices. In yet another approach, the incident data platform may be within one of the devices, 406, 408, 410, or 412. Any of the system modules (e.g., incident data platform, training set data structure, devices, and servers) may be any combination of shared or disparate hardware pieces that are communicatively coupled.
[0036] The devices interfacing with the incident data platform 402 (e.g., device 1 (406), device 2 (408), device 3 (410), and/or device n (412)) may be any devices which have send and/or receive functionality and image capture technology. These devices capture the incident and transmit the captured recordings to the incident data platform. In various systems, these devices can include, but are not limited to, network-connected devices (e.g., Internet-of-Things devices), wearable devices (e.g., glasses, smartwatches, smart-clothing), smartphones, dash-cameras, video cameras, digital cameras, tablets, personal computers, smart appliances, consumer electronics, and similar systems. The devices may be communicatively coupled to the incident data platform through a communication means (e.g., network connection, Bluetooth, near field communication, cellular network, Wi-Fi, or any other communicative means). In some embodiments, the devices may utilize a software application which provides a GUI for interaction with the incident data platform.
[0037] The training set data structure 404 may be any database, server, or computing device which contains memory for storing various types of information retrieved by the incident data platform. In some embodiments, the training set data structure contains training set information for neural networks, generative adversarial networks, machine learning, deep learning, and other computer learning techniques which require training sets to compare data. For example, in a neural network for image recognition, a training set which provides images of street signs may be used by the neural network’s hidden layer nodes to iterate comparison analysis based on the provided training set. Similarly, training sets may include parameters for video or image quality. In some embodiments, training sets may provide parameters as to what constitutes an incident. For example, the training set may provide video examples of illegal traffic maneuvers for a traffic application of the current system. The training set data structure may be communicatively coupled to the incident data platform through a communication means (e.g., network connection, Bluetooth, near field communication, cellular network, Wi-Fi, or any other communicative means).
[0038] The servers interfacing with the incident data platform 402 (e.g., law enforcement server 414, insurance company server 416, and/or server of party involved in incident 418) may be any devices which have send and/or receive functionality and storage functionality to store the one or more recordings. These servers receive the one or more recordings from the incident data platform and store them for access and/or archive. In various systems, these servers can include, but are not limited to, network-connected devices (e.g., Internet-of-Things devices), personal computers, servers, network-connected storage devices, smartphones, tablets, wearable devices (e.g., glasses, smartwatches, smart-clothing), smart appliances, consumer electronics, and similar systems. The servers may be communicatively coupled to the incident data platform through a communication means (e.g., network connection, Bluetooth, near field communication, cellular network, Wi-Fi, or any other communicative means).
[0039] FIG. 5 shows an illustrative block diagram 500 of the incident data platform 402, in accordance with some embodiments of the disclosure. In some embodiments, the incident data platform may be communicatively connected to a user interface. In some embodiments, the incident data platform may include processing circuitry, control circuitry, and storage (e.g., RAM, ROM, hard disk, removable disk, etc.). The incident data platform may include an input/output path 506. I/O path 506 may provide device information, or other data, over a local area network (LAN) or wide area network (WAN), and/or other content and data to control circuitry 504, which includes processing circuitry 508 and storage 510. Control circuitry 504 may be used to send and receive commands, requests, and other suitable data using I/O path 506. I/O path 506 may connect control circuitry 504 (and specifically processing circuitry 508) to one or more communications paths.
[0040] Control circuitry 504 may be based on any suitable processing circuitry such as processing circuitry 508. As referred to herein, processing circuitry should be understood to mean circuitry based on one or more microprocessors, microcontrollers, digital signal processors, programmable logic devices, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), etc., and may include a multi-core processor (e.g., dual-core, quad-core, hexa-core, or any suitable number of cores) or supercomputer. In some embodiments, processing circuitry may be distributed across multiple separate processors or processing units, for example, multiple of the same type of processing units (e.g., two Intel Core i7 processors) or multiple different processors (e.g., an Intel Core i5 processor and an Intel Core i7 processor). In some embodiments, control circuitry 504 executes instructions for an incident data platform stored in memory (i.e., storage 510).
[0041] Memory may be an electronic storage device provided as storage 510, which is part of control circuitry 504. As referred to herein, the phrase "electronic storage device" or "storage device" should be understood to mean any device for storing electronic data, computer software, or firmware, such as random-access memory, read-only memory, hard drives, solid state devices, quantum storage devices, or any other suitable fixed or removable storage devices, and/or any combination of the same. Nonvolatile memory may also be used (e.g., to launch a boot-up routine and other instructions).
[0042] The incident data platform 402 may be coupled to a communications network. The communications network may be one or more networks including the Internet, a mobile phone network, a mobile voice or data network (e.g., a 4G or LTE network), a cable network, a public switched telephone network, or other types of communications network or combinations of communications networks. Paths may separately or together include one or more communications paths, such as a satellite path, a fiber-optic path, a cable path, a path that supports Internet communications, free-space connections (e.g., for broadcast or other wireless signals), or any other suitable wired or wireless communications path or combination of such paths.
[0043] FIG. 6 is an illustrative flowchart of a process for crowdsourced incident data distribution, in accordance with some embodiments of the disclosure. Process 600, and any of the following processes, may be executed by control circuitry 504 (e.g., in a manner instructed to control circuitry 504 by the incident data platform). Control circuitry 504 may be part of incident data platform 402, part of a remote server separated from the incident data platform by way of a communication network, or distributed over a combination of both.

[0044] At 602, the incident data platform, by control circuitry 504, receives a report of an incident from one or more devices connected to the incident platform. The incident data platform may receive the report from any of device 1 (406), device 2 (408), device 3 (410), and/or device n (412). The reception of the report may use the I/O path 506 of the incident data platform 402. If, at 604, control circuitry 504 determines "No," the report of an incident from the one or more devices connected to the incident platform was not received, the process reverts to 602.
[0045] If, at 604, control circuitry 504 determines "Yes," the report of an incident from the one or more devices connected to the incident platform was received, the process advances to 606. At 606, the incident data platform, by control circuitry 504, identifies one or more active devices within a determined radius of the incident.
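The identification of active devices within a determined radius at step 606 can be sketched as a simple geofence query. This is an illustrative sketch only: the disclosure does not specify a distance metric, and the device record fields (`active`, `lat`, `lon`) are assumed for the example.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points, in kilometers.
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def active_devices_in_radius(devices, incident, radius_km):
    # Keep only devices flagged active whose reported position lies
    # within radius_km of the incident location.
    return [
        d for d in devices
        if d["active"] and haversine_km(d["lat"], d["lon"],
                                        incident["lat"], incident["lon"]) <= radius_km
    ]
```

In a deployed platform the candidate devices would come from a spatially indexed store rather than a flat list, but the filtering logic would be equivalent.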
[0046] At 608, the incident data platform, by control circuitry 504, compares the recordings to a pre-selected number of quality training sets. If, at 610, control circuitry 504 determines "No," the recording does not meet one or more comparison thresholds, the process advances to 611. At 611, the incident data platform, by control circuitry 504, retrieves the next identified recording. The incident data platform may receive the recording from any of device 1 (406), device 2 (408), device 3 (410), and/or device n (412). The reception of the recording may use the I/O path 506 of the incident data platform 402.
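The loop over steps 608–611 can be sketched as checking each recording's quality metrics against thresholds and moving on to the next recording when a check fails. The specific metrics, the threshold values, and how they would be derived from the quality training sets are assumptions made for illustration; the disclosure leaves them open.

```python
# Hypothetical thresholds; in practice these might be derived from the
# pre-selected quality training sets described in the disclosure.
QUALITY_THRESHOLDS = {
    "min_resolution_px": 1280 * 720,
    "min_frame_rate": 24.0,
    "min_sharpness": 0.35,
}

def meets_quality_thresholds(metrics, thresholds=QUALITY_THRESHOLDS):
    # A recording passes only if every metric clears its threshold.
    return (metrics["width"] * metrics["height"] >= thresholds["min_resolution_px"]
            and metrics["frame_rate"] >= thresholds["min_frame_rate"]
            and metrics["sharpness"] >= thresholds["min_sharpness"])

def next_passing_recording(recordings):
    # Mirrors steps 608-611: discard recordings that fail the comparison
    # and retrieve the next identified recording until one passes.
    for rec in recordings:
        if meets_quality_thresholds(rec["metrics"]):
            return rec
    return None  # no identified recording met the thresholds
```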
[0047] If, at 610, control circuitry 504 determines "Yes," the recording meets one or more comparison thresholds, the process advances to 612. At 612, the incident data platform, by control circuitry 504, retrieves an output party selection. The retrieval of the output party selection may use the processing circuitry 508 of the incident data platform 402. In some embodiments, the retrieval of the output party selection may use the I/O path 506 of the incident data platform 402; the output party selection may be retrieved from any of device 1 (406), device 2 (408), device 3 (410), and/or device n (412).
[0048] At 614, the incident data platform, by control circuitry 504, formats the recording for the output party selection. The formatting of the recording by incident data platform 402 may utilize processing circuitry 508.
[0049] At 616, the incident data platform, by control circuitry 504, transmits the recording to the device corresponding to the output party selection. The transmittal of the recording to the device corresponding to the output party selection may utilize the I/O path 506 of the incident data platform 402.
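Steps 612–616 (retrieve the output party selection, format the recording for that party, and transmit it) might be sketched as below. The party names, the per-party format fields, and the injected `send_fn` transport are all hypothetical; the disclosure does not fix a wire format or a list of output parties.

```python
# Hypothetical per-party output specifications.
OUTPUT_FORMATS = {
    "law_enforcement": {"container": "mp4", "attach_gps": True},
    "insurance":       {"container": "mp4", "attach_gps": False},
}

def format_for_party(recording, party):
    # Build a payload shaped for the selected output party (step 614).
    spec = OUTPUT_FORMATS[party]
    payload = {"video": recording["video"], "container": spec["container"]}
    if spec["attach_gps"]:
        payload["gps"] = recording.get("gps")
    return payload

def transmit(payload, device_address, send_fn):
    # send_fn abstracts the platform's I/O path (step 616); injecting it
    # keeps the sketch testable without a real network.
    return send_fn(device_address, payload)
```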
[0050] FIG. 7 is an illustrative flowchart of a process for transmitting reports to law enforcement devices for review, in accordance with some embodiments of the disclosure. At 702, a reckless driver commits an incident. At 704, a user witnessing the incident uses the application on their device to submit the recording to the platform. At 706, upon executing the submission, the recording is saved from 30 seconds prior to the button press to 30 seconds after the button press, which ensures that full context is provided in the recording. At 708, an artificial intelligence (AI) algorithm analyzes the recording to ensure relevance when compared to the training set. At 710, based on the comparison, if deemed relevant, the recording is paired with locational data of the incident. At 712, both the recording and the locational data are sent to a database via encrypted mobile data through the platform. At 714, the platform transmits the video file and GPS location data to a law enforcement server for review. At 716, the platform filters one or more recordings that were taken within a certain law enforcement agency's jurisdiction. At 718, the platform further transmits the one or more recordings within the law enforcement agency's jurisdiction.
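The save window at step 706 (30 seconds before the button press through 30 seconds after it) is naturally implemented with a rolling frame buffer. The sketch below assumes a fixed frame rate and in-memory frames; the actual capture pipeline is not specified in the disclosure.

```python
from collections import deque

class DashcamBuffer:
    """Rolling frame buffer that cuts a clip spanning a window around a
    user's button press (here, pre_s seconds before to post_s after)."""

    def __init__(self, fps, pre_s=30, post_s=30):
        self.post_frames = post_s * fps
        # The deque silently drops the oldest frame once full, so it
        # always holds at most the last (pre_s + post_s) seconds.
        self.frames = deque(maxlen=(pre_s + post_s) * fps)
        self.pending = None  # frames still to capture after a press

    def add_frame(self, frame):
        self.frames.append(frame)
        if self.pending is not None:
            self.pending -= 1
            if self.pending <= 0:
                self.pending = None
                return list(self.frames)  # finished clip
        return None  # still recording

    def button_press(self):
        # Start counting the post-press half of the save window.
        self.pending = self.post_frames
```

The pre-press half of the window is simply whatever the deque already holds when `button_press` is called, so no separate "rewind" step is needed.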
[0051] It is contemplated that the steps or descriptions of FIGS. 6-7 may be used with any other embodiment of this disclosure. In addition, the steps and descriptions described in relation to FIGS. 6-7 may be done in alternative orders or in parallel to further the purposes of this disclosure. For example, each of these steps may be performed in any order or in parallel or substantially simultaneously to reduce lag or increase the speed of the system or method. Any of these steps may also be skipped or omitted from the process. Furthermore, it should be noted that any of the devices or equipment discussed in relation to FIGS. 4-5 could be used to perform one or more of the steps in FIGS. 6-7.
[0052] The processes discussed above are intended to be illustrative and not limiting. One skilled in the art would appreciate that the steps of the processes discussed herein may be omitted, modified, combined, and/or rearranged, and any additional steps may be performed without departing from the scope of the invention. More generally, the above disclosure is meant to be exemplary and not limiting. Only the claims that follow are meant to set bounds as to what the present invention includes. Furthermore, it should be noted that the features and limitations described in any one embodiment may be applied to any other embodiment herein, and flowcharts or examples relating to one embodiment may be combined with any other embodiment in a suitable manner, done in different orders, or done in parallel. In addition, the systems and methods described herein may be performed in real time. It should also be noted that the systems and/or methods described above may be applied to, or used in accordance with, other systems and/or methods.

Claims

We Claim:
1. A method for transmitting video data of incidents during vehicle operation, the method comprising:
   receiving, from one or more sensors, video data, wherein the video data comprises data indicative of video capture during vehicle operation;
   determining whether the video data is similar to an incident profile; and
   in response to determining that the video data is similar to the incident profile:
      retrieving additional video data, wherein the additional video data is contiguous to the video data; and
      transmitting the video data and the additional video data to an electronic device.
2. The method of claim 1, wherein determining whether the video data is similar to an incident profile comprises control circuitry implementing a neural network.
3. The method of claim 1, wherein the additional video data comprises prior video data occurring at a predefined amount of time prior to the video data and subsequent video data occurring at a predefined amount of time subsequent to the video data.
4. The method of claim 1, wherein transmitting the video data and the additional video data to the electronic device comprises:
   generating for display, a user interface providing a selection of one or more electronic devices;
   receiving a selection of one of the one or more electronic devices from the user interface; and
   transmitting the video data and the additional video data to the selected electronic device.
5. The method of claim 1, wherein transmitting the video data and the additional video data to the electronic device comprises:
   determining, by the control circuitry implementing the machine learning model, a selected electronic device from one or more electronic devices based on training information; and
   transmitting the video data and the additional video data to the selected electronic device.
6. The method of claim 5, wherein the training information includes at least one of locational information, identities of parties involved, license plate of a vehicle, and signage within proximate location.
7. The method of claim 1, further comprising:
   determining geographic information associated with the video data;
   determining a secondary electronic device within a predefined distance based on the geographic information;
   receiving secondary video data from the secondary electronic device; and
   transmitting the secondary video data to the electronic device.
8. The method of claim 7, wherein transmitting the secondary video data to the electronic device comprises:
   determining whether the secondary video data meets a minimum threshold quality; and
   in response to determining that the secondary video data meets the minimum threshold quality, transmitting the secondary video data to the electronic device.
9. The method of claim 1, wherein transmitting the video data and the additional video data to the electronic device comprises:
   generating for display, a user interface comprising input selection for electronic tags for the video data;
   receiving an input comprising an electronic tag of the video data; and
   transmitting the video data, the additional video data, and the electronic tag to the selected electronic device.
10. The method of claim 1, wherein transmitting the video data and the additional video data to the electronic device comprises:
   determining environmental information associated with the video data;
   generating an electronic tag associated with the video data based on the environmental information; and
   transmitting the video data, the additional video data, and the electronic tag to the selected electronic device.
11. A system for transmitting video data of incidents during vehicle operation, the system comprising: control circuitry configured to implement the steps of any of claims 1 to 10.
12. A non-transitory computer medium having instructions encoded thereon that when executed by control circuitry enable the control circuitry to execute the steps of the method of any of claims 1 to 10.
PCT/US2019/067234 2018-12-18 2019-12-18 Systems and methods for crowdsourced incident data distribution WO2020132104A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862781550P 2018-12-18 2018-12-18
US62/781,550 2018-12-18

Publications (1)

Publication Number Publication Date
WO2020132104A1 true WO2020132104A1 (en) 2020-06-25

Family

ID=71100562

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/067234 WO2020132104A1 (en) 2018-12-18 2019-12-18 Systems and methods for crowdsourced incident data distribution

Country Status (1)

Country Link
WO (1) WO2020132104A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040252193A1 (en) * 2003-06-12 2004-12-16 Higgins Bruce E. Automated traffic violation monitoring and reporting system with combined video and still-image data
US8204273B2 (en) * 2007-11-29 2012-06-19 Cernium Corporation Systems and methods for analysis of video content, event notification, and video content provision
US20170017734A1 (en) * 2015-07-15 2017-01-19 Ford Global Technologies, Llc Crowdsourced Event Reporting and Reconstruction
US20170061214A1 (en) * 2015-08-31 2017-03-02 General Electric Company Controlling bandwith utilization of video transmissions for quality and scalability

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
VITALY PETROV, ANDREEV SERGEY, GERLA MARIO, KOUCHERYAVY YEVGENI: "Breaking the limits in urban video monitoring: massive crowd sourced surveillance over vehicles", IEEE WIRELESS COMMUNICATIONS, vol. 25, no. 5, 24 June 2018 (2018-06-24), XP055722035 *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19897914

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19897914

Country of ref document: EP

Kind code of ref document: A1