WO2023129706A1 - Forensic evidence collection systems and methods - Google Patents

Forensic evidence collection systems and methods

Info

Publication number
WO2023129706A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
report
sensors
forensic
evidentiary
Prior art date
Application number
PCT/US2022/054333
Other languages
French (fr)
Inventor
Marissa SPANO
Joshua QUINT
Original Assignee
Venm Inc.
Priority date
Filing date
Publication date
Application filed by Venm Inc. filed Critical Venm Inc.
Publication of WO2023129706A1 publication Critical patent/WO2023129706A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/64Protecting data integrity, e.g. using checksums, certificates or signatures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling

Definitions

  • Systems and methods are provided for automated, simplified, and forensically sound evidence reports deriving from the use of a hardware or IoT device. Accurate and complete reporting of the facts surrounding an incident is of great social, economic, and judicial importance.
  • the present disclosure provides a method of increasing transparency, improving solvability of violent crimes, and bridging gaps in forensic evidence collection via real-time data capture and the forensic reporting of the captured data.
  • the systems and methods of the present disclosure may further allow for real-time mapping of potential incidents; artificial-intelligence-based analysis of trends and risk mitigation/breaches; authenticated and immutable data and evidence collection; anonymous reporting or reporting via an identifiable token; deep environmental triangulation; and crime deterrence.
  • Systems and methods of the present disclosure may aid law enforcement, safety apps, universities, employers, government/municipal entities, and public or private companies in tracking, reporting, and authenticating evidence of incidents where safety was threatened, including incidents or crimes.
  • Current safety platforms and apps are lagging in innovation and integration with IoT for transparency and accountability between parties. Accordingly, a need exists for improved systems and methods for collecting evidentiary data and generating forensically sound reports using said evidentiary data.
  • a method of evidence collection comprising: (a) receiving evidentiary data from an input device comprising a plurality of sensors; (b) generating a checksum of said evidentiary data using a cryptographic hash; (c) storing said evidentiary data and said checksum to a local storage medium; (d) uploading said evidentiary data and said checksum to a cloud-based storage medium; (e) generating a first report using said evidentiary data and said checksum from said local storage medium; (f) streaming said first report to said cloud-based storage medium; (g) generating a second report using said evidentiary data and said checksum in said cloud-based storage; (h) comparing said first report and said second report to ensure matching of said reports, in the event that the device is not damaged; where the device is damaged, the second report would be the sole source of the validated incident; and (i) preparing a combined report using matching content from said first report and said second report.
  • said evidentiary data comprises positional information.
  • said input device comprises one or more of a location sensor, an inertial sensor, altitude sensor, attitude sensors, pressure sensors, or field sensors.
  • said evidentiary data comprises phone, hardware device, computer, or IoT data.
  • said phone data comprises one or more of calls made or received on a given day, or over a period of time; text messages sent and received on a given day, or over a period of time, or specific time frame; calendar events that a person has pending or scheduled; photos taken; videos taken; or browsing history, and other relevant hardware and application data.
  • said cloud-based storage medium encrypts said evidentiary data stored in said first report.
  • said evidentiary data comprises Calendar Application Programming Interface (API) data.
  • said evidentiary data comprises identifiers of said input device.
  • the identifiers of the input device comprise carrier information, International Mobile Equipment Identity (IMEI) information of a user, or a phone number.
  • evidentiary data comprises location information.
  • location information comprises latitude or longitude coordinates.
  • location information comprises elevation.
  • elevation is translated into an estimate of a building story.
  • the input device comprises a wearable device, wherein the wearable device comprises an audio receiving module, a location information receiving module, and a video receiving module.
  • evidentiary data comprises a submission signed by a local secure element of said input device.
  • evidentiary data comprises multimedia data live capture.
  • evidentiary data comprises preincident image capture.
  • evidentiary data comprises post-incident image capture.
  • said input device comprises a wearable device, wherein said wearable device comprises an audio receiving module, a video receiving module, and a location information receiving module.
  • the method disclosed herein further comprises encrypting said combined report.
  • said combined report may be used to report one or more of: traumatic events, workplace hazards, witnessing a crime, personal mental or physical injuries, hate crimes, hate speech, riots, theft, property damage, equipment damage, sexual harassment, sexual assault, aggravated assault, environmental reports, and Occupational Safety and Health Administration violations.
  • the present disclosure also provides systems for evidence collection.
  • a system for electronically securing a forensic copy of evidence comprising: (a) a forensic capture interface in operative communication with one or more input devices comprising a plurality of sensors, wherein said plurality of sensors generate evidentiary data; (b) a central processing unit comprising one or more processors operatively coupled to said forensic capture interface, said processors configured to: receive said evidentiary data from said one or more input devices, take a checksum of said evidentiary data using a cryptographic hash, and generate a first forensic report; (c) a local memory operatively coupled to said central processing unit, said local memory storing said evidentiary data, said checksum, and said first forensic report; and (d) a communications module in networked communication and in local communication with said central processing unit, wherein said communications module uploads said evidentiary data, said checksum, and said first report to a cloud-based server, whereby a forensic copy of evidence is electronically secured.
  • said central processing unit receives said evidentiary data from one or more input devices following an activation event.
  • said activation event comprises one or more of: haptic feedback, voice or sound-activated feedback, biometric feedback, or positional feedback, or a hardware trigger or software trigger.
  • said forensic capture interface comprises one or more wearable devices.
  • said forensic capture interface comprises one or more of: a smart watch, a mobile phone, an Internet of Things (IoT) device, a camera, a microphone, an alarm, a panic button, a jewelry item or personal accessory, smart glasses, wearables, fitness bands, smart jewelry, including rings, smart necklaces, smart bracelets, and smart watch bands; smart clothing, smart machines (ATMs), smart cars, or a closed-circuit television (CCTV).
  • said plurality of sensors comprises one or more of: humidity sensors, temperature sensors, other environmental sensors, radio, lidar, cameras, microphones, biometric sensors, or positional sensors.
  • said forensic capture interface becomes activated by one or more voice-activated trigger words, thus causing said forensic capture interface to transmit said evidentiary data to said central processing unit.
  • said forensic capture interface becomes activated by sudden noise or sudden movement detected by said plurality of sensors.
  • said plurality of sensors comprise one or more of: a location sensor; an inertial sensor selected from the group consisting of: accelerometers, gyroscopes, and inertial measurement units (IMUs); an altitude sensor, an attitude sensor; a barometer; a magnetometer; an electromagnetic sensor, or a humidity sensor.
  • the system described herein further comprises a user input interface operatively connected to said forensic capture interface, wherein said user input interface allows a user to manually activate said forensic capture interface.
  • The forensic capture interface can also become activated by a manual, human-initiated trigger, biometrics, haptics, motions, gestures, and/or algorithms, including a combination of the above and additional actions.
  • the present disclosure provides a method of collecting forensic evidence, comprising: (a) retrieving data from a local device and extracting metadata associated with the data; (b) hashing and streaming said metadata to a cloud, wherein the hashing comprises hashing a local time provided by the local device; (c) receiving the data from the local device, rehashing the metadata associated with the data; (d) comparing said hashed metadata and rehashed metadata; and (e) determining, based on said comparing, whether said data has been altered prior to said streaming.
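  • The hash-and-compare step of this method can be illustrated with a minimal sketch (Python shown below); the SHA-256 algorithm, the metadata fields, and the device identifier are assumptions for illustration rather than requirements of the method.

```python
import hashlib
import json
import time

def hash_metadata(metadata: dict) -> str:
    """Hash a metadata dictionary (including the device's local time) with SHA-256."""
    canonical = json.dumps(metadata, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# On the local device: extract metadata, include the local clock, hash, and stream.
local_metadata = {
    "device_id": "endpoint-001",      # hypothetical identifier
    "file_name": "incident.mp4",
    "local_time": time.time(),        # local time provided by the device
}
streamed_hash = hash_metadata(local_metadata)
# ... streamed_hash and local_metadata are streamed to the cloud ...

# In the cloud: rehash the metadata received with the full-content copy and compare.
rehashed = hash_metadata(local_metadata)
data_unaltered = (rehashed == streamed_hash)
print("data unaltered prior to streaming:", data_unaltered)
```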
  • the local device comprises an input device.
  • the input device comprises a smartphone or a smartwatch.
  • the input device comprises a wearable device, wherein the wearable device comprises an audio receiving module, a location information receiving module, and a video receiving module.
  • the data comprises a submission signed by a local secure element of the input device.
  • the data comprises multimedia data live capture.
  • the data comprises pre-incident image capture.
  • the data comprises post-incident image capture.
  • the data comprises location information.
  • the data comprises device identifier information.
  • the present disclosure provides a system for electronically securing forensic evidence, said system comprising: (a) a forensic capture interface in operative communication with one or more input devices comprising a plurality of sensors, wherein said plurality of sensors generate data; (b) a central processing unit comprising one or more processors operatively coupled to said forensic capture interface, said processors configured to: (i) receive said data from said one or more input devices; (ii) generate a hash based on said data; (iii) determine whether said data complies with an authenticity standard; and (iv) generate a report of said data; (c) a local memory operatively coupled to said central processing unit, said local memory storing said data and said hash; and (d) a communications module in networked communication and in local communication with said central processing unit, wherein said communications module uploads said data, said hash, and said report to a cloud-based server.
  • the one or more input devices comprise one or more of: a location sensor, an inertial sensor, altitude sensor, attitude sensors, pressure sensors, or field sensors.
  • the data comprises phone data.
  • the phone data comprises one or more of: calls made or received on a given day; text messages sent and received on a given day; calendar events that a person has scheduled; dates a person may have had; photos taken; or browsing history.
  • the data comprises identifiers of the one or more input devices.
  • the identifiers comprise carrier information, International Mobile Equipment Identity (IMEI) information of a user, or a phone number.
  • the data comprises location information.
  • the location information comprises latitude and longitude coordinates. In some cases, the location information comprises elevation. In some cases, the elevation is translated into an estimate of a building story.
  • the one or more input devices comprise a wearable device. In some embodiments, the wearable device comprises an audio receiving module, a location information receiving module, and a video receiving module.
  • the data comprises a submission signed by a local secure element of the one or more input devices. In some cases, the data comprises multimedia data live capture. In some cases, the data comprises pre-incident image capture. In some cases, the data comprises post-incident image capture.
  • FIG. 1 shows an example of a system that may be used for evidence collection, in accordance with embodiments of the invention.
  • FIG. 2 provides an overview of exemplary methods described herein, including peer-to-peer based network communications.
  • FIG. 3 provides an exemplary embodiment of peer-to-peer networking as implemented via systems and methods of the present disclosure.
  • FIG. 4 shows a block diagram depicting an exemplary machine that includes a computer system.
  • FIG. 5 shows an example of an application provision system.
  • FIG. 6 shows an application provision system having a distributed, cloud-based architecture.
  • FIG. 7 provides a schematic or screenshot of an exemplary report generated using the systems and methods described herein.
  • FIG. 8 provides an overview of how forensically captured data points may be sourced (e.g., with cellular dependency, with smartphone dependency).
  • FIG. 9 provides an overview of a safety app for a mobile device.
  • FIG. 10 provides information that may be derived from audio and video multimedia data gathered from an input device.
  • FIG. 11 provides an overview of types of users who may access systems described herein.
  • FIG. 12 provides an overview of Application Programming Interface (API) types compatible with the systems and methods provided herein.
  • FIG. 13 provides an exemplary embodiment of how the present systems may evaluate a file for authenticity.
  • FIG. 14 provides an exemplary embodiment of how the present systems may deconstruct a file into hash sequences.
  • FIG. 15 provides an overview of the cloud store and validation process used for gathering evidence in systems and methods described herein.
  • the invention provides systems and methods for evidentiary collection and evidence report generation.
  • Various aspects of the systems and methods described herein may be applied to any of the particular applications set forth below. It shall be understood that different aspects of the invention can be appreciated individually, collectively or in combination with each other.
  • systems provided herein are provided as an application programming interface (API).
  • Systems provided herein are designed to comply with evidentiary standards, such as the Daubert standard, allowing for legally admissible forensic evidence collection.
  • Digital evidence, such as audio and video, may be collected by sensors and used to provide information about a potential incident or event.
  • a method of evidence collection comprising: (a) retrieving, receiving, and interrogating evidentiary data from an input device comprising a plurality of sensors; (b) generating a checksum of said evidentiary data using a cryptographic hash; (c) storing said evidentiary data and said checksum to a local storage medium; (d) uploading said evidentiary data and said checksum to a cloud-based storage medium; (e) generating a first report using said evidentiary data and said checksum from said local storage medium; (f) streaming said first report to said cloud-based storage medium; (g) generating a second report using said evidentiary data and said checksum in said cloud-based storage; (h) comparing said first report and said second report to ensure matching of cryptographic hashes of said reports; and (i) preparing a combined report using matching content from said first report and said second report, thereby collecting evidence.
  • a method of electronically securing forensic evidence comprising: (a) receiving evidentiary data from an input device comprising a plurality of sensors; (b) storing said evidentiary data to a local storage medium; (c) generating a first checksum of said evidentiary data using a cryptographic hash; (d) generating a first report using said evidentiary data and said checksum from said local storage medium; (e) uploading said evidentiary data and said first report to a cloud-based storage medium; (f) generating a second checksum using said evidentiary data from said cloud-based storage medium; (g) generating a second report using said evidentiary data and said checksum; (h) streaming said second report to said cloud-based storage medium; (i) comparing said first report and said second report to ensure matching of said first report and said second report; and (j) preparing a combined report using matching content from said first report and said second report, thereby electronically securing forensic evidence.
  • FIG. 1 depicts a system that may aid in electronically securing forensic evidence, in accordance with embodiments of the invention.
  • FIG. 1 provides a method for electronically securing forensic evidence, said method comprising: (a) receiving evidentiary data from an input device comprising a plurality of sensors (100); (b) generating a checksum of said evidentiary data using a cryptographic hash (101); (c) storing said evidentiary data and said checksum to a local storage medium (103); (d) uploading said evidentiary data and said checksum to a cloud-based storage medium (102); (e) generating a first report using said evidentiary data and said checksum from said local storage medium (105); (f) streaming said first report to said cloud-based storage medium (106); (g) generating a second report using said evidentiary data and said checksum in said cloud-based storage (104); (h) comparing said first report and said second report to ensure matching of said reports (107); and (i) preparing a combined report.
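  • A minimal sketch of the dual-report flow of FIG. 1 is shown below, assuming SHA-256 as the cryptographic hash and simple dictionaries standing in for the local and cloud storage media; the report fields are illustrative only.

```python
import hashlib
import json

def checksum(data: bytes) -> str:
    """Generate a checksum of evidentiary data using a cryptographic hash (SHA-256 assumed)."""
    return hashlib.sha256(data).hexdigest()

def generate_report(evidence: bytes, digest: str, source: str) -> dict:
    """Build a simple report from evidentiary data and its checksum."""
    return {"source": source, "checksum": digest, "length": len(evidence)}

evidence = b"...sensor bytes..."                               # evidentiary data from the input device
digest = checksum(evidence)

local_store = {"evidence": evidence, "checksum": digest}       # local storage medium
cloud_store = {"evidence": evidence, "checksum": digest}       # cloud-based storage medium (after upload)

first_report = generate_report(local_store["evidence"], local_store["checksum"], "local")
second_report = generate_report(cloud_store["evidence"], cloud_store["checksum"], "cloud")

# Compare the two reports; matching content is combined into the final report.
if first_report["checksum"] == second_report["checksum"]:
    combined_report = {**first_report, "validated_by": ["local", "cloud"]}
    print(json.dumps(combined_report, indent=2))
```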
  • the system may be implemented by a user device.
  • the user device may contain a display and an interface for capturing forensic evidence and/or for generating a report based on the evidentiary data captured.
  • the device may include one or more memory storage units, one or more processors, one or more communication interfaces, one or more power sources, and/or one or more sensors.
  • the present systems and methods involve collection of forensically captured datapoints.
  • Devices may communicate with the API to provide forensically captured data points.
  • For example, a smart device with a cellular connection (e.g., an Apple Watch with no iPhone dependency) or a smart device with a smartphone dependency (e.g., an Apple Watch paired with an iPhone) may stream data (e.g., GPS information); any smart device (e.g., an Android equivalent) may do the same.
  • FIG. 9 demonstrates sources of forensically captured datapoints.
  • Such datapoints can include location data points, such as latitude/longitude information, which may be refreshed at a rate of less than 1 second, 1-2 seconds, 2-5 seconds, 5-10 seconds, 10-20 seconds, 30-50 seconds, or more.
  • Location data points can also include elevation, which may be measured in feet, yards, meters, or any other metrics. Elevation datapoints can also be translated into stories, such as would be helpful for identifying the occurrence of an event in a building.
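  • One possible translation of elevation into a building story estimate is sketched below; the assumed floor height and ground elevation are placeholders, and a real implementation might derive the ground elevation from mapping data for the reported latitude/longitude.

```python
# A minimal sketch of translating an elevation reading into an estimated building story.
FLOOR_HEIGHT_M = 3.0   # assumed average story height in meters

def estimate_story(elevation_m: float, ground_elevation_m: float) -> int:
    """Estimate which story of a building an elevation reading corresponds to."""
    height_above_ground = max(elevation_m - ground_elevation_m, 0.0)
    return int(height_above_ground // FLOOR_HEIGHT_M) + 1  # 1 = ground floor

print(estimate_story(elevation_m=27.5, ground_elevation_m=12.0))  # -> 6
```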
  • Data points can also include post-incident information. For example, images taken after an incident may be uploaded to the system cloud.
  • Images can be identified based on image format, creation date, modification date, hash (e.g., SHA-256 hash and ECC hash), plist storage, location, etc.
  • Forensically captured data points can also include identifiers, such as identifiers of a device serving as the source of such streamed data. Identifiers of a cellphone, for example, can include carrier information, phone number, or IMEI information of a user.
  • submissions of forensically captured datapoints may be signed by local secure compute elements (e.g., TPM, Secure Enclave, Titan M, Knox), which can then be validated and subject to checksum both before uploading to the cloud and after uploading to the cloud.
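  • A minimal sketch of signing and re-validating a submission is shown below; a software Ed25519 key from the cryptography package stands in for a hardware secure element (TPM, Secure Enclave, Titan M, Knox), which would in practice keep the private key in hardware.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

secure_element_key = Ed25519PrivateKey.generate()     # stand-in for a hardware-held key
public_key = secure_element_key.public_key()

submission = b"forensically captured datapoint"
signature = secure_element_key.sign(submission)        # submission signed by the "secure element"
digest = hashlib.sha256(submission).hexdigest()        # checksum taken before upload

# ... submission, signature, and digest are uploaded to the cloud ...

# Cloud side: re-checksum and verify the signature after upload.
assert hashlib.sha256(submission).hexdigest() == digest
public_key.verify(signature, submission)               # raises InvalidSignature if tampered
```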
  • Datapoints can also be derived from multimedia data live capture, such as audio and video streaming.
  • FIG. 10 depicts information that can be derived from this type of data.
  • Forensically captured datapoints derived from live audio capture may include information about date created, date modified, storage on local device (e.g., plist), or MD5 hash for the file. Audio capture data can also include information on the format recorded (e.g., AAC, MP3).
  • Video data captured as multimedia data live capture can similarly include such information, as shown in FIG. 10. Such information can include date created, date modified, storage location on device, or hash information (e.g., SHA-256 hash and ECC hash for a video file). Information can also include frame rate, as measured in Frames Per Second (FPS), created resolution, and format of the recorded video (e.g., MP4, RAW).
  • Systems and methods provided herein collect information on endpoints. Any point of storage may be considered an “endpoint”. Such data may be held to a validation standard. Such validation standards may serve as a guarantee that endpoint data is authentically from said endpoint when presented as evidence. Any stream of digital information, such as from traditional audio/video microphones and cameras, may be compatible with the systems described herein. Additionally, collection over specialized (2.4 GHz: Wi-Fi, Bluetooth) and full-spectrum antennas may provide additional insight into the radio environment surrounding an event. Signal analysis, or radio environment mapping of such data streams, may serve as a validation mechanism for data collected in the present systems. Endpoints may be collected as streamed data in a binary format, for example, as received from a video.
  • endpoints are derived from data mapping.
  • endpoints may be based on device occupancy determinations, which serve as a means to validate the data further based on the amount of sources such data may have arrived from.
  • endpoints may include mapping based on movements. For example, device occupancy in one location may suddenly change, indicating an emergency (e.g., fire) in that location.
  • Endpoints register with the system cloud, such as Amazon Web Services (AWS), using a unique identifier assigned to the endpoint.
  • Metadata is transmitted over a secure Message Queuing Telemetry Transport (MQTT) channel and, when appropriate, data is submitted over HTTPS.
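  • The metadata submission path might look like the following sketch, using the paho-mqtt helper as an example client; the broker host, port, and topic name are assumptions for illustration and not part of the platform's actual API.

```python
import json
import time
import paho.mqtt.publish as publish

metadata = {
    "device_id": "endpoint-001",           # hypothetical endpoint identifier
    "chunk_hash": "ab12cd34...",            # SHA-256 hash of a stored data chunk
    "local_time": time.time(),
}

# Small metadata payloads go over MQTT; larger full-content data would instead be
# submitted over an HTTPS session to the cloud API.
publish.single(
    "forensic/metadata",                    # assumed topic name
    payload=json.dumps(metadata),
    qos=1,
    hostname="mqtt.example-cloud.com",      # assumed broker address
    port=1883,                              # a production deployment would use a TLS port (8883)
)
```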
  • Metadata and full-content data from endpoints may include additional data channels beyond strictly video and audio. See, e.g., FIG. 9. Examples from IoT include radio mapping data that can be used to estimate with reasonable precision the number of devices in an area, or the relative position over time of such devices.
  • the system’s reporting template can be extended to include the output of additional processing techniques such as a radio map, or wireless ID report.
  • additional documentation regarding the implementation details and specific processing method used to generate the derivative analysis will be appended to the report.
  • the intention of the inclusion of this additional documentation is to facilitate the third-party validation and courtroom acceptance of these processing methods independently from the core evidence report.
  • Systems and methods described herein can further include an auditing strategy based on a “trust, but verify” approach for users, and a zero-trust model for devices.
  • Device data and system integrity is cryptographically guaranteed by the platform and one or more third-party node holders, facilitated by a private blockchain.
  • systems described herein may collect metadata for further confirmation of authenticity.
  • Cryptographic validation of data authenticity serves to ensure that streamed data conforms with authenticity standards of the systems described herein.
  • After being collected from a sensor, such as a camera, data is hashed by the system endpoint binaries: cryptographic hashes of the data are produced as it is stored, or as soon thereafter as is practical. These hashes, along with hardware system identifiers, wall clock values, and other anti-spoofing signatures, constitute metadata that is submitted in near real-time to the system cloud.
  • devices of the present system enact local anti-tampering measures 1301. See FIG. 13.
  • devices may include signed firmware, implemented by the software vendor signing the firmware image with a private key, which may be authenticated when accepted by the cloud.
  • the system may further require a Trusted Platform Module (TPM) signature for a video and/or timestamps streamed from a device.
  • the local anti-tampering measures 1301 may facilitate the generation of a local blockchain for the metadata associated with the video and audio data it collects.
  • the local time source 1302 of such a device may also be streamed and processed by a file chunk hasher 1303.
  • the received video feed is sliced into chunks by, for example, time period.
  • the time period is configurable, and can be 5 seconds, 10 seconds, 30 seconds, 1 minute, 2 minutes, 3 minutes, 4 minutes, 5 minutes, etc.
  • the received video feed is sliced into chunks by, for example, data packet size (not shown in FIG. 13).
  • the data size of each chunk is configurable, and can be 1 MB, 2 MB, 3 MB, 4 MB, 5 MB, 6 MB, 10 MB, 50 MB, 100 MB, 500 MB, 1000 MB, 1 GB, 2 GB, etc.
  • the received video feed chunks may be stored to file system observer 1307 for further analysis.
  • the file system observer 1307 may transmit the received video data chunks to file chunk hasher 1303.
  • the file chunk hasher 1303 may hash the metadata associated with received data chunks along with the TPM signature for the video, the TPM signature for the timestamp, the local time source, etc.
  • the file chunk hasher 1303 may only hash the metadata to save local and cloud storage.
  • the file chunk hasher 1303 may hash the received data chunks using various techniques, for example, hash algorithms such as SHA-256 hash, ECC hash, MD5, SHA-1, SHA-2, NTLM, and LANMAN, etc.
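  • A minimal file chunk hasher might look like the following sketch, assuming byte-size-based slicing and SHA-256; time-based slicing would follow the same pattern with a configurable period instead of a byte size.

```python
import hashlib

CHUNK_SIZE = 5 * 1024 * 1024   # assumed 5 MB chunks; this size is configurable

def hash_chunks(path: str) -> list[dict]:
    """Slice a received file into fixed-size chunks and hash each chunk."""
    chunk_hashes = []
    with open(path, "rb") as f:
        start = 0
        while chunk := f.read(CHUNK_SIZE):
            chunk_hashes.append({
                "starting_byte": start,
                "size": len(chunk),
                "sha256": hashlib.sha256(chunk).hexdigest(),
            })
            start += len(chunk)
    return chunk_hashes
```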
  • the hash overlap analyzer 1304 may analyze the hashed data chunks by retrieving the device ID, file name, file date creation, file modification time, starting byte size, and/or hashing algorithm (SHA-256 hash, ECC hash), etc.
  • Overlap between hashes may be determined to validate authenticity of data. For example, if hashes have at least x% overlap, they will comply with the authenticity standards of the present systems. See FIG. 14.
  • the overlap may be configurable, and may be time-based. For example, the overlap may be 1, 2, 3, 4, 5, 10 seconds for a 1 minute video chunk. In another example, the overlap may be 1, 2, 3, 4, 5, 10 minutes for a 1 hour video chunk.
  • an individual data chunk hash may not have any overlap at the start point of the chunk, which may indicate that this chunk is the beginning of a video. As shown in FIG. 14, the first data chunk hash may denote the beginning of the video, and may only have an end point overlap with a subsequent data chunk.
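  • One way to realize the overlap check is sketched below, assuming the slicer produces adjacent chunks that physically share an overlap window of bytes; the window length is an illustrative placeholder and could equally be time-based.

```python
import hashlib

OVERLAP = 1024  # assumed overlap window in bytes shared by adjacent chunks

def end_overlap_hash(chunk: bytes) -> str:
    """Hash of the trailing overlap window of a chunk."""
    return hashlib.sha256(chunk[-OVERLAP:]).hexdigest()

def start_overlap_hash(chunk: bytes) -> str:
    """Hash of the leading overlap window of a chunk."""
    return hashlib.sha256(chunk[:OVERLAP]).hexdigest()

def chunks_contiguous(prev_chunk: bytes, next_chunk: bytes) -> bool:
    """Adjacent chunks satisfy the overlap check when the trailing overlap of one
    chunk matches the leading overlap of the next (i.e., the chunks were sliced
    with a shared window and neither has been spliced or removed)."""
    return end_overlap_hash(prev_chunk) == start_overlap_hash(next_chunk)
```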
  • a VMS implementation observer 1309 may determine whether a file is complete, for example, by inquiring whether there is a subsequent data chunk (e.g., video file). When there is no subsequent data chunk, the system may determine this is the last chunk hash and the video is complete.
  • each hash chunk may be analyzed, for example, by hash overlap analyzer 1304, regarding the device ID, file name, file date creation, file modification time, starting byte size, and/or hashing algorithm (SHA-256 hash, ECC hash), etc.
  • receiving a final hash of a video feed may denote completion of the video feed.
  • the generated hashes may be provided to the MQTT reporting queue to the system cloud API 1306. In some embodiments, the generated hashes are transmitted in a queue based on the video timestamp. Since the hashed metadata is generally smaller in size compared to the full video, a secured MQTT protocol may be employed to transmit the hash. Other protocols may be selected and utilized to transmit this data.
  • a further aspect of authenticity standards relates to time alignment.
  • Timely submission of metadata contributes to the validation process utilized by the present systems.
  • metadata generated by an endpoint is paired with additional network and clock metadata from the cloud platform, allowing for confirmation that the data collected has not been modified prior to, or subsequent to, submission.
  • a cloud copy of the metadata is also collected, allowing the system to identify whether data has been cut, is incomplete, or has been deleted prior to the native age-off criteria.
  • timely submission of data may not be possible. In such cases, metadata is submitted on a best-effort basis.
  • FIG. 15 provides a flow chart depicting such a process.
  • As shown in FIG. 15, the hashed metadata reporting queue received from the endpoint collector may be encoded to include cloud time.
  • the cloud time may indicate a time at which the cloud received the hash.
  • the cloud blockchain table may compose the received hashes into blocks by time sequence.
  • the blocks may form a private blockchain that is immutable and that may allow multiple parties to audit it.
  • a third party may hold a node of the cloud blockchain. This may provide additional visibility of the hashed value (which may be written as a block in the blockchain).
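  • A stand-alone illustration of composing received hashes into time-sequenced, hash-chained blocks is sketched below; the actual platform uses a Hyperledger Fabric node, so this minimal chain is for explanation only.

```python
import hashlib
import json
import time

class HashChain:
    """Minimal hash-chained ledger: each block commits to the previous block."""

    def __init__(self):
        self.blocks = [{"index": 0, "prev": "0" * 64, "hashes": [], "cloud_time": time.time()}]

    def add_block(self, metadata_hashes: list[str]) -> dict:
        prev_block = self.blocks[-1]
        prev_digest = hashlib.sha256(json.dumps(prev_block, sort_keys=True).encode()).hexdigest()
        block = {
            "index": prev_block["index"] + 1,
            "prev": prev_digest,                 # chaining makes earlier blocks tamper-evident
            "hashes": sorted(metadata_hashes),   # hashes bundled for this time window
            "cloud_time": time.time(),           # time at which the cloud received them
        }
        self.blocks.append(block)
        return block

chain = HashChain()
chain.add_block(["ab12...", "cd34..."])          # one block per throughput-optimized time window
```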
  • the system presented herein may identify the matching block range, i.e., the videos that match the cloud copy of the hash.
  • the platform may utilize a file chunk hasher to rehash the received full-content copy. This rehash may be compared with the cloud hash copy to ensure authenticity and thereby verify the video. This verification process may evaluate discrepancies between the rehashed copy and the cloud copy.
  • the hash technique used in hashing the metadata and in rehashing the full-content copy may be the same, to ensure that the same hash value is generated when the content is the same.
  • the cloud copy hash should match the rehash value.
  • Various measures may aid in the validation process.
  • a time sanity measure may be utilized. For example, the time embedded in the video (i.e., the timestamp on the video, the creation time) should be prior to a modification time (i.e., metadata associated with the recorded time). The modification time should be prior to the submission time (i.e., metadata associated with the transmission time). The submission time should be prior to the cloud receipt time. In short, those timestamps should be monotonically increasing.
  • any discrepancy from the above listed behaviors may indicate a tampered video.
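  • The time sanity measure reduces to a monotonicity check over the four timestamps, as in the sketch below.

```python
def time_sanity_ok(creation: float, modification: float,
                   submission: float, cloud_receipt: float) -> bool:
    """Creation, modification, submission, and cloud receipt times must be
    monotonically increasing; any other ordering may indicate a tampered video."""
    timestamps = [creation, modification, submission, cloud_receipt]
    return all(earlier <= later for earlier, later in zip(timestamps, timestamps[1:]))

print(time_sanity_ok(100.0, 101.5, 102.0, 103.2))  # True: monotonically increasing
print(time_sanity_ok(100.0, 99.0, 102.0, 103.2))   # False: modification precedes creation
```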
  • when there is no video captured, or when there is no new data stored on storage, the metadata may still be hashed and streamed to the cloud. This may ensure that no spoofer can take advantage of an endpoint device's idle time period.
  • for known manipulation techniques (e.g., deepfakes, visual timestamp manipulation), the system presented herein may provide anti-tamper measures to prevent video from being tampered with using those techniques.
  • the verification process may utilize the TPM signature of the local device (i.e., endpoint device).
  • the TPM signature (i.e., a piece of metadata associated with video) may be hashed and transmitted to the cloud.
  • the later rehashed copy may also include the TPM signature, and the system presented herein may compare the hash value between the two copies.
  • Another important source of event triggers is when an investigator enters a report request in the system’s portal.
  • if data has not been previously retrieved as a result of an event trigger, the data will be retrieved when requested as part of the report generation process.
  • Compressed-resolution or lower-framerate previews of the content may be needed in a timely manner to determine whether the content is relevant to an incident.
  • the system platform, depending on the capabilities and/or limitations of the endpoint, can facilitate the automated retrieval and presentation of this summary content. Additionally, in response to an event trigger or a report generation request, the system can retrieve and archive full-content data. Summary data and full-content data retrieval are triggered through an MQTT job and uploaded via an HTTPS session.
  • Metadata is securely stored in the system cloud in two forms. First, the data is written to a document-based database and indexed for quick retrieval. Second, the data is time-bundled with other data matching a throughput-optimized time window and submitted to a Hyperledger Fabric private blockchain node. This node is then synchronized with a third-party node held by an independently contracted auditing entity. Validating any full-content data, in any digital format, is achieved through the strong reconciliation between the proposed full-content data copy, the metadata held in the system’s cloud platform, and the integrity of the blockchain. In the event that full-content data fails validation, the manner of failure can provide unique insight. A common example would be that missing data, either before, in the middle of, or after an incident could be flagged to indicate that either the full-content copy presented was spoofed, spliced, or incomplete.
  • the report based on the evidentiary data captured may include, for example, video files, audio files, biometric information or relevant collected data regarding the victim, and information on potential witnesses or devices detected nearby.
  • a report generator of the system assembles full-content data and metadata from the system’s archive and blockchain and presents this data in standardized formats that can be customized based on the customer and intended use case. For instance, the template for regulatory track and trace compliance differs from the police and courtroom evidence submission use cases. Every formatted report includes a printable barcode, which encodes a unique identifier and web URL directing the user to the digital version of the report. This bridges the gap between paper reporting requirements and digital evidence validation.
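  • A minimal sketch of assigning each report a unique identifier and a digital-version URL (the values later encoded in the printable barcode) is shown below; the domain and URL scheme are assumptions for illustration.

```python
import uuid

def report_link(base_url: str = "https://reports.example.com") -> tuple[str, str]:
    """Generate a unique report identifier and the web URL of its digital version."""
    report_id = uuid.uuid4().hex          # unique identifier embedded in the barcode
    return report_id, f"{base_url}/report/{report_id}"

report_id, url = report_link()
print(report_id, url)
```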
  • the integrity of the systems’ metadata archive is backed by a private blockchain, where third-party node holders can both back up and guarantee the integrity of the blockchain, even in the event of an unfavorable security event in the system cloud.
  • Industrystandard security measures are utilized in the implementation and operation of the system cloud, but this additional layer of security strengthens the courtroom admissibility case for digital evidence and is how the system platform exceeds current digital evidence immutability practices.
  • the systems provided herein validate that the data presented as evidence is authentic as it was submitted. This means that data and metadata such as timestamps have not been tampered with after the point at which metadata was submitted to the system platform. This is an important improvement over current evidentiary collection methods. Authentic data, however, is not necessarily genuine. The diverse nature of uncontrolled or customer- controlled data sources allows the, albeit unlikely, possibility for spoofed data submission. Inline CCTV loops, for example, could produce misleading data, although it could not be further tampered with after submission.
  • source development kits may further serve to validate data streams collected as forensically captured datapoints.
  • a source development kit and pre-compiled binaries are used as an internal reference when integrating with a new endpoint, which could be a single camera, a video management system (VMS), or another live data aggregation point.
  • a variant of this kit, binaries, and other sample scripts will be provided to external developers who plan to add support for additional endpoint devices or VMSes to the ForceField platform.
  • This kit and scripts include functions to securely register, transmit metadata, and respond to summary data and full-content data requests from the ForceField cloud.
  • APIs are outlined in FIG. 12. API types include: Metadata (for the submission of metadata), Event Submission (for submission of summary data or full-content data), Reporting (for report and validation requests), and Derivative Processing (for independent requests for derivative processing techniques, such as radio mapping or device crowd analysis).
  • Methods of forensic evidence capture and reporting may include ascribing guidance on implementation and certification of devices interacting with (e.g., streaming data to) the systems.
  • guidance may indicate information as to quality, submission timeliness, and other criteria such as anti-spoofing mechanisms supported by the device that provides the streamed content.
  • Systems provided herein may be utilized by various classes of users, such as those provided in FIG. 11.
  • a data owner can include, for example, a company that owns and operates a camera. Such a user would have permission to view any data that they have submitted if it has been retained by the system platform.
  • Investigators in the B2B case have permission to query and use the report generator and validator for derivations of their data. Law enforcement investigators can query data from any geographic areas within their designated jurisdiction. In some cases, regulators may share this access.
  • Investigator oversight is provided by auditors, which can include, for example, an internal affairs division of law enforcement. Another system user may include legal inquiry users.
  • Such users include legal teams desiring confirmation of a full-content data copy presented as evidence or a regulator tasked with the verification of a report.
  • An additional type of user includes platform administrators, such as engineers or administrative personnel of the platform, tasked with user or operation support responsibilities.
  • the platform systems may comprise memory that maintains an audit log of customer user activity. Generated reports and system queries are tracked to provide a record to oversight-class users. Oversight users are required for law enforcement usage of the ForceField (FF) platform, and auditors are required to regularly review all system usage.
  • the system comprises a Reporting Portal.
  • a Reporting Portal, or user portal, serves a variety of purposes. One such purpose is to facilitate the generation and presentation of incident reports containing full-content data and validation metadata. Another purpose of the user portal is to assist investigators in determining data of interest for either an active or retrospective time period and geographic region. A further purpose is to present and revalidate copies of system-signed incident reports or full-content data copies.
  • the systems and methods disclosed herein may provide a means for preventing crime, reporting public emergencies, or alerting both laypersons and/or officials of the existence of an emergency event or other incident.
  • Other embodiments may be directed towards assisting law enforcement or other investigating individuals in forensically collecting field data and securely storing and processing that data.
  • the combined report may be a forensic evidence report, an affidavit, or other useful tool for investigating incidents.
  • Some embodiments of the present disclosure are directed towards helping victims and witnesses of an incident, or even directly supplementing a police report.
  • said combined report may be used to report one or more of: incidents, crimes, accidents, injuries, theft, property damage, equipment damage, sexual harassment, sexual assault, aggravated assault, environmental reports, or Occupational Safety and Health Administration violations.
  • the combined report can be used for field investigations by public/private safety organizations and/or law enforcement agencies.
  • FIG. 2 outlines a system or method as described herein.
  • the system described herein is implemented via an application on a tablet or other device.
  • the application causes the forensic capture interface to scan and collect data (200).
  • Evidence is collected (201), authenticated in the cloud-based storage medium (204), and a FIELD report is created (205).
  • the application sends a notification to a safety circle (202).
  • a “forcefield” is activated (203), and the application may further provide the user with post-incident resources as necessary (208). After the forcefield is deactivated, the system may provide a user with trauma and legal references (207).
  • the method disclosed herein further comprises encrypting said combined report.
  • said cloud-based storage medium encrypts said evidentiary data stored in said first report.
  • blockchain may be used to store and verify evidentiary data and/or cryptographic hashes.
  • the forensic evidence capture system may collect evidentiary data and/or cryptographic hashes and then use blockchain to store the data.
  • the forensic evidence capture system may comprise one or more processors, and memory storing instructions that, when executed by the one or more processors, cause the forensic evidence capture system to: generate a form comprising a first data field configured to receive evidentiary data; cause creation of a blockchain corresponding to the first data field and the form, wherein the blockchain is configured to store blockchain entries corresponding to data lineage of the first data field; cause a first blockchain entry to be added to the blockchain, wherein the first blockchain entry corresponds to a second computing device permitted to receive data associated with the first data field and comprises at least one rule associated with the first data field; receive first data via the first data field; and transfer the first evidentiary data to a second computing device based on evaluating the first blockchain entry.
  • the present disclosure provides systems for evidence collection.
  • a system for electronically securing forensic evidence comprising: (a) a forensic capture interface in operative communication with one or more input devices comprising a plurality of sensors, wherein said plurality of sensors generate evidentiary data; (b) a central processing unit comprising one or more processors operatively coupled to said forensic capture interface, said processors configured to: receive said evidentiary data from said one or more input devices, take a checksum of said evidentiary data using a cryptographic hash, and generate a first forensic report; (c) a local memory operatively coupled to said central processing unit, said local memory storing said evidentiary data, said checksum, and said first forensic report; and (d) a communications module in networked communication and in local communication with said central processing unit, wherein said communications module uploads said evidentiary data, said checksum, and said first report to a cloud-based server, whereby forensic evidence is electronically secured.
  • said central processing unit receives said evidentiary data from one or more input devices following activation of the forensic capture interface due to a triggering event.
  • the input device comprises a wearable device, wherein said wearable device comprises an audio receiving module, a location information receiving module, and a video receiving module.
  • said evidentiary data comprises positional information and said input device comprises one or more of: a location sensor, an inertial sensor, an altitude sensor, an attitude sensor, a pressure sensor, or a field sensor. Said evidentiary data may also comprise cellular data.
  • cellular data comprises one or more of: calls made or received on a given day or a specific time frame; text messages sent and/or received on a given day, or a specific time frame; calendar events that a person has scheduled or pending; photos taken; videos taken; or browsing history, or other related or relevant hardware and/or software data.
  • said forensic capture interface becomes activated (i.e., begins receiving data from the input devices) by one or more trigger words, thus causing said forensic capture interface to transmit said evidentiary data to said central processing unit.
  • said forensic capture interface becomes activated by sudden noise or sudden movement detected by said plurality of sensors.
  • said forensic capture interface may be activated by the user falling, as detected by a defined accelerometer output, an audio sensor and/or a visual sensor.
  • the triggering event comprises one or more of: haptic feedback, voice or sound-activated feedback, biometric feedback, or positional feedback.
  • said forensic capture interface comprises: a smart watch, a mobile phone, an Internet of Things (IoT) device, a camera, an alarm, a panic button, a jewelry item, smart glasses, wearables, fitness bands, smart rings, smart watch bands, smart clothing, smart machines (ATMs), smart cars, or a closed-circuit television (CCTV).
  • said plurality of sensors comprise one or more of: humidity sensors, temperature sensors, cameras, microphones, biometric sensors, or positional sensors.
  • said forensic capture interface comprises one or more wearable devices.
  • Wearable devices may include: glasses, watches, fitness bands, necklaces, rings, bracelets, earrings, accessories, smart clothing, and smart accessories (e.g., smartwatches, smart glasses, Fitbit, etc.).
  • said plurality of sensors comprise one or more of: a location sensor; an inertial sensor selected from the group consisting of: accelerometers, gyroscopes, and inertial measurement units (IMUs); an altitude sensor, an attitude sensor; a barometer; a magnetometer; an electromagnetic sensor, or a humidity sensor.
  • the system described herein further comprises a user input interface operatively connected to said forensic capture interface, wherein said user input interface allows a user to manually activate said forensic capture interface.
  • the systems and methods described herein may be provided via an application for use with a user device.
  • the various types of user devices may include, but are not limited to, a handheld device, a wearable device, a mobile device, a tablet device, a laptop device, a desktop device, a computing device, a telecommunication device, a media player, a navigation device, a game console, a television, a remote control, or a combination of any two or more of these data processing devices or other data processing devices.
  • the user device includes one or more of the following: a handheld device, a wearable device, a mobile device, a tablet device, a laptop device, a desktop device, a computing device, a telecommunication device, a media player, a navigation device, a game console, a television, a remote control, or other data processing devices.
  • the user device includes two or more of a handheld device, a wearable device, a mobile device, a tablet device, a laptop device, a desktop device, a computing device, a telecommunication device, a media player, a navigation device, a game console, a television, a remote control, or other data processing devices.
  • the user device includes three or more of a handheld device, a wearable device, a mobile device, a tablet device, a laptop device, a desktop device, a computing device, a telecommunication device, a media player, a navigation device, a game console, a television, a remote control, or other data processing devices.
  • the user device includes four or more of a handheld device, a wearable device, a mobile device, a tablet device, a laptop device, a desktop device, a computing device, a telecommunication device, a media player, a navigation device, a game console, a television, a remote control, or other data processing devices.
  • the user device includes five or more, six or more, seven or more, eight or more, nine or more, or ten or more of a handheld device, a wearable device, a mobile device, a tablet device, a laptop device, a desktop device, a computing device, a telecommunication device, a media player, a navigation device, a game console, a television, a remote control, or other data processing devices.
  • the application may allow for real-time, instant and automatic recordation of video and audio; scanning of areas for Bluetooth, Wi-Fi, and hardcoded device IDs; passive recordation of Wi-Fi and Bluetooth signals received by the user's device at all times; active recordation via audio and video capture; capturing of information on nearby beacons and smart devices; manual manipulation; voice activation; and other features that aid in crime reporting, investigation, and prevention.
  • the application may be push- activated, voice-activated, or biometrically activated.
  • the application may allow a user to broadcast an incident to family, friends, emergency contacts, law enforcement agencies, or those into whose jurisdiction the investigation would fall, whether a private or public investigatory authority.
  • the app may allow the user to report in an anonymous or identified manner.
  • the application may also store evidence packages and data via a cloud-based server.
  • the systems and methods described herein may be implemented on existing devices, such as a smartphone or smartwatch via an Application Programming Interface (API).
  • An API may be used to collect evidence using voice activation as a trigger and use this evidence to automatically generate an evidence report from existing apps and devices.
  • the systems and methods described herein may be implemented via a full-stack application.
  • the application may allow for use as a standalone application and to sync with other personal user devices, such as a smartphone or Fitbit.
  • the user device may be a mobile device (e.g., smartphone, tablet, pager, personal digital assistant (PDA)), a computer (e.g., laptop computer, desktop computer, server), or a wearable device (e.g., smartwatches).
  • the user device may be portable.
  • the user device may be handheld.
  • the user device may be a network device capable of connecting to a network, such as a local area network (LAN), wide area network (WAN) such as the Internet, a telecommunications network, a data network, or any other type of network.
  • the user device comprises a wearable device comprising a biosensor, a motion sensor unit, a location sensor, or a haptic sensor, wherein the wearable device is wirelessly connected with a mobile device.
  • a wearable device comprises a forensic capture interface coupled to a communication unit that transmits data collected by the forensic capture interface to a central processing unit for generating a checksum, and then transmits the checksum and data collected by the forensic capture interface to a network.
  • the user device may comprise memory storage units which may comprise non- transitory computer readable medium comprising code, logic, or instructions for performing one or more steps.
  • the user device may comprise one or more processors capable of executing one or more steps, for instance in accordance with the non-transitory computer readable media.
  • the user device may comprise a display showing a graphical user interface.
  • the user device may be capable of accepting inputs via a recipient interactive device. Examples of such recipient interactive devices may include a keyboard, button, mouse, touchscreen, touchpad, joystick, trackball, camera, microphone, motion sensor, heat sensor, inertial sensor, or any other type of recipient interactive device.
  • the user device may be capable of executing software or applications provided by one or more evidence collection systems.
  • the user device may be an electronic device capable of collecting evidentiary data through one or more input devices comprising sensors.
  • the user device may be a mobile device (e.g., smartphone, tablet, pager, personal digital assistant (PDA)), a computer (e.g., laptop computer, desktop computer, server), or any other type of device.
  • the user device may optionally be portable.
  • the user device may be handheld.
  • the user device may be a wearable device.
  • the user device may comprise a smart watch, smart jewelry, smart clothes, or the like.
  • the user device may be a network device capable of connecting to a network, such as a local area network (LAN), a wide area network (WAN) such as the Internet, a telecommunications network, a data network, or any other type of network.
  • the user device may be capable of direct or indirect wireless communications.
  • the user device may be capable of peer-to-peer (P2P) communications and/or communications with cloud-based infrastructure.
  • the user device may include a display.
  • the display may include a screen, such as a liquid crystal display (LCD) screen, light-emitting diode (LED) screen, organic light-emitting diode (OLED) screen, plasma screen, electronic ink (e-ink) screen, touchscreen, or any other type of screen or display.
  • the display may or may not accept user input.
  • the display may show a graphical user interface.
  • the graphical user interface may be part of a browser, software, or application that may aid the user in generating a report using the device.
  • the user device may be capable of accepting inputs via a user interactive device.
  • user interactive devices may include a keyboard, button, mouse, touchscreen, touchpad, joystick, trackball, camera, microphone, motion sensor, heat sensor, inertial sensor, or any other type of user interactive device.
  • the user device may comprise one or more memory storage units which may comprise non-transitory computer readable medium comprising code, logic, or instructions for performing one or more steps.
  • the user device may comprise one or more processors capable of executing one or more steps, for instance in accordance with the non-transitory computer readable media.
  • the one or more memory storage units may store one or more software applications or commands relating to the software applications.
  • the one or more processors may, individually or collectively, execute steps of the software application.
  • a communication unit may be provided on the device.
  • the communication unit may allow the user device to communicate with an external device.
  • the external device may be, for example, a server or may be a cloud-based infrastructure.
  • the communications may include communications over a network or a direct communication.
  • the communication unit may permit wireless or wired communications. Examples of wireless communications may include, but are not limited to WiFi, 3G, 4G, 5G LTE, radiofrequency, Bluetooth, infrared, or any other type of communications.
  • the user device may comprise an imaging sensor that serves as an input device.
  • the imaging input device may be on-board the user device.
  • the input device can include hardware and/or software elements.
  • the sensor may be located external to the user device, and evidentiary data may be transmitted to the user device via communication means as described elsewhere herein.
  • the input device can be controlled by an application/ software configured to scan a visual code.
  • the camera may be configured to scan a barcode on an ID card, a passport, a document, or displayed on an external display.
  • the software and/or applications may be configured to activate the camera on the user device to scan the code.
  • the camera can be controlled by a processor natively embedded in the user device.
  • the imaging input device may be a fixed lens or auto focus lens camera.
  • An input device may make use of complementary metal oxide semiconductor (CMOS) sensors that generate electrical signals in response to wavelengths of light. The resultant electrical signals can be processed to produce evidentiary data.
  • the input device may include a lens configured to direct light onto an imaging sensor.
  • a camera can be a movie or video camera that captures dynamic image data (e.g., video).
  • a camera can be a still camera that captures static images (e.g., photographs).
  • a camera may capture both dynamic image data and static images.
  • a camera may switch between capturing dynamic image data and static images.
  • the input device may comprise a camera used to capture visual images around the device. Any other type of sensor may be used, such as an infra-red sensor that may be used to capture thermal images around the device.
  • the imaging sensor may collect information anywhere along the electromagnetic spectrum, and may generate corresponding images accordingly.
  • the input device may comprise a Light Detection And Ranging (LiDAR) sensor.
  • the LiDAR sensor may collect three-dimensional location data.
  • the user device may comprise an audio sensor that serves as an input device.
  • the audio input device may be on-board the user device.
  • the audio input device can include hardware and/or software elements.
  • the audio input device may be a microphone operably coupled to the user device.
  • the audio input device may be located external to the user device, and audio data may be transmitted to the user device via communication means as described elsewhere herein.
  • the audio input device can be controlled by an application/software configured to analyze audio input and determine its significance.
  • the software and/or applications may be configured to activate the microphone on the user device to record the audio input.
  • the microphone can be controlled by a processor natively embedded in the user device.
  • the user device may comprise a location sensor that serves as a location input device.
  • the user device may have one or more sensors on-board the device to provide instantaneous positional and attitude information of the device.
  • the positional and attitude information may be provided by sensors such as a location sensor (e.g., Global Positioning System (GPS)), inertial sensors (e.g., accelerometers, gyroscopes, inertial measurement units (IMUs)), altitude sensors, attitude sensors (e.g., compasses), pressure sensors (e.g., barometers), and/or field sensors (e.g., magnetometers, electromagnetic sensors), and the like.
  • the user device may comprise one or more additional sensors, for example, one, two, three, four, five, six, seven, eight, nine, ten, or more than ten additional sensors.
  • the sensors of a user device may include, but are not limited to, location sensors (e.g., global positioning system (GPS) sensors, mobile device transmitters enabling location triangulation), vision sensors (e.g., imaging devices capable of detecting visible, infrared, or ultraviolet light, such as cameras), proximity sensors (e.g., ultrasonic sensors, lidar, time-of-flight cameras), inertial sensors (e.g., accelerometers, gyroscopes, inertial measurement units (IMUs)), altitude sensors, pressure sensors (e.g., barometers), audio sensors (e.g., microphones), time sensors (e.g., clocks), temperature sensors, sensors capable of detecting memory usage and/or processor usage, or field sensors (e.g., magnetometers, electromagnetic sensors).
  • any suitable number of sensors can be used, such as one, two, three, four, five, or more sensors.
  • the data can be received from sensors of different types (e.g., two, three, four, five, or more types).
  • Sensors of different types may measure different types of signals or information (e.g., position, orientation, velocity, acceleration, proximity, pressure, etc.) and/or utilize different types of measurement techniques to obtain data.
  • the sensors may include any suitable combination of active sensors (e.g., sensors that generate and measure energy from their own source) and passive sensors (e.g., sensors that detect available energy).
  • the sensors may include different types of sensors, or the same types of sensors.
  • the sensors and/or any other components described herein may be enclosed within a housing of the device, embedded in the housing of the device, or on an external portion of the housing of the device.
  • the one or more sensors may collect information continuously in real-time or may be collecting information on a periodic basis. In some embodiments, the sensors may collect information at regular time intervals, or at irregular time intervals.
  • the sensors may collect information at a high frequency (e.g., every minute or more frequently, every 10 seconds or more frequently, every second or more frequently, every 0.5 seconds or more frequently, every 0.1 seconds or more frequently, every 0.05 seconds or more frequently, every 0.01 seconds or more frequently, every 0.005 seconds or more frequently, every 0.001 seconds or more frequently, every 0.0005 seconds or more frequently, or every 0.0001 seconds or more frequently).
  • the sensors may collect information according to a regular or irregular schedule.
  • the sensors may collect information only after a triggering event has occurred.
  • a state of the user device may include positional information relating to the user device.
  • positional information may include spatial location of the user device (e.g., geo-location).
  • positional information may include a latitude, longitude, and/or altitude of the user device.
  • the positional information may be expressed as coordinates.
  • the positional information may include an orientation of the user device.
  • the positional information may include an orientation of the device with respect to one, two, or three axes (e.g., a yaw axis, pitch axis, and/or roll axis).
  • the positional information may be an attitude of the user device.
  • the positional information may be determined relative to an inertial reference frame (e.g., environment, Earth, gravity), and/or a local reference frame.
  • positional information may be processed by the central processing unit to ascertain the crime rate of a given area. If a high crime rate is detected, the forensic capture interface may begin capturing evidentiary data. In some embodiments, the evidentiary data may be captured at a frequency that correlates to the risk associated with the area or location of the user device.
  • the positional information may include movement information of the imaging device. For instance, the positional information may include linear speed of the device or linear acceleration of the device relative to one, two, or three axes.
  • the positional information may include angular velocity or angular acceleration of the device about one, two, or three axes.
  • the positional information may be collected with aid of one or more inertial sensors, such as accelerometers, gyroscopes, and/or magnetometers.
  • the positional information may trigger the forensic capture interface of the user device to initiate collection of evidentiary data. For example, a sudden drop of the device or movement into a high-crime area may trigger the device to capture images, video, or sound recordings.
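As a non-limiting illustration of the triggering embodiments above, the following sketch maps an area risk score and accelerometer readings to a capture decision and capture interval. The thresholds, function names, and the notion of a numeric risk score are assumptions for illustration only, not the claimed implementation.

```python
# Illustrative sketch only: risk-correlated capture frequency and sudden-drop detection.
import math

FREE_FALL_THRESHOLD_G = 0.3   # total acceleration near 0 g suggests a sudden drop (assumed)
HIGH_RISK_SCORE = 0.7         # assumed risk score above which capture begins

def total_acceleration_g(ax: float, ay: float, az: float) -> float:
    """Magnitude of the accelerometer vector, in g."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def capture_interval_seconds(risk_score: float) -> float:
    """Higher area risk -> more frequent evidentiary capture."""
    risk = min(max(risk_score, 0.0), 1.0)       # clamp to [0, 1]
    return 60.0 - 59.0 * risk                   # 60 s at low risk down to 1 s at high risk

def should_activate_capture(risk_score: float, ax: float, ay: float, az: float) -> bool:
    """Activate the forensic capture interface on a high-risk area or a sudden drop."""
    return risk_score >= HIGH_RISK_SCORE or total_acceleration_g(ax, ay, az) < FREE_FALL_THRESHOLD_G
```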
  • a state of the user device may also include environmental information collected by the user device at the time evidentiary data is captured.
  • the environmental information may include audio information collected by a microphone of the device.
  • the environmental information may include information collected by a motion detector, an ultrasonic sensor, lidar, temperature sensor, pressure sensor, or any other type of sensor that may collect environmental information about the device.
  • the environmental information may include detecting the touch or hand position of a user holding the device, and collecting data on which portions of the device are touched or held by the user.
  • Evidentiary data may include any type of environmental data about the device that may be collected.
  • Environmental data about the device may refer to data pertaining to the environment outside the device.
  • the environmental data may include collection of data of external conditions outside the device.
  • Such data may be visual, thermal, humidity, audio, or positional data, for example.
  • the environmental data may be collected at a single point in time, multiple successive points in time, or over a time interval.
  • the environmental data about the device may refer to data collected by an image sensor of the device.
  • One or more image sensors of the device may be used to collect an image or video of an environment outside the device.
  • the image or video collected by the image sensor may include an image of one or more landmark features around a device and/or an image of the user of the device, or any combination thereof.
  • the environmental data may include the image or images captured by one or more image sensors, or may include data about the image(s) captured by the one or more image sensors.
  • the environmental data may include snapshots collected over a period of time (e.g., dynamic display).
  • the sensor used to collect environmental data may include lidar, sonar, radar, ultrasonic sensors, motion sensors, or any other sensor that may generate a signal that may be reflected back to the sensor.
  • Such sensors may be used to collect information about the environment, such as the presence and/or location of objects within the environment.
  • the audio information may include sounds captured from the environment. This may include ambient noise, and/or noise generated by the device itself.
  • the audio data may include an audio snapshot collected at a single point in time, or may include an audio clip collected over a period of time.
  • An analysis of audio characteristics of the audio data may be collected or determined. For instance, a fast Fourier transform (FFT) analysis or similar type of analysis may be performed on the audio data. In some embodiments, it may be determined that a change in the audio captured may reduce the likelihood of a replay attack.
  • the raw audio data and/or an analysis of the raw audio data may be provided as the environmental data. An audio fingerprint may thus be generated. An audio fingerprint may be expected to be unique to a particular time at which it is collected. Completely identical audio data may be extremely unlikely, and may indicate a higher likelihood of a replay attack.
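A minimal sketch of one possible audio-fingerprint approach is shown below, assuming NumPy is available. The band count and the exact-match comparison are illustrative assumptions rather than the disclosed method.

```python
# Illustrative sketch: coarse FFT-based audio fingerprint and replay check.
import numpy as np

def audio_fingerprint(samples: np.ndarray, bands: int = 32) -> tuple:
    """Summarize the magnitude spectrum into a small, comparable fingerprint."""
    spectrum = np.abs(np.fft.rfft(samples))
    chunks = np.array_split(spectrum, bands)             # equal frequency bands
    energies = np.array([chunk.mean() for chunk in chunks])
    total = energies.sum() or 1.0                         # avoid division by zero
    return tuple(np.round(energies / total, 4))

def looks_like_replay(new_clip: np.ndarray, previous_fingerprints: set) -> bool:
    """Completely identical fingerprints across captures are unlikely for live audio."""
    return audio_fingerprint(new_clip) in previous_fingerprints
```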
  • one or more audio sensors may be used to collect information.
  • An audio sensor may be a microphone.
  • the microphone may collect information from a wide range of directions, or may be a directional or parabolic microphone which has a limited range of directions.
  • the microphone may be a condenser microphone, dynamic microphone, ribbon microphone, carbon microphone, piezoelectric microphone, fiber optic microphone, laser microphone, liquid microphone, MEMs microphone, or any other type of microphone.
  • the audio sensors may be capable of collecting audio data with a high degree of sensitivity.
  • the evidentiary data may relate to a checksum function.
  • any type of data input may be used to derive a checksum value, via a checksum function.
  • the checksum value may be significantly different, even if the input value differs only slightly.
  • Any type of checksum function known or later developed may be used.
  • a checksum may utilize a parity check, such as a longitudinal parity check.
  • a modular sum may be used.
  • Some checksums may be position-dependent, such as Fletcher’s checksum, Adler-32, or cyclic redundancy checks (CRCs).
  • the checksum values may be used as nonce data.
  • the checksum function may be an existing checksum function and/or utilized by the system for the purpose of detecting errors that may have been introduced during its transmission or storage.
  • the checksum value may be further utilized for generating nonce data in addition to the aforementioned purpose.
  • the checksum value may be a small-sized datum derived from a block of digital data (e.g., image data) such that memory or data transmission bandwidth required for the forensic evidence capture may be reduced.
  • sensor data may undergo a checksum function to yield a checksum value.
  • data from any sensors described elsewhere herein, such as position sensors, image sensors, audio sensors, vibration sensors, motion sensors, infrared sensors, or any other type of sensor may undergo a checksum function to yield a checksum value.
  • image data may undergo a checksum function to yield a checksum value.
  • Data derived from an image itself (e.g., an image of a user’s identification document, a selfie image, etc.) or various parameters relating to the image may undergo a checksum function to yield a checksum value.
  • device component data may undergo a checksum function to yield a checksum value. Any type of data, including local or operational data, or environmental data, may be used to yield a checksum value.
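By way of illustration only, a checksum value may be derived from captured data with a cryptographic hash, as in the sketch below. SHA-256 is shown as one possible choice, and the record fields are assumptions.

```python
# Illustrative sketch: cryptographic-hash checksum over serialized evidentiary data.
import hashlib
import json

def checksum_of(evidentiary_data: dict) -> str:
    """Serialize the captured data deterministically and hash it."""
    payload = json.dumps(evidentiary_data, sort_keys=True, default=str).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

record = {"sensor": "gps", "lat": 40.7128, "lon": -74.0060, "t": "2022-12-30T12:00:00Z"}
print(checksum_of(record))                        # stored locally and uploaded with the data
print(checksum_of({**record, "lat": 40.7129}))    # a slight change yields a very different value
```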
  • the checksum values may be generated on-board the device or in a cloud-based storage medium.
  • the checksum values may be generated using one or more processors of the device.
  • the checksum values may be generated using a central processing unit.
  • the checksum values may be stored in one or more memory storage units of the device.
  • the checksum values may be transmitted with aid of a communication unit to an external device or network.
  • the checksum values may be generated off-board the device.
  • Data from a data source, such as sensors or processors, may or may not be stored in a memory before being transmitted. The data from the data sources may be used at an external device or network to generate checksum data.
  • the checksum is derived from information collected on the input device.
  • the checksum may also be derived from the information once it is collected into the forensic report.
  • Identification data may contain information used to authenticate or verify identity of a user.
  • the identification data may contain personal information such as name, date of birth, address, nationality and the like that describe identity of a user.
  • the evidentiary data may be about a state of the user device.
  • the local data of the imaging device may include positional information, such as orientation of the device, geo-location and the like.
  • the local data of the imaging device may include timestamps, such as the time evidentiary data is captured.
  • the local data may be collected from one or more sensors of the user device such as the GPS, IMU, accelerometers, and barometers as described elsewhere herein.
  • the local data may also include Know Your Customer (KYC) Biometric data.
  • KYC Biometric data may verify the signatory of the generated forensic report.
  • the local data about a user device may be obtained from metadata of an image.
  • the metadata may be data automatically attached to a photo.
  • the metadata may contain variable data including technical information about evidentiary data and its capture method, such as settings, capture time, and GPS location information.
  • the metadata is generated by a microprocessor of the device.
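For illustration, device-generated capture metadata might be bundled with a unit of evidentiary data as in the sketch below; the field names are assumptions.

```python
# Illustrative sketch: attach capture metadata generated on the device.
from datetime import datetime, timezone

def with_capture_metadata(payload: bytes, gps: tuple, settings: dict) -> dict:
    """Bundle raw sensor output with technical information about its capture."""
    return {
        "payload": payload,
        "metadata": {
            "capture_time": datetime.now(timezone.utc).isoformat(),
            "gps": {"lat": gps[0], "lon": gps[1]},
            "settings": settings,   # e.g., exposure, sample rate, sensor model
        },
    }
```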
  • the local data of a user device may include operational parameters.
  • the operational parameters may be event-based parameters.
  • One or more processors on-board the user device may be provided that may aid in collecting operational parameters about the user device.
  • the local data of a device may include positional information.
  • positional information may include a latitude, longitude, and/or altitude of the device.
  • the positional information may be expressed as coordinates.
  • the positional information may include an orientation of the device.
  • the positional information may include an orientation of the device with respect to one, two, or three axes (e.g., a yaw axis, pitch axis, and/or roll axis).
  • the positional information may be an attitude of the device.
  • the positional information may be determined relative to an inertial reference frame (e.g., environment, Earth, gravity), and/or a local reference frame.
  • the local data about a user device may be obtained from metadata of evidentiary data.
  • the metadata may be data automatically attached to the evidentiary data.
  • the metadata may contain variable data including technical information about the evidentiary data and its capture method.
  • the metadata is generated by a microprocessor on-board the user device.
  • One or more sensors may be provided that may aid in collecting positional information about the user device or about the user (e.g., phone conversations, social media profiles, Bluetooth connections nearby, etc.).
  • the data may be the historic data collected from one or more emergency events, which can include, for example, repeated violent crimes in an area or a history of flooding.
  • the historic data may all be stored together in a single memory unit or may be distributed over multiple memory units. Data distributed over multiple memory units may or may not be simultaneously accessible or linked.
  • the historic data can be saved to a cloud-based network.
  • the historic data may include data for a single user, or from multiple users.
  • Data from multiple users may all be stored together or may be stored separately from one another.
  • the historic data may include data collected from a single user device or from multiple user devices.
  • the historic data can relate to a type of event or a location of one or more incidents.
  • Data from multiple user devices may all be stored together or may be stored separately from one another.
  • identification related data may be used to authenticate the user identity and the related evidentiary reports generated.
  • the identification data may include the user’s name, an identifier unique to the user, or any personal information about the user (e.g., user address, email, phone number, birthdate, birthplace, website, social security number, account number, gender, race, religion, educational information, health-related information, employment information, family information, marital status, dependents, or any other information related to the user).
  • the personal information about the user may include financial information about the user.
  • financial information about the user may include user payment card information (e.g., credit card, debit card, gift card, discount card, pre-paid card, etc.), user financial account information, routing numbers, balances, amount of debt, credit limits, past financial transactions, or any other type of information.
  • the identification data may pertain to the user’s device. For instance, a unique device identifier may be provided.
  • Device fingerprint data (e.g., information about one or more characteristics of the device) may be provided.
  • Information collected regarding the device’s clock, model number, serial number, the device’s IP address, Bluetooth MAC address, Wi-Fi MAC address, applications running on the device, or any other information relating to the device may be collected.
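For illustration, device characteristics of this kind might be assembled and hashed into a compact fingerprint, as in the sketch below; the chosen fields and the hash function are assumptions.

```python
# Illustrative sketch: hash stable device characteristics into a fingerprint.
import hashlib
import platform
import uuid

def device_fingerprint() -> str:
    characteristics = {
        "model": platform.machine(),
        "os": platform.platform(),
        "mac": f"{uuid.getnode():012x}",   # network interface identifier
    }
    blob = "|".join(f"{k}={v}" for k, v in sorted(characteristics.items()))
    return hashlib.sha256(blob.encode("utf-8")).hexdigest()
```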
  • An authentication system may include one or more user devices that may communicate with one or more external devices, such as devices held by law enforcement.
  • the one or more user devices may be associated with one or more respective users. Data from the one or more user devices may be conveyed to the one or more external devices or entities.
  • data received by a first external device may be the same as data received by a second external device, or the data may be different.
  • a first external device may be or belong to an authentication server system (e.g., a server system configured to provide secure authentication), and/or a second external device may be or belong to one or more third parties (e.g., a school, law enforcement, agency, employer, company, transportation agency, or a medical professional).
  • the network may be a communication network.
  • the communication network(s) may include local area networks (LAN) or wide area networks (WAN), such as the Internet.
  • the communication network(s) may comprise telecommunication network(s) including transmitters, receivers, and various communication channels (e.g., routers) for routing messages in-between.
  • the communication network(s) may be implemented using any known network protocol, including various wired or wireless protocols, such as Ethernet, Universal Serial Bus (USB), FIREWIRE, Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wi-Fi, voice over Internet Protocol (VoIP), WiMAX, or any other suitable communication protocols.
  • blockchain may be used to store and verify evidentiary data.
  • the forensic evidence capture system may collect evidentiary data and then use blockchain to store the data.
  • the forensic evidence capture system may comprise one or more processors, and memory storing instructions that, when executed by the one or more processors, cause the forensic evidence capture system to: generate a form comprising a first data field configured to receive evidentiary data; cause creation of a blockchain corresponding to the first data field and the form, wherein the blockchain is configured to store blockchain entries corresponding to data lineage of the first data field; cause a first blockchain entry to be added to the blockchain, wherein the first blockchain entry corresponds to a second computing device permitted to receive data associated with the first data field and comprises at least one rule associated with the first data field; receive first evidentiary data via the first data field; and transfer the first evidentiary data to the second computing device based on evaluating the first blockchain entry.
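A minimal sketch of hash-linked entries recording the data lineage of a form field follows. The entry fields, event names, and rule format are illustrative assumptions, not the claimed wire format.

```python
# Illustrative sketch: hash-linked blockchain entries for data-field lineage.
import hashlib
import json
import time

def make_entry(previous_hash: str, field_id: str, event: str, detail: dict) -> dict:
    body = {
        "previous_hash": previous_hash,
        "timestamp": time.time(),
        "field_id": field_id,
        "event": event,        # e.g., "permit_device", "data_received", "data_transferred"
        "detail": detail,
    }
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

chain = [make_entry("0" * 64, "incident_description", "permit_device",
                    {"device": "second_computing_device", "rule": "read_only"})]
chain.append(make_entry(chain[-1]["hash"], "incident_description", "data_received",
                        {"checksum": "<checksum value>"}))
```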
  • the forensic capture interface can be triggered via an event or user action, as described below.
  • the systems and methods described herein may also provide user alerts.
  • the system may evaluate nearby threats or dangers, current events that are being reported en masse (such as public threats), and the timeline of evidence collected. Alerts may be discreet.
  • the wearable device may provide a vibrational pattern that indicates to the user that they should be aware, alert, and check with their safety circle/base.
  • the system may further comprise an SOS button with real-time response, wherein activation of said SOS button causes an alert to be sent to one or more of: a user’s emergency contacts, emergency services, or law enforcement.
  • Systems of the present disclosure may be activated when a pre-determined trigger word or phrase, such as “Help Me,” is spoken into a microphone on the device and recognized by the device. Alternatively, more discreet trigger phrases may be used, such as “The train is leaving.” Systems of the present disclosure may be activated upon a triggering event. Triggering events may include, for example, moving or shaking the mobile device in a pre-determined pattern, so that an accelerometer on one of the input devices of the present systems may detect the pattern to initiate the duress trigger.
  • the systems of the present disclosure may also be activated based on vital signs harvested from bracelets, such as a fitness bracelet, a security vest with bio monitoring, or any other vital sign monitoring device. The user may pre-determine the specific vital signs and thresholds that would indicate a triggering event and thus activate the device. Once activated, systems described herein may collect evidentiary data.
  • Systems and methods of the present disclosure may be activated (via activation of the forensic capture interface).
  • the forensic capture interface may be activated, and thus collect evidentiary data from the one or more input devices, as a result of sounds having a certain frequency, pitch, duration, or other identifiable characteristics.
  • the forensic capture interface may be activated by a scream, a yell, an alarm, or a sound of a gunshot. In some embodiments, sudden movements may cause the forensic capture interface to activate.
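As one hedged illustration of sound-based activation, the sketch below (assuming NumPy; the thresholds are arbitrary assumptions) classifies a short audio frame by loudness and dominant frequency.

```python
# Illustrative sketch: activate capture on loud, high-pitched sound (e.g., a scream).
import numpy as np

def dominant_frequency(samples: np.ndarray, sample_rate: int) -> float:
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return float(freqs[int(np.argmax(spectrum))])

def audio_trigger(samples: np.ndarray, sample_rate: int) -> bool:
    """Return True when the frame is both loud and scream-like in pitch."""
    loudness = float(np.sqrt(np.mean(samples ** 2)))      # RMS level, assuming samples in [-1, 1]
    pitch = dominant_frequency(samples, sample_rate)
    return loudness > 0.5 and 1000.0 <= pitch <= 4000.0   # assumed thresholds
```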
  • the forensic capture interface may be triggered based on the location of the device user or the location of nearby individuals.
  • a peer-to-peer system may be used to identify nearby witnesses or other potential victims. For instance, devices of nearby potential witnesses may be detected by the user device, for instance if they are within a particular proximity.
  • the forensic capture interface may track the location of individuals with devices and, based on calculated locations, identify potential witnesses without requiring direct peer-to-peer interaction.
  • potential witnesses may be identified to aid police in collecting witness statements.
  • witness devices may automatically be used to aid in collection of evidence (e.g., video, audio, other sensor data).
  • the forensic capture interface may be activated by user action.
  • User action may include, for example, audio feedback, visual feedback, haptic feedback, biometric feedback, or other input.
  • the forensic capture interface may be triggered to collect evidence based on the mode of activation.
  • an alert mode may provide a public-safety alert for other device users.
  • a FIELD investigation mode may be used to assist law enforcement in investigating an incident or event, such as a natural disaster or crime.
  • a reporting mode may be triggered to collect evidence in order to aid an individual in reporting a crime.
  • an alert mode, a FIELD investigation mode, or a reporting mode of the forensic evidence capture system might be triggered.
  • Any of the alert mode, field investigation mode, or reporting modes may be triggered by, for example, a mechanical trigger, an audible or visual trigger, and a sequence of one or more keys, motion detectors, velocity detectors and other means of quickly triggering the alert.
  • activation of the alert mode might be based on biomedical indicators that allow an alert mode to be triggered in a “stealth mode” such that it is not apparent to others, such as a robber or kidnapper, that an alert mode has been activated.
  • a vibrational pattern may activate to indicate to a user that the alert mode has been activated. In such scenarios, a user would feel the vibrational pattern, but third parties would not notice.
  • Embodiments of the present invention may include peer-to-peer networking. For example, an alert mode may be triggered as a result of an action of a user or other individual.
  • FIG. 3 provides an exemplary embodiment of the systems and methods described herein.
  • the systems described herein may identify an event or incident within a geographical area (300). This may be identified by a first user device (301), which may trigger the forensic capture interface of a second user’s device (302) to activate as well based on the location of the second user. For example, this may cause the second user’s device to begin collecting evidentiary data through the one or more input devices of the forensic capture interface (e.g., by activating the camera or microphone of a second user’s device).
  • Such evidentiary data may be shared with others, such as emergency contact, emergency personnel, law enforcement officers, or agencies.
  • a user device (302) within the geographical area of an incident (300) may communicate to a user device outside of the geographical area (300).
  • the user device (302) may communicate with a user device of a member of the safety circle/emergency contacts, or with emergency personnel (303).
  • various user devices may simultaneously stream evidentiary data (e.g., 304, 303), checksum, and reports to a cloud-based storage medium (305).
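For illustration, whether a second user’s device lies within the geographical area of an incident might be decided with a great-circle distance check, as sketched below; the radius is an assumption.

```python
# Illustrative sketch: geofence check for activating nearby devices.
import math

def distance_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle (haversine) distance between two coordinates, in meters."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_activate_nearby_device(incident: dict, device: dict, radius_m: float = 500.0) -> bool:
    return distance_m(incident["lat"], incident["lon"], device["lat"], device["lon"]) <= radius_m
```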
  • an alert and/or reporting system comprising a processing and communication unit located in a user device and having a processor executing software from a non-transitory medium and a coupled data repository, the processor interfacing to a plurality of sensors, a communication module coupled to the processor and enabled to at least send communications to an Internet network, a global positioning system (GPS) coupled to the processor, determining geographic location of the user device.
  • the processor monitors data from the plurality of sensors, consults status information based on one or both of one or more sensor readings or combinations of sensor readings, and selects and sends according to the one or more sensor readings or combinations of sensor readings, by the communications module, a preprogrammed communication addressed to a particular Internet destination.
  • the plurality of sensors comprises a motion sensor.
  • a motion sensor may include, for example, an accelerometer, gyroscope, or other inertial, triangulation, tactile, proximity or position sensor, such as a global positioning system sensor.
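One hedged sketch of the monitoring-and-reporting loop described above follows; the status rules, payload, and Internet destination are hypothetical assumptions.

```python
# Illustrative sketch: evaluate sensor readings and post a preprogrammed communication.
import json
import urllib.request

ALERT_ENDPOINT = "https://example.invalid/alerts"   # hypothetical Internet destination

def evaluate(readings: dict) -> str | None:
    """Return the name of the preprogrammed communication to send, if any."""
    if readings.get("acceleration_g", 0.0) > 4.0 and readings.get("heart_rate", 0) > 140:
        return "possible_assault"
    if readings.get("free_fall", False):
        return "device_dropped"
    return None

def send_preprogrammed(message: str, location: tuple) -> None:
    body = json.dumps({"message": message, "lat": location[0], "lon": location[1]}).encode()
    req = urllib.request.Request(ALERT_ENDPOINT, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)   # fire-and-forget in this sketch
```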
  • Embodiments of the present invention might be in communication with one or more peripheral triggering devices that are in communication with the user device.
  • peripheral triggering devices might be employed by a user to covertly activate an alert mode of the mobile device.
  • peripheral triggering devices might include smart eyewear.
  • Such smart eyewear might be contact lenses or glasses that display prompts visible only to the user wearing the eyewear.
  • the smart eyewear might then allow the user to communicate an emergency situation with law enforcement or other emergency responders through, for example, eye movement or blinking.
  • the smart eyewear may also trigger the forensic capture interface to gather evidentiary data for generating an evidence report.
  • clothing accessories might be configured as peripheral triggering devices.
  • such clothing accessories might include an alarm wallet.
  • An alarm wallet might be in communication with a user device, for example, by Bluetooth transceiver, radio frequency identification (RFID), or any other wireless communication.
  • an alert might be triggered to cause the forensic capture interface to begin collecting evidentiary data from the one or more input devices for evidence preservation.
  • the systems described herein may analyze evidentiary data collected through the forensic capture interface, such as to determine the gender or other information about a speaker, perpetrator, or a bystander.
  • Activation of said SOS button may be triggered by “trigger words” used by the user.
  • a subset of trigger words may exist, such that one trigger word would activate evidence collection and another activates sending of an SOS alert, for example.
  • information relating to a user’s emergency contacts may be collected and stored within a user device. Such emergency contacts may be described as a “safety circle.” When an alert is sent to a safety circle, it may ping the user device. If certain trigger words are used, an SOS for help may be sent out.
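For illustration, a subset of trigger words might be mapped to distinct actions as in the sketch below; the phrases and action names are assumptions.

```python
# Illustrative sketch: one phrase sends an SOS, another starts evidence collection.
TRIGGER_ACTIONS = {
    "help me": "send_sos",
    "the train is leaving": "start_evidence_collection",
}

def handle_transcript(transcript: str) -> str | None:
    """Return the action associated with the first trigger phrase found, if any."""
    text = transcript.lower()
    for phrase, action in TRIGGER_ACTIONS.items():
        if phrase in text:
            return action
    return None
```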
  • a user may incorporate after-the-fact information about the incident in the first report. For example, a user may select the type of event that occurred, such as property damage, abuse (emotional, physical, verbal); assault (verbal, physical, sexual); harassment (verbal, physical, sexual, or social). Such information may be provided via a user statement.
  • the system may ask a user to provide a user statement in response to question prompts, which may ask about: the type of abuse (physical, sexual, emotional); relationship with perpetrator; name of perpetrator; social media handles of the perpetrator (if known); whether the victim reported to the police; whether the victim went to a hospital for treatment; whether witnesses were present; whether the act constituted a hate crime; whether others were injured; whether evidence of the incident has been collected; and whether DNA evidence might be available.
  • the systems described herein may use biometric data to verify a user.
  • the user is a law enforcement officer.
  • biometrics may be used to verify the identity of the officer.
  • locational or positional data is also used to verify the identity of the user.
  • KYC Biometrics may be used in the case of a law enforcement officer as a means of signing the forensic report.
  • Biometric data may further be used as a means of verifying the law enforcement officer’s identity.
  • such biometrics may be used to verify the identity of a non-law enforcement user, such as a victim of a crime who is submitting a report.
  • the system may collect evidentiary data, analyze this data, and use the data to perform a predictive analysis for public safety.
  • evidentiary data in the form of biometrics may be collected before, during, or after the event.
  • Information collected for this predictive analysis may also include: the timing of events (to track trends); location of events (to triage future emergencies); identify real-time threats (as it happens, via SOS warnings); and hot spots, wherein incidents are grouped together based on correlations detected.
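As an illustrative sketch only (the grid cell size and reporting threshold are assumptions), incidents might be grouped into hot spots by binning their coordinates into grid cells.

```python
# Illustrative sketch: count incidents per coordinate grid cell to find hot spots.
from collections import Counter

def hot_spots(incidents, cell_degrees: float = 0.01, min_count: int = 3) -> dict:
    """incidents: iterable of (lat, lon) tuples -> cells with repeated activity."""
    cells = Counter(
        (round(lat / cell_degrees) * cell_degrees, round(lon / cell_degrees) * cell_degrees)
        for lat, lon in incidents
    )
    return {cell: count for cell, count in cells.items() if count >= min_count}
```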
  • the system may collect evidentiary data for purposes of field analysis or investigation.
  • the system may collect evidentiary data for purposes of assisting in reporting of a crime or offense.
  • Evidentiary data may include, for example, GPS data, video files, image files, audio files, documents, local phone data, a list of identified potential witnesses (e.g., based on Bluetooth connections available and detected by the user device or by location data based on other user devices).
  • the system may collect evidentiary data to confirm the accuracy of information provided by an individual in reporting an incident.
  • Evidence Package for Investigation (EPI) reports compile all data and footage captured once a user says a trigger word to activate (for example) video, audio, and metadata capture, or once the user presses/activates the device, all accepted by the user to provide information.
  • Information can relate to circumstances of the event, such as whether the person was at an event or on a date, based on metadata analysis of a user’s recent camera pictures, recent online communications, and online searches within a time frame surrounding the event.
  • the report may have autopopulated fields filled in with data captured, links to media, or the media itself attached to the report. The report may then be sent to, for example, the user’s emergency contacts, other users, and an appropriate agency or authority, such as a university, management, school, hospital, or insurance company.
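For illustration, an EPI report might be autopopulated from captured data and routed to its recipients as in the sketch below; all field names are assumptions.

```python
# Illustrative sketch: autopopulate an Evidence Package for Investigation (EPI) report.
from datetime import datetime, timezone

def build_epi_report(evidence: dict, checksums: dict, user: dict) -> dict:
    return {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "reporter": user.get("id", "anonymous"),
        "media_links": evidence.get("media_links", []),
        "gps_track": evidence.get("gps_track", []),
        "checksums": checksums,
        "recipients": user.get("emergency_contacts", []) + user.get("agencies", []),
    }
```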
  • an evidence report may include a combination of information (herein called incident information) including audio, photographs, video, forms, text, graphics, scans, detected signals, and electronic documents (e.g., email, word processing, spreadsheets, graphical models, photographs, equipment configuration data, equipment operation event logs) and/or linked data stored on a web-based server.
  • Detected signals may include intercepted remote control signals (e.g., for mechanical and electrical equipment); intercepted communications systems simultaneously operating during the incident such as land line phones, cell phones, pagers, radios, tracking devices, media broadcasting stations, wireless and wired computer network links, and sources of interference with these systems; and measurements (e.g., environmental sensors for temperature, sensors for hazardous conditions, monitors for physical conditions).
  • the evidence report may display all of the evidentiary data collected by the forensic capture interface.
  • the evidence report may provide a subset of the evidentiary data collected by the forensic capture interface.
  • the evidence report may display all of the evidentiary data on a single display screen.
  • the evidence report may display only a subset of the evidentiary data on a display screen.
  • the content of the report may be viewable by a user only, by police only, by members of a safety circle, and/or by third parties given permission by the user.
  • the evidence report may be viewable by the user, by police, by members of the user’s safety circle, or by public officials. The user may select who is allowed access.
  • the systems and methods herein may generate a timeline of events relating to the evidentiary data and evidence reports.
  • Evidentiary data may include biometric information.
  • Biometric information may include fingerprints, swipe patterns, facial recognition, retina scans, DNA analysis, voice recognition, or finger vein patterns.
  • Evidentiary data may include video files or image files with timestamps providing date, location, and time of said evidence capture, along with metadata.
  • encrypted evidentiary data may be stored on a server for FBI-standardized collection protocol, such that the evidentiary data is reflected in big data but anonymized. Examples of the FBI-standardized collection protocols include but are not limited to National Incident-based Reporting System (NIBRS) and/or Uniform Crime Reporting (UCR).
  • Referring to FIG. 4, a block diagram is shown depicting an exemplary machine that includes a computer system 1000 (e.g., a processing or computing system) within which a set of instructions can execute for causing a device to perform or execute any one or more of the aspects and/or methodologies of the present disclosure.
  • the components in FIG. 4 are examples only and do not limit the scope of use or functionality of any hardware, software, embedded logic component, or a combination of two or more such components implementing particular embodiments.
  • Computer system 1000 may include one or more processors 1001, a memory 1003, and a storage 1008 that communicate with each other, and with other components, via a bus 1040.
  • the bus 1040 may also link a display 1032, one or more input devices 1033 (which may, for example, include a keypad, a keyboard, a mouse, a stylus, etc.), one or more output devices 1034, one or more storage devices 1035, and various tangible storage media 1036. All of these elements may interface directly or via one or more interfaces or adaptors to the bus 1040.
  • the various tangible storage media 1036 can interface with the bus 1040 via storage medium interface 1026.
  • Computer system 1000 may have any suitable physical form, including but not limited to one or more integrated circuits (ICs), printed circuit boards (PCBs), mobile handheld devices (such as mobile telephones or PDAs), laptop or notebook computers, distributed computer systems, computing grids, or servers.
  • Computer system 1000 includes one or more processor(s) 1001 (e.g., central processing units (CPUs) or general purpose graphics processing units (GPGPUs)) that carry out functions.
  • processor(s) 1001 optionally contains a cache memory unit 1002 for temporary local storage of instructions, data, or computer addresses.
  • Processor(s) 1001 are configured to assist in execution of computer readable instructions.
  • Computer system 1000 may provide functionality for the components depicted in FIG. 4 as a result of the processor(s) 1001 executing non-transitory, processor-executable instructions embodied in one or more tangible computer-readable storage media, such as memory 1003, storage 1008, storage devices 1035, and/or storage medium 1036.
  • the computer-readable media may store software that implements particular embodiments, and processor(s) 1001 may execute the software.
  • Memory 1003 may read the software from one or more other computer-readable media (such as mass storage device(s) 1035, 1036) or from one or more other sources through a suitable interface, such as network interface 1020.
  • the software may cause processor(s) 1001 to carry out one or more processes or one or more steps of one or more processes described or illustrated herein. Carrying out such processes or steps may include defining data structures stored in memory 1003 and modifying the data structures as directed by the software.
  • the memory 1003 may include various components (e.g., machine readable media) including, but not limited to, a random access memory component (e.g., RAM 1004) (e.g., static RAM (SRAM), dynamic RAM (DRAM), ferroelectric random access memory (FRAM), phase-change random access memory (PRAM), etc.), a read-only memory component (e.g., ROM 1005), and any combinations thereof.
  • ROM 1005 may act to communicate data and instructions unidirectionally to processor(s) 1001
  • RAM 1004 may act to communicate data and instructions bidirectionally with processor(s) 1001.
  • ROM 1005 and RAM 1004 may include any suitable tangible computer-readable media described below.
  • a basic input/output system 1006 (BIOS) including basic routines that help to transfer information between elements within computer system 1000, such as during startup, may be stored in the memory 1003.
  • Fixed storage 1008 is connected bidirectionally to processor(s) 1001, optionally through storage control unit 1007.
  • Fixed storage 1008 provides additional data storage capacity and may also include any suitable tangible computer-readable media described herein.
  • Storage 1008 may be used to store operating system 1009, executable(s) 1010, data 1011, applications 1012 (application programs), and the like.
  • Storage 1008 can also include an optical disk drive, a solid-state memory device (e.g., flash-based systems), or a combination of any of the above.
  • Information in storage 1008 may, in appropriate cases, be incorporated as virtual memory in memory 1003.
  • storage device(s) 1035 may be removably interfaced with computer system 1000 (e.g., via an external port connector (not shown)) via a storage device interface 1025.
  • storage device(s) 1035 and an associated machine-readable medium may provide non-volatile and/or volatile storage of machine-readable instructions, data structures, program modules, and/or other data for the computer system 1000.
  • software may reside, completely or partially, within a machine-readable medium on storage device(s) 1035.
  • software may reside, completely or partially, within processor(s) 1001.
  • Bus 1040 connects a wide variety of subsystems.
  • reference to a bus may encompass one or more digital signal lines serving a common function, where appropriate.
  • Bus 1040 may be any of several types of bus structures including, but not limited to, a memory bus, a memory controller, a peripheral bus, a local bus, and any combinations thereof, using any of a variety of bus architectures.
  • such architectures include an Industry Standard Architecture (ISA) bus, an Enhanced ISA (EISA) bus, a Micro Channel Architecture (MCA) bus, a Video Electronics Standards Association local bus (VLB), a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, an Accelerated Graphics Port (AGP) bus, HyperTransport (HTX) bus, serial advanced technology attachment (SATA) bus, and any combinations thereof.
  • Computer system 1000 may also include an input device 1033.
  • a user of computer system 1000 may enter commands and/or other information into computer system 1000 via input device(s) 1033.
  • Examples of an input device(s) 1033 include, but are not limited to, an alpha-numeric input device (e.g., a keyboard), a pointing device (e.g., a mouse or touchpad), a touchpad, a touch screen, a multi-touch screen, a joystick, a stylus, a gamepad, an audio input device (e.g., a microphone, a voice response system, etc.), an optical scanner, a video or still image capture device (e.g., a camera), and any combinations thereof.
  • the input device is a Kinect, Leap Motion, or the like.
  • Input device(s) 1033 may be interfaced to bus 1040 via any of a variety of input interfaces 1023 (e.g., input interface 1023) including, but not limited to, serial, parallel, game port, USB, FIREWIRE, THUNDERBOLT, or any combination of the above.
  • computer system 1000 when computer system 1000 is connected to network 1030, computer system 1000 may communicate with other devices, specifically mobile devices and enterprise systems, distributed computing systems, cloud storage systems, cloud computing systems, and the like, connected to network 1030. Communications to and from computer system 1000 may be sent through network interface 1020.
  • network interface 1020 may receive incoming communications (such as requests or responses from other devices) in the form of one or more packets (such as Internet Protocol (IP) packets) from network 1030, and computer system 1000 may store the incoming communications in memory 1003 for processing.
  • Computer system 1000 may similarly store outgoing communications (such as requests or responses to other devices) in the form of one or more packets in memory 1003 and communicated to network 1030 from network interface 1020.
  • Processor(s) 1001 may access these communication packets stored in memory 1003 for processing.
  • Examples of the network interface 1020 include, but are not limited to, a network interface card, a modem, and any combination thereof.
  • Examples of a network 1030 or network segment 1030 include, but are not limited to, a distributed computing system, a cloud computing system, a wide area network (WAN) (e.g., the Internet, an enterprise network), a local area network (LAN) (e.g., a network associated with an office, a building, a campus or other relatively small geographic space), a telephone network, a direct connection between two computing devices, a peer-to-peer network, and any combinations thereof.
  • a network, such as network 1030 may employ a wired and/or a wireless mode of communication. In general, any network topology may be used.
  • Information and data can be displayed through a display 1032.
  • Examples of a display 1032 include, but are not limited to, a cathode ray tube (CRT), a liquid crystal display (LCD), a thin film transistor liquid crystal display (TFT-LCD), an organic light-emitting diode (OLED) display such as a passive-matrix OLED (PMOLED) or active-matrix OLED (AMOLED) display, a plasma display, and any combinations thereof.
  • the display 1032 can interface to the processor(s) 1001, memory 1003, and fixed storage 1008, as well as other devices, such as input device(s) 1033, via the bus 1040.
  • the display 1032 is linked to the bus 1040 via a video interface 1022, and transport of data between the display 1032 and the bus 1040 can be controlled via the graphics control 1021.
  • the display is a video projector.
  • the display is a head-mounted display (HMD) such as a VR headset.
  • suitable VR headsets include, by way of non-limiting examples, HTC Vive, Oculus Rift, Samsung Gear VR, Microsoft HoloLens, Razer OSVR, FOVE VR, Zeiss VR One, Avegant Glyph, Freefly VR headset, and the like.
  • the display is a combination of devices such as those disclosed herein.
  • computer system 1000 may include one or more other peripheral output devices 1034 including, but not limited to, an audio speaker, a printer, a storage device, and any combinations thereof.
  • peripheral output devices may be connected to the bus 1040 via an output interface 1024.
  • Examples of an output interface 1024 include, but are not limited to, a serial port, a parallel connection, a USB port, a FIREWIRE port, a THUNDERBOLT port, and any combinations thereof.
  • computer system 1000 may provide functionality as a result of logic hardwired or otherwise embodied in a circuit, which may operate in place of or together with software to execute one or more processes or one or more steps of one or more processes described or illustrated herein.
  • Reference to software in this disclosure may encompass logic, and reference to logic may encompass software.
  • reference to a computer-readable medium may encompass a circuit (such as an IC) storing software for execution, a circuit embodying logic for execution, or both, where appropriate.
  • the present disclosure encompasses any suitable combination of hardware, software, or both.
  • the various logical blocks, modules, and circuits described herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof.
  • a general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
  • a processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • a software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium.
  • the storage medium may be integral to the processor.
  • the processor and the storage medium may reside in an ASIC.
  • the ASIC may reside in a user terminal.
  • the processor and the storage medium may reside as discrete components in a user terminal.
  • suitable computing devices include, by way of non-limiting examples, server computers, desktop computers, laptop computers, notebook computers, sub-notebook computers, netbook computers, netpad computers, set-top computers, media streaming devices, handheld computers, Internet appliances, mobile smartphones, tablet computers, personal digital assistants, video game consoles, and vehicles.
  • Suitable tablet computers include those with booklet, slate, and convertible configurations, known to those of skill in the art.
  • the computing device includes an operating system configured to perform executable instructions.
  • the operating system is, for example, software, including programs and data, which manages the device’s hardware and provides services for execution of applications.
  • suitable server operating systems include, by way of non-limiting examples, FreeBSD, OpenBSD, NetBSD®, Linux, Apple® Mac OS X Server®, Oracle® Solaris®, Windows Server®, and Novell® NetWare®.
  • suitable personal computer operating systems include, by way of non-limiting examples, Microsoft® Windows®, Apple® Mac OS X®, UNIX®, and UNIX-like operating systems such as GNU/Linux®.
  • the operating system is provided by cloud computing.
  • suitable mobile smartphone operating systems include, by way of non-limiting examples, Nokia® Symbian® OS, Apple® iOS®, Research In Motion® BlackBerry OS®, Google® Android®, Microsoft® Windows Phone® OS, Microsoft® Windows Mobile® OS, Linux®, and Palm® WebOS®.
  • suitable media streaming device operating systems include, by way of non-limiting examples, Apple TV®, Roku®, Boxee®, Google TV®, Google Chromecast®, Amazon Fire®, and Samsung® HomeSync®.
  • video game console operating systems include, by way of non-limiting examples, Sony® PS3®, Sony® PS4®, Microsoft® Xbox 360®, Microsoft Xbox One, Nintendo® Wii®, Nintendo® Wii U®, and Ouya®.
  • Non-transitory computer readable storage medium
  • the platforms, systems, media, and methods disclosed herein include one or more non-transitory computer readable storage media encoded with a program including instructions executable by the operating system of an optionally networked computing device.
  • a computer readable storage medium is a tangible component of a computing device.
  • a computer readable storage medium is optionally removable from a computing device.
  • a computer readable storage medium includes, by way of non-limiting examples, CD-ROMs, DVDs, flash memory devices, solid state memory, magnetic disk drives, magnetic tape drives, optical disk drives, distributed computing systems including cloud computing systems and services, and the like.
  • the program and instructions are permanently, substantially permanently, semi-permanently, or non-transitorily encoded on the media.
  • the platforms, systems, media, and methods disclosed herein include at least one computer program, or use of the same.
  • a computer program includes a sequence of instructions, executable by one or more processor(s) of the computing device’s CPU, written to perform a specified task.
  • Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), computing data structures, and the like, that perform particular tasks or implement particular abstract data types.
  • a computer program comprises one sequence of instructions. In some embodiments, a computer program comprises a plurality of sequences of instructions. In some embodiments, a computer program is provided from one location. In other embodiments, a computer program is provided from a plurality of locations. In various embodiments, a computer program includes one or more software modules. In various embodiments, a computer program includes, in part or in whole, one or more web applications, one or more mobile applications, one or more standalone applications, one or more web browser plug-ins, extensions, add-ins, or add-ons, or combinations thereof.
  • a computer program includes a web application.
  • a web application in various embodiments, utilizes one or more software frameworks and one or more database systems.
  • a web application is created upon a software framework such as Microsoft® .NET or Ruby on Rails (RoR).
  • a web application utilizes one or more database systems including, by way of non-limiting examples, relational, nonrelational, object oriented, associative, and XML database systems.
  • suitable relational database systems include, by way of non-limiting examples, Microsoft® SQL Server, mySQLTM, and Oracle®.
  • a web application in various embodiments, is written in one or more versions of one or more languages.
  • a web application may be written in one or more markup languages, presentation definition languages, client-side scripting languages, server-side coding languages, database query languages, or combinations thereof.
  • a web application is written to some extent in a markup language such as Hypertext Markup Language (HTML), Extensible Hypertext Markup Language (XHTML), or extensible Markup Language (XML).
  • a web application is written to some extent in a presentation definition language such as Cascading Style Sheets (CSS).
  • a web application is written to some extent in a client-side scripting language such as Asynchronous Javascript and XML (AJAX), Flash® ActionScript, JavaScript, or Silverlight®.
  • a web application is written to some extent in a server-side coding language such as Active Server Pages (ASP), ColdFusion®, Perl, JavaTM, JavaServer Pages (JSP), Hypertext Preprocessor (PHP), PythonTM, Ruby, Tcl, Smalltalk, WebDNA®, or Groovy.
  • a web application is written to some extent in a database query language such as Structured Query Language (SQL).
  • a web application integrates enterprise server products such as IBM® Lotus Domino®.
  • a web application includes a media player element.
  • a media player element utilizes one or more of many suitable multimedia technologies including, by way of non-limiting examples, Adobe® Flash®, HTML 5, Apple® QuickTime®, Microsoft® Silverlight®, JavaTM, and Unity®.
  • an application provision system comprises one or more databases 1100 accessed by a relational database management system (RDBMS) 1110. Suitable RDBMSs include Firebird, MySQL, PostgreSQL, SQLite, Oracle Database, Microsoft SQL Server, IBM DB2, IBM Informix, SAP Sybase, Teradata, and the like.
  • the application provision system further comprises one or more application servers 1120 (such as Java servers, .NET servers, PHP servers, and the like) and one or more web servers 1130 (such as Apache, IIS, GWS, and the like).
  • the web server(s) optionally expose one or more web services via application programming interfaces (APIs) 1140.
  • an application provision system alternatively has a distributed, cloud-based architecture 1200 and comprises elastically load balanced, auto-scaling web server resources 1210 and application server resources 1220, as well as synchronously replicated databases 1230.
  • one or more systems or components of the present disclosure are implemented as a containerized application (e.g., application container or service containers).
  • the application container provides tooling for applications and batch processing such as web servers with Python or Ruby, JVMs, or even Hadoop or HPC tooling.
  • Application containers are the units that developers move into production or onto a cluster to meet the needs of the business.
  • Methods and systems of the invention will be described with reference to embodiments where container-based virtualization (containers) is used.
  • the methods and systems can be implemented in applications provided by any type of system (e.g., containerized applications, unikernel-adapted applications, operating-system-level virtualization, or machine-level virtualization).
  • a computer program includes a mobile application provided to a mobile computing device.
  • the mobile application is provided to a mobile computing device at the time it is manufactured.
  • the mobile application is provided to a mobile computing device via the computer network described herein.
  • a mobile application is created by techniques known to those of skill in the art using hardware, languages, and development environments known to the art. Those of skill in the art will recognize that mobile applications are written in several languages. Suitable programming languages include, by way of non-limiting examples, C, C++, C#, Objective-C, JavaTM, Javascript, Pascal, Object Pascal, PythonTM, Ruby, VB.NET, WML, and XHTML/HTML with or without CSS, or combinations thereof.
  • Suitable mobile application development environments are available from several sources. Commercially available development environments include, by way of non-limiting examples, Airplay SDK, alcheMo, Appcelerator®, Celsius, Bedrock, Flash Lite, .NET Compact Framework, Rhomobile, and WorkLight Mobile Platform. Other development environments are available without cost including, by way of non-limiting examples, Lazarus, MobiFlex, MoSync, and Phonegap. Also, mobile device manufacturers distribute software developer kits including, by way of non-limiting examples, iPhone and iPad (iOS) SDK, AndroidTM SDK, BlackBerry® SDK, BREW SDK, Palm® OS SDK, Symbian SDK, webOS SDK, and Windows® Mobile SDK.
  • a computer program includes a standalone application, which is a program that is run as an independent computer process, not an add-on to an existing process, e.g., not a plug-in.
  • standalone applications are often compiled.
  • a compiler is a computer program(s) that transforms source code written in a programming language into binary object code such as assembly language or machine code. Suitable compiled programming languages include, by way of non-limiting examples, C, C++, Objective-C, COBOL, Delphi, Eiffel, JavaTM, Lisp, PythonTM, Visual Basic, and VB.NET, or combinations thereof. Compilation is often performed, at least in part, to create an executable program.
  • a computer program includes one or more executable compiled applications.
  • the computer program includes a web browser plug-in (e.g., extension, etc.).
  • a plug-in is one or more software components that add specific functionality to a larger software application. Makers of software applications support plug-ins to enable third-party developers to create abilities which extend an application, to support easily adding new features, and to reduce the size of an application. When supported, plug-ins enable customizing the functionality of a software application. For example, plug-ins are commonly used in web browsers to play video, generate interactivity, scan for viruses, and display particular file types. Those of skill in the art will be familiar with several web browser plug-ins including, Adobe® Flash® Player, Microsoft® Silverlight®, and Apple® QuickTime®.
  • the toolbar comprises one or more web browser extensions, add-ins, or add-ons. In some embodiments, the toolbar comprises one or more explorer bars, tool bands, or desk bands.
  • plug-in frameworks are available that enable development of plug-ins in various programming languages, including, by way of non-limiting examples, C++, Delphi, JavaTM, PHP, PythonTM, and VB.NET, or combinations thereof.
  • Web browsers are software applications, designed for use with network-connected computing devices, for retrieving, presenting, and traversing information resources on the World Wide Web. Suitable web browsers include, by way of non-limiting examples, Microsoft® Internet Explorer®, Mozilla® Firefox®, Google® Chrome, Apple® Safari®, Opera Software® Opera®, and KDE Konqueror. In some embodiments, the web browser is a mobile web browser. Mobile web browsers (also called microbrowsers, mini-browsers, and wireless browsers) are designed for use on mobile computing devices including, by way of non-limiting examples, handheld computers, tablet computers, netbook computers, subnotebook computers, smartphones, music players, personal digital assistants (PDAs), and handheld video game systems.
  • Suitable mobile web browsers include, by way of non-limiting examples, Google® Android® browser, RIM BlackBerry® Browser, Apple® Safari®, Palm® Blazer, Palm® WebOS® Browser, Mozilla® Firefox® for mobile, Microsoft® Internet Explorer® Mobile, Amazon® Kindle® Basic Web, Nokia® Browser, Opera Software® Opera® Mobile, and Sony® PSPTM browser.
  • the platforms, systems, media, and methods disclosed herein include software, server, and/or database modules, or use of the same.
  • software modules are created by techniques known to those of skill in the art using machines, software, and languages known to the art.
  • the software modules disclosed herein are implemented in a multitude of ways.
  • a software module comprises a file, a section of code, a programming object, a programming structure, or combinations thereof.
  • a software module comprises a plurality of files, a plurality of sections of code, a plurality of programming objects, a plurality of programming structures, or combinations thereof.
  • the one or more software modules comprise, by way of non-limiting examples, a web application, a mobile application, and a standalone application.
  • software modules are in one computer program or application. In other embodiments, software modules are in more than one computer program or application. In some embodiments, software modules are hosted on one machine. In other embodiments, software modules are hosted on more than one machine. In further embodiments, software modules are hosted on a distributed computing platform such as a cloud computing platform. In some embodiments, software modules are hosted on one or more machines in one location. In other embodiments, software modules are hosted on one or more machines in more than one location.
  • the platforms, systems, media, and methods disclosed herein include one or more databases, or use of the same.
  • suitable databases include, by way of non-limiting examples, relational databases, nonrelational databases, object oriented databases, object databases, entity-relationship model databases, associative databases, and XML databases. Further non-limiting examples include SQL, PostgreSQL, MySQL, Oracle, DB2, and Sybase.
  • a database is internet-based.
  • a database is web-based.
  • a database is cloud computing-based.
  • a database is a distributed database.
  • a database is based on one or more local computer storage devices.

Abstract

Systems and methods for electronic-based forensic evidence collection and analysis are provided herein.

Description

FORENSIC EVIDENCE COLLECTION SYSTEMS AND METHODS
RELATED APPLICATIONS
[0001] This application claims priority to U.S. Provisional Application Number 63/295,372, filed on December 30, 2021.
BACKGROUND OF THE INVENTION
[0002] Presently law enforcement, the court system, and the public and private sectors rely on outdated methods of forensic evidence collection. Incident reporting as practiced in the prior art has limitations that adversely affect accuracy and completeness. To this end, a need exists for more accurate, more complete, and verified incident reports/digital media.
SUMMARY OF THE INVENTION
[0003] Systems and methods are provided for automated, simplified, and forensically sound evidence reports deriving from the use of a hardware or loT device. Accurate and complete reporting of the facts surrounding an incident is of great social, economic, and judicial importance. The present disclosure provides a method of increasing transparency, improving solvability of violent crimes, and bridging gaps in forensic evidence collection via real-time data capture and the forensic reporting of the captured data. The systems and methods of the present disclosure may further allow for real-time mapping of potential incidents; artificially-intelligence-based analysis of trends and risk mitigation/breaches; authenticated and immutable data and evidence collection; anonymous reporting or reporting via an identifiable token; deep environmental triangulation; and crime deterrence.
[0004] Systems and methods of the present disclosure may aid law enforcement, safety apps, universities, employers, government/municipal entities, and public or private companies in tracking, reporting, and authenticating evidence of incidents where safety was threatened, including incidents or crimes. Current safety platforms and apps are lagging in innovation and integration with IoT for transparency and accountability between parties. Accordingly, a need exists for improved systems and methods for collecting evidentiary data and generating forensically-sound reports using said evidentiary data.
[0005] In one aspect of the invention, a method of evidence collection is provided. The present disclosure provides a method of securing forensic evidence, said method comprising: (a) receiving evidentiary data from an input device comprising a plurality of sensors; (b) generating a checksum of said evidentiary data using a cryptographic hash; (c) storing said evidentiary data and said checksum to a local storage medium; (d) uploading said evidentiary data and said checksum to a cloud-based storage medium; (e) generating a first report using said evidentiary data and said checksum from said local storage medium; (f) streaming said first report to said cloud-based storage medium; (g) generating a second report using said evidentiary data and said checksum in said cloud-based storage; (h) comparing said first report and said second report to ensure matching of said reports when the device is not damaged, and, where the device is damaged, treating the second report as the sole source of the validated incident; and (i) preparing a combined report using matching content from said first report and said second report, thereby generating a forensic copy of said evidentiary data to secure said forensic evidence.
[0006] In some embodiments, said evidentiary data comprises positional information. In some embodiments, said input device comprises one or more of a location sensor, an inertial sensor, altitude sensor, attitude sensors, pressure sensors, or field sensors. In some embodiments, said evidentiary data comprises phone, hardware device, computer, or IoT data. In further embodiments, said phone data comprises one or more of calls made or received on a given day, or over a period of time; text messages sent and received on a given day, over a period of time, or within a specific time frame; calendar events that a person has pending or scheduled; photos taken; videos taken; or browsing history, and other relevant hardware and application data.
[0007] In some embodiments, said cloud-based storage medium encrypts said evidentiary data stored in said first report. In some cases, said evidentiary data comprises Calendar Application Programming Interface (API) data.
[0008] In some cases, said evidentiary data comprises identifiers of said input device. In some cases, the identifiers of the input device comprise carrier information, International Mobile Equipment Identity (IMEI) information of a user, or a phone number. In some cases, evidentiary data comprises location information. In some cases, location information comprises latitude or longitude coordinates. In some cases, location information comprises elevation. In some cases, elevation is translated into an estimate of a building story.
[0009] In some cases, the input device comprises a wearable device, wherein the wearable device comprises an audio receiving module, a location information receiving module, and a video receiving module. In some cases, evidentiary data comprises a submission signed by a local secure element of said input device. In some cases, evidentiary data comprises multimedia data live capture. In some cases, evidentiary data comprises pre-incident image capture. In some cases, evidentiary data comprises post-incident image capture. [0010] In some embodiments, said input device comprises a wearable device, wherein said wearable device comprises an audio receiving module, a video receiving module, and a location information receiving module.
[0011] In some embodiments, the method disclosed herein further comprises encrypting said combined report. In an embodiment, said combined report may be used to report one or more of: traumatic events, workplace hazards, witnessing a crime, personal mental or physical injuries, hate crimes, hate speech, riots, theft, property damage, equipment damage, sexual harassment, sexual assault, aggravated assault, environmental reports, and Occupational Safety and Health Administration violations.
[0012] The present disclosure also provides systems for evidence collection. For example, in an embodiment, the present disclosure provides a system for electronically securing a forensic copy of evidence, said system comprising: (a) a forensic capture interface in operative communication with one or more input devices comprising a plurality of sensors, wherein said plurality of sensors generate evidentiary data; (b) a central processing unit comprising one or more processors operatively coupled to said forensic capture interface, said processors configured to: receive said evidentiary data from said one or more input devices, take a checksum of said evidentiary data using a cryptographic hash, and generate a first forensic report; (c) a local memory operatively coupled to said central processing unit, said local memory storing said evidentiary data, said checksum, and said first forensic report; and (d) a communications module in networked communication and in local communication with said central processing unit, wherein said communications module uploads said evidentiary data, said checksum, and said first report to a cloud-based server, whereby a second forensic report is generated using said evidentiary data and said checksum and compared to said first forensic report to ensure matching, thereby electronically verifying and securing a forensic copy of evidence.
[0013] In some embodiments, said central processing unit receives said evidentiary data from one or more input devices following an activation event. In further embodiments, said activation event comprises one or more of: haptic feedback, voice or sound-activated feedback, biometric feedback, or positional feedback, or a hardware trigger or software trigger. In some embodiments, said forensic capture interface comprises one or more wearable devices. In certain embodiments, said forensic capture interface comprises: a smart watch, a mobile phone, an Internet of Things (IoT) device, a camera, a microphone, an alarm, a panic button, a jewelry item or personal accessory, smart glasses, wearables, fitness bands, smart jewelry, including rings, smart necklaces, smart bracelets, and smart watch bands; smart clothing, smart machines (ATMs), smart cars, or a closed-circuit television (CCTV). In some embodiments, said plurality of sensors comprises one or more of: humidity sensors, temperature sensors, other environmental sensors, radio, lidar, cameras, microphones, biometric sensors, or positional sensors.
[0014] In embodiments of the present disclosure, said forensic capture interface becomes activated by one or more voice-activated trigger words, thus causing said forensic capture interface to transmit said evidentiary data to said central processing unit. In some embodiments, said forensic capture interface becomes activated by sudden noise or sudden movement detected by said plurality of sensors. In certain embodiments, said plurality of sensors comprise one or more of: a location sensor; an inertial sensor selected from the group consisting of: accelerometers, gyroscopes, and inertial measurement units (IMUs); an altitude sensor, an attitude sensor; a barometer; a magnetometer; an electromagnetic sensor, or a humidity sensor.
[0015] In some embodiments, the system described herein further comprises a user input interface operatively connected to said forensic capture interface, wherein said user input interface allows a user to manually activate said forensic capture interface. The forensic capture interface can also become activated by a manual human-initiated trigger, biometrics, haptics, motions, gestures, and/or algorithms, including a combination of the above and additional actions.
[0016] In some cases, the present disclosure provides a method of collecting forensic evidence, comprising: (a) retrieving data from a local device and extracting metadata associated with the data; (b) hashing and streaming said metadata to a cloud, wherein the hashing comprises hashing a local time provided by the local device; (c) receiving the data from the local device and rehashing the metadata associated with the data; (d) comparing said hashed metadata and rehashed metadata; and (e) determining, based on said comparing, whether said data has been altered prior to said streaming.
[0017] In some cases, the local device comprises an input device. In some cases, the input device comprises a smartphone or a smartwatch. In some cases, the input device comprises a wearable device, wherein the wearable device comprises an audio receiving module, a location information receiving module, and a video receiving module. In some cases, the data comprises a submission signed by a local secure element of the input device. In some cases, the data comprises multimedia data live capture. In some cases, the data comprises pre-incident image capture. In some cases, the data comprises post-incident image capture. In some cases, the data comprises location information. In some cases, the data comprises device identifier information.
[0018] In some cases, the present disclosure provides a system for electronically securing forensic evidence, said system comprising: (a) a forensic capture interface in operative communication with one or more input devices comprising one or more sensors, wherein said one or more sensors generate data; (b) a central processing unit comprising one or more processors operatively coupled to said forensic capture interface, said processors configured to: (i) receive said data from said one or more input devices; (ii) generate a hash based on said data; (iii) determine whether said data complies with an authenticity standard; and (iv) generate a report of said data; (c) a local memory operatively coupled to said central processing unit, said local memory storing said data and said hash; and (d) a communications module in networked communication and in local communication with said central processing unit, wherein said communications module uploads said data, said hash, and said report to a cloud-based server.
[0019] In some embodiments, the one or more input devices comprise one or more of: a location sensor, an inertial sensor, altitude sensor, attitude sensors, pressure sensors, or field sensors. In some cases, the data comprises phone data. In some cases, the phone data comprises one or more of: calls made or received on a given day; text messages sent and received on a given day; calendar events that a person has scheduled; dates a person may have had; photos taken; or browsing history. In some cases, the data comprises identifiers of the one or more input devices. In some cases, the identifiers comprise carrier information, International Mobile Equipment Identity (IMEI) information of a user, or a phone number. [0020] In some embodiments, the data comprises location information. In some cases, the location information comprises latitude and longitude coordinates. In some cases, the location information comprises elevation. In some cases, the elevation is translated into an estimate of a building story. In some cases, the one or more input devices comprise a wearable device. In some embodiments, the wearable device comprises an audio receiving module, a location information receiving module, and a video receiving module. In some cases, the data comprises a submission signed by a local secure element of the one or more input devices. In some cases, the data comprises multimedia data live capture. In some cases, the data comprises pre-incident image capture. In some cases, the data comprises post-incident image capture.
INCORPORATION BY REFERENCE
[0021] All publications, patents, and patent applications mentioned in this specification are herein incorporated by reference to the same extent as if each individual publication, patent, or patent application was specifically and individually indicated to be incorporated by reference.
BRIEF DESCRIPTION OF THE DRAWINGS
[0022] A better understanding of the features and advantages of the present invention will be obtained by reference to the following detailed description that sets forth illustrative embodiments, in which the principles of the invention are utilized, and the accompanying drawings of which:
[0023] FIG. 1 shows an example of a system that may be used for evidence collection, in accordance with embodiments of the invention.
[0024] FIG. 2 provides an overview of exemplary methods described herein, including peer-to-peer based network communications.
[0025] FIG. 3 provides an exemplary embodiment of peer-to-peer networking as implemented via systems and methods of the present disclosure.
[0026] FIG. 4 shows a block diagram depicting an exemplary machine that includes a computer system.
[0027] FIG. 5 shows an example of an application provision system.
[0028] FIG. 6 shows an application provision system having a distributed, cloud-based architecture.
[0029] FIG. 7 provides a schematic or screenshot of an exemplary report generated using the systems and methods described herein.
[0030] FIG. 8 provides an overview of how forensically captured data points may be sourced (e.g., with cellular dependency, with smartphone dependency).
[0031] FIG. 9 provides an overview of a safety app for a mobile device.
[0032] FIG. 10 provides information that may be derived from audio and video multimedia data gathered from an input device.
[0033] FIG. 11 provides an overview of types of users who may access systems described herein.
[0034] FIG. 12 provides an overview of Application Programming Interface (API) types compatible with the systems and methods provided herein.
[0035] FIG. 13 provides an exemplary embodiment of how the present systems may evaluate a file for authenticity.
[0036] FIG. 14 provides an exemplary embodiment of how the present systems may deconstruct a file into hash sequences.
[0037] FIG. 15 provides an overview of the cloud store and validation process used for gathering evidence in systems and methods described herein.
DETAILED DESCRIPTION OF THE INVENTION
[0038] While preferable embodiments of the invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous variations, changes, and substitutions will now occur to those skilled in the art without departing from the invention. It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention.
[0039] The invention provides systems and methods for evidentiary collection and evidence report generation. Various aspects of the systems and methods described herein may be applied to any of the particular applications set forth below. It shall be understood that different aspects of the invention can be appreciated individually, collectively or in combination with each other.
[0040] In some aspects of the invention, systems provided herein are provided as an application programming interface (API). Systems provided herein are designed to comply with evidentiary standards, such as the Daubert standard, allowing for legally admissible forensic evidence collection. Digital evidence, such as audio and video, may be collected by sensors and used to provide information about a potential incident or event.
[0041] Accordingly, in one aspect of the invention, a method of evidence collection is provided. The present disclosure provides a method of electronically securing forensic evidence, said method comprising: (a) retrieving, receiving, and interrogating evidentiary data from an input device comprising a plurality of sensors; (b) generating a checksum of said evidentiary data using a cryptographic hash; (c) storing said evidentiary data and said checksum to a local storage medium; (d) uploading said evidentiary data and said checksum to a cloud-based storage medium; (e) generating a first report using said evidentiary data and said checksum from said local storage medium; (f) streaming said first report to said cloud-based storage medium; (g) generating a second report using said evidentiary data and said checksum in said cloud-based storage; (h) comparing said first report and said second report to ensure matching of cryptographic hashes of said reports; and (i) preparing a combined report using matching content from said first report and said second report, thereby electronically securing forensic evidence.
[0042] In another aspect of the invention, a method of electronically securing forensic evidence is provided, the method comprising: (a) receiving evidentiary data from an input device comprising a plurality of sensors; (b) storing said evidentiary data to a local storage medium; (c) generating a first checksum of said evidentiary data using a cryptographic hash; (d) generating a first report using said evidentiary data and said checksum from said local storage medium; (e) uploading said evidentiary data and said first report to a cloud-based storage medium; (f) generating a second checksum using said evidentiary data from said cloud-based storage medium; (g) generating a second report using said evidentiary data and said checksum; (h) streaming said second report to said cloud-based storage medium; (i) comparing said first report and said second report to ensure matching of said first report and said second report; and (j) preparing a combined report using matching content from said first report and said second report, thereby electronically securing forensic evidence. In some cases, the forensic reports include only the evidentiary data. In some cases, the forensic reports include only the checksum values. In some cases, the forensic reports include both the evidentiary data and the checksum values.
[0043] FIG. 1 depicts a system that may aid in electronically securing forensic evidence, in accordance with embodiments of the invention. FIG. 1 provides a method for electronically securing forensic evidence, said method comprising: (a) receiving evidentiary data from an input device comprising a plurality of sensors (100); (b) generating a checksum of said evidentiary data using a cryptographic hash (101); (c) storing said evidentiary data and said checksum to a local storage medium (103); (d) uploading said evidentiary data and said checksum to a cloud-based storage medium (102); (e) generating a first report using said evidentiary data and said checksum from said local storage medium (105); (f) streaming said first report to said cloud-based storage medium (106); (g) generating a second report using said evidentiary data and said checksum in said cloud-based storage (104); (h) comparing said first report and said second report to ensure matching of said reports (107); and (i) preparing a combined report using matching content from said first report and said second report, thereby electronically securing forensic evidence (108).
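For illustration only, the following is a minimal sketch of the workflow of FIG. 1 using Python's standard library. The function names, report fields, and the in-memory stand-ins for local and cloud storage are assumptions made for this example and do not represent the claimed implementation.

```python
# Hedged sketch of the checksum / dual-report workflow; field names are hypothetical.
import hashlib
import json


def checksum(evidentiary_data: bytes) -> str:
    """Generate a checksum of the evidentiary data using a cryptographic hash."""
    return hashlib.sha256(evidentiary_data).hexdigest()


def build_report(evidentiary_data: bytes, digest: str, source: str) -> dict:
    """Assemble a report from the evidentiary data and its checksum."""
    return {"source": source, "sha256": digest, "size": len(evidentiary_data)}


def combine_reports(first: dict, second: dict) -> dict:
    """Prepare a combined report using only the matching content of both reports."""
    if first["sha256"] != second["sha256"]:
        raise ValueError("reports do not match; evidence cannot be validated")
    matching = {k: v for k, v in first.items() if k != "source" and second.get(k) == v}
    return {"combined": matching, "sources": [first["source"], second["source"]]}


# Example run with placeholder sensor data.
data = b"example sensor capture"
digest = checksum(data)                                    # step (b)
local_store = {"data": data, "sha256": digest}             # step (c), local storage
cloud_store = dict(local_store)                            # step (d), cloud upload stand-in
first = build_report(local_store["data"], local_store["sha256"], "local")   # step (e)
second = build_report(cloud_store["data"], cloud_store["sha256"], "cloud")  # step (g)
print(json.dumps(combine_reports(first, second), indent=2))                 # steps (h)-(i)
```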
[0044] The system may be implemented by a user device. The user device may contain a display and an interface for capturing forensic evidence and/or for generating a report based on the evidentiary data captured. The device may include one or more memory storage units, one or more processors, one or more communication interfaces, one or more power sources, and/or one or more sensors.
[0045] The present systems and methods involve collection of forensically captured datapoints. Devices may communicate with the API to provide forensically captured data points. In one exemplary embodiment, a smart device with cellular connection (e.g., an Apple watch with no iPhone dependency) may stream forensically captured datapoints directly to the system cloud. In another exemplary embodiment, a smart device (e.g., an Apple watch with iPhone dependency) may stream data (e.g., GPS information) to an iOS device, which then streams to the cloud as forensically captured datapoints, as depicted in FIG. 8. It should be noted that, although Apple devices are specified in this example, any smart device (e.g., an Android equivalent) may be used.
[0046] FIG. 9 demonstrates sources of forensically captured datapoints. Such datapoints can include location data points, such as latitude/longitude information, which may be refreshed at a rate of less than 1 second, 1-2 seconds, 2-5 seconds, 5-10 seconds, 10-20 seconds, 30-50 seconds, or more. Location data points can also include elevation, which may be measured in feet, yards, meters, or any other metrics. Elevation datapoints can also be translated into stories, such as would be helpful for identifying the occurrence of an event in a building. Data points can also include post-incident information. For example, images taken after an incident may be uploaded to the system cloud. Images can be identified based on image format, creation date, modification date, hash (e.g., SHA-256 hash and ECC hash), plist storage, location, etc. Forensically captured data points can also include identifiers, such as identifiers of the device serving as the source of the streamed data. Identifiers of a cellphone, for example, can include carrier information, phone number, or the IMEI of the user. Submissions of forensically captured datapoints may be signed by local secure compute elements (e.g., TPM, Secure Enclave, Titan M, Knox), which can then be validated and subject to checksum both before uploading to the cloud and after uploading to the cloud.
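As an illustration of the elevation-to-story translation mentioned above, the short sketch below converts an elevation reading into an estimated building story and attaches example device identifiers. The assumed 3.3 m story height, the example coordinates, and the field names are hypothetical choices for this sketch only.

```python
# Hedged sketch: translating an elevation datapoint into an estimated building story.
FLOOR_HEIGHT_M = 3.3  # assumed average story height; not specified by the disclosure


def estimate_story(elevation_m: float, ground_elevation_m: float) -> int:
    """Estimate which building story an elevation reading corresponds to."""
    return max(1, int((elevation_m - ground_elevation_m) // FLOOR_HEIGHT_M) + 1)


datapoint = {
    "latitude": 40.7128,              # example coordinates only
    "longitude": -74.0060,
    "elevation_m": 36.5,
    "story": estimate_story(36.5, ground_elevation_m=10.0),
    "identifiers": {"carrier": "ExampleCarrier", "imei": "000000000000000"},
}
print(datapoint)
```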
[0047] Datapoints can also be derived from multimedia data live capture, such as audio and video streaming. FIG. 10 depicts information that can be derived from this type of data. Forensically captured datapoints derived from live audio capture may include information about date created, date modified, storage on the local device (e.g., plist), or MD5 hash for the file. Audio capture data can also include information on the format recorded (e.g., AAC, MP3). [0048] Video data captured as multimedia data live capture can similarly include such information, as shown in FIG. 10. Such information can include date created, date modified, storage location on the device, or hash information (e.g., SHA-256 hash and ECC hash for a video file). Information can also include frame rate, as measured in frames per second (FPS), created resolution, and format of the recorded video (e.g., MP4, RAW).
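A minimal sketch of extracting this kind of file metadata and a content hash for a captured recording follows, using only Python's standard library. The field names are illustrative; a real capture pipeline would read format, resolution, and frame rate from the recorder itself, which is not shown here.

```python
# Hedged sketch: describe a captured media file with timestamps and a SHA-256 hash.
import hashlib
import os
from datetime import datetime, timezone


def describe_capture(path: str) -> dict:
    stat = os.stat(path)
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "file": os.path.basename(path),
        "format": os.path.splitext(path)[1].lstrip(".").upper(),  # e.g., MP4, AAC
        # creation/change and modification times as reported by the operating system
        "created": datetime.fromtimestamp(stat.st_ctime, timezone.utc).isoformat(),
        "modified": datetime.fromtimestamp(stat.st_mtime, timezone.utc).isoformat(),
        "sha256": digest,
    }


# Example: describe_capture("incident.mp4") -> dict with format, timestamps, and hash.
```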
Endpoints and Authenticity Standards
[0049] Systems and methods provided herein collect information on endpoints. Any point of storage may be considered an “endpoint”. Such data may be held to a validation standard. Such validation standards may serve as a guarantee that endpoint data is authentically from said endpoint when presented as evidence. Any stream of digital information, such as traditional audio/video microphones and cameras, may be compatible with the systems described herein. Additionally, collection over specialized (2.4 GHz: Wi-Fi, Bluetooth) and full-spectrum antennas may provide additional insight into the radio environment surrounding an event. Signal analysis, or radio environment mapping of such data streams, may serve as a validation mechanism for data collected in the present systems. Endpoints may be collected as streamed data in a binary format, for example, as received from a video. In some cases, endpoints are derived from data mapping. For example, endpoints may be based on device occupancy determinations, which serve as a means to validate the data further based on the number of sources from which such data may have arrived. In some cases, endpoints may include mapping based on movements. For example, device occupancy in one location may suddenly change, indicating an emergency (e.g., fire) in that location.
[0050] Endpoints register with the system cloud, such as Amazon Web Services (AWS), using a unique identifier assigned to the endpoint. During registration, a public-private keypair is created to uniquely identify and encrypt traffic from the endpoint to the cloud. Metadata is transmitted over a secure Message Queuing Telemetry Transport (MQTT) channel, and, when appropriate, data is submitted over HTTPS. Metadata and full-content data from endpoints may include additional data channels beyond strictly video and audio. See, e.g., FIG. 9. Examples from IoT include radio mapping data that can be used to estimate with reasonable precision the number of devices in an area, or the relative position over time of such devices. The system’s reporting template can be extended to include the output of additional processing techniques such as a radio map, or wireless ID report. In these cases, the same strong validation of the relevant original streaming metadata is in place, but additional documentation regarding the implementation details and specific processing method used to generate the derivative analysis will be appended to the report. The intention of the inclusion of this additional documentation is to facilitate the third-party validation and courtroom acceptance of these processing methods independently from the core evidence report.
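The registration and metadata-signing pattern described above can be illustrated with a short sketch. It assumes the third-party "cryptography" package; the registration payload, the signature scheme, and the in-memory stand-in for the MQTT/HTTPS transport are hypothetical and only follow the general pattern (unique identifier, keypair, signed metadata), not the platform's actual protocol.

```python
# Hedged sketch of endpoint registration and signed metadata submission.
import hashlib
import json
import time
import uuid

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: a unique identifier and a public-private keypair for the endpoint.
endpoint_id = str(uuid.uuid4())
private_key = ec.generate_private_key(ec.SECP256R1())
public_pem = private_key.public_key().public_bytes(
    serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo
)
registration = {"endpoint_id": endpoint_id, "public_key": public_pem.decode()}

# Metadata message: content hash, wall clock value, and a signature from the endpoint key.
metadata = {
    "endpoint_id": endpoint_id,
    "wall_clock": time.time(),
    "sha256": hashlib.sha256(b"captured chunk").hexdigest(),
}
signature = private_key.sign(
    json.dumps(metadata, sort_keys=True).encode(), ec.ECDSA(hashes.SHA256())
)
message = {"metadata": metadata, "signature": signature.hex()}
# In practice, `message` would be published over a secured MQTT channel
# (e.g., a topic such as f"endpoints/{endpoint_id}/metadata"), which is elided here.
```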
[0051] Systems and methods described herein can further include an auditing strategy based on a “trust, but verify” approach for users, and a zero-trust model for devices. Device data and system integrity are cryptographically guaranteed by the platform and one or more third-party node holders, facilitated by a private blockchain. Additionally, systems described herein may collect metadata for further confirmation of authenticity. Cryptographic validation of data authenticity serves to ensure that streamed data conforms with authenticity standards of the systems described herein. After being collected from a sensor, such as a camera, the system endpoint binaries produce cryptographic hashes of the data as it is stored, or as soon afterward as is practical. These hashes, along with hardware system identifiers, wall clock values, and other anti-spoofing signatures, constitute metadata that is submitted in near real-time to the system cloud.
[0052] In some embodiments, devices of the present system enact local anti-tampering measures 1301. See FIG. 13. For example, devices may include signed firmware, implemented when the software vendor signs the firmware image with a private key, which may be authenticated when accepted by the cloud. The system may further require a Trusted Platform Module (TPM) signature for a video and/or timestamps streamed from a device. As described elsewhere herein, the local anti-tampering measures 1301 may facilitate the generation of a local blockchain for the metadata associated with the video and audio data it collects. Alternatively or additionally, the local time source 1302 of such a device may also be streamed and processed by a file chunk hasher 1303. In some embodiments, the received video feed is sliced into chunks by, for example, time period. For example, the time period is configurable, and can be 5 seconds, 10 seconds, 30 seconds, 1 minute, 2 minutes, 3 minutes, 4 minutes, 5 minutes, etc. In some embodiments, the received video feed is sliced into chunks by, for example, data packet size (not shown in FIG. 13). For example, the data size of each chunk is configurable, and can be 1 Mb, 2 Mb, 3 Mb, 4 Mb, 5 Mb, 6 Mb, 10 Mb, 50 Mb, 100 Mb, 500 Mb, 1000 Mb, 1 Gb, 2 Gb, etc. The received video feed chunks may be stored to file system observer 1307 for further analysis. The file system observer 1307 may transmit the received video data chunks to file chunk hasher 1303. In some embodiments, the file chunk hasher 1303 may hash the metadata associated with received data chunks along with the TPM signature for the video, the TPM signature for the timestamp, the local time source, etc. In some embodiments, the file chunk hasher 1303 may only hash the metadata to save local and cloud storage. In some embodiments, the file chunk hasher 1303 may hash the received data chunks using various techniques, for example, hash algorithms such as SHA-256, ECC hash, MD5, SHA-1, SHA-2, NTLM, and LANMAN. In some embodiments, the hash overlap analyzer 1304 may analyze the hashed data chunks by retrieving the device ID, file name, file creation date, file modification time, starting byte size, and/or hashing algorithm (e.g., SHA-256 hash, ECC hash).
[0053] Overlap between hashes may be determined to validate authenticity of data. For example, if hashes have at least x% overlap, they will comply with the authenticity standards of the present systems. See FIG. 14. In some embodiments, the overlap may be configurable, and may be time-based. For example, the overlap may be 1, 2, 3, 4, 5, or 10 seconds for a 1-minute video chunk. In another example, the overlap may be 1, 2, 3, 4, 5, or 10 minutes for a 1-hour video chunk. In some embodiments, an individual data chunk hash may not have any overlap at the start point of the chunk, which may indicate that this chunk is the beginning of a video. As shown in FIG. 14, the first data chunk hash may denote the beginning of the video, and may only have an end-point overlap with a subsequent data chunk. In some embodiments, referring back to FIG. 13, a VMS implementation observer 1309 may determine whether a file is complete, for example, by inquiring whether there is a subsequent data chunk (e.g., video file). When there is no subsequent data chunk, the system may determine this is the last chunk hash and the video is complete. In some embodiments, each hash chunk may be analyzed, for example, by hash overlap analyzer 1304, regarding the device ID, file name, file creation date, file modification time, starting byte size, and/or hashing algorithm (e.g., SHA-256 hash, ECC hash). In some embodiments, receiving a final hash of a video feed may denote completion of the video feed. In some embodiments, the generated hashes may be provided to the MQTT reporting queue to the system cloud API 1306. In some embodiments, the generated hashes are transmitted in a queue based on the video timestamp. Since the hashed metadata is generally smaller in size compared to the full video, a secured MQTT protocol may be employed to transmit the hash. Other protocols may be selected and utilized to transmit this data.
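A minimal sketch of this chunk hashing with overlap follows. The recording is sliced into chunks, each chunk shares a configurable overlap with its neighbour, and each chunk is hashed with SHA-256 so adjacent hashes can be cross-checked as in FIG. 14. The byte-based chunk size, the 10% overlap in the example, and the record fields are illustrative assumptions, not the claimed parameters.

```python
# Hedged sketch of the file chunk hasher / hash overlap layout described above.
import hashlib


def overlapping_chunk_hashes(data: bytes, chunk_size: int, overlap: int) -> list[dict]:
    """Hash overlapping chunks; the final record has no subsequent overlap."""
    step = chunk_size - overlap
    records = []
    start = 0
    while start < len(data):
        chunk = data[start:start + chunk_size]
        records.append({
            "start_byte": start,
            "length": len(chunk),
            "sha256": hashlib.sha256(chunk).hexdigest(),
        })
        if start + chunk_size >= len(data):
            break  # last chunk of the file: the feed is complete
        start += step
    return records


# Example: ~1 MB chunks with a 10% overlap between adjacent chunks.
chunk_records = overlapping_chunk_hashes(
    b"\x00" * 5_000_000, chunk_size=1_000_000, overlap=100_000
)
```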
[0054] A further aspect of authenticity standards relates to time alignment. Timely submission of metadata contributes to the validation process utilized by the present systems. In some embodiments, metadata generated by an endpoint is paired with additional network and clock metadata from the cloud platform, allowing for confirmation that the data collected has not been modified prior to, or subsequent to, submission. A cloud copy of the metadata is also collected, allowing the system to identify whether data has been cut, is incomplete, or has been deleted prematurely relative to the native age-off criteria. [0055] In some scenarios, timely submission of data may not be possible. In such cases, metadata is submitted on a best-effort basis. To accommodate the nature of mobile and internet-connected data streams, the system validation process is tolerant to temporary delays in transmission, as the local chain of content hashing would have to be interrupted by an attacker in order to circumvent detection. In such scenarios, such content cannot be validated to the same extent. One way to think of this is that the device caches metadata using a local blockchain, which is then validated by the cloud peer when the connection is re-established. If false data is substituted for real data, the entire local hash chain would have to be recalculated. If false data is spoofed in real-time onto the chain, then our anti-spoofing measures can run with full effect. FIG. 15 provides a flow chart depicting such a process. As shown in FIG. 15, the hashed metadata reporting queue received from an endpoint collector (e.g., a security camera, a smartphone, an IoT device, etc.) may be encoded to include cloud time. In some embodiments, the cloud time may indicate a time at which the cloud received the hash. Further, the cloud blockchain table may compose the received hashes into blocks by time sequence. Notably, the blocks may form a private blockchain that is immutable and allows multiple parties to audit. In some embodiments, a third party may hold a node of the cloud blockchain. This may provide additional visibility of the hashed value (which may be written as a block in the blockchain).
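One way to picture the offline caching described above is as a small local hash chain. The sketch below, using only the standard library, shows how substituting any cached entry would force recalculating every subsequent hash. The block fields are illustrative and the device signature layer is omitted; this is not the platform's blockchain implementation.

```python
# Hedged sketch of a local hash chain used to cache metadata while offline.
import hashlib
import json
import time


def add_block(chain: list[dict], metadata: dict) -> list[dict]:
    """Append a block whose hash covers the metadata and the previous block's hash."""
    prev_hash = chain[-1]["block_hash"] if chain else "0" * 64
    body = {"metadata": metadata, "prev_hash": prev_hash, "local_time": time.time()}
    block_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "block_hash": block_hash})
    return chain


def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash; substituting any entry breaks all later links."""
    prev = "0" * 64
    for block in chain:
        body = {k: block[k] for k in ("metadata", "prev_hash", "local_time")}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev_hash"] != prev or recomputed != block["block_hash"]:
            return False
        prev = block["block_hash"]
    return True


chain: list[dict] = []
add_block(chain, {"file": "clip-0001.mp4", "sha256": "0" * 64})  # placeholder metadata
add_block(chain, {"file": "clip-0002.mp4", "sha256": "1" * 64})
assert verify_chain(chain)
```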
[0056] When the original full-content copy of the video is received at the platform, the system presented herein may identify the matching block range, i.e., the videos that match the cloud copy of the hash. In some embodiments, the platform may utilize a file chunk hasher to rehash the received full-content copy. This rehash may be compared with the cloud hash copy to ensure authenticity and thereby verify the video. This verification process may evaluate discrepancies between the rehashed copy and the cloud copy. In some embodiments, the hash technique used in hashing the metadata and in rehashing the full-content copy may be the same, to ensure that the same hash value is generated when the content is the same. If the current full-content copy (i.e., the evidence intended to be presented) has not been tampered with, or otherwise compromised and/or altered, then the cloud copy hash should match the rehash value. Various measures may aid in the validation process. In some embodiments, a time sanity measure may be utilized. For example, the time embedded in the video (i.e., the timestamp on the video, or creation time) should be prior to a modification time (i.e., metadata associated with the recorded time). The modification time should be prior to the submission time (i.e., metadata associated with the transmission time). The submission time should be prior to the cloud receipt time. In short, those timestamps should be monotonically increasing. Any discrepancy from the above-listed behaviors may indicate a tampered video. In some embodiments, when there is no video captured, or when there is no new data stored on storage, the metadata may still be hashed and streamed to the cloud. This may ensure that no spoofer may take advantage of an endpoint device idle time period. In some embodiments, the known manipulation techniques (e.g., deepfakes, visual timestamp manipulation) are taken into consideration, and the system presented herein may provide anti-tamper measures to prevent video from being tampered with using those techniques. In some embodiments, the verification process may utilize the TPM signature of the local device (i.e., endpoint device). For example, the TPM signature (i.e., a piece of metadata associated with the video) may be hashed and transmitted to the cloud. The later rehashed copy may also include the TPM signature, and the system presented herein may compare the hash values between the two copies.
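The two checks described in this paragraph can be expressed compactly. The sketch below assumes hypothetical field names and SHA-256 as the shared hash function for both the cloud-held hash and the rehash of the presented full-content copy; it is an illustration of the checks, not the platform's verifier.

```python
# Hedged sketch of the rehash comparison and the monotonic time sanity check.
import hashlib


def rehash_matches(full_content: bytes, cloud_hash: str) -> bool:
    """The rehash of the presented copy must equal the hash held in the cloud."""
    return hashlib.sha256(full_content).hexdigest() == cloud_hash


def time_sane(creation: float, modification: float,
              submission: float, cloud_receipt: float) -> bool:
    """Timestamps must increase monotonically; any inversion suggests tampering."""
    return creation <= modification <= submission <= cloud_receipt


# Example with placeholder epoch timestamps.
assert time_sane(1_672_000_000, 1_672_000_030, 1_672_000_031, 1_672_000_033)
```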
[0057] The timely retrieval of potential evidence is important to investigators, both when attempting to solve an active case and to collect evidence before it may be accidentally deleted or aged off. For this reason, while it may be infeasible, both from a technical bandwidth limitation and from a storage cost perspective, to collect all full-content data, the platform provided herein will try to retrieve full-content data or previews of this data immediately upon notification of a potential incident (an event). Events can come from a variety of sources, or “triggers”. In the API-as-a-service offering, there is an event endpoint to which trigger criteria can be submitted. When deployed to a mobile phone, for example, event triggers could come from manually submitted user events, such as from a mobile app. When deployed to a fixed camera installation, triggers could come from the submission of a police report, or an internal company reporting process.
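By way of illustration, a manually submitted user event could reach such an event endpoint as a simple HTTPS request. The sketch below assumes the third-party requests library; the URL and payload schema are placeholders, not the platform's actual API.

```python
# Hedged sketch of submitting an event trigger to a hypothetical event endpoint.
import requests

trigger = {
    "source": "mobile_app",            # e.g., a manually submitted user event
    "type": "manual_report",
    "latitude": 40.7128,               # example coordinates only
    "longitude": -74.0060,
    "occurred_at": "2022-12-30T21:15:00Z",
}
response = requests.post("https://api.example.com/v1/events", json=trigger, timeout=10)
response.raise_for_status()
print(response.json())
```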
[0058] By responding to event triggers automatically, the present systems minimize the risk of full-content data being lost due to retention expiry or other mishandling.
[0059] Another important source of event triggers is when an investigator enters a report request in the system’s portal. In this case, where data has not been previously retrieved as a result of an event trigger, the data will be retrieved when requested as part of the report generation process.
[0060] While the archival of the full-content and metadata of an incident is an important feature of the system platform, the platform does not retain all the content for which it has metadata. It is simply not feasible from a technical (battery, network, storage) cost or privacy perspective to store all content such as video which may never be used and should primarily remain in the custody of the entity that owns the data. [0061] During the investigation of an incident, it may be advantageous to review content in either a compressed or truncated form. Such “summary data” is derived from full-content data. For instance, cameras from the area surrounding an incident may provide important context for the incident, either during or in the period leading up to or after an incident. Compressed resolution or lower framerate previews of the content are needed in a timely manner to determine if the content is relevant to an incident. The system platform, depending on the capabilities and/or limitations of the endpoint, can facilitate the automated retrieval and presentation of this summary content. Additionally, in response to an event trigger or a report generation request, the system can retrieve and archive full-content data. Summary data and full-content data retrieval are triggered through an MQTT job and uploaded via an HTTPS session.
[0062] Upon receipt from endpoints, metadata is securely stored in the system cloud in two forms. First, the data is written to a document-based database and indexed for quick retrieval. Second, the data is time-bundled with other data matching a throughput-optimized time window and submitted to a Hyperledger Fabric private blockchain node. This node is then synchronized with a third-party node held by an independently contracted auditing entity. Validating any full-content data, in any digital format, is achieved through the strong reconciliation between the proposed full-content data copy, the metadata held in the system’s cloud platform, and the integrity of the blockchain. In the event that full-content data fails validation, the manner of failure can provide unique insight. A common example would be that missing data, either before, in the middle of, or after an incident could be flagged to indicate that either the full-content copy presented was spoofed, spliced, or incomplete.
[0063] The report based on the evidentiary data captured may include, for example, video files, audio files, biometric information or relevant collected data regarding the victim, and information on potential witnesses or devices detected nearby. A report generator of the system assembles full-content data and metadata from the system’s archive and blockchain and presents this data in standardized formats that can be customized based on the customer and intended use case. For instance, the template for regulatory track and trace compliance differs from the police and courtroom evidence submission use cases. Every formatted report includes a printable barcode, which encodes a unique identifier and web URL directing the user to the digital version of the report. This bridges the gap between paper reporting requirements and digital evidence validation.
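As a simple illustration of pairing a formatted report with a scannable code that encodes its unique identifier and the URL of its digital version, the sketch below uses the third-party qrcode package as a stand-in for the printable barcode mentioned above; the URL pattern and file naming are hypothetical.

```python
# Hedged sketch: stamp a report with a unique identifier and a scannable code.
import uuid

import qrcode  # third-party package; a QR code stands in for the printable barcode

report_id = str(uuid.uuid4())
report_url = f"https://reports.example.com/validate/{report_id}"  # placeholder URL
qrcode.make(report_url).save(f"report-{report_id}.png")  # printable code for the paper copy
```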
[0064] The integrity of the systems’ metadata archive is backed by a private blockchain, where third-party node holders can both back up and guarantee the integrity of the blockchain, even in the event of an unfavorable security event in the system cloud. Industry-standard security measures are utilized in the implementation and operation of the system cloud, but this additional layer of security strengthens the courtroom admissibility case for digital evidence and is how the system platform exceeds current digital evidence immutability practices.
[0065] The systems provided herein validate that the data presented as evidence is authentic as it was submitted. This means that data and metadata such as timestamps have not been tampered with after the point at which metadata was submitted to the system platform. This is an important improvement over current evidentiary collection methods. Authentic data, however, is not necessarily genuine. The diverse nature of uncontrolled or customer-controlled data sources allows for the, albeit unlikely, possibility of spoofed data submission. Inline CCTV loops, for example, could produce misleading data, although it could not be further tampered with after submission.
[0066] While this risk cannot be fully mitigated, there are statistical and multi-channel methods that can effectively harden system collection techniques against spoofing. The system Source Development Kit (SDK) provides developer functions to submit anti-spoofing data such as hardware token signatures as an additional collection channel. This information is treated in the same class as all system metadata and is also submitted to the metadata endpoint API.
[0067] In the API systems described herein, source development kits may further serve to validate data streams collected as forensically captured datapoints. A source development kit and pre-compiled binaries are used as an internal reference when integrating with a new endpoint, which could be a single camera, a video management system (VMS), or another live data aggregation point. A variant of this kit, binaries, and other sample scripts will be provided to external developers who plan to add support for additional endpoint devices or VMSes to the ForceField platform. This kit and scripts include functions to securely register, transmit metadata, and respond to summary data and full-content data requests from the ForceField cloud.
[0068] Both metadata producers and data consumers may wish to utilize the system platform for A/V, IoT, or other streaming validation purposes that go beyond the use cases supported by the user portal. In such instances, an API-as-a-Service subscription model is supported. ForceField maintains customer-facing APIs that support the independent submission of metadata and/or content for inclusion on the system’s private blockchain and archive. The pricing of this offering is dependent on the capabilities requested, such as content retention windows. API types are outlined in FIG. 12. API types include Metadata (for the submission of metadata), Event Submission (for submission of summary data or full-content data), Reporting (for report and validation requests), and Derivative Processing (for independent requests for derivative processing techniques, such as radio mapping or device crowd analysis).
[0069] Methods of forensic evidence capture and reporting, such as those performed using the systems described herein, may include prescribing guidance on the implementation and certification of devices interacting with (e.g., streaming data to) the systems. For example, such guidance may specify criteria for data quality, submission timeliness, and other factors such as the anti-spoofing mechanisms supported by the device that provides the streamed content.
Users and Uses of the Present Systems and Methods
[0070] Systems provided herein may be utilized by various classes of users, such as those provided in FIG. 11. A data owner can include, for example, a company that owns and operates a camera. Such a user would have permission to view any data that they have submitted if it has been retained by the system platform. Investigators in the B2B case have permission to query and use the report generator and validator for derivations of their data. Law enforcement investigators can query data from any geographic areas within their designated jurisdiction. In some cases, regulators may share this access. Investigator oversight is provided by auditors, which can include, for example, an internal affairs division of law enforcement. Another class of system user includes legal inquiry users. Such users include legal teams desiring confirmation of a full-content data copy presented as evidence, or regulators tasked with the verification of a report. An additional type of user includes platform administrators, such as engineers or administrative personnel of the platform, tasked with user or operation support responsibilities.
[0071] The platform systems may comprise memory that maintains an audit log of customer user activity. Generated reports and system queries are tracked to provide a record to oversight-class users. Oversight users are required for law enforcement usage of the ForceField platform, and auditors are required to regularly review all system usage.
[0072] In some embodiments of the presently disclosed systems, the system comprises a Reporting Portal. A Reporting Portal, or user portal, serves a variety of purposes. One such purpose is to facilitate the generation and presentation of incident reports containing full-content data and validation metadata. Another purpose of the user portal is to assist investigators in identifying data of interest for either an active or retrospective time period and geographic region. A further purpose is to present and revalidate copies of system-signed incident reports or full-content data copies.
[0073] In an embodiment, the systems and methods disclosed herein may provide a means for preventing crime, reporting public emergencies, or alerting laypersons and/or officials of the existence of an emergency event or other incident. Other embodiments may be directed towards assisting law enforcement or other investigating individuals in forensically collecting field data and securely storing and processing that data. In such embodiments, the combined report may be a forensic evidence report, an affidavit, or another useful tool for investigating incidents. Some embodiments of the present disclosure are directed towards helping victims and witnesses of an incident, or even towards directly supplementing a police report. In an embodiment, said combined report may be used to report one or more of: incidents, crimes, accidents, injuries, theft, property damage, equipment damage, sexual harassment, sexual assault, aggravated assault, environmental reports, or Occupational Safety and Health Administration violations. In some embodiments, the combined report can be used for field investigations by public/private safety organizations and/or law enforcement agencies.
[0074] FIG. 2 outlines a system or method as described herein. In an embodiment, the system described herein is implemented via an application on a tablet or other device. The application causes the forensic capture interface to scan and collect data (200). Evidence is collected (201), authenticated in the cloud-based storage medium (204), and a FIELD report is created (205). Concurrently or independently, the application sends a notification to a safety circle (202). A “forcefield” is activated (203), and the application may further provide the user with post-incident resources as necessary (208). After the forcefield is deactivated, the system may provide a user with trauma and legal references (207).
[0075] In some embodiments, the method disclosed herein further comprises encrypting said combined report. In some embodiments, said cloud-based storage medium encrypts said evidentiary data stored in said first report. In some embodiments, blockchain may be used to store and verify evidentiary data and/or cryptographic hashes. For example, the forensic evidence capture system may collect evidentiary data and/or cryptographic hashes and then use blockchain to store the data. In some embodiments, the forensic evidence capture system may comprise one or more processors, and memory storing instructions that, when executed by the one or more processors, cause the forensic evidence capture system to: generate a form comprising a first data field configured to receive evidentiary data; cause creation of a blockchain corresponding to the first data field and the form, wherein the blockchain is configured to store blockchain entries corresponding to data lineage of the first data field; cause a first blockchain entry to be added to the blockchain, wherein the first blockchain entry corresponds to a second computing device permitted to receive data associated with the first data field and comprises at least one rule associated with the first data field; receive first evidentiary data via the first data field; and transfer the first evidentiary data to the second computing device based on evaluating the first blockchain entry.
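As a non-limiting illustration of the data-lineage blockchain described in this paragraph, the following Python sketch models a toy hash-chained ledger in which a first entry records the computing device permitted to receive the field's data and a rule that is evaluated before transfer; the entry fields, rule format, and device names are assumptions, not the claimed implementation.

```python
# Toy hash-chained ledger standing in for the private blockchain described above.
# Entry fields, the rule format, and device names are illustrative assumptions.
import hashlib
import json
import time

class LineageChain:
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def add_entry(self, data: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"data": data, "prev_hash": prev_hash, "timestamp": time.time()}
        payload = {k: entry[k] for k in ("data", "prev_hash", "timestamp")}
        entry["hash"] = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        return entry

chain = LineageChain()
# First entry: which computing device may receive the field's data, and under what rule.
permission = chain.add_entry({"field": "incident_description",
                              "recipient": "device-B",
                              "rule": "recipient_must_match"})
chain.add_entry({"field": "incident_description", "value": "witness statement ..."})

def may_transfer(recipient: str) -> bool:
    # Evaluate the first blockchain entry before transferring the field's data.
    return permission["data"]["recipient"] == recipient

print(may_transfer("device-B"))  # True
```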
[0076] In some embodiments, the present disclosure provides systems for evidence collection. For example, in an embodiment, the present disclosure provides a system for electronically securing forensic evidence, said system comprising: (a) a forensic capture interface in operative communication with one or more input devices comprising a plurality of sensors, wherein said plurality of sensors generate evidentiary data; (b) a central processing unit comprising one or more processors operatively coupled to said forensic capture interface, said processors configured to: receive said evidentiary data from said one or more input devices, generate a checksum of said evidentiary data using a cryptographic hash, and generate a first forensic report; (c) a local memory operatively coupled to said central processing unit, said local memory storing said evidentiary data, said checksum, and said first forensic report; and (d) a communications module in networked communication and in local communication with said central processing unit, wherein said communications module uploads said evidentiary data, said checksum, and said first forensic report to a cloud-based server, whereby a second forensic report is generated using said evidentiary data and said checksum and compared to said first forensic report to ensure matching, thereby electronically securing forensic evidence.
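A minimal Python sketch of the dual-report check described above follows; it assumes SHA-256 as the cryptographic hash and a simplified report structure, both of which are illustrative rather than limiting.

```python
# Minimal sketch of the local/cloud dual-report comparison. SHA-256 and the
# report fields are assumptions for illustration, not a required implementation.
import hashlib

def checksum(evidentiary_data: bytes) -> str:
    return hashlib.sha256(evidentiary_data).hexdigest()

def build_report(evidentiary_data: bytes, digest: str, source: str) -> dict:
    return {"source": source, "checksum": digest, "length": len(evidentiary_data)}

data = b"raw sensor capture ..."
digest = checksum(data)

first_report = build_report(data, digest, source="local")   # generated on-device
second_report = build_report(data, digest, source="cloud")  # regenerated server-side

def reports_match(a: dict, b: dict) -> bool:
    # Matching checksums and lengths indicate the evidence was not altered in transit.
    return a["checksum"] == b["checksum"] and a["length"] == b["length"]

print(reports_match(first_report, second_report))  # True
```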
[0077] In some embodiments, said central processing unit receives said evidentiary data from the one or more input devices following activation of the forensic capture interface due to a triggering event. In some embodiments, the input device comprises a wearable device, wherein said wearable device comprises an audio receiving module, a location information receiving module, and a video receiving module. In some embodiments, said evidentiary data comprises positional information and said input device comprises one or more of: a location sensor, an inertial sensor, an altitude sensor, an attitude sensor, a pressure sensor, or a field sensor. Said evidentiary data may also comprise cellular data. For example, cellular data may comprise one or more of: calls made or received on a given day or within a specific time frame; text messages sent and/or received on a given day or within a specific time frame; calendar events that a person has scheduled or pending; photos taken; videos taken; browsing history; or other related or relevant hardware and/or software data. [0078] In embodiments of the present disclosure, said forensic capture interface becomes activated (i.e., begins receiving data from the input devices) by one or more trigger words, thus causing said forensic capture interface to transmit said evidentiary data to said central processing unit. In some embodiments, said forensic capture interface becomes activated by a sudden noise or sudden movement detected by said plurality of sensors. In some embodiments, said forensic capture interface may be activated by the user falling, as detected by a defined accelerometer output, an audio sensor, and/or a visual sensor.
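The following Python sketch illustrates, without limitation, two of the activation triggers just described: a trigger phrase heard in a transcript, and a fall inferred from accelerometer output; the 2.5 g threshold and the phrase set are illustrative assumptions only (the example phrases are taken from elsewhere in this disclosure).

```python
# Illustrative sketch of two activation triggers: a trigger phrase and a fall
# inferred from accelerometer magnitude. The 2.5 g threshold is an assumption.
import math

FALL_THRESHOLD_G = 2.5                                  # assumed impact threshold
TRIGGER_PHRASES = {"help me", "the train is leaving"}   # example phrases from this disclosure

def is_fall(ax: float, ay: float, az: float) -> bool:
    """Return True when the acceleration magnitude suggests a sudden impact."""
    magnitude_g = math.sqrt(ax * ax + ay * ay + az * az) / 9.81
    return magnitude_g > FALL_THRESHOLD_G

def should_activate(transcript: str, accel_sample: tuple[float, float, float]) -> bool:
    heard_trigger = any(p in transcript.lower() for p in TRIGGER_PHRASES)
    return heard_trigger or is_fall(*accel_sample)

print(should_activate("please help me", (0.1, 0.2, 9.8)))   # True (trigger phrase)
print(should_activate("all quiet", (0.0, 0.0, 30.0)))        # True (fall-like impact)
```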
[0079] In further embodiments, the triggering event comprises one or more of: haptic feedback, voice or sound-activated feedback, biometric feedback, or positional feedback. In certain embodiments, said forensic capture interface comprises: a smart watch, a mobile phone, an Internet of Things (IoT) device, a camera, an alarm, a panic button, a jewelry item, smart glasses, wearables, fitness bands, smart rings, smart watch bands, smart clothing, smart machines (e.g., automated teller machines (ATMs)), smart cars, or a closed-circuit television (CCTV). In some embodiments, said plurality of sensors comprise one or more of: humidity sensors, temperature sensors, cameras, microphones, biometric sensors, or positional sensors.
[0080] In some embodiments, said forensic capture interface comprises one or more wearable devices. Wearable devices may include: glasses, watches, fitness bands, necklaces, rings, bracelets, earrings, accessories, smart clothing, and smart accessories (e.g., smartwatches, smart glasses, Fitbit, etc.). In certain embodiments, said plurality of sensors comprise one or more of: a location sensor; an inertial sensor selected from the group consisting of accelerometers, gyroscopes, and inertial measurement units (IMUs); an altitude sensor; an attitude sensor; a barometer; a magnetometer; an electromagnetic sensor; or a humidity sensor. In some embodiments, the system described herein further comprises a user input interface operatively connected to said forensic capture interface, wherein said user input interface allows a user to manually activate said forensic capture interface.
User Devices
[0081] The systems and methods described herein may be provided via an application for use with a user device. The various types of user devices may include, but are not limited to, a handheld device, a wearable device, a mobile device, a tablet device, a laptop device, a desktop device, a computing device, a telecommunication device, a media player, a navigation device, a game console, a television, a remote control, or a combination of any two or more of these data processing devices or other data processing devices. In some embodiments, the user device includes one or more, two or more, three or more, four or more, five or more, six or more, seven or more, eight or more, nine or more, or ten or more of: a handheld device, a wearable device, a mobile device, a tablet device, a laptop device, a desktop device, a computing device, a telecommunication device, a media player, a navigation device, a game console, a television, a remote control, or other data processing devices.
[0082] The application may allow for real-time, instant, and automatic recordation of video and audio; scanning of areas for Bluetooth, Wi-Fi, and hardcoded device IDs; passive recordation of Wi-Fi and Bluetooth signals received by the user's device at all times; active recordation via audio and video capture; capturing of information on nearby beacons and smart devices; manual manipulation; voice activation; and other features that aid in crime reporting, investigation, and prevention. In some embodiments, the application may be push-activated, voice-activated, or biometrically activated. The application may allow a user to broadcast an incident to family, friends, emergency contacts, law enforcement agencies, or whichever private or public investigatory authority has jurisdiction over the investigation. The app may allow the user to report in an anonymous or identified manner. The application may also store evidence packages and data via a cloud-based server. [0083] In some embodiments, the systems and methods described herein may be implemented on existing devices, such as a smartphone or smartwatch, via an Application Programming Interface (API). An API may be used to collect evidence using voice activation as a trigger and to automatically generate an evidence report from existing apps and devices. In other embodiments, the systems and methods described herein may be implemented via a full-stack application. For example, the application may be used standalone or synced with other personal user devices, such as a smartphone or Fitbit. In some embodiments, the user device may be a mobile device (e.g., smartphone, tablet, pager, personal digital assistant (PDA)), a computer (e.g., laptop computer, desktop computer, server), or a wearable device (e.g., a smartwatch). The user device may be portable. The user device may be handheld. The user device may be a network device capable of connecting to a network, such as a local area network (LAN), a wide area network (WAN) such as the Internet, a telecommunications network, a data network, or any other type of network. In some embodiments, the user device comprises a wearable device comprising a biosensor, a motion sensor unit, a location sensor, or a haptic sensor, wherein the wearable device is wirelessly connected with a mobile device. In some embodiments, such a wearable device comprises a forensic capture interface coupled to a communication unit that transmits data collected by the forensic capture interface to a central processing unit for generating a checksum, and then transmits the checksum and the data collected by the forensic capture interface to a network. [0084] The user device may comprise memory storage units which may comprise non-transitory computer readable media comprising code, logic, or instructions for performing one or more steps. The user device may comprise one or more processors capable of executing one or more steps, for instance in accordance with the non-transitory computer readable media. The user device may comprise a display showing a graphical user interface. The user device may be capable of accepting inputs via a recipient interactive device. Examples of such recipient interactive devices may include a keyboard, button, mouse, touchscreen, touchpad, joystick, trackball, camera, microphone, motion sensor, heat sensor, inertial sensor, or any other type of recipient interactive device.
The user device may be capable of executing software or applications provided by one or more evidence collection systems.
[0085] The user device may be an electronic device capable of collecting evidentiary data through one or more input devices comprising sensors. The user device may be a mobile device (e.g., smartphone, tablet, pager, personal digital assistant (PDA)), a computer (e.g., laptop computer, desktop computer, server), or any other type of device. The user device may optionally be portable. The user device may be handheld. The user device may be a wearable device. In some embodiments, the user device comprises a smart watch, smart jewelry, smart clothing, or the like. [0086] The user device may be a network device capable of connecting to a network, such as a local area network (LAN), a wide area network (WAN) such as the Internet, a telecommunications network, a data network, or any other type of network. The user device may be capable of direct or indirect wireless communications. The user device may be capable of peer-to-peer (P2P) communications and/or communications with cloud-based infrastructure.
[0087] The user device may include a display. The display may include a screen, such as a liquid crystal display (LCD) screen, light-emitting diode (LED) screen, organic light-emitting diode (OLED) screen, plasma screen, electronic ink (e-ink) screen, touchscreen, or any other type of screen or display. The display may or may not accept user input. The display may show a graphical user interface. The graphical user interface may be part of a browser, software, or application that may aid the user in generating a report using the device.
[0088] The user device may be capable of accepting inputs via a user interactive device. Examples of such user interactive devices may include a keyboard, button, mouse, touchscreen, touchpad, joystick, trackball, camera, microphone, motion sensor, heat sensor, inertial sensor, or any other type of user interactive device. The user device may comprise one or more memory storage units which may comprise non-transitory computer readable media comprising code, logic, or instructions for performing one or more steps. The user device may comprise one or more processors capable of executing one or more steps, for instance in accordance with the non-transitory computer readable media. The one or more memory storage units may store one or more software applications or commands relating to the software applications. The one or more processors may, individually or collectively, execute steps of the software application.
[0089] A communication unit may be provided on the device. The communication unit may allow the user device to communicate with an external device. The external device may be, for example, a server or a cloud-based infrastructure. The communications may include communications over a network or a direct communication. The communication unit may permit wireless or wired communications. Examples of wireless communications may include, but are not limited to, Wi-Fi, 3G, 4G, 5G, LTE, radiofrequency, Bluetooth, infrared, or any other type of communications.
[0090] The user device may comprise an imaging sensor that serves as an input device. The imaging input device may be on-board the user device. The input device can include hardware and/or software elements. In some alternative embodiments, the sensor may be located external to the user device, and evidentiary data may be transmitted to the user device via communication means as described elsewhere herein. The input device can be controlled by an application/software configured to scan a visual code. In some embodiments, the camera may be configured to scan a barcode on an ID card, a passport, a document, or displayed on an external display. In some embodiments, the software and/or applications may be configured to activate the camera on the user device to scan the code. In other embodiments, the camera can be controlled by a processor natively embedded in the user device. The imaging input device may be a fixed lens or auto focus lens camera. An input device may make use of complementary metal oxide semiconductor (CMOS) sensors that generate electrical signals in response to wavelengths of light. The resultant electrical signals can be processed to produce evidentiary data. The input device may include a lens configured to direct light onto an imaging sensor. A camera can be a movie or video camera that captures dynamic image data (e.g., video). A camera can be a still camera that captures static images (e.g., photographs). A camera may capture both dynamic image data and static images. A camera may switch between capturing dynamic image data and static images.
[0091] The input device may comprise a camera used to capture visual images around the device. Any other type of sensor may be used, such as an infrared sensor that may be used to capture thermal images around the device. The imaging sensor may collect information anywhere along the electromagnetic spectrum, and may generate corresponding images accordingly. The input device may comprise a Light Detection and Ranging (LiDAR) sensor. The LiDAR sensor may collect three-dimensional location data.
[0092] The user device may comprise an audio sensor that serves as an input device. The audio input device may be on-board the user device. The audio input device can include hardware and/or software elements. In some embodiments, the audio input device may be a microphone operably coupled to the user device. In some alternative embodiments, the audio input device may be located external to the user device, and audio data may be transmitted to the user device via communication means as described elsewhere herein. The audio input device can be controlled by an application/software configured to analyze audio input and determine its significance. In some embodiments, the software and/or applications may be configured to activate the microphone on the user device to record the audio input. In other embodiments, the microphone can be controlled by a processor natively embedded in the user device.
[0093] The user device may comprise a location sensor that serves as a location input device. The user device may have one or more sensors on-board the device to provide instantaneous positional and attitude information of the device. In some embodiments, the positional and attitude information may be provided by sensors such as a location sensor (e.g., Global Positioning System (GPS)), inertial sensors (e.g., accelerometers, gyroscopes, inertial measurement units (IMUs)), altitude sensors, attitude sensors (e.g., compasses), pressure sensors (e.g., barometers), and/or field sensors (e.g., magnetometers, electromagnetic sensors), and the like.
[0094] The user device may comprise one or more, two or more, three or more, four or more, five or more, six or more, seven or more, eight or more, nine or more, ten or more, or more than ten additional sensors.
[0095] The sensors of a user device may include, but are not limited to, location sensors (e.g., global positioning system (GPS) sensors, mobile device transmitters enabling location triangulation), vision sensors (e.g., imaging devices capable of detecting visible, infrared, or ultraviolet light, such as cameras), proximity sensors (e.g., ultrasonic sensors, lidar, time-of-flight cameras), inertial sensors (e.g., accelerometers, gyroscopes, inertial measurement units (IMUs)), altitude sensors, pressure sensors (e.g., barometers), audio sensors (e.g., microphones), time sensors (e.g., clocks), temperature sensors, sensors capable of detecting memory usage and/or processor usage, or field sensors (e.g., magnetometers, electromagnetic sensors). Any suitable number and combination of sensors can be used, such as one, two, three, four, five, or more sensors. Optionally, the data can be received from sensors of different types (e.g., two, three, four, five, or more types). Sensors of different types may measure different types of signals or information (e.g., position, orientation, velocity, acceleration, proximity, pressure, etc.) and/or utilize different types of measurement techniques to obtain data. For instance, the sensors may include any suitable combination of active sensors (e.g., sensors that generate and measure energy from their own source) and passive sensors (e.g., sensors that detect available energy).
[0096] Any number of sensors may be provided on-board the user device. The sensors may include different types of sensors, or the same types of sensors. The sensors and/or any other components described herein may be enclosed within a housing of the device, embedded in the housing of the device, or on an external portion of the housing of the device. [0097] The one or more sensors may collect information continuously in real-time or may be collecting information on a periodic basis. In some embodiments, the sensors may collect information at regular time intervals, or at irregular time intervals. The sensors may collect information at a high frequency (e.g., every minute or more frequently, every 10 seconds or more frequently, every second or more frequently, every 0.5 seconds or more frequently, every 0.1 seconds or more frequently, every 0.05 seconds or more frequently, every 0.01 seconds or more frequently, every 0.005 seconds or more frequently, every 0.001 seconds or more frequently, every 0.0005 seconds or more frequently, or every 0.0001 seconds or more frequently). The sensors may collect information according to a regular or irregular schedule. The sensors may collect information only after a triggering event has occurred.
[0098] A state of the user device may include positional information relating to the user device. For instance, positional information may include the spatial location of the user device (e.g., geo-location). In some embodiments, positional information may include a latitude, longitude, and/or altitude of the user device. In some embodiments, the positional information may be expressed as coordinates. The positional information may include an orientation of the user device. For instance, the positional information may include an orientation of the device with respect to one, two, or three axes (e.g., a yaw axis, pitch axis, and/or roll axis). The positional information may be an attitude of the user device. The positional information may be determined relative to an inertial reference frame (e.g., environment, Earth, gravity), and/or a local reference frame. In some embodiments, positional information may be processed by the central processing unit to ascertain the crime rate of a given area. If a high crime rate is detected, the forensic capture interface may begin capturing evidentiary data. In some embodiments, the evidentiary data may be captured at a frequency that correlates to the risk associated with the area or location of the user device. [0099] The positional information may include movement information of the user device. For instance, the positional information may include linear speed of the device or linear acceleration of the device relative to one, two, or three axes. The positional information may include angular velocity or angular acceleration of the device about one, two, or three axes. The positional information may be collected with aid of one or more inertial sensors, such as accelerometers, gyroscopes, and/or magnetometers. In some embodiments, the positional information may trigger the forensic capture interface of the user device to initiate collection of evidentiary data. For example, a sudden drop of the device or movement into a high-crime area may trigger the device to capture images, video, or sound recordings. [0100] A state of the user device may also include environmental information collected by the user device at the time evidentiary data is captured. The environmental information may include audio information collected by a microphone of the device. The environmental information may include information collected by a motion detector, an ultrasonic sensor, lidar, a temperature sensor, a pressure sensor, or any other type of sensor that may collect environmental information about the device. The environmental information may include detecting the touch or hand position of a user holding the device, and collecting which portions of the device are touched or held by the user.
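As a non-limiting sketch of the risk-correlated capture frequency mentioned in paragraph [0098], the following Python function maps an assumed risk score in the range 0 to 1 to a capture interval; the score ranges and intervals are illustrative values, not parameters specified by this disclosure.

```python
# Illustrative only: map an area risk score (0.0 to 1.0) to a capture interval.
# The thresholds and intervals are assumptions chosen for demonstration.
def capture_interval_seconds(risk_score: float) -> float:
    """Higher-risk locations sample evidentiary data more frequently."""
    if risk_score >= 0.8:
        return 1.0     # near-continuous capture in the highest-risk areas
    if risk_score >= 0.5:
        return 10.0
    if risk_score >= 0.2:
        return 60.0
    return 300.0       # low-risk baseline

print(capture_interval_seconds(0.9))   # 1.0
print(capture_interval_seconds(0.1))   # 300.0
```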
[0101] Evidentiary data may include any type of environmental data about the device that may be collected. Environmental data about the device may refer to data pertaining to the environment outside the device. In some instances, the environmental data may include collection of data of external conditions outside the device. Such data may be visual, thermal, humidity, audio, or positional data, for example. The environmental data may be collected at a single point in time, at multiple successive points in time, or over a time interval.
[0102] In one example, the environmental data about the device may refer to data collected by an image sensor of the device. One or more image sensors of the device may be used to collect an image or video of an environment outside the device. The image or video collected by the image sensor may include an image of one or more landmark features around a device and/or an image of the user of the device, or any combination thereof. The environmental data may include the image or images captured by one or more image sensors, or may include data about the image(s) captured by the one or more image sensors. The environmental data may include snapshots collected over a period of time (e.g., dynamic display).
[0103] In some embodiments, the sensor used to collect environmental data may include lidar, sonar, radar, ultrasonic sensors, motion sensors, or any other sensor that may generate a signal that may be reflected back to the sensor. Such sensors may be used to collect information about the environment, such as the presence and/or location of objects within the environment.
[0104] Another example of environmental data may relate to one or more audio sensors that may be used to collect environmental information. The audio information may include sounds captured from the environment. This may include ambient noise and/or noise generated by the device itself. The audio data may include an audio snapshot collected at a single point in time, or may include an audio clip collected over a period of time. [0105] An analysis of audio characteristics of the audio data may be collected or determined. For instance, a fast Fourier transform (FFT) analysis or similar type of analysis may be performed on the audio data. In some embodiments, a change in the captured audio between submissions may indicate a reduced likelihood of a replay attack. The raw audio data and/or an analysis of the raw audio data may be provided as the environmental data. An audio fingerprint may thus be generated. An audio fingerprint may be expected to be unique to the particular time at which it is collected. Completely identical audio data would be extremely unlikely, and may indicate a higher likelihood of a replay attack.
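A minimal Python sketch of an FFT-based audio fingerprint follows; the coarse magnitude-spectrum hashing scheme is an illustrative assumption rather than a specified algorithm, and a repeated fingerprint is treated as a possible indicator of a replay attack.

```python
# Illustrative FFT-based audio fingerprint. The binning/hashing scheme is an
# assumption; identical fingerprints across captures suggest a possible replay.
import hashlib
import numpy as np

def audio_fingerprint(samples: np.ndarray, bands: int = 32) -> str:
    spectrum = np.abs(np.fft.rfft(samples))              # magnitude spectrum
    coarse = np.array([band.mean() for band in np.array_split(spectrum, bands)])
    quantized = (coarse / (coarse.max() + 1e-12) * 255).astype(np.uint8)
    return hashlib.sha256(quantized.tobytes()).hexdigest()

rng = np.random.default_rng(0)
live = rng.normal(size=16000)        # stand-in for one second of captured audio
replayed = live.copy()               # byte-identical audio, as in a replay attack

print(audio_fingerprint(live) == audio_fingerprint(replayed))   # True -> suspicious
```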
[0106] In some embodiments, one or more audio sensors may be used to collect information. An audio sensor may be a microphone. The microphone may collect information from a wide range of directions, or may be a directional or parabolic microphone which has a limited range of directions. The microphone may be a condenser microphone, dynamic microphone, ribbon microphone, carbon microphone, piezoelectric microphone, fiber optic microphone, laser microphone, liquid microphone, MEMS microphone, or any other type of microphone. The audio sensors may be capable of collecting audio data with a high degree of sensitivity.
Checksum Function for Evidentiary Data
[0107] The evidentiary data may relate to a checksum function. For instance, any type of data input may be used to derive a checksum value via a checksum function. The checksum value may be significantly different even if the input value differs only slightly. Any type of checksum function known or later developed may be used. For instance, a checksum may utilize a parity check, such as a longitudinal parity check. In some instances, a modular sum may be used. Some checksums may be position-dependent, such as Fletcher's checksum, Adler-32, or cyclic redundancy checks (CRCs). In some instances, the checksum values may be used as nonce data. The checksum function may be an existing checksum function and/or utilized by the system for the purpose of detecting errors that may have been introduced during transmission or storage. The checksum value may be further utilized for generating nonce data in addition to the aforementioned purpose. The checksum value may be a small-sized datum derived from a block of digital data (e.g., image data) such that memory or data transmission bandwidth required for the forensic evidence capture may be reduced.
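The following short Python example, using standard-library checksum functions (an error-detecting CRC-32 alongside the SHA-256 cryptographic hash referenced elsewhere in this disclosure), illustrates that a slight change in the input yields a very different checksum value.

```python
# Demonstration that slightly different inputs yield very different checksum values.
import hashlib
import zlib

original = b"evidentiary data capture 2022-12-29T23:59:59Z"
altered  = b"evidentiary data capture 2022-12-29T23:59:58Z"   # one character changed

print(f"CRC-32:  {zlib.crc32(original):08x} vs {zlib.crc32(altered):08x}")
print(f"SHA-256: {hashlib.sha256(original).hexdigest()[:16]}... vs "
      f"{hashlib.sha256(altered).hexdigest()[:16]}...")
```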
[0108] For example, sensor data may undergo a checksum function to yield a checksum value. For instance, data from any sensors described elsewhere herein, such as position sensors, image sensors, audio sensors, vibration sensors, motion sensors, infrared sensors, or any other type of sensor, may undergo a checksum function to yield a checksum value.
[0109] Similarly, image data may undergo a checksum function to yield a checksum value. Data derived from an image itself (e.g., image of a user’s identification document, selfie image, etc.), or various parameters relating to the image, may undergo a checksum function to yield a checksum value. Furthermore, device component data may undergo a checksum function to yield a checksum value. Any type of data, including local or operational data, or environmental data, may be used to yield a checksum value.
[0110] The checksum values may be generated on-board the device or in a cloud-based storage medium. For instance, the checksum values may be generated using one or more processors of the device. The checksum values may be generated using a central processing unit. The checksum values may be stored in one or more memory storage units of the device. The checksum values may be transmitted with aid of a communication unit to an external device or network. Alternatively, the checksum values may be generated off-board the device. Data from a data source, such as sensors or processors, may or may not be stored in a memory before being transmitted. The data from the data sources may be used at an external device or network to generate checksum data.
[0111] In some cases, the checksum is derived from information collected on the input device. The checksum may also be derived from information once it is collected into the forensic report.
[0112] Identification data may contain information used to authenticate or verify the identity of a user. The identification data may contain personal information such as name, date of birth, address, nationality, and the like that describe the identity of a user.
[0113] In some embodiments, the evidentiary data may be about a state of the user device. The local data of the user device may include positional information, such as orientation of the device, geo-location, and the like. The local data of the user device may include timestamps, such as the time evidentiary data is captured. In some embodiments, the local data may be collected from one or more sensors of the user device such as the GPS, IMU, accelerometers, and barometers as described elsewhere herein. The local data may also include Know Your Customer (KYC) biometric data. For example, in some cases, the KYC biometric data may verify the signatory of the generated forensic report.
[0114] In some embodiments, the local data about a user device may be obtained from metadata of an image. The metadata may be data automatically attached to a photo. The metadata may contain variable data including technical information about evidentiary data and its capture method, such as settings, capture time, and GPS location information. In some embodiments, the metadata is generated by a microprocessor of the device.
[0115] The local data of a user device may include operational parameters. In some embodiments, the operational parameters may be event-based parameters. One or more processors on the user device may be provided that may aid in collecting operational parameters about the user device. The local data of a device may include positional information. In some embodiments, positional information may include a latitude, longitude, and/or altitude of the device. In some embodiments, the positional information may be expressed as coordinates. The positional information may include an orientation of the device. For instance, the positional information may include an orientation of the device with respect to one, two, or three axes (e.g., a yaw axis, pitch axis, and/or roll axis). The positional information may be an attitude of the device. The positional information may be determined relative to an inertial reference frame (e.g., environment, Earth, gravity), and/or a local reference frame.
[0116] In some embodiments, the local data about a user device may be obtained from metadata of evidentiary data. The metadata may be data automatically attached to the evidentiary data. The metadata may contain variable data including technical information about the evidentiary data and its capture method. In some embodiments, the metadata is generated by a microprocessor on-board the user device.
[0117] The local data of a user device may include operational parameters. In some embodiments, the operational parameters may be event-based parameters. The local data of a user device may include positional information. In some embodiments, positional information may include a latitude, longitude, and/or altitude of the device. In some embodiments, the positional information may be expressed as coordinates. The positional information may include an orientation of the device. For instance, the positional information may include an orientation of the device with respect to one, two, or three axes (e.g., a yaw axis, pitch axis, and/or roll axis). The positional information may be an attitude of the device. The positional information may be determined relative to an inertial reference frame (e.g., environment, Earth, gravity), and/or a local reference frame. One or more sensors may be provided that may aid in collecting positional information about the user device or about the user (e.g., phone conversations, social media profiles, Bluetooth connections nearby, etc.). [0118] The data may be the historic data collected from one or more emergency events, which can include, for example, repeated violent crimes in an area or a history of flooding. The historic data may all be stored together in a single memory unit or may be distributed over multiple memory units. Data distributed over multiple memory units may or may not be simultaneously accessible or linked. The historic data can be saved to a cloud-based network. The historic data may include data for a single user, or from multiple users. Data from multiple users may all be stored together or may be stored separately from one another. The historic data may include data collected from a single user device or from multiple user devices. The historic data can relate to a type of event or a location of one or more incidents. Data from multiple user devices may all be stored together or may be stored separately from one another.
[0119] Any type of identification related data may be used to authenticate the user identity and the related evidentiary reports generated. The identification data may include the user’s name, an identifier unique to the user, or any personal information about the user (e.g., user address, email, phone number, birthdate, birthplace, website, social security number, account number, gender, race, religion, educational information, health-related information, employment information, family information, marital status, dependents, or any other information related to the user). The personal information about the user may include financial information about the user. For instance, financial information about the user may include user payment card information (e.g., credit card, debit card, gift card, discount card, pre-paid card, etc.), user financial account information, routing numbers, balances, amount of debt, credit limits, past financial transactions, or any other type of information.
[0120] The identification data may pertain to the user's device. For instance, a unique device identifier may be provided. Device fingerprint data (e.g., information about one or more characteristics of the device) may be provided. Information regarding the device's clock, model number, serial number, IP address, Bluetooth MAC address, Wi-Fi MAC address, applications running on the device, or any other information relating to the device may be collected.
[0121] An authentication system may include one or more user devices that may communicate with one or more external devices, such as devices held by law enforcement. The one or more user devices may be associated with one or more respective users. Data from the one or more user devices may be conveyed to the one or more external devices or entities. In some embodiments, data received by a first external device may be the same as data received by a second external device, or the data may be different. In one example, a first external device may be or belong to an authentication server system (e.g., a server system configured to provide secure authentication), and/or a second external device may be or belong to one or more third parties (e.g., a school, law enforcement agency, employer, company, transportation agency, or medical professional).
[0122] The network may be a communication network. The communication network(s) may include local area networks (LAN) or wide area networks (WAN), such as the Internet. The communication network(s) may comprise telecommunication network(s) including transmitters, receivers, and various communication channels (e.g., routers) for routing messages in-between. The communication network(s) may be implemented using any known network protocol, including various wired or wireless protocols, such as Ethernet, Universal Serial Bus (USB), FIREWIRE, Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wi-Fi, voice over Internet Protocol (VoIP), WiMAX, or any other suitable communication protocols.
[0123] In some embodiments, blockchain may be used to store and verify evidentiary data. For example, the forensic evidence capture system may collect evidentiary data and then use blockchain to store the data. In some embodiments, the forensic evidence capture system may comprise one or more processors, and memory storing instructions that, when executed by the one or more processors, cause the forensic evidence capture system to: generate a form comprising a first data field configured to receive evidentiary data; cause creation of a blockchain corresponding to the first data field and the form, wherein the blockchain is configured to store blockchain entries corresponding to data lineage of the first data field; cause a first blockchain entry to be added to the blockchain, wherein the first blockchain entry corresponds to a second computing device permitted to receive data associated with the first data field and comprises at least one rule associated with the first data field; receive first evidentiary data via the first data field; and transfer the first evidentiary data to the second computing device based on evaluating the first blockchain entry.
[0124] The forensic capture interface can be triggered via an event or user action, as described below.
Safety and/or Emergency Alerts
[0125] The systems and methods described herein may also provide user alerts. For example, the system may evaluate nearby threats or dangers, current events reported en masse (such as public threats), and the timeline of evidence collected. Alerts may be discreet. For example, if the systems and methods herein are implemented on a wearable device, the wearable device may provide a vibrational pattern that indicates to the user that they should be aware, alert, and check in with their safety circle/base. The system may further comprise an SOS button with real-time response, wherein activation of said SOS button causes an alert to be sent to one or more of: a user's emergency contacts, emergency services, or law enforcement.
[0126] Systems of the present disclosure may be activated when a pre-determined trigger word or phrase, such as "Help Me," is spoken into a microphone on the device and recognized by the device. Alternatively, more discreet trigger phrases may be used, such as "The train is leaving." Systems of the present disclosure may be activated upon a triggering event. Triggering events may include, for example, moving or shaking the mobile device in a pre-determined pattern, so that an accelerometer on one of the input devices of the present systems may detect the pattern and initiate the duress trigger. The systems of the present disclosure may also be activated based on vital signs harvested from devices such as a fitness bracelet, a security vest with bio monitoring, or any other vital sign monitoring device. The user can pre-determine the specific vital signs and thresholds which would indicate a triggering event, and thus activate the device. Once activated, systems described herein may collect evidentiary data.
[0127] Systems and methods of the present disclosure may be activated (via activation of the forensic capture interface). For example, the forensic capture interface may be activated, and thus collect evidentiary data from the one or more input devices, as a result of sounds having a certain frequency, pitch, duration, or other identifiable characteristics. For example, the forensic capture interface may be activated by a scream, a yell, an alarm, or a sound of a gunshot. In some embodiments, sudden movements may cause the forensic capture interface to activate.
[0128] In some embodiments, the forensic capture interface may be triggered based on the location of the device user or the location of nearby individuals. In some embodiments, a peer-to-peer system may be used to identify nearby witnesses or other potential victims. For instance, devices of nearby potential witnesses may be detected by the user device, for instance if they are within a particular proximity. In some instances, the forensic capture interface may track the location of individuals with devices and, based on calculated locations, be able to identify potential witnesses without requiring direct peer-to-peer interaction. In some instances, potential witnesses may be identified to aid police in collecting witness statements. In some instances, witness devices may automatically be used to aid in collection of evidence (e.g., video, audio, other sensor data). In some embodiments, the forensic capture interface may be activated by user action. User action may include, for example, audio feedback, visual feedback, haptic feedback, biometric feedback, or other inputs. The forensic capture interface may be triggered to collect evidence based on the mode of activation. For example, an alert mode may provide a public-safety alert for other device users. A FIELD investigation mode may be used to assist law enforcement in investigating an incident or event, such as a natural disaster or crime. A reporting mode may be triggered to collect evidence in order to aid an individual in reporting a crime.
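As a non-limiting illustration of identifying potential witnesses from calculated locations alone (i.e., without direct peer-to-peer interaction), the following Python sketch flags devices within a radius of the incident using the haversine distance; the 150-meter radius and the device records are illustrative assumptions.

```python
# Illustrative sketch: flag potential witness devices within a radius of the
# incident using reported coordinates only. The 150 m radius is an assumption.
import math

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two coordinates, in metres."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def potential_witnesses(incident: tuple[float, float], devices: list[dict],
                        radius_m: float = 150.0) -> list[str]:
    lat0, lon0 = incident
    return [d["id"] for d in devices
            if haversine_m(lat0, lon0, d["lat"], d["lon"]) <= radius_m]

devices = [{"id": "phone-17", "lat": 40.7129, "lon": -74.0061},
           {"id": "watch-42", "lat": 40.7500, "lon": -74.0000}]
print(potential_witnesses((40.7128, -74.0060), devices))   # ['phone-17']
```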
[0129] In described embodiments of the present invention, an alert mode, a FIELD investigation mode, or a reporting mode of the forensic evidence capture system might be triggered. Any of the alert mode, field investigation mode, or reporting mode may be triggered by, for example, a mechanical trigger, an audible or visual trigger, a sequence of one or more keys, motion detectors, velocity detectors, or other means of quickly triggering the alert. Further, activation of the alert mode might be based on biomedical indicators that allow an alert mode to be triggered in a "stealth mode" such that it is not apparent to others, such as a robber or kidnapper, that an alert mode has been activated. For example, in a wearable device, a vibrational pattern may activate to indicate to a user that the alert mode has been activated. In such scenarios, a user would feel the vibrational pattern, but third parties would not notice.
[0130] Embodiments of the present invention may include peer-to-peer networking. For example, an alert mode may be triggered as a result of an action of a user or other individual. [0131] FIG. 3 provides an exemplary embodiment of the systems and methods described herein. The systems described herein may identify an event or incident within a geographical area (300). This may be identified by a first user device (301), which may trigger the forensic capture interface of a second user's device (302) to activate as well, based on the location of the second user. For example, this may cause the second user's device to begin collecting evidentiary data through the one or more input devices of the forensic capture interface (e.g., by activating the camera or microphone of the second user's device). Such evidentiary data may be shared with others, such as emergency contacts, emergency personnel, law enforcement officers, or agencies. In some embodiments, a user device (302) within the geographical area of an incident (300) may communicate with a user device outside of the geographical area (300). For example, the user device (302) may communicate with a user device of a member of the safety circle/emergency contacts or of emergency personnel (303). Additionally, various user devices may simultaneously stream evidentiary data (e.g., 304, 303), checksums, and reports to a cloud-based storage medium (305). [0132] In an embodiment of the invention, an alert and/or reporting system is provided comprising a processing and communication unit located in a user device and having a processor executing software from a non-transitory medium and a coupled data repository, the processor interfacing to a plurality of sensors, a communication module coupled to the processor and enabled to at least send communications to an Internet network, and a global positioning system (GPS) coupled to the processor, determining the geographic location of the user device. The processor monitors data from the plurality of sensors, consults status information based on one or both of one or more sensor readings or combinations of sensor readings, and selects and sends according to the one or more sensor readings or combinations of sensor readings, by the communications module, a preprogrammed communication addressed to a particular Internet destination. In at least one embodiment, the plurality of sensors comprises a motion sensor. A motion sensor may include, for example, an accelerometer, gyroscope, or other inertial, triangulation, tactile, proximity, or position sensor, such as a global positioning system sensor.
[0133] Embodiments of the present invention might be in communication with one or more peripheral triggering devices that are in communication with the user device. These peripheral triggering devices might be employed by a user to covertly activate an alert mode of the mobile device. For example, such peripheral triggering devices might include smart eyewear. Such smart eyewear might be contact lenses or glasses that display prompts visible only to the user wearing the eyewear. The smart eyewear might then allow the user to communicate an emergency situation with law enforcement or other emergency responders through, for example, eye movement or blinking. The smart eyewear may also trigger the forensic capture interface to gather evidentiary data for generating an evidence report.
[0134] Similarly, other clothing accessories might be configured as peripheral triggering devices. For example, such clothing accessories might include an alarm wallet. The alarm wallet might be in communication with a user device, for example, by Bluetooth transceiver, radio frequency identification (RFID), or any other wireless communication. When the alarm wallet is removed from within a given range of, for example, a user device, a user's mobile phone, or another peripheral triggering device, an alert might be triggered to cause the forensic capture interface to begin collecting evidentiary data from the one or more input devices for evidence preservation.
Evidence Reports and Evidentiary Data
[0135] In some embodiments, the systems described herein may analyze evidentiary data collected through the forensic capture interface, such as to determine the gender or other information about a speaker, perpetrator, or a bystander. Activation of said SOS button may be triggered by “trigger words” used by the user. A subset of trigger words may exist, such that one trigger word would activate evidence collection and another activates sending of an SOS alert, for example. In some embodiments, information relating to a user’s emergency contacts may be collected and stored within a user device. Such emergency contacts may be described as a “safety circle.” When an alert is sent to a safety circle, it may ping the user device. If certain trigger words are used, an SOS for help may be sent out.
[0136] In some embodiments, a user may incorporate after-the-fact information about the incident in the first report. For example, a user may select the type of event that occurred, such as property damage, abuse (emotional, physical, verbal); assault (verbal, physical, sexual); harassment (verbal, physical, sexual, or social). Such information may be provided via a user statement. For example, in the case of an assault, the system may ask a user to provide a user statement in response to question prompts, which may ask about: the type of abuse (physical, sexual, emotional); relationship with perpetrator; name of perpetrator; social media handles of the perpetrator (if known); whether the victim reported to the police; whether the victim went to a hospital for treatment; whether witnesses were present; whether the act constituted a hate crime; whether others were injured; whether evidence of the incident has been collected; and whether DNA evidence might be available.
[0137] In some embodiments, the systems described herein may use biometric data to verify a user. In some cases, the user is a law enforcement officer. In such cases, biometrics may be used to verify the identity of the officer. In some embodiments, locational or positional data is also used to verify the identity of the user. For example, KYC Biometrics may be used in the case of a law enforcement officer as a means of signing the forensic report. Biometric data may further be used as a means of verifying the law enforcement officer’s identity. In other embodiments, such biometrics may be used to verify the identity of a non-law enforcement user, such as a victim of a crime who is submitting a report.
[0138] In some embodiments, the system may collect evidentiary data, analyze this data, and use the data to perform a predictive analysis for public safety. For example, evidentiary data in the form of biometrics may be collected before, during, or after the event. Information collected for this predictive analysis may also include: the timing of events (to track trends); the location of events (to triage future emergencies); real-time threats identified as they happen (via SOS warnings); and hot spots, wherein incidents are grouped together based on detected correlations.
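By way of a non-limiting sketch of the hot-spot grouping just described, the following Python snippet bins incident coordinates into coarse latitude/longitude cells and flags cells with repeated incidents; the cell size and the minimum count are illustrative assumptions.

```python
# Illustrative hot-spot grouping: bin incidents into coarse lat/lon cells and flag
# cells with repeated incidents. Cell size and threshold are assumptions.
from collections import Counter

def hot_spots(incidents: list[tuple[float, float]],
              cell_deg: float = 0.01, min_count: int = 3) -> list[tuple[int, int]]:
    cells = Counter((round(lat / cell_deg), round(lon / cell_deg))
                    for lat, lon in incidents)
    return [cell for cell, count in cells.items() if count >= min_count]

incidents = [(40.7128, -74.0060), (40.7130, -74.0058), (40.7127, -74.0063),
             (40.7500, -74.0000)]
print(hot_spots(incidents))   # one clustered cell near (4071, -7401)
```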
[0139] In some embodiments, the system may collect evidentiary data for purposes of field analysis or investigation. In some embodiments, the system may collect evidentiary data for purposes of assisting in reporting of a crime or offense. Evidentiary data may include, for example, GPS data, video files, image files, audio files, documents, local phone data, a list of identified potential witnesses (e.g., based on Bluetooth connections available and detected by the user device or by location data based on other user devices). In some embodiments, the system may collect evidentiary data to confirm the accuracy of information provided by an individual in reporting an incident.
[0140] An Evidence Package for Investigation (EPI) may be generated using the methods and systems described herein. EPI reports compile all data and footage captured once a user says a “trigger word” to activate (for example) video, audio, and metadata capture, or once the user presses or otherwise activates the device, in each case with the user’s acceptance to provide such information. Information can relate to circumstances of the event, such as whether the person was at an event or on a date, based on metadata analysis of a user’s recent camera pictures, recent online communications, and online searches within a time frame surrounding the event. The report may have autopopulated fields filled in with captured data, links to media, or the media itself attached to the report. The report may then be sent to, for example, the user’s emergency contacts, another user, and an appropriate agency or authority, such as a university, management, a school, a hospital, or an insurance company.
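By way of illustration only, an EPI report with autopopulated fields, media links, and a recipient list could be assembled as in the following Python sketch; the field names, recipients, and the send_report helper are hypothetical placeholders rather than elements of this disclosure:

import json
from datetime import datetime, timezone

# Minimal sketch of assembling an Evidence Package for Investigation (EPI):
# autopopulated fields, links to captured media, and recipients drawn from the
# user's emergency contacts and an appropriate agency. Names are illustrative.
def build_epi_report(user_id, media_links, metadata, recipients):
    return {
        "report_type": "EPI",
        "user_id": user_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "media": media_links,       # links to captured video/audio, or attachments
        "metadata": metadata,       # capture context such as GPS and trigger event
        "recipients": recipients,   # emergency contacts, agency or authority, etc.
    }

def send_report(report):
    # Placeholder for delivery (e.g., email or secure upload).
    print(json.dumps(report, indent=2))

report = build_epi_report(
    user_id="user-123",
    media_links=["https://example.invalid/evidence/clip-001.mp4"],
    metadata={"gps": [40.7128, -74.0060], "trigger": "trigger word spoken"},
    recipients=["emergency-contact@example.invalid", "campus-safety@example.invalid"],
)
send_report(report)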
[0141] According to various aspects of the present invention, an evidence report may include a combination of information (herein called incident information) including audio, photographs, video, forms, text, graphics, scans, detected signals, and electronic documents (e.g., email, word processing, spreadsheets, graphical models, photographs, equipment configuration data, equipment operation event logs) and/or linked data stored on a web-based server. Detected signals may include intercepted remote control signals (e.g., for mechanical and electrical equipment); intercepted communications systems simultaneously operating during the incident such as land line phones, cell phones, pagers, radios, tracking devices, media broadcasting stations, wireless and wired computer network links, and sources of interference with these systems; and measurements (e.g., environmental sensors for temperature, sensors for hazardous conditions, monitors for physical conditions). In some embodiments, only a subset of the incident information may be displayed.
[0142] In some embodiments, the evidence report may display all of the evidentiary data collected by the forensic capture interface. In some embodiments, the evidence report may provide a subset of the evidentiary data collected by the forensic capture interface. The evidence report may display all of the evidentiary data on a single display screen. The evidence report may display only a subset of the evidentiary data on a display screen. The content of the report may be viewable by a user only, by police only, by members of a safety circle, and/or by third parties given permission by the user. In some embodiments, the evidence report may be viewable by the user, by police, by members of the user’s safety circle, or by public officials. The user may select who is allowed access.
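By way of illustration only, the viewer-permission model of paragraph [0142] could be expressed as a simple access check, as in the following Python sketch; the role names and default policy are assumptions for illustration:

# Illustrative access check for report content. Role names are hypothetical;
# the user controls which roles are granted access to a given report.
DEFAULT_ALLOWED_ROLES = {"user"}

def can_view_report(report_permissions, viewer_role):
    """Return True if the viewer's role has been granted access by the user."""
    allowed = DEFAULT_ALLOWED_ROLES | set(report_permissions.get("granted_roles", []))
    return viewer_role in allowed

permissions = {"granted_roles": ["police", "safety_circle"]}
print(can_view_report(permissions, "police"))       # True
print(can_view_report(permissions, "third_party"))  # False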
[0143] The systems and methods herein may generate a timeline of events relating to the evidentiary data and evidence reports. Evidentiary data may include biometric information. Biometric information may include fingerprints, swipe patterns, facial recognition, retina scans, DNA analysis, voice recognition, or finger vein patterns. Evidentiary data may include video files or image files with timestamps providing the date, location, and time of said evidence capture, along with metadata. In some embodiments, encrypted evidentiary data may be stored on a server in accordance with an FBI-standardized collection protocol, such that the evidentiary data is reflected in big data but anonymized. Examples of FBI-standardized collection protocols include, but are not limited to, the National Incident-Based Reporting System (NIBRS) and/or Uniform Crime Reporting (UCR).
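By way of illustration only, a cryptographic checksum of a captured evidence file, together with timestamp and location metadata, could be computed as in the following Python sketch using the standard hashlib library; the metadata fields are illustrative assumptions, and the sketch is not a definitive implementation of the claimed method:

import hashlib
from datetime import datetime, timezone

# Minimal sketch: compute a SHA-256 checksum over a captured evidence file and
# record timestamp/location metadata. The checksum supports later verification
# that the file has not been altered; metadata fields are illustrative.
def checksum_evidence(path, location=None):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return {
        "file": path,
        "sha256": digest.hexdigest(),
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "location": location,
    }

def verify_evidence(path, expected_sha256):
    """Recompute the checksum and compare it against the stored value."""
    return checksum_evidence(path)["sha256"] == expected_sha256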
Computing system
[0144] Referring to FIG. 4, a block diagram is shown depicting an exemplary machine that includes a computer system 1000 (e.g., a processing or computing system) within which a set of instructions can execute for causing a device to perform or execute any one or more of the aspects and/or methodologies of the present disclosure. The components in FIG. 4 are examples only and do not limit the scope of use or functionality of any hardware, software, embedded logic component, or a combination of two or more such components implementing particular embodiments.
[0145] Computer system 1000 may include one or more processors 1001, a memory 1003, and a storage 1008 that communicate with each other, and with other components, via a bus 1040. The bus 1040 may also link a display 1032, one or more input devices 1033 (which may, for example, include a keypad, a keyboard, a mouse, a stylus, etc.), one or more output devices 1034, one or more storage devices 1035, and various tangible storage media 1036. All of these elements may interface directly or via one or more interfaces or adaptors to the bus 1040. For instance, the various tangible storage media 1036 can interface with the bus 1040 via storage medium interface 1026. Computer system 1000 may have any suitable physical form, including but not limited to one or more integrated circuits (ICs), printed circuit boards (PCBs), mobile handheld devices (such as mobile telephones or PDAs), laptop or notebook computers, distributed computer systems, computing grids, or servers.
[0146] Computer system 1000 includes one or more processor(s) 1001 (e.g., central processing units (CPUs) or general purpose graphics processing units (GPGPUs)) that carry out functions. Processor(s) 1001 optionally contains a cache memory unit 1002 for temporary local storage of instructions, data, or computer addresses. Processor(s) 1001 are configured to assist in execution of computer readable instructions. Computer system 1000 may provide functionality for the components depicted in FIG. 4 as a result of the processor(s) 1001 executing non-transitory, processor-executable instructions embodied in one or more tangible computer-readable storage media, such as memory 1003, storage 1008, storage devices 1035, and/or storage medium 1036. The computer-readable media may store software that implements particular embodiments, and processor(s) 1001 may execute the software. Memory 1003 may read the software from one or more other computer-readable media (such as mass storage device(s) 1035, 1036) or from one or more other sources through a suitable interface, such as network interface 1020. The software may cause processor(s) 1001 to carry out one or more processes or one or more steps of one or more processes described or illustrated herein. Carrying out such processes or steps may include defining data structures stored in memory 1003 and modifying the data structures as directed by the software.
[0147] The memory 1003 may include various components (e.g., machine readable media) including, but not limited to, a random access memory component (e.g., RAM 1004) (e.g., static RAM (SRAM), dynamic RAM (DRAM), ferroelectric random access memory (FRAM), phase-change random access memory (PRAM), etc.), a read-only memory component (e.g., ROM 1005), and any combinations thereof. ROM 1005 may act to communicate data and instructions unidirectionally to processor(s) 1001, and RAM 1004 may act to communicate data and instructions bidirectionally with processor(s) 1001. ROM 1005 and RAM 1004 may include any suitable tangible computer-readable media described below. In one example, a basic input/output system 1006 (BIOS), including basic routines that help to transfer information between elements within computer system 1000, such as during startup, may be stored in the memory 1003.
[0148] Fixed storage 1008 is connected bidirectionally to processor(s) 1001, optionally through storage control unit 1007. Fixed storage 1008 provides additional data storage capacity and may also include any suitable tangible computer-readable media described herein. Storage 1008 may be used to store operating system 1009, executable(s) 1010, data 1011, applications 1012 (application programs), and the like. Storage 1008 can also include an optical disk drive, a solid-state memory device (e.g., flash-based systems), or a combination of any of the above. Information in storage 1008 may, in appropriate cases, be incorporated as virtual memory in memory 1003.
[0149] In one example, storage device(s) 1035 may be removably interfaced with computer system 1000 (e.g., via an external port connector (not shown)) via a storage device interface 1025. Particularly, storage device(s) 1035 and an associated machine-readable medium may provide non-volatile and/or volatile storage of machine-readable instructions, data structures, program modules, and/or other data for the computer system 1000. In one example, software may reside, completely or partially, within a machine-readable medium on storage device(s) 1035. In another example, software may reside, completely or partially, within processor(s) 1001.
[0150] Bus 1040 connects a wide variety of subsystems. Herein, reference to a bus may encompass one or more digital signal lines serving a common function, where appropriate. Bus 1040 may be any of several types of bus structures including, but not limited to, a memory bus, a memory controller, a peripheral bus, a local bus, and any combinations thereof, using any of a variety of bus architectures. As an example and not by way of limitation, such architectures include an Industry Standard Architecture (ISA) bus, an Enhanced ISA (EISA) bus, a Micro Channel Architecture (MCA) bus, a Video Electronics Standards Association local bus (VLB), a Peripheral Component Interconnect (PCI) bus, a PCI Express (PCIe) bus, an Accelerated Graphics Port (AGP) bus, a HyperTransport (HTX) bus, a serial advanced technology attachment (SATA) bus, and any combinations thereof.
[0151] Computer system 1000 may also include an input device 1033. In one example, a user of computer system 1000 may enter commands and/or other information into computer system 1000 via input device(s) 1033. Examples of an input device(s) 1033 include, but are not limited to, an alpha-numeric input device (e.g., a keyboard), a pointing device (e.g., a mouse or touchpad), a touchpad, a touch screen, a multi-touch screen, a joystick, a stylus, a gamepad, an audio input device (e.g., a microphone, a voice response system, etc.), an optical scanner, a video or still image capture device (e.g., a camera), and any combinations thereof. In some embodiments, the input device is a Kinect, Leap Motion, or the like. Input device(s) 1033 may be interfaced to bus 1040 via any of a variety of input interfaces 1023 (e.g., input interface 1023) including, but not limited to, serial, parallel, game port, USB, FIREWIRE, THUNDERBOLT, or any combination of the above.
[0152] In particular embodiments, when computer system 1000 is connected to network 1030, computer system 1000 may communicate with other devices, specifically mobile devices and enterprise systems, distributed computing systems, cloud storage systems, cloud computing systems, and the like, connected to network 1030. Communications to and from computer system 1000 may be sent through network interface 1020. For example, network interface 1020 may receive incoming communications (such as requests or responses from other devices) in the form of one or more packets (such as Internet Protocol (IP) packets) from network 1030, and computer system 1000 may store the incoming communications in memory 1003 for processing. Computer system 1000 may similarly store outgoing communications (such as requests or responses to other devices) in the form of one or more packets in memory 1003, which may then be communicated to network 1030 from network interface 1020. Processor(s) 1001 may access these communication packets stored in memory 1003 for processing.
[0153] Examples of the network interface 1020 include, but are not limited to, a network interface card, a modem, and any combination thereof. Examples of a network 1030 or network segment 1030 include, but are not limited to, a distributed computing system, a cloud computing system, a wide area network (WAN) (e.g., the Internet, an enterprise network), a local area network (LAN) (e.g., a network associated with an office, a building, a campus or other relatively small geographic space), a telephone network, a direct connection between two computing devices, a peer-to-peer network, and any combinations thereof. A network, such as network 1030, may employ a wired and/or a wireless mode of communication. In general, any network topology may be used.
[0154] Information and data can be displayed through a display 1032. Examples of a display 1032 include, but are not limited to, a cathode ray tube (CRT), a liquid crystal display (LCD), a thin film transistor liquid crystal display (TFT-LCD), an organic light-emitting diode (OLED) display such as a passive-matrix OLED (PMOLED) or active-matrix OLED (AMOLED) display, a plasma display, and any combinations thereof. The display 1032 can interface to the processor(s) 1001, memory 1003, and fixed storage 1008, as well as other devices, such as input device(s) 1033, via the bus 1040. The display 1032 is linked to the bus 1040 via a video interface 1022, and transport of data between the display 1032 and the bus 1040 can be controlled via the graphics control 1021. In some embodiments, the display is a video projector. In some embodiments, the display is a head-mounted display (HMD) such as a VR headset. In further embodiments, suitable VR headsets include, by way of non-limiting examples, HTC Vive, Oculus Rift, Samsung Gear VR, Microsoft HoloLens, Razer OSVR, FOVE VR, Zeiss VR One, Avegant Glyph, Freefly VR headset, and the like. In still further embodiments, the display is a combination of devices such as those disclosed herein.
[0155] In addition to a display 1032, computer system 1000 may include one or more other peripheral output devices 1034 including, but not limited to, an audio speaker, a printer, a storage device, and any combinations thereof. Such peripheral output devices may be connected to the bus 1040 via an output interface 1024. Examples of an output interface 1024 include, but are not limited to, a serial port, a parallel connection, a USB port, a FIREWIRE port, a THUNDERBOLT port, and any combinations thereof.
[0156] In addition, or as an alternative, computer system 1000 may provide functionality as a result of logic hardwired or otherwise embodied in a circuit, which may operate in place of or together with software to execute one or more processes or one or more steps of one or more processes described or illustrated herein. Reference to software in this disclosure may encompass logic, and reference to logic may encompass software. Moreover, reference to a computer-readable medium may encompass a circuit (such as an IC) storing software for execution, a circuit embodying logic for execution, or both, where appropriate. The present disclosure encompasses any suitable combination of hardware, software, or both.
[0157] Those of skill in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality.
[0158] The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
[0159] The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by one or more processor(s), or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
[0160] In accordance with the description herein, suitable computing devices include, by way of non-limiting examples, server computers, desktop computers, laptop computers, notebook computers, sub-notebook computers, netbook computers, netpad computers, set-top computers, media streaming devices, handheld computers, Internet appliances, mobile smartphones, tablet computers, personal digital assistants, video game consoles, and vehicles. Those of skill in the art will also recognize that select televisions, video players, and digital music players with optional computer network connectivity are suitable for use in the system described herein. Suitable tablet computers, in various embodiments, include those with booklet, slate, and convertible configurations, known to those of skill in the art.
[0161] In some embodiments, the computing device includes an operating system configured to perform executable instructions. The operating system is, for example, software, including programs and data, which manages the device’s hardware and provides services for execution of applications. Those of skill in the art will recognize that suitable server operating systems include, by way of non-limiting examples, FreeBSD, OpenBSD, NetBSD®, Linux, Apple® Mac OS X Server®, Oracle® Solaris®, Windows Server®, and Novell® NetWare®. Those of skill in the art will recognize that suitable personal computer operating systems include, by way of non-limiting examples, Microsoft® Windows®, Apple® Mac OS X®, UNIX®, and UNIX-like operating systems such as GNU/Linux®. In some embodiments, the operating system is provided by cloud computing. Those of skill in the art will also recognize that suitable mobile smartphone operating systems include, by way of non-limiting examples, Nokia® Symbian® OS, Apple® iOS®, Research In Motion® BlackBerry OS®, Google® Android®, Microsoft® Windows Phone® OS, Microsoft® Windows Mobile® OS, Linux®, and Palm® WebOS®. Those of skill in the art will also recognize that suitable media streaming device operating systems include, by way of non-limiting examples, Apple TV®, Roku®, Boxee®, Google TV®, Google Chromecast®, Amazon Fire®, and Samsung® HomeSync®. Those of skill in the art will also recognize that suitable video game console operating systems include, by way of non-limiting examples, Sony® PS3®, Sony® PS4®, Microsoft® Xbox 360®, Microsoft Xbox One, Nintendo® Wii®, Nintendo® Wii U®, and Ouya®.
Non-transitory computer readable storage medium
[0162] In some embodiments, the platforms, systems, media, and methods disclosed herein include one or more non-transitory computer readable storage media encoded with a program including instructions executable by the operating system of an optionally networked computing device. In further embodiments, a computer readable storage medium is a tangible component of a computing device. In still further embodiments, a computer readable storage medium is optionally removable from a computing device. In some embodiments, a computer readable storage medium includes, by way of non-limiting examples, CD-ROMs, DVDs, flash memory devices, solid state memory, magnetic disk drives, magnetic tape drives, optical disk drives, distributed computing systems including cloud computing systems and services, and the like. In some cases, the program and instructions are permanently, substantially permanently, semi-permanently, or non-transitorily encoded on the media.
Computer program
[0163] In some embodiments, the platforms, systems, media, and methods disclosed herein include at least one computer program, or use of the same. A computer program includes a sequence of instructions, executable by one or more processor(s) of the computing device’s CPU, written to perform a specified task. Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), computing data structures, and the like, that perform particular tasks or implement particular abstract data types. In light of the disclosure provided herein, those of skill in the art will recognize that a computer program may be written in various versions of various languages.
[0164] The functionality of the computer readable instructions may be combined or distributed as desired in various environments. In some embodiments, a computer program comprises one sequence of instructions. In some embodiments, a computer program comprises a plurality of sequences of instructions. In some embodiments, a computer program is provided from one location. In other embodiments, a computer program is provided from a plurality of locations. In various embodiments, a computer program includes one or more software modules. In various embodiments, a computer program includes, in part or in whole, one or more web applications, one or more mobile applications, one or more standalone applications, one or more web browser plug-ins, extensions, add-ins, or add-ons, or combinations thereof.
Web application
[0165] In some embodiments, a computer program includes a web application. In light of the disclosure provided herein, those of skill in the art will recognize that a web application, in various embodiments, utilizes one or more software frameworks and one or more database systems. In some embodiments, a web application is created upon a software framework such as Microsoft® .NET or Ruby on Rails (RoR). In some embodiments, a web application utilizes one or more database systems including, by way of non-limiting examples, relational, nonrelational, object oriented, associative, and XML database systems. In further embodiments, suitable relational database systems include, by way of non-limiting examples, Microsoft® SQL Server, mySQL™, and Oracle®. Those of skill in the art will also recognize that a web application, in various embodiments, is written in one or more versions of one or more languages. A web application may be written in one or more markup languages, presentation definition languages, client-side scripting languages, server-side coding languages, database query languages, or combinations thereof. In some embodiments, a web application is written to some extent in a markup language such as Hypertext Markup Language (HTML), Extensible Hypertext Markup Language (XHTML), or Extensible Markup Language (XML). In some embodiments, a web application is written to some extent in a presentation definition language such as Cascading Style Sheets (CSS). In some embodiments, a web application is written to some extent in a client-side scripting language such as Asynchronous JavaScript and XML (AJAX), Flash® ActionScript, JavaScript, or Silverlight®. In some embodiments, a web application is written to some extent in a server-side coding language such as Active Server Pages (ASP), ColdFusion®, Perl, Java™, JavaServer Pages (JSP), Hypertext Preprocessor (PHP), Python™, Ruby, Tcl, Smalltalk, WebDNA®, or Groovy. In some embodiments, a web application is written to some extent in a database query language such as Structured Query Language (SQL). In some embodiments, a web application integrates enterprise server products such as IBM® Lotus Domino®. In some embodiments, a web application includes a media player element. In various further embodiments, a media player element utilizes one or more of many suitable multimedia technologies including, by way of non-limiting examples, Adobe® Flash®, HTML 5, Apple® QuickTime®, Microsoft® Silverlight®, Java™, and Unity®.
[0166] Referring to FIG. 5, in a particular embodiment, an application provision system comprises one or more databases 1100 accessed by a relational database management system (RDBMS) 1110. Suitable RDBMSs include Firebird, MySQL, PostgreSQL, SQLite, Oracle Database, Microsoft SQL Server, IBM DB2, IBM Informix, SAP Sybase, Teradata, and the like. In this embodiment, the application provision system further comprises one or more application servers 1120 (such as Java servers, .NET servers, PHP servers, and the like) and one or more web servers 1130 (such as Apache, IIS, GWS, and the like). The web server(s) optionally expose one or more web services via application programming interfaces (APIs) 1140. Via a network, such as the Internet, the system provides browser-based and/or mobile native user interfaces.
[0167] Referring to FIG. 6, in a particular embodiment, an application provision system alternatively has a distributed, cloud-based architecture 1200 and comprises elastically load balanced, auto-scaling web server resources 1210 and application server resources 1220, as well as synchronously replicated databases 1230.
[0168] In some embodiments, one or more systems or components of the present disclosure are implemented as a containerized application (e.g., application containers or service containers). The application container provides tooling for applications and batch processing, such as web servers with Python or Ruby, JVMs, or even Hadoop or HPC tooling. Application containers are the units that developers move into production or onto a cluster to meet the needs of the business. The methods and systems of the invention will be described with reference to embodiments in which container-based virtualization (containers) is used. The methods and systems can, however, be implemented in applications provided by any type of system (e.g., a containerized application, a unikernel-adapted application, operating-system-level virtualization, or machine-level virtualization).
Mobile Application
[0169] In some embodiments, a computer program includes a mobile application provided to a mobile computing device. In some embodiments, the mobile application is provided to a mobile computing device at the time it is manufactured. In other embodiments, the mobile application is provided to a mobile computing device via the computer network described herein.
[0170] In view of the disclosure provided herein, a mobile application is created by techniques known to those of skill in the art using hardware, languages, and development environments known to the art. Those of skill in the art will recognize that mobile applications are written in several languages. Suitable programming languages include, by way of non-limiting examples, C, C++, C#, Objective-C, Java™, Javascript, Pascal, Object Pascal, Python™, Ruby, VB.NET, WML, and XHTML/HTML with or without CSS, or combinations thereof.
[0171] Suitable mobile application development environments are available from several sources. Commercially available development environments include, by way of non-limiting examples, Airplay SDK, alcheMo, Appcelerator®, Celsius, Bedrock, Flash Lite, .NET Compact Framework, Rhomobile, and WorkLight Mobile Platform. Other development environments are available without cost including, by way of non-limiting examples, Lazarus, MobiFlex, MoSync, and Phonegap. Also, mobile device manufacturers distribute software developer kits including, by way of non-limiting examples, iPhone and iPad (iOS) SDK, Android™ SDK, BlackBerry® SDK, BREW SDK, Palm® OS SDK, Symbian SDK, webOS SDK, and Windows® Mobile SDK.
[0172] Those of skill in the art will recognize that several commercial forums are available for distribution of mobile applications including, by way of non-limiting examples, Apple® App Store, Google® Play, Chrome WebStore, BlackBerry® App World, App Store for Palm devices, App Catalog for webOS, Windows® Marketplace for Mobile, Ovi Store for Nokia® devices, Samsung® Apps, and Nintendo® DSi Shop.
Standalone Application
[0173] In some embodiments, a computer program includes a standalone application, which is a program that is run as an independent computer process, not an add-on to an existing process, e.g., not a plug-in. Those of skill in the art will recognize that standalone applications are often compiled. A compiler is a computer program(s) that transforms source code written in a programming language into binary object code such as assembly language or machine code. Suitable compiled programming languages include, by way of non-limiting examples, C, C++, Objective-C, COBOL, Delphi, Eiffel, Java™, Lisp, Python™, Visual Basic, and VB.NET, or combinations thereof. Compilation is often performed, at least in part, to create an executable program. In some embodiments, a computer program includes one or more executable compiled applications.
Web Browser Plug-in
[0174] In some embodiments, the computer program includes a web browser plug-in (e.g., extension, etc.). In computing, a plug-in is one or more software components that add specific functionality to a larger software application. Makers of software applications support plug-ins to enable third-party developers to create abilities which extend an application, to support easily adding new features, and to reduce the size of an application. When supported, plug-ins enable customizing the functionality of a software application. For example, plug-ins are commonly used in web browsers to play video, generate interactivity, scan for viruses, and display particular file types. Those of skill in the art will be familiar with several web browser plug-ins including, Adobe® Flash® Player, Microsoft® Silverlight®, and Apple® QuickTime®. In some embodiments, the toolbar comprises one or more web browser extensions, add-ins, or add-ons. In some embodiments, the toolbar comprises one or more explorer bars, tool bands, or desk bands.
[0175] In view of the disclosure provided herein, those of skill in the art will recognize that several plug-in frameworks are available that enable development of plug-ins in various programming languages, including, by way of non-limiting examples, C++, Delphi, Java™, PHP, Python™, and VB.NET, or combinations thereof.
[0176] Web browsers (also called Internet browsers) are software applications, designed for use with network-connected computing devices, for retrieving, presenting, and traversing information resources on the World Wide Web. Suitable web browsers include, by way of non-limiting examples, Microsoft® Internet Explorer®, Mozilla® Firefox®, Google® Chrome, Apple® Safari®, Opera Software® Opera®, and KDE Konqueror. In some embodiments, the web browser is a mobile web browser. Mobile web browsers (also called microbrowsers, mini-browsers, and wireless browsers) are designed for use on mobile computing devices including, by way of non-limiting examples, handheld computers, tablet computers, netbook computers, subnotebook computers, smartphones, music players, personal digital assistants (PDAs), and handheld video game systems. Suitable mobile web browsers include, by way of non-limiting examples, Google® Android® browser, RIM BlackBerry® Browser, Apple® Safari®, Palm® Blazer, Palm® WebOS® Browser, Mozilla® Firefox® for mobile, Microsoft® Internet Explorer® Mobile, Amazon® Kindle® Basic Web, Nokia® Browser, Opera Software® Opera® Mobile, and Sony® PSP™ browser.
Software Modules
[0177] In some embodiments, the platforms, systems, media, and methods disclosed herein include software, server, and/or database modules, or use of the same. In view of the disclosure provided herein, software modules are created by techniques known to those of skill in the art using machines, software, and languages known to the art. The software modules disclosed herein are implemented in a multitude of ways. In various embodiments, a software module comprises a file, a section of code, a programming object, a programming structure, or combinations thereof. In further various embodiments, a software module comprises a plurality of files, a plurality of sections of code, a plurality of programming objects, a plurality of programming structures, or combinations thereof. In various embodiments, the one or more software modules comprise, by way of non-limiting examples, a web application, a mobile application, and a standalone application. In some embodiments, software modules are in one computer program or application. In other embodiments, software modules are in more than one computer program or application. In some embodiments, software modules are hosted on one machine. In other embodiments, software modules are hosted on more than one machine. In further embodiments, software modules are hosted on a distributed computing platform such as a cloud computing platform. In some embodiments, software modules are hosted on one or more machines in one location. In other embodiments, software modules are hosted on one or more machines in more than one location.
Databases
[0178] In some embodiments, the platforms, systems, media, and methods disclosed herein include one or more databases, or use of the same. In view of the disclosure provided herein, those of skill in the art will recognize that many databases are suitable for storage and retrieval of the evidentiary data, reports, and related incident information described herein. In various embodiments, suitable databases include, by way of non-limiting examples, relational databases, nonrelational databases, object oriented databases, object databases, entity-relationship model databases, associative databases, and XML databases. Further non-limiting examples include SQL, PostgreSQL, MySQL, Oracle, DB2, and Sybase. In some embodiments, a database is internet-based. In further embodiments, a database is web-based. In still further embodiments, a database is cloud computing-based. In a particular embodiment, a database is a distributed database. In other embodiments, a database is based on one or more local computer storage devices.
[0179] While preferred embodiments of the present invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. It is not intended that the invention be limited by the specific examples provided within the specification. While the invention has been described with reference to the aforementioned specification, the descriptions and illustrations of the embodiments herein are not meant to be construed in a limiting sense. Numerous variations, changes, and substitutions will now occur to those skilled in the art without departing from the invention. Furthermore, it shall be understood that all aspects of the invention are not limited to the specific depictions, configurations or relative proportions set forth herein which depend upon a variety of conditions and variables. It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention. It is therefore contemplated that the invention shall also cover any such alternatives, modifications, variations or equivalents. It is intended that the following claims define the scope of the invention and that methods and structures within the scope of these claims and their equivalents be covered thereby.

Claims

CLAIMS
What is claimed is:
1. A method of electronically securing forensic evidence, said method comprising:
(a) receiving evidentiary data from an input device comprising a plurality of sensors;
(b) generating a checksum of said evidentiary data using a cryptographic hash;
(c) storing said evidentiary data and said checksum to a local storage medium;
(d) uploading said evidentiary data and said checksum to a cloud-based storage medium;
(e) generating a first report using said evidentiary data and said checksum from said local storage medium;
(f) streaming said first report to said cloud-based storage medium;
(g) generating a second report using said evidentiary data and said checksum in said cloud-based storage;
(h) comparing said first report and said second report to ensure matching of said reports; and
(i) preparing a combined report using matching content from said first report and said second report, thereby electronically securing forensic evidence.
2. The method of claim 1, wherein said evidentiary data comprises positional information.
3. The method of claim 1 or 2, wherein said input device comprises one or more of: a location sensor, an inertial sensor, altitude sensor, attitude sensors, pressure sensors, or field sensors.
4. The method of claim 1, wherein said evidentiary data comprises phone data.
5. The method of claim 4, wherein said phone data comprises one or more of: calls made or received on a given day; text messages sent and received on a given day; calendar events that a person has scheduled; dates a person may have had; photos taken; or browsing history.
6. The method of claim 1, wherein said cloud-based storage medium encrypts said evidentiary data stored in said first report.
7. The method of claim 1, wherein said evidentiary data comprises Calendar Application Programming Interface (API) data.
8. The method of claim 1, wherein said input device comprises a wearable device, wherein said wearable device comprises an audio receiving module, a location information receiving module, and a video receiving module.
9. The method of claim 1, further comprising encrypting said combined report.
10. The method of claim 6, wherein said combined report may be used to report one or more of: incidents, crimes, accidents, injuries, theft, property damage, equipment damage, sexual harassment, sexual assault, aggravated assault, environmental reports, or Occupational Safety and Health Administration violations.
11. A system for electronically securing forensic evidence, said system comprising:
(a) a forensic capture interface in operative communication with one or more input devices comprising a plurality of sensors, wherein said plurality of sensors generate evidentiary data;
(b) a central processing unit comprising one or more processors operatively coupled to said forensic capture interface, said processors configured to: receive said evidentiary data from said one or more input devices, take a checksum of said evidentiary data using a cryptographic hash, and generate a first forensic report;
(c) a local memory operatively coupled to said central processing unit, said local memory storing said evidentiary data, said checksum, and said first forensic report; and
(d) a communications module in networked communication and in local communication with said central processing unit, wherein said communications module uploads said evidentiary data, said checksum, and said first report to a cloud-based server, whereby a second forensic report is generated using said evidentiary data and said checksum and compared to said first forensic report to ensure matching, thereby electronically securing forensic evidence.
12. The system of claim 11, wherein said central processing unit receives said evidentiary data from one or more input devices following an activation event.
13. The system of claim 12, wherein said activation event comprises one or more of: haptic feedback, voice or sound-activated feedback, biometric feedback, or positional feedback.
14. The system of claim 11, wherein said forensic capture interface comprises one or more of a wearable device.
15. The system of claim 11, wherein said forensic capture interface comprises: a smart watch, a mobile phone, an Internet of Things (IoT) device, a camera, an alarm, a panic button, a jewelry item, smart glasses, wearables, fitness bands, smart rings, smart watch bands, smart clothing, smart machines (ATMs), smart cars, or a closed-circuit television (CCTV).
16. The system of claim 11, wherein said plurality of sensors comprise one or more of: humidity sensors, temperature sensors, cameras, microphones, biometric sensors, or positional sensors.
17. The system of claim 11, wherein said forensic capture interface becomes activated by one or more trigger words, thus causing said forensic capture interface to transmit said evidentiary data to said central processing unit.
18. The system of claim 11, wherein said forensic capture interface becomes activated by sudden noise or sudden movement detected by said plurality of sensors.
19. The system of claim 11, wherein said plurality of sensors comprise one or more of: a location sensor; an inertial sensor selected from the group consisting of: accelerometers, gyroscopes, and inertial measurement units (IMUs); an altitude sensor, an attitude sensor; a barometer; a magnetometer; an electromagnetic sensor, or a humidity sensor.
20. The system of claim 11, further comprising a user input interface operatively connected to said forensic capture interface, wherein said user input interface allows a user to manually activate said forensic capture interface.
PCT/US2022/054333 2021-12-30 2022-12-30 Forensic evidence collection systems and methods WO2023129706A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163295372P 2021-12-30 2021-12-30
US63/295,372 2021-12-30

Publications (1)

Publication Number Publication Date
WO2023129706A1 true WO2023129706A1 (en) 2023-07-06

Family

ID=87000294

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/054333 WO2023129706A1 (en) 2021-12-30 2022-12-30 Forensic evidence collection systems and methods

Country Status (1)

Country Link
WO (1) WO2023129706A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120209983A1 (en) * 2011-02-10 2012-08-16 Architecture Technology Corporation Configurable forensic investigative tool
US20180261025A1 (en) * 2014-07-28 2018-09-13 Dan Kerning Security and Public Safety Application for a Mobile Device with Audio/Video Analytics and Access Control Authentication
US20190172096A1 (en) * 2015-01-23 2019-06-06 Bluefox, Inc. Mobile device detection and tracking
US10740151B1 (en) * 2018-08-27 2020-08-11 Amazon Technologies, Inc. Parallelized forensic analysis using cloud-based servers



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22917383

Country of ref document: EP

Kind code of ref document: A1