WO2020181225A1 - System and method for the universal control of uniform initial judgment - Google Patents

System and method for the universal control of uniform initial judgment

Info

Publication number
WO2020181225A1
Authority
WO
WIPO (PCT)
Prior art keywords
image data
user
behavior
dimensional image
captured
Prior art date
Application number
PCT/US2020/021476
Other languages
French (fr)
Inventor
Jason AHRENS
Original Assignee
ATMO Auto Power LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ATMO Auto Power LLC
Publication of WO2020181225A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 - Services
    • G06Q50/26 - Government or public services
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects
    • G06V20/64 - Three-dimensional objects
    • G06V20/653 - Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 - Alarms for ensuring the safety of persons
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/18 - Status alarms
    • G08B21/22 - Status alarms responsive to presence or absence of persons
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 - Static body considered as a whole, e.g. static pedestrian or occupant recognition

Definitions

  • Unique advantageous elements of the present system and method include creating a dynamic LOP around an individual by identifying and tracking objects within 3-D image data.
  • This captured image data is stored in a database or a library used for comparing with real-time captured image data to identify a threat specifically based on objects tracked within the image data.
  • individuals viewing prior stored image data can focus attention on objects within the image data and tag or mark objects giving behavior values, i.e., assessing threat or trigger events based on viewing the tracked object within the prior stored image data.

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Emergency Management (AREA)
  • Multimedia (AREA)
  • Tourism & Hospitality (AREA)
  • Economics (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Educational Administration (AREA)
  • Development Economics (AREA)
  • Image Analysis (AREA)

Abstract

A system for evaluating potential behavior from captured image data includes an image capturing device adapted to capture three-dimensional image data. The system also has an electronic database, stored in computer memory, of prior stored three-dimensional image data. The prior stored images have an associated behavior value. A computer processor is adapted to compare captured three-dimensional image data in real-time with the prior stored three-dimensional image data to identify a match. The computer processor sets a potential behavior value of the captured image data to the behavior value associated with the matching image data in the database. An output device receives the potential behavior value from the computer processor, wherein the output device is observable by a user. A method for evaluating a potential behavior from captured image data is also provided.

Description

SYSTEM AND METHOD FOR THE
UNIVERSAL CONTROL OF UNIFORM INITIAL JUDGMENT
CROSS REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of priority to U.S. Provisional Application
No. 62/814,480, filed March 6, 2019, herein incorporated by reference.
FIELD OF THE INVENTION
[0001] The present invention relates to a method and system for identifying behavior from captured image data which includes but is not limited to three-dimensional (hereinafter “3-D”) images captured in real time that are compared to prior image data to evaluate a behavior against a behavior standard.
BACKGROUND OF THE INVENTION
[0002] Law enforcement officers often encounter potential threat situations while in the line of duty. Often the officer must make split second decisions when assessing a potential threat observed by the officer. Regrettably, current technology does not adequately assist the officer in making this determination. While conventional technology consists of officer body cameras and vehicle installed cameras to record officer encounters, the information recorded is used for subsequent evaluation and not to provide real-time analysis of potential threats and conditions confronting the officer.
[0003] Additionally, the officer has unfortunately been placed in a judgement-compromising situation when threatening behavior suddenly presents itself and he or she is taken by surprise while attempting to perform another duty, which has had tragic results.
[0004] Some key variables that have contributed to the officer being placed in a judgement-compromising situation have been: sudden awareness of close proximity to a potential threat, unusual but non-threatening behavior displayed by a potential threat, numerous potential threats, and the false perception of a weapon in the hand(s) of a potential threat.
[0005] Due to the lack of technological support for the officer in the past, the above-mentioned scenarios have resulted in loss of life, injury, public disgrace, and high litigation costs, and the above-mentioned variables have been left unaddressed as a means of resolution.
SUMMARY OF THE INVENTION
[0006] The present invention is directed to a system and method for evaluating potential behavior or threats confronting an individual in real-time. In its purest form, the system and method capture images, advantageously 3-D images, using an image capturing device or camera positioned away from a user, capturing both the user and his or her surroundings. This 3-D image data is analyzed by comparing it to a database of prior captured image data that has been previously evaluated and assessed for possible threats or behavior. When real-time captured 3-D data matches prior stored image data, and the prior stored image data includes an associated behavior value indicating a possible threat or “trigger event”, a user of the method and system will be alerted accordingly. For example, the user, e.g., an officer, may be alerted by an alert sent to his or her smart watch.
[0007] The prior stored 3-D image data is advantageously stored in an electronic database. Associated behavior values can be assigned to the prior stored 3-D image data by an individual viewing the previously stored 3-D image data, for example using a virtual reality system, and assigning behavior values to the prior stored 3-D image data when the individual determines that actions viewed within the 3-D image data are a threat.
[0008] The present invention, in one form thereof, is directed to a system for evaluating a potential behavior from captured image data. The system has an image capturing device adapted to capture 3-D image data. An electronic database of prior 3-D image data, stored in computer memory, contains prior stored images having an associated behavior value. A computer processor is adapted to compare captured 3-D image data in real-time with prior stored 3-D image data to identify a match. The computer processor sets a potential behavior value for the captured image data to the behavior value associated with the matching image data in the database. An output device receives the potential behavior value from the computer processor, wherein the output device is observable by a user.
[0009] The system in one further specific form thereof has a behavior assessment input device for an individual to assign a behavior value to the prior stored image data in the database wherein the behavior value is subsequently associated with the respective 3-D image data in the electronic database.
[0010] The present invention in another form thereof is directed to a method for evaluating a potential behavior from captured image data. The method includes capturing 3-D image data using an image capturing device of a locus that includes a user and immediate surroundings of the user. The captured 3-D image data is compared to prior stored 3-D image data present in an electronic database using a computer processor to determine a match wherein the prior stored image data have respective associated behavior values. Using the processor, a potential behavior is identified in the captured 3-D image data as the behavior value of the prior stored image data that matches the captured 3-D image data. The user is alerted of the potential behavior accordingly.
[0011] The present system and method are advantageously directed to reducing the liability and challenges (e.g., political, physical dangers, etc.) confronting law enforcement individuals, such as police, and the policed. In addition to this use or embodiment, the present system and method can be adapted for additional wide-ranging applications for general security. The present system and method can be implemented for use in autonomous ride-hailing services; for example, when there is no driver present, security can be provided by the present invention.
[0012] In one advantageous form, the present system and method incorporates three-dimensional (hereinafter “3-D”) environment sensing technology, algorithmic processing of visual data, and activation of various response mechanisms which, together, create a protective and judgement assistance system. When this system is duplicated and continuously updated, a universal line of initial judgment can be drawn among all departments, agencies, etc., which use the technology, instead of relying completely on individual judgment, while providing the individual, e.g., an officer, with hyper situational awareness.
[0013] In another advantageous form, control and training of the present system and method is achieved by viewing images, video or reconstructed 3-D scenes, using virtual reality goggles or with a monitor, through the present system's training application. Once the media is loaded into the training application, observers, e.g., judges, can use a hand-held remote or mouse cursor to “paint” individuals displaying behavior that they deem worthy or unworthy of police reaction or intervention.
[0014] In use, behavior of individuals within the range of the sensing equipment which matches that behavior deemed worthy of police intervention or reaction in the control application, will trigger the system to notify the officer and/or any other reactionary systems deemed necessary by law. As a result, the present system and method relieves the law enforcement officer by taking some of the liability of engagement from them, and it relieves the public by giving them input in universal initial judgement. This is what we refer to as a “social compromise” built into the system's core design.
[0015] The design and an exemplary form of this novel and unique system and method is described in this disclosure in a form to outfit a police/law enforcement vehicle in order to demonstrate and describe its implementation in one form. However, the present system and method can be modified and adapted in any number of different configurations and forms which create a scannable scene about the law enforcement officer(s), person(s) or object(s).
[0016] One especially beneficial component of the present system and method that provides for realization of specific aspects of the present method and system is at least one stereoscopic infra-red depth sensing camera, commonly referred to as an RGBD (Red, Green, Blue, and Depth) camera. Such a device gathers a depth measurement for each pixel in its resolution, in addition to the more commonplace RGB color determination. Conceptualization and deployment of the present system has been prevented in the past by inadequate available camera technology at a viable price. The cameras required to gather adequate point-cloud data for the purpose of determining the visible mood of a human or animal in 3-D have been too limited in resolution and/or range up until the release of the MYNT EYE D on February 14, 2019. Previously, the best applicable camera was the REALSENSE D400 SERIES by Intel Corporation, first released in January of 2018, which offers adequate resolution and frame rate, but inadequate effective range. The Intel REALSENSE D400 SERIES offers 1280x720p resolution @60 frames per second (fps) but is limited to 10 meters of >90% accuracy. In discussions with police, a minimum performance of 1280x720p resolution @60 fps with 20 meters of >90% accurate range was determined to be essential for the effective marketing of the present system. The MYNT EYE D achieves 20 meters of >90% accuracy range at 1280x720p @60 fps.
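By way of illustration only, the short sketch below (Python, with field names invented for the example and the figures quoted above) checks candidate depth-camera specifications against that 1280x720 @60 fps, 20 m / >90% accuracy floor; it is not part of the disclosed system.

```python
# Minimal sketch: screening candidate RGBD cameras against the minimum
# performance floor stated above (illustrative data only).
MIN_SPEC = {"width": 1280, "height": 720, "fps": 60,
            "range_m": 20.0, "accuracy": 0.90}

def meets_minimum(spec: dict) -> bool:
    """Return True if a camera spec meets or exceeds every minimum value."""
    return all(spec[k] >= MIN_SPEC[k] for k in MIN_SPEC)

candidates = {
    # Figures quoted in the text: the D400-series range is the limiting factor.
    "Intel RealSense D400": {"width": 1280, "height": 720, "fps": 60,
                             "range_m": 10.0, "accuracy": 0.90},
    "MYNT EYE D":           {"width": 1280, "height": 720, "fps": 60,
                             "range_m": 20.0, "accuracy": 0.90},
}

for name, spec in candidates.items():
    print(f"{name}: {'meets' if meets_minimum(spec) else 'below'} minimum")
```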
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] The present invention will now be described with regard to the figures as filed in which:
[0018] Figure 1 is a schematic showing a system for evaluating a potential behavior from captured image data in accordance with one aspect of the present invention.
[0019] Figure 2 is a flow diagram of a method for evaluating a potential behavior from captured image data in accordance with the present invention.
DETAILED DESCRIPTION
[0020] The present invention will now be described with reference to the figures, including the system of Figure 1 and the method of Figure 2.
[0021] SYSTEM HARDWARE
[0022] Referring to Fig. 1, system 10 includes an image capturing device 12, a server 20 with high-speed computer processor 22, a data storage device or memory 24, and a communication device(s) or communication system 26.
[0023] In one advantageous form, the image capturing device 12 is a unit that is in the form of a vehicle roof-mounted base unit mounted on vehicle 30. The image capturing device 12 (base unit) has four (4) stereoscopic infrared (IR) cameras 13 arranged radially (to allow 360 degrees of vision), a high-speed processor 14, memory 16, and a communication device(s) 18. Image capturing device 12 via communication device 18 is connected to an output device 40 such as a smart phone or smart watch. This configuration allows for simplified marketing of system 10 as a “plug-and-play” type of product, as opposed to a system which requires integration into a vehicle.
[0024] A personal device that remains with the user of system 10 is not necessary (for example, system 10 could alternatively alert the user by sounding an alarm from the image capturing device 12, i.e., the base unit), but such a device greatly improves the system's performance.
[0025] The user software program or application, which provides for control of some aspects of system 10, and is the primary means to notify the user, can be downloaded onto a smartphone or other personal device, but a smartwatch is preferred. Additionally, a tablet computer can be included as a component to system 10, fitted inside the vehicle with an appropriate mount, to allow the user a way to review captured visual data from trigger events.
[0026] The frame that triggers any event is stored on the tablet for review by the user.
[0027] SYSTEM SOFTWARE
[0028] System 10 requires an operating system (OS), visual data management software (VDM), pose-detection software (PDS), object-detection software (ODS), pose and object tracking software (TS), machine learning software (MLS), trigger execution software (TES), trigger management application software (TMA), data management software (DMA), communication software (CS), and user application software (UAP).
[0029] An operating system (OS), such as Linux or Windows, having a file system and access to the storage device is used in the present system's base unit to organize and access files necessary for its function. The files include model libraries, function libraries, the VDM, PDS, ODS, TS, TES, DMA and some components of the CS.
[0030] The visual data management system (VDM) usually consists of the software development kit provided by the camera manufacturer, and local programming necessary to integrate the camera into the present system. Upon integration, the camera's data can then be regulated and streamed into the high-speed processor of the present system. Additionally, depth data is retrieved through the VDM and streamed into the high-speed processor for modeling purposes.
[0031] Pose detection software (PDS) and object detection software (ODS) are within a competitive field where several development teams have provided systems, networks and libraries to locate objects in a captured camera frame. System 10 advantageously uses a competitive system by a trusted provider for both human pose and object detection, which returns reliable detections on each frame run through the PDS. This system allows one to adjust and pull data in great detail, which is then used to feed the TS and TES. As the industry matures and specialization develops, system 10 can be updated to or modified as separate systems for pose detection and object detection.
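As an illustration of this per-frame detection flow, the following sketch stands in for the unnamed third-party PDS and ODS with hypothetical placeholder classes; the record layout is an assumption for illustration, not the disclosed interface.

```python
# Sketch of the per-frame detection flow: both detectors run on one RGBD frame
# and their detection records feed the tracking (TS) and trigger (TES) stages.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Detection:
    label: str                                   # e.g. "person", "vehicle", "license_plate"
    bbox: Tuple[float, float, float, float]      # x, y, w, h within the frame
    keypoints: List[Tuple[float, float, float]] = field(default_factory=list)  # x, y, depth
    track_id: int = -1                           # filled in later by the TS

class PoseDetector:
    def detect(self, frame) -> List[Detection]:
        # Placeholder: a real PDS would return skeleton keypoints per person.
        return []

class ObjectDetector:
    def detect(self, frame) -> List[Detection]:
        # Placeholder: a real ODS would return vehicles, plates, etc.
        return []

def process_frame(frame, pds: PoseDetector, ods: ObjectDetector) -> List[Detection]:
    """Run both detectors on one frame; the combined results feed the TS and TES."""
    return pds.detect(frame) + ods.detect(frame)

print(process_frame(None, PoseDetector(), ObjectDetector()))   # -> [] with the stubs
```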
[0032] The tracking system (TS) is also hired out from a competitive industry of pose and object tracking systems. The TS of system 10 is used to correlate detections from the PDS and ODS in order to assign an ID to each detection and track the detection from frame to frame, effecting motion tracking.
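The sketch below illustrates the TS role with a simple nearest-centroid matcher; the actual tracker is a third-party component, and the distance threshold here is an arbitrary assumption.

```python
# Minimal sketch of the TS role: correlate detections across frames and give
# each one a stable ID (nearest-centroid matching, for illustration only).
import math

class SimpleTracker:
    def __init__(self, max_dist=80.0):
        self.max_dist = max_dist   # pixels; anything farther starts a new track
        self.tracks = {}           # track_id -> last centroid (x, y)
        self.next_id = 0

    @staticmethod
    def _centroid(bbox):
        x, y, w, h = bbox
        return (x + w / 2.0, y + h / 2.0)

    def update(self, bboxes):
        """Map each detection bbox (x, y, w, h) to a stable track ID."""
        assigned, new_tracks = {}, {}
        for bbox in bboxes:
            c = self._centroid(bbox)
            best_id, best_d = None, self.max_dist
            for tid, prev in self.tracks.items():
                d = math.dist(c, prev)
                if d < best_d:
                    best_id, best_d = tid, d
            if best_id is None:
                best_id, self.next_id = self.next_id, self.next_id + 1
            assigned[best_id] = bbox
            new_tracks[best_id] = c
        self.tracks = new_tracks
        return assigned

tracker = SimpleTracker()
print(tracker.update([(100, 100, 50, 120)]))   # frame 1: new track 0
print(tracker.update([(110, 105, 50, 120)]))   # frame 2: same person keeps ID 0
```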
[0033] The machine learning software (MLS) is also hired out from a competitive industry of machine learning and AI providers. The MLS is necessary to operate the PDS, ODS, TS, and TMA as part of their function and output, and it is also necessary to adapt models from the TMA.
[0034] The trigger execution system (TES) was developed specifically for the present system and is used to monitor all data provided by the above detection and tracking systems and to execute alerts and various commands when conditional logic is met. The conditional logic which results in the present system triggering and alerting the user is derived from stored models in the storage device, which are frequently updated by the DMA.
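A minimal sketch of this kind of conditional trigger logic follows; the feature names, codes, and thresholds are invented for illustration and would, in the system as described, come from the DMA-distributed trigger models.

```python
# Sketch of TES-style conditional logic: each trigger model is reduced here to
# a named predicate over per-track features; an alert is issued on a hit.
TRIGGER_MODELS = {
    "APPR": lambda f: f.get("label") == "person" and f.get("closing_speed_mps", 0) > 0.5,
    "HIDE": lambda f: f.get("label") == "person" and f.get("occluded_s", 0) > 3.0,
}

def evaluate_triggers(track_features, alert_fn):
    """Check every tracked element against every trigger model; alert on each hit."""
    fired = []
    for track_id, features in track_features.items():
        for code, predicate in TRIGGER_MODELS.items():
            if predicate(features):
                alert_fn(code, track_id)      # e.g. forwarded to the UAP via the CS
                fired.append((code, track_id))
    return fired

# Example: one person closing quickly on the protected party fires "APPR".
evaluate_triggers(
    {7: {"label": "person", "closing_speed_mps": 1.2}},
    alert_fn=lambda code, tid: print(f"ALERT {code} on track {tid}"),
)
```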
[0035] The trigger management application (TMA) was developed specifically for the present system and uses adapted versions of the PDS, ODS, and TS to create a user-friendly portal where scenes can be viewed by any user (judge), and trigger data can be created by selecting a detection, demarcated by a visual augmentation about an individual or object displaying trigger behavior as perceived by the judge, and deselected once the behavior is absent. The TMA can be used on a screen for 2-D data analysis, or on a screen in first-person perspective for 3-D data analysis, but it is preferred to use a virtual reality system that includes virtual reality goggles, a beacon, and a cursor to make a detailed call on the behavior being analyzed.
[0036] The data management system (DMA), developed specifically for system 10, mainly consists of a cloud-based file system and duplicated and secure storage across a domain provider's network. The purpose of the DMA is to manage collected data for viewing and editing through the TMA, to process the trigger data uploaded from the TMA, and to distribute adjustment commands to all unit storage devices on a frequent basis. This creates the present system's trigger uniformity.
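The following sketch illustrates, under assumed data layouts, how a painted span of an event could be turned into a trigger-model record of the kind the TMA produces.

```python
# Sketch of the TMA "painting" step: a judge selects a tracked individual when
# trigger behavior starts and again when it stops; the metrics recorded in
# between become a trigger-model entry. The data layout is an assumption.
def paint_trigger(event_frames, track_id, start_frame, end_frame, behavior_value):
    """Return a trigger-model record covering the painted span of an event."""
    span = [f for f in event_frames
            if start_frame <= f["frame"] <= end_frame and track_id in f["tracks"]]
    return {
        "track_id": track_id,
        "behavior_value": behavior_value,                    # e.g. "threat" / "non-threat"
        "metrics": [f["tracks"][track_id] for f in span],    # pose/position per frame
    }

event = [
    {"frame": 1, "tracks": {4: {"pose": "standing", "dist_m": 9.0}}},
    {"frame": 2, "tracks": {4: {"pose": "advancing", "dist_m": 7.5}}},
    {"frame": 3, "tracks": {4: {"pose": "advancing", "dist_m": 6.0}}},
]
print(paint_trigger(event, track_id=4, start_frame=2, end_frame=3, behavior_value="threat"))
```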
[0037] The communication system (CS) was developed specifically for system 10 and is used in combination with communication hardware of the communication device 18 to send and receive data to and from the high-speed processor 14 of the image capturing device 12 and the peripherals (i.e., wearables, smartphones, tablet), cellular towers, satellites and routers. System 10 uses Bluetooth, Wi-Fi, cellular, i2c, and USB3 technologies to communicate with the abovementioned hardware and cloud-based systems.
[0038] The software user application or app (UAP) was developed specifically for system 10 and is available for Android or iOS. The UAP is notified by the base unit through the CS when a trigger has been executed, when the system loses tracking, or when the device goes out of range, and it in turn initiates a sound, vibration etc. to get the attention of the user. The UAP frequently retrieves the device location data and reports it to the base unit. The UAP also allows the user to control system 10's state and to activate or deactivate tracking and can also be used to control add-ons such as lights on the vehicle, sounds, listening devices among others. These integrations provide additional functionality for the customer or user of system 10.
[0039] SYSTEM TRAINING
[0040] The system's primary function is to trigger on behaviors worthy of alerting the protected party. The worthiness of those behaviors, or “trigger behavior” or “behavior value”, is determined by any user through the trigger management application (TMA). The TMA is a portal where the user can view images, video or 3-D scenes from the DMA database using virtual reality system 60. The user can then set (i.e., assign) or remove triggers (behavior values) by moving the cursor over the behavior and selecting it using input device 50. The same algorithms that are used to determine pose and tracking are then used to record all metrics while trigger behavior is displayed, and recording stops when the user again selects the behavior. This data is saved into the trigger models library and contributes to the point at which all base units trigger from then on.
[0041] Events stored in the DMA servers are available to have triggers edited onto them via the TMA. The trigger data is then associated with modular data from the event and refined by similar models and triggers in the DMA.
[0042] Refined trigger model data, vetted by the DMA, is distributed frequently to all base units, assuring uniformity and currency. Vetting consists of simple mathematical corrections based on statistical analysis, such as averaging against stored values and eliminating outliers.
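A minimal sketch of such vetting, assuming a simple pooled-mean rule in which values beyond 1.5 standard deviations are discarded (the exact rule is not specified in the disclosure), is:

```python
# Sketch of vetting: average new trigger metrics against stored values after
# discarding outliers. The 1.5-sigma cutoff is an illustrative assumption.
from statistics import mean, pstdev

def vet(stored_values, new_values, k=1.5):
    pooled = list(stored_values) + list(new_values)
    mu, sigma = mean(pooled), pstdev(pooled)
    kept = [v for v in pooled if sigma == 0 or abs(v - mu) <= k * sigma]
    return mean(kept)

# Example: one wild outlier among the new submissions barely moves the result.
print(vet([1.0, 1.1, 0.9], [1.05, 9.0]))   # -> 1.0125
```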
[0043] “Mood sensing” is the core objective of training, because it provides the earliest possible read on the intentions of an individual. After deep learning is established over time, and thousands of edits have been made to the present system's model libraries, “mood sensing” will be achieved. Subtle behavioral nuances will be detectable by the TES, which will lead to a more diverse set of triggers that may include, but are not limited to, warnings of aggression or fear, etc.
[0044] LOCUS OF PROTECTION
[0045] One central concept of the present system establishes a locus or view that encompasses a user and his or her immediate surroundings in what is referred to as a “locus of protection” (LOP). The LOP is where the protected party is first established, and then protected by the system. This allows for a determination to be made regarding behaviors related to the position(s) and behaviors of an individual(s), or an object(s). This system architecture contrasts with a system where any specific behavior triggers it. Such a system is inferior to the present system because a police officer or other protected individuals may have the need to display behaviors that would trigger the system, thereby muting its central purpose. Additionally, behaviors deemed worthy of triggering the system will usually be directed toward an individual(s) or object(s), which requires those elements to be set apart from other individual(s) and/or object(s) in the view in TES logic.
[0046] EXEMPLARY METHOD
[0047] System 10 can be implemented and modified in various ways to accommodate a desired alert for a user. Advantageously, the method is adaptable for providing a LOP around the user or officer.
[0048] The locus of protection (LOP) is established by actions of the persons associated with the vehicle 30. Being inside the vehicle 30 places the LOP on the vehicle; being outside the vehicle and requesting tracking from the system places the LOP over the person. System 10 will track and protect the person(s) who requested tracking until the person(s) is inside the vehicle, has switched off tracking, or tracking has been lost.
[0049] Once a locus of protection has been established, the system then applies protective conditional logic with respect to the position and behavior of the persons and the elements within the LOP.
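The placement rules in the two paragraphs above amount to a small piece of conditional logic; the sketch below uses assumed state names for illustration.

```python
# Sketch of the LOP placement rules described above (names are assumptions).
from enum import Enum

class LOP(Enum):
    VEHICLE = "vehicle"   # user inside vehicle 30: protect the vehicle
    PERSON = "person"     # user outside and tracked: protect the person
    NONE = "none"         # tracking switched off or lost

def place_lop(user_inside_vehicle, tracking_requested, tracking_locked):
    """Decide where the locus of protection sits for the current frame."""
    if user_inside_vehicle:
        return LOP.VEHICLE
    if tracking_requested and tracking_locked:
        return LOP.PERSON
    return LOP.NONE

print(place_lop(user_inside_vehicle=False, tracking_requested=True, tracking_locked=True))
# -> LOP.PERSON
```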
[0050] Referring to the LOP in further detail, the image capturing device 12 identifies objects within its field of view. These objects are of prime importance for the system and establish a prime area defining the LOP. As the object moves, the image capturing device 12 tracks the object, thus creating a dynamic or moving LOP that varies as the object moves.
[0051] Further, objects within prior stored 3-D data are the focus of users when identifying behavior values which constitute trigger events. Accordingly, a user viewing prior stored 3-D image data focuses his or her attention on the objects within the prior stored image when assessing whether the image includes a trigger event. While viewing the prior stored 3-D image data, the user can tag specific objects within the stored 3-D data as trigger events or assign behavior values corresponding to a potential threat or non-threat as appropriate. As a result, when system 10 compares real-time captured 3-D image data with a library of prior stored 3-D image data, system 10 compares the objects in the real-time image data with objects in the stored data and assesses potential behavior values, or possible trigger or threat alerts, based on the behavior values or trigger events previously assigned to and associated with the prior stored 3-D image data.
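As a rough illustration of this comparison step, the sketch below matches a captured feature vector against a library of tagged examples and inherits the behavior value of the closest match; the feature vector and the distance threshold are assumptions for illustration.

```python
# Sketch: real-time tracked objects are matched against prior stored (tagged)
# examples, and the captured data inherits the stored behavior value.
import math

def match_behavior(captured_features, library, max_dist=1.0):
    """library: list of (feature_vector, behavior_value) built from tagged 3-D data."""
    best_value, best_d = None, max_dist
    for stored_features, behavior_value in library:
        d = math.dist(captured_features, stored_features)
        if d < best_d:
            best_value, best_d = behavior_value, d
    return best_value   # None means no stored match, hence no alert

# Example: a captured pose closest to a stored "threat" example inherits that value.
library = [((0.2, 1.5), "threat"), ((2.0, 0.1), "non-threat")]
print(match_behavior((0.3, 1.4), library))   # -> "threat"
```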
[0052] EXEMPLARY METHOD
[0053] In one exemplary method to highlight how the present system can be used, a police officer makes a routine traffic stop with system 10 installed on the vehicle 30 as previously described, and the police officer is wearing an output device 40, such as a personal wearable device, e.g., a smartwatch, with system 10's user application (UAP) installed and tracking switched on in the UAP.
[0054] As the patrol car (i.e., vehicle 30) comes to a stop, system 10's camera frame rate, which is slower and less abusive to system hardware while the vehicle is in motion, is increased to the maximum allowable by system resources via the VDM.
[0055] Additionally, the ODS recognizes the vehicle and license plate and records the vehicle's identifying features and the license plate number. The vehicle's size, type and position contribute to logic in the TES.
[0056] The image capturing device 12 (e.g., the base unit side of the DMA) will then store all data gathered during the stop. The data gathered is analyzed by comparing it to stored image data to determine whether any trigger events exist. If there are no trigger events during the stop, while the present system is armed, the DMA will then collect samples from the data and delete the bulk of it. The samples are then sent to the cloud side of the DMA for use in creating and perfecting pose and object models. If a trigger event does take place during the stop, all data collected is sent to the cloud side of the DMA and stored as an event. This uploading task is very large and is accomplished over time, after the traffic stop (or other armed period) through cellular and Wi-Fi communication as system resources are made available.
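A minimal sketch of this retention rule, with an assumed sampling stride, is:

```python
# Sketch of the post-stop data handling: keep sparse samples when nothing
# triggered, keep everything when a trigger event occurred.
def retain_stop_data(frames, trigger_occurred, sample_every=300):
    """Return the frames to send to the cloud side of the DMA after a stop."""
    frames = list(frames)
    if trigger_occurred:
        return frames                   # full event upload, spread over time
    return frames[::sample_every]       # sparse samples for model building; rest deleted

print(len(retain_stop_data(range(9000), trigger_occurred=False)))   # -> 30
```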
[0057] Once the officer stops behind the detained vehicle and exits the patrol car (vehicle 30), tracking is locked on the officer as he or she enters the sensor's view, because system 10 is on and tracking has previously been requested by the officer. This “hands-free” way of establishing tracking is achieved by first having system 10 armed and tracking requested on the user application prior to making a stop (for example, at the beginning of the officer's shift), then stopping the vehicle (which causes system 10 to look for doors opening and persons entering the view from the bottom), and finally the person(s) exiting the vehicle. This isolates the figures of the individuals associated with the vehicle from the other individuals in the view, which establishes the LOP over the officer in this case.
[0058] Once the system is tracking the officer, he or she can move anywhere within the view of the sensor, and even briefly out of view, and system 10 will maintain tracking on them. Tracking is further supported by frequent user device location reports to the base unit via the UAP.
[0059] While tracked by the present system, the behaviors of all elements in the view are continuously examined against the library of trigger models (i.e., previous 3-D image data with associated trigger or threat values stored in the electronic database) to determine any behavior worthy of notifying the police officer.
[0060] Such behavior which may trigger system 10, for the purpose of example, and that system 10 is not limited to, can include approaching the officer, pointing at the officer, yelling or verbal assault of the officer, hiding from the officer, running toward the officer, or walking between the vehicles involved in the traffic stop.
[0061] The behaviors listed previously, which may trigger system 10 if it is trained to trigger on sight of them, will also do so if the officer is inside the vehicle and system 10 is armed. In this case the sensor cannot see the officer, and the system therefore moves the LOP over the vehicle.
[0062] When the LOP is over the vehicle, pointing toward it, nearing it, or hiding from it will trigger system 10, if system 10 has been trained to trigger on such behaviors.
[0063] If trigger conditions are met by any non-tracked individual in the view, the officer is notified by a haptic alert on his or her smartwatch. This, along with appropriate officer training, will prompt a quick circle of awareness and justify engagement with the individual displaying trigger behavior, per the present system's training.
[0064] Specific indications can be transmitted to the wearable to notify the officer of the location of the trigger behavior and what type it is. System 10 uses a four-letter scheme at the bottom of the watch screen. When it vibrates on alert, it will display these letters, which indicate the location and type of behavior that triggered the present system. System 10 can be modified to have a more intuitive, graphical approach so that no additional training or thinking is needed for the user to comprehend the detailed information upon an alert.
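By way of illustration, a four-letter code of this kind could be composed as below; the location and behavior tables are invented for the example and are not the disclosed code set.

```python
# Sketch of a four-letter watch alert: two letters for where the behavior is
# relative to the officer, two for what it is (illustrative tables only).
LOCATION = {"front": "FR", "rear": "RE", "left": "LT", "right": "RT"}
BEHAVIOR = {"approaching": "AP", "pointing": "PT", "hiding": "HD", "running": "RN"}

def alert_code(location, behavior):
    """Compose the four letters pushed to the watch face with the haptic alert."""
    return LOCATION[location] + BEHAVIOR[behavior]

print(alert_code("rear", "approaching"))   # -> "REAP"
```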
[0065] The UAP is also responsible for accessing the user's device location and frequently reporting that information back to the base unit. This data further assists in locating and tracking the present system's users, and in regaining or establishing tracking if tracking is lost or is requested while the user is away from the vehicle.
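A minimal sketch of such a reporting loop, assuming a generic location fix and a generic transport function (both stubbed here), is shown below; the reporting period and message format are illustrative assumptions rather than the UAP's actual protocol.

import json
import time

def report_location_loop(get_fix, send_to_base, period_s=2.0, max_reports=3):
    """
    UAP-side reporting loop: read the device's location fix and forward it to the
    base unit at a fixed period. The base unit can use the most recent fix to
    re-acquire or establish tracking when needed.
    """
    for _ in range(max_reports):  # bounded here only so the example terminates
        lat, lon, accuracy_m = get_fix()
        send_to_base(json.dumps({"lat": lat, "lon": lon, "acc_m": accuracy_m, "ts": time.time()}))
        time.sleep(period_s)

# Example with a stubbed location source and transport:
report_location_loop(lambda: (30.2672, -97.7431, 5.0), print, period_s=0.0)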
[0066] SOCIAL COMPROMISE
[0067] The marketability of any AI-powered policing system is hindered by widespread negative sentiment toward the advancement of technologies that replicate human cognition. It is the fundamental intention of this system architecture to resolve this impasse by creating a social compromise. The system is intended to address a social problem that exists between the state and its people regarding the point at which the state is permitted by its people to intervene in a person's life. The present method and system, by providing protection, release of liability, and investigative support to police, also provides democratization of the point of intervention, public protection, and access to a detailed record of events.
[0068] The point of intervention is naturally determined at the moment the present system triggers. It is a line drawn by the inputs to the TMA, and it is universal due to regular updates from the DMA to all base units.
[0069] Additionally, by providing such effective protection to police officers, their anxiety about personal assault is lowered and reactive responses toward civilians are further diminished, allowing for more calculated decision making and a reliance on de-escalation tactics rather than self-defensive tactics.
[0070] USE IN AUTONOMOUS RIDE-HAILING
[0071] The present system was developed in part for the autonomous ride-hailing future that is nearly upon us. When pursuing the great business appeal of fleet driverless transportation, personal safety must be considered. Persons hiring driverless transportation services lack the personal safety that having a driver in the car naturally affords. To correct this, the present system can be used to keep watch around the vehicle.
[0072] In order to establish tracking on the person(s) who have hired the driverless ride, the present system's UAP (likely superficially modified to better serve ride-hailing) is used to create a connection to the driverless car's base unit as the car approaches the user(s). Once the connection is made, the user's device location is frequently transmitted to the base unit on the vehicle, and the system repeatedly attempts to locate and track the user(s) until tracking is locked and confirmed to the user via the UAP.
[0073] The present system uses the user's device location, via the UAP, in combination with the base unit's location to determine the direction of the user's device relative to the sensor view. The system then places tracking on person(s) in the isolated section of the view frame. This method is less effective in a crowd but is also less needed there; conversely, it is more effective, and more needed, in the absence of other people.
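A rough sketch of this direction-finding step follows, using a standard initial-bearing formula and a fixed sector width; the coordinate conventions, sector width, and function names are assumptions made for illustration and do not describe the present system's actual geometry pipeline.

import math

def bearing_deg(base_lat, base_lon, user_lat, user_lon):
    """Approximate compass bearing (degrees clockwise from north) from the base unit to the user's device."""
    lat1, lat2 = math.radians(base_lat), math.radians(user_lat)
    dlon = math.radians(user_lon - base_lon)
    x = math.sin(dlon) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

def view_sector(bearing, vehicle_heading_deg, sector_width_deg=60.0):
    """
    Isolate the angular slice of the 360-degree view in which tracking should first
    be attempted, returned as (start_deg, end_deg) in the sensor's own frame.
    """
    relative = (bearing - vehicle_heading_deg) % 360.0
    half = sector_width_deg / 2.0
    return (relative - half) % 360.0, (relative + half) % 360.0

b = bearing_deg(30.2672, -97.7431, 30.2680, -97.7420)
print(round(b, 1), view_sector(b, vehicle_heading_deg=90.0))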
[0074] Door and passage recognition is used via the ODS, in combination with position reporting from the UAP, to determine a doorway or opening where the user is likely to appear; once a pose is detected in that location, tracking is locked on that individual (or individuals). If someone approaches the door or opening from the outside of the structure, the present system will warn the user(s) via the UAP.
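The sketch below illustrates, under simplified assumptions, how door recognition and pose detections could be combined to lock tracking or issue a warning; the region representation, detection fields, and callbacks are hypothetical placeholders rather than the ODS's actual interfaces.

def watch_doorway(openings, detections, warn, lock):
    """
    Combine recognized doors/openings with pose detections: lock tracking on a pose
    appearing inside the expected opening; warn the user if someone approaches that
    opening from outside the structure.
    """
    for det in detections:
        for opening in openings:
            if inside(det["position"], opening["region"]):
                if det["came_from"] == "inside":
                    lock(det["id"])  # the expected user appearing in the doorway
                else:
                    warn(f"person approaching opening {opening['name']} from outside")

def inside(point, region):
    """Axis-aligned containment test in the sensor's ground-plane coordinates (meters)."""
    (x, y), (x0, y0, x1, y1) = point, region
    return x0 <= x <= x1 and y0 <= y <= y1

openings = [{"name": "lobby door", "region": (0.0, 4.0, 2.0, 6.0)}]
detections = [{"id": 7, "position": (1.0, 5.0), "came_from": "inside"}]
watch_doorway(openings, detections, warn=print, lock=lambda i: print("tracking locked on", i))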
[0075] Additionally, voluntary facial recognition can assist in a positive tracking lock on the user(s) as the driverless vehicle approaches and comes to a stop. Facial recognition is part of pose detection and tracking, and it is very useful in identifying system users and keeping tracking locked, and thus the LOP over them, as they move about the view. However, it is important to recognize that this system is not a "facial recognition" technology as interpreted by the public. There are no system components or logic that can identify people as they walk by the present system's sensors; that is not its purpose or function.
[0076] When user(s) are tracked by the present system as the driverless vehicle comes to a stop, they can be notified via the UAP of any dangers within the view before
approaching the vehicle. The base unit, having 360-degree IR stereoscopic depth-sensing vision and being ever watchful, provides the user with hyper situational awareness.
[0077] Any offenses toward the user or other parties, or other crimes committed in the sensor view, are also recorded in HD 3-D, and the scene can be recreated in virtual reality to assist in prosecution. [0078] Upon reaching their destination in the driverless vehicle, the user can be notified of any dangers around the vehicle before exiting it.
[0079] After the user(s) exit the vehicle, the present system will track and protect them until they move out of sensor range or until they choose to stop the protective service via the UAP.
[0080] It will now be clear that the system and method provide advantages not found in prior technology. Unique advantageous elements of the present system and method include creating a dynamic LOC around an individual by identifying and tracking objects within 3-D image data. The 3-D data captured within the LOC is stored in a database or library used for comparison with real-time captured image data to identify a threat based specifically on objects tracked within the image data. To this end, individuals viewing prior stored image data can focus attention on objects within the image data and tag or mark those objects with behavior values, i.e., assessing threat or trigger events based on viewing the tracked object within the prior stored image data.
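A minimal sketch of such tagging, assuming a simple relational layout (the present system's actual database schema is not described in this text), is shown below.

import sqlite3

# In-memory stand-in for the electronic database of prior stored 3-D image data.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE clips (
    clip_id INTEGER PRIMARY KEY,
    object_track_id INTEGER,   -- the tracked object the reviewer focused on
    behavior_value INTEGER,    -- threat/trigger value assigned by the reviewer
    reviewer TEXT
)""")

def tag_behavior(clip_id, object_track_id, behavior_value, reviewer):
    """Record a reviewer's behavior value for an object tracked within a stored 3-D clip."""
    conn.execute("INSERT OR REPLACE INTO clips VALUES (?, ?, ?, ?)",
                 (clip_id, object_track_id, behavior_value, reviewer))

tag_behavior(clip_id=42, object_track_id=7, behavior_value=3, reviewer="analyst_a")
print(conn.execute("SELECT * FROM clips").fetchall())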
[0081] Although the invention has been described above in relation to preferred
embodiments thereof, it will be understood by those skilled in the art that variations and modifications can be accomplished in these preferred embodiments without departing from the scope and spirit of the invention.

Claims

1. A system for evaluating a potential behavior from captured image data, said system comprising:
an image capturing device adapted to capture three-dimensional image data;
an electronic database, stored in computer memory, of prior stored
three-dimensional image data, said prior stored images having an associated behavior value;
a computer processor adapted to compare captured three-dimensional image data, in real-time, with the prior stored three-dimensional image data to identify a match, and said computer processor setting a potential behavior value of the captured image data to the behavior value of a respective behavior associated with the image data in the database that matches the captured image data; and
an output device that receives the potential behavior value from the computer processor, wherein the output device is observable by a user.
2. The system of claim 1, further comprising a behavior assessment input device for an individual to assign a behavior value to the prior stored image data in the database, wherein said behavior value is subsequently associated with the respective three-dimensional image data in the electronic database.
3. The system of claim 2, further comprising a virtual reality system for viewing prior stored three-dimensional image data, whereby a user can view the prior stored three-dimensional image data when evaluating a potential behavior and for assigning a behavior value to the prior stored three-dimensional image data.
4. The system of claim 1, wherein the output device is a portable, wearable device.
5. The system of claim 4, wherein the wearable device is a watch or bracelet.
6. The system of claim 1, wherein the image capturing device is disposed away from the user, capturing three-dimensional images establishing a locus of view that includes both the user and the immediate surroundings adjacent the user.
7. The system of claim 1, wherein the image capturing device is attached to a vehicle.
8. A method for evaluating a potential behavior from captured image data, said method comprising:
capturing three-dimensional image data, using an image capturing device, of a locus that includes a user and the immediate surroundings of the user;
comparing the captured three-dimensional image data, in real-time, to prior stored three-dimensional image data present in an electronic database, using a computer processor, to determine a match, wherein the prior stored image data has respective associated behavior values;
identifying, using the processor, a potential behavior in the captured
three-dimensional image data as the behavior value of the prior stored image data that matches the captured three-dimensional image data; and
alerting the user of the potential behavior.
9. The method of claim 8, further comprising assessing behavior, present in prior captured three-dimensional image data, by a user viewing the prior captured
three-dimensional image data and manually assigning a behavior value to the prior stored three-dimensional data using an input device, the behavior value being assigned to the prior stored three-dimensional image data in the electronic database.
10. The method of claim 9, wherein the assessing behavior comprises the user viewing the prior stored three-dimensional image data using a virtual reality system.
11. The method of claim 8, wherein alerting the user comprises sending a signal from the processor to an output device worn by the user.
12. The method of claim 8, wherein capturing the three-dimensional image data comprises establishing a locus of view that includes both the user and immediate surroundings adjacent the user, using an image capturing device that is disposed away from the user.
13. The method of claim 12, wherein the image capturing device is attached to a vehicle.
14. The method of claim 8, wherein capturing three-dimensional image data comprises identifying an object within the image data and tracking the object using the image capturing device.
15. The method of claim 14, wherein associated behavior values of the prior stored image data correspond to actions of objects within the image data.
16. The method of claim 15, further comprising assessing behavior, present in prior captured three-dimensional image data, by a user viewing the prior captured
three-dimensional image data and manually assigning a behavior value to objects in the image data, using an input device, the behavior value being assigned to the prior stored three-dimensional image data in the electronic database.
17. The method of claim 8, wherein capturing the three-dimensional image data comprises using the image capturing device that is disposed away from the user to identify an object within the image data and tracking the object using the image capturing device, thereby establishing a dynamic locus of view that moves with the tracked object and includes both the user and the immediate surroundings adjacent the user.
PCT/US2020/021476 2019-03-06 2020-03-06 System and method for the universal control of uniform initial judgment WO2020181225A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962814480P 2019-03-06 2019-03-06
US62/814,480 2019-03-06

Publications (1)

Publication Number Publication Date
WO2020181225A1 true WO2020181225A1 (en) 2020-09-10

Family

ID=72338410

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/021476 WO2020181225A1 (en) 2019-03-06 2020-03-06 System and method for the universal control of uniform initial judgment

Country Status (1)

Country Link
WO (1) WO2020181225A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9208678B2 (en) * 2007-01-12 2015-12-08 International Business Machines Corporation Predicting adverse behaviors of others within an environment based on a 3D captured image stream
WO2018039076A1 (en) * 2016-08-22 2018-03-01 Vantedge Group, Llc Immersive and merged reality experience / environment and data capture via virtural, agumented, and mixed reality device


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20766210

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20766210

Country of ref document: EP

Kind code of ref document: A1