US20200175289A1 - Moving Target of Interest Predictive Locating, Reporting, and Alerting - Google Patents

Moving Target of Interest Predictive Locating, Reporting, and Alerting

Info

Publication number
US20200175289A1
US20200175289A1
Authority
US
United States
Prior art keywords
moving object, computer-implemented method, data, moving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/748,165
Inventor
Joshua MAY
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aware Technologies Inc
Original Assignee
Aware Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aware Technologies Inc filed Critical Aware Technologies Inc
Priority to US16/748,165
Publication of US20200175289A1
Priority to US17/180,787 (published as US11876558B2)
Legal status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G06K9/00825
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 Interaction between the driver and the control system
    • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06K9/6288
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 Interaction between the driver and the control system
    • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/143 Alarm means
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 Interaction between the driver and the control system
    • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146 Display means
    • G06K2209/01
    • G06K2209/15
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625 License plates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition

Definitions

  • the present technology pertains in general to objects of many types and, more specifically, to moving object determination, reporting, predicting, and/or alerting.
  • the present disclosure provides various embodiments of systems and methods for moving object predictive locating, reporting, and alerting, described herein.
  • the present technology may be embodied as an application (i.e., an “app”) where the application can project the potential current and future paths and locations of objects and notify individuals and other systems that may be impacted, or that are interested in obtaining this information, as detailed further herein.
  • an example method includes receiving moving object data corresponding to a moving object; receiving sensor data from a sensor; and merging the received moving object data and the received sensor data into a set of merged data.
  • the example method further includes based on the set of merged data, automatically determining one or more of a predicted location or range of locations for the moving object, a potential path of travel for the moving object, an alert concerning the moving object, and providing the alert.
  • the automatically determining may be further based on one or more historical traits concerning the object, and the geographic medium the object is moving through.
  • the geographic medium may include one or more of terrain, air, water, and space.
  • the object may be a soldier, vehicle, or drone. As described further below, the object may also be a ballistic projectile.
  • the method includes identifying a specific type for the moving object; receiving historical traits and trends associated with the moving object, including statistical movement characteristics of the object, the statistical movement characteristics including acceleration and speed ability of the identified object; and adjusting the maximum acceleration, maximum speed, and maximum reachable range for the object as a function of geographic mediums that the object will move through in time over the projected course and trajectory of the object.
  • the method considers average speeds at which a particular object may traverse a particular geographic medium.
  • the present technology is a system that provides a service where inputs are received from third parties and outputs are provided to those or to other third parties.
  • Inputs could include all types of sensor data pertaining to users and moving objects and outputs provided could be a merged data set as well as additional information generated by the system pertaining to approximation and estimation of future location, proximity, trajectory and routing.
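  • As an illustration of the data flow just described, the following is a minimal sketch, in Python, of the merge-and-predict pipeline; the record fields, function names, and the deliberately naive constant-speed projection are illustrative assumptions, not the disclosed implementation.

    import math
    from dataclasses import dataclass

    @dataclass
    class Report:
        lat: float          # reported latitude of the moving object
        lon: float          # reported longitude of the moving object
        speed_mph: float    # estimated speed
        heading_deg: float  # direction of travel, 0 = north
        object_type: str    # e.g., "vehicle", "drone", "pedestrian"

    def merge(report: Report, sensor_fix: dict) -> dict:
        """Merge a user report with sensor data (e.g., a GPS fix from the
        reporting device) into a single set of merged data."""
        merged = {"object_type": report.object_type,
                  "lat": report.lat, "lon": report.lon,
                  "speed_mph": report.speed_mph,
                  "heading_deg": report.heading_deg}
        merged.update(sensor_fix)  # sensor data overrides where it overlaps
        return merged

    def predict_position(merged: dict, minutes_ahead: float) -> tuple:
        """Project a straight-line future position: a simple stand-in for
        the richer path prediction described herein."""
        miles = merged["speed_mph"] * minutes_ahead / 60.0
        deg = miles / 69.0  # ~69 miles per degree of latitude
        heading = math.radians(merged["heading_deg"])
        dlat = deg * math.cos(heading)
        dlon = deg * math.sin(heading) / max(math.cos(math.radians(merged["lat"])), 1e-6)
        return merged["lat"] + dlat, merged["lon"] + dlon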
  • FIG. 1 illustrates an example plan view of a freeway with a moving vehicle type hazard and other vehicles, according to an example embodiment.
  • FIG. 2 illustrates a plan view of an area that includes a freeway along with a street map/grid with adjacent roads and freeway portions, according to an example embodiment.
  • FIG. 3 illustrates an example entry screen for an application for a method operating on a mobile device, according to various embodiments.
  • FIG. 4 illustrates an example screen that can be presented to a user in response to determining an unsafe driver is located, moving, and ahead of the user, according to an example embodiment.
  • FIG. 5 illustrates an example screen that can be presented to a user in response to determining that an unsafe driver in a subject vehicle is located, moving, and is presently behind the user or driver, according to an example embodiment.
  • FIG. 6 illustrates an example screen presented to a user in response to determining that a user is approaching a moving object, according to an example embodiment.
  • FIG. 7 illustrates an example screen presented to a user to route around a potential object in response to determining that the potential object was in the route the user had been taking, according to an example embodiment.
  • FIG. 8 illustrates a block diagram of an operational flow chart according to some embodiments.
  • FIG. 9 illustrates example entry screens for an application, according to other embodiments.
  • FIG. 10 illustrates an example screen providing a user interface to record a description for reporting a moving object, according to various embodiments.
  • FIG. 11 illustrates an example screen providing a user interface for a text input method for reporting a moving object, according to various embodiments.
  • FIG. 12 illustrates three example screens for alerts provided to the user overlaid on a map, according to an example embodiment.
  • FIG. 13 illustrates two more example screens for alerts provided to the user overlaid on a map, according to an example embodiment.
  • FIG. 14 illustrates two additional example screens for alerts provided to the user overlaid on a map, according to an example embodiment.
  • FIG. 15 illustrates two example screens and for providing amber alerts notification and information to a user, according to an example embodiment.
  • FIG. 16 is a block diagram of an operational flow chart of a method, according to an example embodiment.
  • FIG. 17 is a simplified block diagram of a computing system, according to some embodiments.
  • FIG. 18 illustrates another example embodiment showing the moving object polygon and some information regarding the moving object and report of same, according to various embodiments.
  • FIG. 19 illustrates another example embodiment of a screen in FIG. 9 for reporting and viewing hazards and other objects.
  • FIG. 20 illustrates another example embodiment of a screen in FIG. 9 for selecting voice recording or manual entering of hazard reports.
  • FIG. 21 illustrates another example embodiment of the screens in FIG. 9 for audio reporting of hazards and other objects.
  • FIG. 22 illustrates an additional simpler example embodiment of a screen for audio reporting of hazards.
  • FIG. 23 illustrates an example screen providing a user interface for selection of an icon for reporting characteristics of a moving object, according to various embodiments.
  • FIG. 24 illustrates another example embodiment of a screen in FIG. 11 for providing identification information about a moving object.
  • FIG. 25 illustrates another example embodiment of a screen for providing more information about a moving object.
  • FIG. 26 illustrates another example embodiment of a screen in FIG. 23 for selecting characteristics of the moving object.
  • FIG. 27 illustrates an example embodiment of a screen for displaying status information about a moving object.
  • FIG. 28 illustrates an example embodiment of a screen for displaying identifying information about a moving object (in this case a road hazard driving aggressively) and enabling viewing of a map or list or doing an update concerning the moving hazard.
  • Tracking and alerting drivers to traffic accidents, road construction, police presence, and stationary, fixed objects are used to guide drivers to the most time-efficient route to their destination. Moving hazards present particularly difficult problems for tracking and alerting.
  • the present technology is not limited to the objects being hazards; other objects may be moving and processed by the present technology.
  • although some examples herein are described with respect to moving hazards that a driver could encounter on the road, the present technology is not limited to that type of moving hazard.
  • the objects that might be encountered include but are not limited to drunk or otherwise impaired drivers, texting drivers, kidnappers of missing children (e.g., “Amber” alert), battlefield objects, hiking encounters (e.g. predatory animals), in flight obstacles and hazards, maritime obstacles and hazards, and underwater dangers, to name just several possibilities.
  • the present technology in some embodiments may be embodied as an application where the application can project the potential current and future paths and locations of objects, such as an individual, a group of individuals, a vehicle, or a group of vehicles (to name some non-limiting examples), and notify individuals and other systems that may be impacted, or that are interested in obtaining this information.
  • the present technology can be utilized via a web browser, a corporate network server, the cloud, etc.
  • the present technology can also be configured to provide graphics in three dimensions and/or configured for virtual reality or augmented reality interaction.
  • FIG. 1 is an example plan view of an area 100 that includes a freeway 19 with a moving object, in this example a moving vehicle-type hazard 30.
  • FIG. 1 also shows on freeway 19 other vehicles in the vicinity of the hazard identified according to an example embodiment.
  • the freeway 19 is a divided roadway with three lanes 16 of traffic in each direction of travel and a shoulder 17 .
  • a primary user 20 observes a subject vehicle 30 which may be variously a slow vehicle, a weaving vehicle, a wrong way vehicle, a vehicle driven on the shoulder, etc.
  • the subject vehicle 30 can potentially cause harm to the primary user 20 and potentially to other vehicles.
  • indicia for drunk drivers may include: quick accelerations or decelerations; swerving, weaving, or zig-zagging across the road; stopping or turning abruptly or illegally; nearly hitting an object like a curb or another vehicle; drifting in and out of lanes or straddling multiple lanes; driving more than 10 miles per hour below the speed limit; driving with headlights off at night; and driving on the wrong side of the road or anywhere that is not on the road; to name several non-limiting examples.
  • Such indicia are weighted in determining the priority of a moving object that may be a potential hazard.
  • determining that the subject vehicle nearly hits an object on the road may be higher priority and hence given more weighting than quick acceleration.
  • determining that the subject vehicle is driving on the wrong side of the road is much higher priority, and is thus weighted more heavily, than determining that the subject vehicle is quickly accelerating.
  • if the subject vehicle is driving on the wrong side of the road and is driving on the same side of the road as the user, it will be given much higher priority and thus more weight than if the user were driving on the opposite side of the road from the subject vehicle.
  • the number of confirming reports from additional users or sensors can increase the priority and weight for determining the potential hazard and potential interaction with the hazard.
  • if a moving object is a hazard and is identified and reported by multiple observers or sensors, it can then be confirmed with increasing likelihood and treated as a high-level priority. For example, if ten users of the technology report a wrong-way driver within ten minutes within a twenty-mile stretch of road, then the priority/weighting of the hazard could increase.
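  • As one illustration of this weighting, the sketch below scores a report from hypothetical indicia weights, a confirming-report count, and relative position; the specific weight values and the linear confirmation boost are assumptions chosen only for illustration.

    # Hypothetical indicia weights; the disclosure does not specify values.
    INDICIA_WEIGHTS = {
        "quick_acceleration": 1.0,
        "near_miss": 3.0,   # nearly hitting a curb or another vehicle
        "wrong_way": 5.0,   # highest-priority indicium
    }

    def hazard_priority(indicia, confirmations, same_side_as_user):
        """Combine weighted indicia, confirming reports, and relative
        position into a single priority score."""
        score = sum(INDICIA_WEIGHTS.get(i, 0.5) for i in indicia)
        score *= 1.0 + 0.2 * confirmations  # each confirming report raises priority
        if "wrong_way" in indicia and same_side_as_user:
            score *= 2.0  # wrong-way on the user's own side weighs far more
        return score

    # e.g., ten confirming reports of a wrong-way driver on the user's side:
    print(hazard_priority({"wrong_way"}, confirmations=10, same_side_as_user=True))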
  • the driver 21 and the driver 22 are also in imminent danger from the subject vehicle 30 , but other vehicles in the vicinity may not be. It should be appreciated that objects such as vehicles shown in FIG. 1 are identified by their occupant driver; however, certain vehicles could be autonomous self-driving vehicles having no or some passengers therein.
  • the motorcycle 23 may be far enough behind the subject vehicle 30 that the rider of motorcycle 23 should be able to presently observe the subject vehicle 30 and not be in immediate peril, but might be imperiled in the future.
  • Driver 24 and driver 25 are traveling in an opposite direction and should not be affected by the unsafe actions of the subject vehicle 30 , unless on an undivided roadway, or the subject vehicle 30 changed directions and veered over the divider 18 , for example.
  • Driver 22 is ahead of the subject vehicle 30 and may not be aware of the subject vehicle 30 , but if the subject vehicle 30 was speeding, an accident could still occur between the subject vehicle 30 and driver 22 .
  • FIG. 2 shows a plan view of an area 200 that includes the freeway 19 along with a street map/grid with adjacent roads and freeway portions, according to an example embodiment.
  • the subject vehicle 30 has been observed by the primary observer 20 .
  • the primary observer 20 would activate the application that runs in accordance with various embodiments of the methods of the present technology.
  • the primary observer can then enter or select relevant information about the subject vehicle 30 such as, but not limited to, the speed, direction of travel and type of subject vehicle 30 and optionally the license plate of the subject vehicle 30 .
  • the method can notify other drivers that could be affected by the object, i.e., subject vehicle 30 in this example.
  • the notification can be based upon various factors which may include the proximity to the subject vehicle 30 and the type or categorization of subject vehicle 30 .
  • the direction of travel of the subject vehicle 30 is approaching an off-ramp 41 and an interchange 40 .
  • a moving visual representation of this is displayed on the screen for the user, e.g., the primary observer 20 or others.
  • the representation and various embodiments may run variously on a mobile device, a device built into the vehicle, or some Internet of Things (IoT) device, etc.
  • various trajectory information can be updated.
  • the speed may be determined by various means, including but not limited to entry or selection by a user, or sensor data where the sensor may be on a vehicle, on the mobile device, or some other device.
  • all known routes that could potentially be traversed by the subject vehicle 30 and the user (the primary observer 20, for instance) can be highlighted on the display; the color and appearance can be set based upon a selectable range and specification of the notifying application.
  • driver 22 is more distant from the object (e.g., subject vehicle 30 ), but a potential for danger from this object still exists.
  • Various embodiments consider the speed, routes, direction, and proximity among the various factors. In addition, another factor would be whether the subject vehicle 30 was driven by a drunk driver or a slow driver. In this example, the potential of affecting driver 22 may be very remote based on the distance from the subject vehicle 30.
  • subject vehicles or other objects can be monitored by more than one sensor, sensor location, or reporting observer, and hazardous conditions and changes in subject trajectory and speed can be reported and updated by the system, providing updates to the alerted users, newly alerted users, and subscribers to the system.
  • the primary observer 20 can enter a vehicle license plate or the primary observer 20 's vehicle or device may be equipped with sensor(s) such as, but not limited to, a camera that could detect the subject vehicle 30 's license plate, in addition to other automated ways of identifying subject vehicle 30 's unique and semi-unique characteristics.
  • various embodiments utilize the vehicle license plate to obtain data which would provide a potential destination, used to map the subject vehicle 30 to the subject vehicle 30's home or work 50.
  • based on the projected travel path from the subject vehicle 30 to the home or work 50 of the subject vehicle 30, if known (from vehicle license plate databases, etc.), and based on other methods of determining subject vehicle 30's route or destination, vehicles that are less likely to be affected by this projected travel path, such as vehicles 24 and 26, might not be notified (alerted), since the method can determine that these vehicles are likely not to be affected by the subject vehicle 30.
  • if the home or work location 54, or other destination or route, of the driver 24 is known, the potential of interaction between driver 24 and the subject vehicle 30 may be unlikely because driver 24 will soon exit the freeway as he or she progresses home.
  • some embodiments can reduce excessive false warnings, e.g., by filtering based on the likelihood of an interaction in the foreseeable future.
  • various embodiments allow the driver 28 (or other sensors able to identify subject vehicle 30) to update system data regarding the subject vehicle 30, e.g., regarding location. For example, based upon an updated location of the subject vehicle 30 being just past the driver 28, various embodiments determine that the need to notify driver 22 (ahead in the travel path) increases, while it may also be determined that the risk to drivers 23, 24 and 26 is virtually eliminated. The probabilities associated with the risk may be calculated by various embodiments based on the factors mentioned above and below.
  • the method may be configured to run as an application (“app”) on a device.
  • although apps and various other aspects may be described in examples with reference to a mobile device, other devices, including those built into the vehicle or other object, or IoT devices of some kind (to name a few non-limiting examples), may also be used to practice the present technology according to some embodiments.
  • FIG. 3 illustrates an example screen 300 that shows a graphical user interface (GUI) provided by an application, according to various embodiments.
  • the screen 300 is an example only, provided for reference as a simplified display that enables a user to quickly enter information (e.g., to avoid the user being overly distracted from the road).
  • the information may be entered on a mobile device where, using available sensors on the mobile device (and/or proximity to base stations or other known objects), the present location, direction of travel, and speed of the mobile device are known. This use of available data, apart from user entry/selection, can substantially reduce the information that a user needs to enter and/or select.
  • the entry may comprise selecting information, such as selecting an icon such as “speeding” 62 .
  • the entries can be aggregated or merged with existing mapping, Global Positioning System (GPS), and other information, including information from other sensors the vehicle or mobile device may have.
  • the application for performing the method according to various embodiments can allow an observer to notify others that a hazardous vehicle is in the area or may be approaching, putting other people in danger. In one example, the observer can simply open the application and, with a few selections, enter relevant information into the application, as will be explained further below.
  • the primary observer 20 can select the subject vehicle type 61 of subject vehicle 30.
  • the speeding 62 type may be selected.
  • other types shown in the example GUI include drunk, wrong way, slow, weaving, and along shoulder. Several of the types may be exhibited by a drunk driver.
  • the screen 300 can have a slider for indicating the relative speed 65. As the slider is moved in response to user action, the speed display 66 can show the perceived or relative speed (“+25” in this example). The user action may be a touch, a spoken command, or the like. Based upon this information and other acquired information or determinations, the method for the application can predict the direction of travel (e.g., relative to the primary observer 20 or others) and when the subject vehicle 30 may approach other drivers, off-ramps, and freeway interchanges.
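  • As a small worked example of how the relative-speed entry can feed the prediction, the sketch below assumes (purely for illustration) that the reporting device's sensed speed and the slider offset combine additively.

    def estimate_subject_speed(observer_speed_mph: float,
                               slider_offset_mph: float) -> float:
        """Estimate the subject vehicle's absolute speed from the observer's
        sensed speed plus the slider's relative offset (e.g., "+25")."""
        return observer_speed_mph + slider_offset_mph

    def minutes_until_point(distance_miles: float, speed_mph: float) -> float:
        """Time for the subject vehicle to reach a point of interest,
        such as an off-ramp or another driver's position."""
        return 60.0 * distance_miles / max(speed_mph, 1e-6)

    # An observer doing 65 mph who reports "+25" implies roughly 90 mph,
    # reaching an interchange 6 miles ahead in about 4 minutes.
    speed = estimate_subject_speed(65.0, 25.0)
    print(speed, minutes_until_point(6.0, speed))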
  • the example application screen 300 may also include an interface for the user to provide vehicle license plate information, e.g., “license” 67 .
  • the example screen 300 shows an optional location for entry 69 of the license plate information 68 of the subject vehicle 30 .
  • various embodiments will display a keyboard which the user can use to enter the letters, numbers, and symbols for the license plate, as well as provide the ability to utilize a device's camera and optical character recognition (OCR) technology to obtain license plate data.
  • this license plate information can be compared against an accessible state department of motor vehicles database to obtain the owner, color, make, and model of the subject vehicle 30.
  • the home address, work address, or route may be obtained also by the system to help determine a possible predicted destination for the subject vehicle 30 .
  • the home address, work address, route of subject vehicle 30 or other information obtained based on the license plate information and other information, including but not limited to facial recognition, may be used to also obtain a business address, and addresses of relatives associated with the subject vehicle 30 or other information useful in determining route or future destination. This information may be referenced to predict potential destinations where the subject vehicle 30 may be heading.
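  • A minimal sketch of this destination lookup follows; the records table is hypothetical (no real department of motor vehicles interface is implied), and the returned candidate destinations are illustrative only.

    # Hypothetical records table; a deployed system would query an
    # authorized motor vehicle or commercial database, not modeled here.
    PLATE_RECORDS = {
        "7ABC123": {"owner": "J. Doe", "make": "sedan",
                    "home": (37.77, -122.42), "work": (37.39, -122.08)},
    }

    def candidate_destinations(plate: str) -> list:
        """Return possible destinations (home, work, etc.) keyed off a
        license plate, for weighting the projected routes."""
        record = PLATE_RECORDS.get(plate)
        if record is None:
            return []  # unknown plate: fall back to purely kinematic prediction
        return [record[key] for key in ("home", "work") if key in record]

    print(candidate_destinations("7ABC123"))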
  • the sensor used may be a camera.
  • the method can receive a photo of the subject vehicle 30 taken by the primary observer 20 using a mobile device 60 's camera(s) or using a camera in the vehicle to capture an image or video of the vehicle license plate number of the subject vehicle 30 , and the GUI can provide a portion to support image capture.
  • other descriptors for the subject vehicle can be obtained by sensor(s) including, but not limited to, damage, dents, level of cleanliness, stickers, and other aspects that could alone or together with other aspects, be helpful to identify the subject vehicle 30 .
  • various embodiments can determine the proximity and potential of another user encountering the subject vehicle 30 .
  • other drivers can be set up to receive alerts (akin to an Amber alert or tornado warning alert) based on inputs from the primary observer 20 and/or remote sensors, and determinations made therefrom and from sensor data, to alert the other drivers of particulars about the subject vehicle 30 and the respective likelihood of an encounter for that particular other driver.
  • Various embodiments are particularly helpful if the subject vehicle 30 approaches from behind a vehicle such that the vehicle or driver may not be as cognizant of the moving object.
  • Other drivers can also use the application, and autonomous sensors can confirm presence of and potential for an encounter with the object and provide the system with additional data with which to update information related to the subject vehicle 30 (hazard type of moving object).
  • FIG. 4 shows an example screen 400 , for a device, presented to a user in response to determining that a moving object (hazard) is observed at point 22 and is presently ahead of the user.
  • Various embodiments may present the hazardous object with a moving visual representation of its predicted location along a route between the hazardous object and an alerted user that will be visible within the application. The method may achieve this based on calculating the relative speed/trajectory information by determining a distance and relative position which may be presented at 92 and updated.
  • the example screen 400 can show a series of distance areas 70 , in the form of concentric circles or polygons that become increasingly intense based upon the proximity to the object.
  • a polygon effect is used which equates to a reachable range in any direction over time based on the deduced capabilities of the identified object.
  • the distance areas 70 may be shown in red, or could be a different color based upon the type of object and the weighting of any potential interaction. The color could be customizable by a user or alert provider (i.e., service subscribing entity) depending on their preference.
  • the example screen 400 also provides a prompt 93 to allow for user input to update information on the object that is relayed to the system and can be relayed to other application users and entities receiving data from the service.
  • FIG. 5 shows an example screen 500 , for a device, presented to a user in response to determining a moving object (hazard) is observed at point 22 , and is presently to the east of the user or driver 20 .
  • the screen 500 shows a series of distance areas that become increasingly intense based upon the proximity to the object.
  • the distance and position of the moving object that is a potential moving hazard may be presented at 92 , shown relative to the position of the device that is running the application in accordance with various embodiments.
  • the areas 70 may be shown in red for emphasis, but could be a different color based upon the type of object (i.e., type of hazard) and weighting of any potential interaction, with the color customizable by the user or third party service provider.
  • a drunk driver may be deemed to be a greater hazard than a slow-moving vehicle. Both moving objects have a potential for danger and unpredictability, but the potential for the drunk driver to weave, get into an accident, and cause injury can be far greater.
  • the screen 500 also provides a prompt 93 to allow for user input to update information on the object that can be relayed to other application users and entities receiving data from the service.
  • FIG. 6 shows an example screen 600 presented to a user in response to determining that a user (e.g., driver 21 ) is approaching a moving hazard type of object.
  • the application collects reports received from multiple drivers and integrates the information to update proximate drivers determined to be potentially affected and also to cancel projected hazard trajectories for users that are determined to be no longer affected. As other drivers add to the information, some projected routes can be cancelled in response to determining that the subject vehicle has changed from one road to another; e.g., changed freeways or exited a freeway.
  • the warnings generated for providing to other drivers are localized for the subject vehicle's proximity to the hazard and are determined as a function of characteristics of the hazard, e.g., a slow, weaving drunk driver, a wrong way driver, a driver on the shoulder of the road in stopped traffic, motorcycle, fire truck, ambulance or a speeder weaving around drivers to maintain a higher rate of speed, to name just a few examples.
  • Screen 600 in the example in FIG. 6 shows multiple example moving potential hazards 30 - 33 in proximity to the driver 21 .
  • each hazard 30 - 33 type is moving and is identified with a different color code based upon the potential severity of the hazard.
  • the driver 21 is shown approaching a hazard 30 that is along the guided trajectory generated according to the method in various embodiments, because the method determined that the driver 21 needs to pass through the interchange 40 based on the route driver 21 is taking.
  • Screen 600 provides an alert 95 that shows the hazard type and provides instruction to advise the best lane to follow to avoid the hazard based upon information provided by other users.
  • the alert 95 in this example has a prompt to allow the user to tap for more information.
  • the method determined that some of the moving potential hazards 31 - 33 are not in the determined travel path of the driver 21. Although these moving potential hazards 31 - 33 may have minimal or no effect on the driver 21, the driver (or passenger) may be able to observe the hazards 31 - 33, so various embodiments allow the user to update or append information related to the hazards.
  • Screen 600 also shows trip information 96 which may include the estimated time that the driver will arrive at their destination, the number of miles, and the amount of travel time. In various embodiments, the determination of the user's destination will reduce updates on other moving potential hazards 31 - 33 because it can be determined that those potential hazards are unlikely to affect the travel trajectory that has been determined for the driver 21.
  • Screen 600 also shows at 94, in this example, the upcoming distance to a turn and identifies the turn along the projected route of travel.
  • the determination of the travel trajectory can be based on the route and destination for the user.
  • the destination can be determined variously, for example, based on navigation system communication or user input/selection, or predicted based on historical data that can show a pattern of having certain destinations at the time and place of the trip, based on past destinations for the route taken, or based on the user's calendar or other information the user has provided, with different weights given to the information depending on its source, e.g., less weight to destinations based on distant history and more weight to destinations based on the venue for an event on the user's calendar accessible from the mobile device, for instance.
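  • The sketch below illustrates one way such source-dependent weighting could be combined; the evidence sources and weight values are assumptions chosen for illustration, not values from the disclosure.

    from collections import defaultdict

    # Hypothetical per-source weights: a calendar venue outweighs distant history.
    SOURCE_WEIGHTS = {"user_input": 4.0, "calendar_event": 3.0,
                      "recent_history": 2.0, "distant_history": 0.5}

    def rank_destinations(evidence):
        """evidence: iterable of (destination, source) pairs.
        Returns destinations ranked by total weighted support."""
        scores = defaultdict(float)
        for destination, source in evidence:
            scores[destination] += SOURCE_WEIGHTS.get(source, 1.0)
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    print(rank_destinations([("stadium", "calendar_event"),
                             ("office", "recent_history"),
                             ("office", "distant_history")]))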
  • the form of movement needs to be matched with the polygon representing calculated potential locations within a determined timeframe, based on the mobility physics of the object.
  • a person can physically run at a measured maximum speed of nearly 28 MPH (e.g., Usain Bolt, the world's fastest runner, in the 100-meter sprint).
  • the average vehicle has a maximum speed of between 100 and 120 miles per hour.
  • the method calculates the average maximum speed of an identified object/person/vehicle, along with the conditions of the object's terrain and the mobility and speed limits that terrain imposes.
  • the algorithm can also adjust estimations in real-time based on all possible locations where the identified object/person/vehicle could have been. This can create a level of awareness for all users that will allow for appropriate preparation for a possible encounter.
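  • To make the reachable-range polygon concrete, here is a minimal sketch that sweeps headings around the last known position and caps travel by an assumed maximum speed and terrain factor; the constant-speed model and the terrain multipliers are illustrative assumptions.

    import math

    # Hypothetical terrain multipliers applied to top speed (1.0 = open road).
    TERRAIN_FACTOR = {"road": 1.0, "dirt": 0.5, "forest": 0.3, "water": 0.1}

    def reachable_polygon(lat, lon, max_speed_mph, minutes, terrain="road", n=16):
        """Return n vertices approximating the set of positions the object
        could reach in `minutes`: the alert polygon drawn on the map."""
        radius_miles = max_speed_mph * TERRAIN_FACTOR[terrain] * minutes / 60.0
        deg = radius_miles / 69.0  # ~69 miles per degree of latitude
        vertices = []
        for k in range(n):
            theta = 2.0 * math.pi * k / n
            vertices.append((lat + deg * math.cos(theta),
                             lon + deg * math.sin(theta)
                             / max(math.cos(math.radians(lat)), 1e-6)))
        return vertices

    # A runner (~28 mph maximum) ten minutes after a report, moving through forest:
    poly = reachable_polygon(37.77, -122.42, 28.0, 10.0, terrain="forest")
    print(len(poly), poly[0])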
  • FIG. 7 shows an example screen 700 presented to a user to route around a potential hazard (in response to determining that the potential hazard was in the route the user had been taking).
  • the system is configured to provide a database (or other data structure) that can be quickly cycled through to identify a specific object type with associated predictive historical traits/trends, including statistical movement choices in space as well as known acceleration and speed ability of the identified object, e.g., to provide the ultimate alert/warning system.
  • the app can take into consideration, through the app's predictive algorithm and geographic system, the medium an object will move through in time, including air/water/space, etc., and adjust the maximum acceleration and top speed of the maximum reachable range according to the known substance being traversed by the object.
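  • One way to organize such a quickly searchable trait table is sketched below; every entry and number is a placeholder assumption standing in for whatever historical trait data a deployed system would hold.

    # Hypothetical movement-trait table keyed by object type; values are
    # placeholders, not data from the disclosure.
    OBJECT_TRAITS = {
        "pedestrian": {"max_speed_mph": 28.0,  "accel_mph_per_s": 8.0},
        "sedan":      {"max_speed_mph": 120.0, "accel_mph_per_s": 6.0},
        "drone":      {"max_speed_mph": 45.0,  "accel_mph_per_s": 15.0},
    }

    # Hypothetical per-medium adjustments applied to the base traits.
    MEDIUM_FACTOR = {"land": 1.0, "air": 1.0, "water": 0.15}

    def effective_traits(object_type: str, medium: str) -> dict:
        """Look up base traits and scale them by the medium being
        traversed, per the adjustment described above."""
        base = OBJECT_TRAITS[object_type]
        factor = MEDIUM_FACTOR[medium]
        return {"max_speed_mph": base["max_speed_mph"] * factor,
                "accel_mph_per_s": base["accel_mph_per_s"] * factor}

    print(effective_traits("pedestrian", "water"))  # a swimmer, far slower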
  • the application uses traffic patterns and travel times on each street, highway, or road and, in addition, tracks moving hazards and moving potential hazards to alert to the potential hazard type and instructions 95 in FIG. 7 and trip information 96. All known routes of the potential hazard can be highlighted to further alert the user. While a recent accident may not generate a present delay in traffic speeds, a moving hazard can present different challenges, so in various embodiments a driver 21 that is approaching a hazard 32 along their route will be re-routed or, at a minimum, be alerted before they encounter the hazard 32. On screen 700, the hazard is identified by a drink icon; other appropriate icon warnings may be used to identify various hazards.
  • FIG. 8 illustrates a block diagram of an operational flow chart for a method where the hazard is a moving vehicle on a roadway, according to some embodiments.
  • a user selection of the application is received at 80 .
  • a warnings-only mode can be provided, whereby the application can run in the background, alerts can be sent and received as notifications, alerts can be sent and received as text messages, or alerts can be sent and received within other applications, providing warnings as determined according to various embodiments described herein.
  • Upon initialization of the program, or within application settings, options may be presented to the user for selecting the sensitivity for notifications, e.g., selecting various setpoints.
  • the user has the option to only be notified when a speeder is approaching from the rear at a relatively close distance as determined by various embodiments.
  • the methods and system in various embodiments can also provide the option for the user to select a high level of user notification and specify a distance such that a notification will be generated and sent to the user for any reported moving objects of concern such as, for instance, any reported hazards within that distance of miles (e.g., where the user is driving on an unfamiliar road at night with potential for various unseen objects).
  • such settings may be determined by a third party subscriber to the system that is providing data to their users and customers.
  • the application may be provided to the user on the mobile device or preinstalled on the user's mobile device or within the vehicle, e.g., by an insurance company or the vehicle manufacturer or vehicle owner (other than the driver) who owns or otherwise controls the software for the application.
  • the insurance company may want the application settings to have broader setpoints and to have the application prevent the driver from adjusting those predetermined setpoints.
  • the application can provide an information entry screen for display to the user, at 81 in FIG. 8 .
  • the items on the screen that the user can adjust may vary depending on the options selected.
  • Screen 300 in FIG. 3 shows an example of an information entry screen.
  • the application allows the user to enter certain data on the information entry screen.
  • the application can receive data entered by the user at 82 as described herein in various embodiments. While not required, a license plate of the moving object (e.g., of subject vehicle 30 in FIG. 2 ), for example, can be manually entered at 83 by the user.
  • a sensor may detect the license plate and software, locally or in the cloud, could perform character recognition to ascertain the license plate number.
  • although a camera is mentioned at 84 as detecting the license plate, the sensor is not limited to a camera; other suitable sensors may be used that are capable of obtaining an image of the license plate or of determining the license plate number.
  • the application can allow a driver (e.g., the primary observer 20 in FIG. 2 ) to take a photo or video of the moving subject vehicle.
  • images can be taken by remote sensors, dashcams, and the like that are networked and connected to the service.
  • the data can be merged, at 85 in FIG. 8 , with GPS information from the mobile device or vehicle.
  • the mobile device can determine location, direction and speed data for the mobile device and the primary observer 20 in FIG. 2 in the same vehicle, using various sensors.
  • the data can be merged with mapping programs or applications locally or in the cloud.
  • the various mergings provide capability that would otherwise not be available to the mapping programs, applications, users, or third party subscribers to the data.
  • the data can be used, locally or in the cloud, to calculate future path(s) of the subject vehicle 30 in FIG. 2 , as described further herein in various embodiments.
  • the relative positions between the subject vehicle 30 in FIG. 2 and other drivers are determined according to various embodiments. Based at least in part on the distance, potential travel routes, and travel speed of the subject vehicle 30 in FIG. 2 , as described in examples herein, the potential for interaction between the drivers and the subject vehicle 30 is calculated, locally or in the cloud, at 89 in FIG. 8 . In some embodiments, other users are provided with an image of the subject vehicle 30 which will reduce the amount of input required from those users and allow for a quicker identification and verification of the subject vehicle.
  • the application will then compare, locally or in the cloud, the determined potential for interaction with a threshold set for notification to determine the level of notification to provide to the particular user.
  • in response to the potential for interaction being determined to be above the set threshold, at 91, an alert (warning) can be generated for delivery to a subject user, e.g., displayed on the user's mobile device.
  • the application will continue to monitor the moving objects (subject vehicle 30 shown in the examples in FIG. 1 and FIG. 2 , and other moving hazards determined to have potential for interaction along the user's route, for example) and compare the potential interaction with the threshold to determine if the alert should continue to be displayed, additional alerts sent, or if the potential for interaction with the object is no longer present.
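  • A compact sketch of this compare-and-alert loop around steps 89 - 91 follows; the interaction score, the threshold value, and the polling structure are illustrative assumptions, with get_tracked_objects and notify standing in for system-supplied callables.

    import time

    ALERT_THRESHOLD = 0.6  # hypothetical setpoint; user- or subscriber-adjustable

    def interaction_potential(distance_miles, shares_route, hazard_weight):
        """Toy score in [0, 1]: nearer, route-sharing, heavily weighted
        hazards score higher (a stand-in for the disclosed calculation)."""
        proximity = max(0.0, 1.0 - distance_miles / 30.0)
        return min(1.0, proximity * (1.5 if shares_route else 0.5) * hazard_weight)

    def monitor(get_tracked_objects, notify, poll_seconds=5):
        """Continuously compare each tracked object's interaction potential
        with the threshold and alert when it is exceeded (step 91)."""
        while True:  # runs until the hosting process stops it
            for obj in get_tracked_objects():
                score = interaction_potential(obj["distance_miles"],
                                              obj["shares_route"],
                                              obj["hazard_weight"])
                if score >= ALERT_THRESHOLD:
                    notify(obj, score)  # deliver or refresh the alert
            time.sleep(poll_seconds)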
  • although a mobile device may be used, some embodiments of the methods and systems for the application may utilize static locations that include traffic cameras, for example, to identify a subject vehicle 30 from FIG. 2 and send notifications to subject users and data subscribers.
  • the application can also be used to identify both moving and static objects, or identify a subject vehicle that is emitting moving objects which are hazards such as releasing trash or dropping gravel.
  • Some embodiments may be used for warning pedestrians of a subject vehicle 30 in FIG. 1 and FIG. 2 , or other objects (including persons) approaching.
  • the method and system can provide protection to people walking around a city with a messaging and tracking system.
  • the method and system can connect a tracked network of individuals with others as well as police, when needed for immediate support.
  • the method and system track a user's location amongst their friends and broadcast to their friends, or to all people in the immediate surrounding area, if a call for help is initiated, for example, for a networked 911 system.
  • the application uses sensor data to track the speed and direction of the user 20 in FIG. 2 who is inputting the data using the application, and based at least in part on that sensor data (which may include variously accelerometer data, gyrometer data, GPS data, Wi-Fi data, other radio data, and other user provided data), the application can determine the relative speed of the identified subject vehicle to project and calculate all possible trajectories of the identified subject vehicle 30 in FIG. 2 over an adjustable setpoint of time.
  • the license plate or other data identifying the subject vehicle 30 may be used to determine potential destinations for the subject vehicle 30 and to determine projected routes and how such routes might potentially intersect with those of the system users.
  • FIG. 9 illustrates example entry screens 900 , 910 , and 920 for an application, according to other embodiments.
  • Example screen 900 is an introductory screen shown to a user in response to the application being started.
  • Screen 910 shows options for “Report Hazard” and “Hazards Nearby”, in various embodiments.
  • screen 920 may be displayed which provides the option for the user to select “Record Description” using audio and/or video, or “Enter By Hand” for the user to enter the information manually.
  • Some embodiments provide users and other sources with the option to include audio, video, virtual reality systems, inputs from remote networks through sensors and photos as part of their reports; enable voice reports from the user to include in-depth information; and parse audio and video reports using artificial intelligence separately or together with human monitors.
  • FIG. 10 illustrates an example screen 1000 providing a user interface to record a description for reporting a hazard, according to various embodiments.
  • screen 1000 may be presented to the user to select initiation of an audio recording to report a hazard.
  • the user may be prompted with the following audio: “Please speak clearly. Do not take any action that might be dangerous. Please include the type of hazard, including color and model of any vehicles, the speed and direction they are traveling, and what you observe to be the hazard. Please begin recording after the tone.”
  • the present technology is not limited to this particular wording of the prompt; other suitable audio prompts may be used in some embodiments.
  • screen 1010 may be presented to the user. Screen 1010 provides, after recording, “Submit,” “Playback,” “Re-record”, and “Delete” options, in some embodiments.
  • FIG. 11 illustrates three example screens 1100, 1110, and 1120 providing a user interface for reporting a moving object, according to an example embodiment.
  • Screen 1100 can provide selections for Speeding, Swerving, Distracted, Aggressive, and Slow, along with a Submit button, to characterize the other driver where multiple selections may be made.
  • Screen 1110 is presented to the user to provide for entry of more information (“info”) such as, in this example, License, Make/Model, Color, Speed, and Direction for the other vehicle, that can be submitted using the Submit selection.
  • Screen 1120 can be presented to the user to provide a voice audio input by default after selecting each of the options shown in screen 1100, for example to speak the license number, with text input available if the user desires.
  • FIG. 12, FIG. 13, and FIG. 14 illustrate example screens and icons for examples of alert reports provided to a user. For each screen, the user is prompted to confirm “See This Hazard” regarding the hazard identified by one of the icons on the screen, according to various embodiments.
  • FIG. 12 illustrates three example screens 1200 , 1210 , and 1220 for alerts provided to the user overlaid on a map, according to an example embodiment.
  • a moving object hazard report location is depicted along with the route the moving object hazard is calculated to take towards a subject user driving in the same direction as, and ahead of, the object hazard.
  • a zone or region depicting the most likely current location of the object hazard is shown as distinct, using an alternate or more intense graphic or color, and may include a representative icon.
  • the most likely current location of the object hazard is behind the subject user's vehicle.
  • Screen 1200 illustrates example icons that may be used.
  • a moving object hazard report location is depicted along with the route the moving object hazard is calculated to take when driving in the same direction as a subject user, when driving ahead of the subject user, and when the subject user is located behind the object hazard's previously reported location.
  • example zones or regions depicting the most likely current locations of the object hazard are shown as distinct, using an alternate or more intense graphic or color, and may include a representative icon.
  • the most likely current locations of the object hazard are ahead of the subject user's vehicle.
  • a moving object hazard previously reported location is depicted along with the route the moving object hazard is calculated to take when driving in the same direction as a subject user, when it is predicted to be driving ahead of the subject user, and when the subject user is located ahead of the object hazard's previously reported location.
  • example zones or regions depicting the most likely current locations of the object hazard are shown as distinct, using an alternate or more intense graphic or color, and may include a representative icon.
  • the most likely current locations of the object hazard are ahead of the subject user's vehicle.
  • a moving object hazard previously reported location is depicted along with the route the moving object hazard is calculated to take when driving in the same direction as a subject user, if on the same route as the subject user, when it is predicted to be driving ahead of the subject user, and when the subject user is located ahead of the object hazard's previously reported location.
  • routes the moving object hazard may have taken, but which the subject user has not traveled, are not depicted; only the routes the subject user may take, and which may be taken by the moving object hazard, are displayed.
  • Zones or regions depicting the most likely current locations of the object hazard are shown as distinct, using an alternate or more intense graphic or color, and may include a representative icon.
  • the most likely current locations of the object hazard are ahead of the subject user's vehicle.
  • FIG. 13 illustrates two more example screens 1300 and 1310 for alerts provided to the user overlaid on a map, according to an example embodiment.
  • a moving object hazard report location is depicted along with the route the moving object hazard is calculated to take if driving towards a subject user that is traveling towards the moving object hazard.
  • the zones or regions depicting the most likely current locations of the moving object hazard are not shown, since it is calculated that the moving object hazard would be beyond the subject user's vehicle.
  • the most likely current locations of the object hazard are behind the subject user's vehicle.
  • a moving object hazard report location is depicted along with the route the moving object hazard is calculated to take towards a subject user driving in the same direction as, and ahead of, the object hazard.
  • the zones or regions depicting the most likely current locations of the moving object hazard are not shown since it is calculated that the moving object hazard would be beyond the subject user's vehicle.
  • the most likely current locations of the object hazard are ahead of the subject user's vehicle.
  • FIG. 14 illustrates two additional example screens 1400 and 1410 for alerts provided to the user overlaid on a map, according to an example embodiment.
  • a moving object hazard report location is depicted along with the route the moving object hazard is calculated to take towards a subject user driving in the opposite direction and heading towards the moving object hazard (in this example the moving object hazard is calculated to be headed north and the subject user is calculated to be heading south).
  • a zone or region depicting the most likely current location of the object hazard is shown as distinct, using an alternate or more intense graphic or color, and may include a representative icon.
  • the most likely current location of the object hazard is ahead of the subject user's vehicle and between the subject user's vehicle and the previously reported moving object hazard location.
  • a moving object hazard report location is depicted along with the route the moving object hazard is calculated to take if driving in the same direction as a subject user that is traveling towards the moving object hazard's previously reported location.
  • Zones or regions depicting the most likely current location of the object hazard are shown as distinct, using an alternate or more intense graphic or color, and may include a representative icon.
  • the most likely current location of the object hazard is ahead of the subject user's vehicle.
  • Amber alerts may be provided in various embodiments. Such alerts are critical and are directed to locating and reporting kidnappers of missing children.
  • the U.S. Department of Justice has estimated that nearly 800,000 children are reported missing every year.
  • the National Center for Missing and Exploited Children estimates that 203,000 children are kidnapped each year by family members.
  • a user such as law enforcement could identify the reported location of the abduction and view a timestamped area on a map, including polygons with all potential locations the perpetrator is calculated to be able to travel to, based on the current time as well as into the future.
  • This example embodiment provides potential locations the child and perpetrator(s) have been, may be, and will be.
  • if the reported child or child abductor is identified and reported by another application user, by a person calling into 911 or a similar service, or by sensor(s), automated or manually operated, data could be updated, and the additional report can aid the system in triangulating and more accurately predicting the abductor's and the child's actual position.
  • the map views may include a polyline/route drawn from the hazard to the user's current location, which calculates how far an object or hazard can move in time based on its maximum known movement capacity. If the user moves off the route, the current route map may or may not be updated in various embodiments. Alternatively, new routes can be determined, and the map and alerts can be generated based on the new routes and any additional data obtained on objects with respect to the new route.
  • FIG. 15 illustrates two example screens 1500 and 1510 for providing amber alert notifications and information to a user, according to an example embodiment.
  • Screen 1500 is an “Amber Alert Notification” example screen. Similar notification screens for other alert types can utilize this method.
  • Screen 1510 is an example screen that provides an “Amber Alert Info” option for users to report a sighting. The “Report This” selection causes initiation of the report section of the application, as described further herein. Additionally, Amber Alerts may be depicted in any of the forms described and depicted for other moving objects, and in ways not depicted herein. Amber Alert data, predicted locations, and other information also may be consumed by third party subscribers to the data.
  • Alerts may be in the form of push notifications, API alerts, server-side alerts, and the like.
  • Users within a range (e.g., an area which may have two or three dimensions; in the present technology, referred to herein as a polygon for examples of two-dimensional areas) can receive the alerts.
  • the polygon can be global with the report occurring based on a user's last known location.
  • An example initial range could be 30 miles.
  • a user may only see hazard reports in range.
  • notifications can be sent based, for example, on a user's location, or more recent known location within a timeframe.
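  • A minimal sketch of this range gating follows, using a great-circle distance and the example 30-mile initial range mentioned above; the staleness cutoff for a last known location is an illustrative assumption.

    import math, time

    RANGE_MILES = 30.0           # example initial range from above
    MAX_FIX_AGE_SECONDS = 3600   # hypothetical cutoff for a stale last known location

    def haversine_miles(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in miles."""
        r = 3958.8  # Earth radius in miles
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def should_notify(user_fix, hazard_lat, hazard_lon):
        """Notify only users whose last known location is fresh enough and
        within range, so users outside the polygon are not disturbed."""
        if time.time() - user_fix["timestamp"] > MAX_FIX_AGE_SECONDS:
            return False
        return haversine_miles(user_fix["lat"], user_fix["lon"],
                               hazard_lat, hazard_lon) <= RANGE_MILES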
  • reports concerning various objects can be received from many users and sources, which can be used by various embodiments as crowdsourcing inputs for the moving object predictive locating, reporting, and alerting aspects.
  • Various embodiments provide segmenting of objects by categories, and provide for transmitting appropriate information to law enforcement for particular objects.
  • the method does not disturb or alert users outside the object range, to avoid being the high-tech equivalent of the "boy who cried wolf".
  • drones or other automated sensing devices can be used to monitor and/or identify a moving object and take various actions.
  • one or more drones are launched in response to a report of a moving object.
  • Drones can be launched from other moving vehicles, from stationary stored locations, or from roving vehicles tasked to follow tagged objects.
  • the moving object can be detected by the drone(s), (e.g., based on the initial report with descriptors), reported, tagged and followed.
  • Some embodiments provide a unique identifier for any target moving object; use a network of available sensors from moving vehicles, fixed points, IoT devices, etc.; and deliver data to a central system for later identification and tracking. This system may be cloud-based and could be decentralized for increased security and capability.
  • reports from users and sensors are monitored to help combat false positives. Additionally, the weighting effect of multiple reports of the same moving object in a limited area over time can help combat false positives. If only one report of the moving object is received, it may be given less weight than multiple reports.
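  • A minimal sketch of such report weighting (illustrative only; the saturating score is an assumption) might rate confidence by the number of corroborating reports inside a window:

```python
def report_confidence(report_times, window_minutes=30.0):
    """Crude confidence score in [0, 1) from a list of datetime report
    timestamps: a single report scores 0.5, and additional corroborating
    reports within the window push the score toward 1."""
    if not report_times:
        return 0.0
    earliest = min(report_times)
    corroborating = [t for t in report_times
                     if (t - earliest).total_seconds() <= window_minutes * 60]
    return len(corroborating) / (len(corroborating) + 1)
```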
  • Exemplary methods include a system for real-time tracking using, for example, drones and/or networked sensor technology.
  • drones or other sensor-carrying devices, hard-mounted or mobile (including law enforcement vehicles and autonomous vehicles), identify and image targets using sensors such as photo, laser, lidar, infrared, radar, sonic, etc. to establish a target's unique identity (e.g., turning objects into their own QR codes, in essence) so other networked sensor systems can help recognize and update target location and other target deltas to the central system.
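  • The "objects as their own QR codes" notion could be approximated (purely as an illustrative assumption) by hashing whatever stable descriptors the sensor network extracts into a single identifier:

```python
import hashlib

def object_fingerprint(descriptors: dict) -> str:
    """Derive a stable identifier from sensor-extracted attributes
    (e.g., color, make, dents, partial plate). Keys are sorted so the
    same attributes always produce the same fingerprint."""
    canonical = "|".join(f"{k}={descriptors[k]}" for k in sorted(descriptors))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Two independent sensors reporting the same attributes agree on the ID.
print(object_fingerprint({"color": "red", "make": "sedan", "dent": "rear-left"}))
```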
  • the method can calculate the movement of an object in any predictable direction based on the movement range and characteristics of that object, including rate of acceleration and range of speed. Such embodiments may aid, for example, in the tracking of drones that could pose a danger to passenger aircraft, vehicles, or people.
  • Various embodiments are based on the speed of a relative vehicle at the time detection was made. The following may be considerations for the speed determination: real-time traffic patterns; speed limit of the roadway/street; speed of the user reporting the hazard; rate of acceleration; and speed range of the object's make and model. Rate of acceleration and speed can be sensed with updated data from sensors or from users reporting the hazard/object, and based on calculations using the known maximum acceleration and maximum top speed of the hazard/object.
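  • Blending the considerations above into a working speed estimate might look like the following sketch (the parameter names are assumptions, and any unknown input may be passed as None):

```python
def estimate_speed_mph(reported_speed, traffic_speed, speed_limit,
                       model_top_speed):
    """Working speed estimate for a moving object: start from the best
    observation available, fall back to the posted limit, and never
    exceed the make/model's known top speed."""
    observed = [s for s in (reported_speed, traffic_speed) if s is not None]
    estimate = max(observed) if observed else speed_limit
    if model_top_speed is not None:
        estimate = min(estimate, model_top_speed)
    return estimate

print(estimate_speed_mph(reported_speed=90, traffic_speed=65,
                         speed_limit=65, model_top_speed=120))  # 90
```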
  • roadway describes any road, which could be a freeway, expressway, highway, ramp, street, passageway, service road, alley, avenue, boulevard, turnpike, autobahn, motorway, dirt road, gravel road, or other type of road.
  • the map drawings and route information may be based, at least in part, on commercially available map information.
  • a hazard reporter module sends the hazard (or other object) report to a server (or other computer on which a portion of the computing power for executing the method may reside).
  • the report can include one or more of the latitude/longitude (lat/lng) coordinates of the reporter vehicle and, if any was recorded, a voice file.
  • the method may calculate a polygon from the reported location and look up users reporting within that polygon within a certain number of minutes (e.g., 30 minutes).
  • the method initiates the sending of alerts (e.g., push notifications) to all users meeting the above criteria.
  • the method may generate an application programming interface (API) request to a map server for a route from the hazard (or other object) report lat/lng to the current alerted user lat/lng.
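  • Pulled together, the server-side flow in the preceding three bullets might be sketched as follows; the Report shape, the circular approximation of the polygon, the assumed user attributes (lat, lng, last_seen), and the send_push callable are all illustrative assumptions, not the system's actual API:

```python
import math
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

EARTH_RADIUS_MILES = 3958.8

@dataclass
class Report:
    lat: float
    lng: float
    time: datetime
    max_speed_mph: float = 100.0  # assumed capability of the reported object

def miles_between(lat1, lng1, lat2, lng2):
    """Great-circle (haversine) distance in miles."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin(math.radians(lat2 - lat1) / 2) ** 2
         + math.cos(p1) * math.cos(p2)
         * math.sin(math.radians(lng2 - lng1) / 2) ** 2)
    return 2 * EARTH_RADIUS_MILES * math.asin(math.sqrt(a))

def alert_users(report, users, send_push, window=timedelta(minutes=30)):
    """Approximate the reachable polygon as a circle whose radius grows with
    time since the report, then alert users seen within the time window whose
    last known location falls inside it."""
    now = datetime.now(timezone.utc)
    radius = report.max_speed_mph * (now - report.time).total_seconds() / 3600.0
    recipients = [u for u in users
                  if now - u.last_seen <= window
                  and miles_between(report.lat, report.lng, u.lat, u.lng) <= radius]
    for user in recipients:
        send_push(user, report)  # a map-server route request could also go here
    return recipients
```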
  • the method configures the user interface for the alerted user to display a map with their location, the hazard (or other object) reported location, and an animated polyline running from the hazard (or other object) reported location to the user's current location.
  • the method running in the application may make an API request to a map server that includes a request for all primary roadways within a certain travelable timeframe from the hazard (or other object) report.
  • the method initiates a display on the user interface of the alerted user showing highlighted primary roadways for all locations the hazard (or other object) may have traveled within that time frame.
  • this can include displaying polygons of a potentially traveled area within a timeframe, and displaying polylines for all primary roadways between the polygons.
  • an identification feature is provided within the method that locates the position of wanted persons based on various information.
  • the reachable range can be calculated based on information from sensors and other data, to provide alerts of threats that could potentially affect a user. For example, someone or sensors may identify a subject and report to the system, and in response, the system determines and generates possible locations/range that the subject may have traveled within time constraints. All known locations and possible past positions can be made available to responding officers in all zones based on time and trajectory. Using the information and/or the application, officers can easily keep track of each other as well as all associated and needed supporting agencies, including fire, hospitals, news agencies, coast guard, and military.
  • voice recognition capabilities are provided across all needed networks. These agencies will be fortified with the present technology due to additional access to civilian reporting to harness the power of crowdsourcing, according to various embodiments. Such embodiments may be used in cases of assault victims and attempted or actual theft, burglary, robbery, human trafficking, runaway children reported by parents, and last-seen reports, to name several non-limiting examples.
  • Some embodiments provide integration with various other apps, for example, for hiking, scuba diving, to name just a few, regarding various hazards (or other objects) that may be encountered during those activities and for responding to the same.
  • the method may track hikers, mountain bikers, rock climbers, ice climbers, etc.
  • the method may detect a rock climbing fall, general fall, or other hazardous movement and provide "life alert" two-way communication if a person is disabled from a fall. If there is no response from the climber or hiker to an alert, the method can alert others (where "others" as used herein may include people, robots (e.g., "robot doctors"), autonomous vehicles, drones, and the like) in the area with location information to provide help for the injured person.
  • Other kinds of problems that may occur on the hiking trail, such as a sprained ankle, dehydration, or getting lost, could also be reported.
  • the presence of dangerous animals could be detected by various embodiments or reported by others, including the presence of bears, rattlesnakes, mountain lions, etc.
  • the last known location of such moving hazards (or other objects) could be sent to authorities, rangers, and hikers in the area.
  • point-to-point app communication capability is provided such that a chain of people using the application can send data from person to person (and ultimately through the Internet or cellular communication if one person in the chain/mesh has a connection to such sources or a satellite phone) in case of emergency, or for convenience as the technology becomes available.
  • the method may provide an area highlighted on the user interface showing where the hazard (or other object) could have been and presents that information for a timeframe.
  • the amount of time may be user-selectable or predetermined by the system based on various factors, including type of moving object hazard, or by third party subscribers to the system.
  • the method provides the option to have people report “harmless” animals such as deer, owls, eagles, etc. for birdwatchers, for fun and/or for photo opportunities.
  • the method could, based on the movement determined from sensors and/or reports, determine how busy a trail is based on monitoring trail traffic.
  • the method can detect people running in panic and use that information to alert users of possible danger.
  • alerts can be generated concerning dangerous weather including movement, for example, flash floods, storms, mudslides, tornadoes, wind, snow, rain, earthquakes, etc.
  • Other reports and alerts could concern water contamination due to, for example, red tide, chemical spills, etc. and could include alerting all appropriate authorities.
  • the present technology may be utilized by a pedestrian that observes a hazard (or other object) to warn others, e.g., hazard (or other object) being a vehicle driving on a sidewalk, heading to the sidewalk, etc.
  • the system can utilize built-in pedometer and accelerometer functionality in the mobile device to confirm whether a person is actually walking instead of driving slowly or cycling slowly.
  • the technology can additionally utilize the mobile device's (e.g., cell phone's) or utilized device's accelerometer and/or gyroscope.
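  • A toy version of that walking-versus-riding check could combine device speed with step cadence; the thresholds below are illustrative assumptions, not measured values:

```python
def likely_walking(speed_mph: float, steps_per_minute: float) -> bool:
    """Heuristic: pedestrians move slowly AND register a step cadence,
    while a slowly driven car or bicycle shows little or no cadence."""
    return speed_mph < 5.0 and steps_per_minute > 60.0
```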
  • the method can be used to detect, based on sensors/reports, that sharks or other dangers are in the area and report the same to users.
  • the system may predict the outward potential movement of an individual in a vehicle, on foot, using mass transit, or using an electric bike (e-bike), an electric scooter (e-scooter), or any other forms of transportation.
  • military uses may be provided to protect and network soldiers from enemy combatant threats in all possible forms whether it be soldiers, vehicles, drones, ballistics, etc.
  • the method and system may track the outward movement of an identified soldier on foot and how fast they could move in any and all locations based on the fastest rate of speed of a human on foot.
  • Various embodiments can provide multiple projections based on the ability of the object.
  • the system can provide three projections/reachable range polygons based on the ability of a combatant soldier to change his/her method of movement. For example, a person may change quickly from running to driving to flying. All these scenarios can be calculated and filtered for consideration by the receiver.
  • the method and system can confirm the earliest possible time of encounter as well as all possible locations of the opposing force within a polygon through its outward movement prediction capability, using reachable range technology based on movement through time. This can help all surrounding friendly forces know the earliest possible encounter time frame or location for strategic preparation. In some embodiments, friendly forces can use the information from an analytics perspective for all needed scenarios.
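  • The multiple projections described above might be sketched as one reachable radius per movement mode, so a receiver can filter the scenarios relevant to them; the per-mode top speeds below are placeholder assumptions rather than the system's historical-traits data:

```python
# Placeholder top speeds per movement mode (mph).
MODE_TOP_SPEED_MPH = {"foot": 17.0, "vehicle": 120.0, "air": 500.0}

def reachable_radii_miles(minutes_elapsed: float) -> dict:
    """One outward reachable-range radius per possible movement mode."""
    hours = minutes_elapsed / 60.0
    return {mode: speed * hours for mode, speed in MODE_TOP_SPEED_MPH.items()}

print(reachable_radii_miles(15))  # {'foot': 4.25, 'vehicle': 30.0, 'air': 125.0}
```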
  • Insurance companies can see an immediate, measurable reduction in claims once the present technology is paired with their services. Customers could then be more aware of hazards (or other objects) and will have the ability to maneuver around known hazards (or other objects), which should reduce accidents and insurance costs, for example.
  • Autonomous self-driving vehicles can benefit from the present technology, as it will allow networked vehicles to better detect moving hazards (or other objects) and increase their ability to protect the passengers. Additionally, passengers can generate hazard (or other object) reports, which can be added to the autonomous self-driving vehicle network.
  • FIG. 16 is a simplified flow diagram of an example method, according to some embodiments, with further details described herein.
  • Step 1602 is optional, as indicated by the dashed line, and includes providing a user interface to a user for information entry on a device, as described further herein.
  • Step 1604 includes receiving (optionally via the user interface), moving object data corresponding to a moving object, as described further herein.
  • Step 1606 includes receiving sensor data from a sensor, as described further herein.
  • Step 1608 includes merging the received moving object data and the received sensor data into a set of merged data, as described further herein.
  • Step 1610 includes based on the merged data set, automatically determining one or more of: a predicted location for the moving object, a potential path of travel for the moving object, a potential for interaction between the moving object and one or more other objects, and an alert concerning the moving object, as described further herein.
  • Step 1612 includes providing the alert, as described further herein.
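  • For illustration, the flow of steps 1604 through 1612 might be skeletonized as below; the class, the merge rule (sensor data refining user-reported fields), and the injected predict/notify callables are assumptions, not the claimed implementation:

```python
from dataclasses import dataclass, field

@dataclass
class MergedData:
    object_data: dict   # user-reported moving object data (step 1604)
    sensor_data: dict   # sensor readings (step 1606)
    merged: dict = field(init=False)

    def __post_init__(self):
        # Step 1608: sensor readings refine or override user-reported fields.
        self.merged = {**self.object_data, **self.sensor_data}

def run_method(object_data, sensor_data, predict, notify):
    """Steps 1604-1612: receive, merge, determine, and provide the alert."""
    data = MergedData(object_data, sensor_data)
    prediction = predict(data.merged)   # step 1610
    notify(prediction)                  # step 1612
    return prediction
```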
  • the present technology is a system (and corresponding method) that provides a service where third parties are providing inputs and those third parties or others are receiving outputs from the system.
  • Inputs could include all types of sensor data pertaining to users and moving objects (e.g., objects that could be classified as hazards), with third parties consuming both that same data and outputs received from the system.
  • the outputs received by the third party provider could include additional information generated by the system pertaining to predictions determined concerning, but not limited to, approximation and estimation of future location, proximity, trajectory and routing.
  • the method can further include providing the merged data set to a third party provider, e.g., for generating a predicted location for the moving hazard (or other object), at least one potential path of travel for the moving hazard (or other object), and/or a potential for interaction between the first user and the moving hazard (or other object), and for generating and transmitting an alert.
  • FIG. 17 illustrates an exemplary computer system 1700 that may be used to implement some embodiments of the present invention.
  • the computer system 1700 in FIG. 17 may be implemented in the context of computing systems, networks, servers, or combinations thereof.
  • the computer system 1700 in FIG. 17 includes one or more processor unit(s) 1710 and main memory 1720.
  • Main memory 1720 stores, in part, instructions and data for execution by processor unit(s) 1710.
  • Main memory 1720 stores the executable code when in operation, in this example.
  • the computer system 1700 in FIG. 17 further includes a mass data storage 1730, portable storage device 1740, output devices 1750, user input devices 1760, a graphics display system 1770, and peripheral device(s) 1780.
  • The components shown in FIG. 17 are depicted as being connected via a single bus 1790.
  • the components may be connected through one or more data transport means.
  • Processor unit(s) 1710 and main memory 1720 are connected via a local microprocessor bus, and the mass data storage 1730, peripheral device(s) 1780, portable storage device 1740, and graphics display system 1770 are connected via one or more input/output (I/O) buses.
  • Mass data storage 1730, which can be implemented with a magnetic disk drive, solid state drive, or an optical disk drive, is a non-volatile storage device for storing data and instructions for use by processor unit(s) 1710. Mass data storage 1730 stores the system software for implementing embodiments of the present disclosure for purposes of loading that software into main memory 1720.
  • Portable storage device 1740 operates in conjunction with a portable non-volatile storage medium, such as a flash drive, floppy disk, compact disk, digital video disc, or Universal Serial Bus (USB) storage device, to input and output data and code to and from the computer system 1700 in FIG. 17.
  • the system software for implementing embodiments of the present disclosure is stored on such a portable medium and input to the computer system 1700 via the portable storage device 1740.
  • User input devices 1760 can provide a portion of a user interface.
  • User input devices 1760 may include one or more microphones, an alphanumeric keypad, such as a keyboard, for inputting alphanumeric and other information, or a pointing device, such as a mouse, a trackball, stylus, or cursor direction keys.
  • User input devices 1760 can also include a touchscreen.
  • the computer system 1700 as shown in FIG. 17 includes output devices 1750. Suitable output devices 1750 include speakers, printers, network interfaces, and monitors.
  • Graphics display system 1770 includes a liquid crystal display (LCD) or other suitable display device. Graphics display system 1770 is configurable to receive textual and graphical information and process the information for output to the display device.
  • Peripheral device(s) 1780 may include any type of computer support device to add additional functionality to the computer system.
  • the computer system 1700 in FIG. 17 can be a personal computer (PC), hand held computer system, telephone, mobile computer system, workstation, tablet, phablet, mobile phone, server, minicomputer, mainframe computer, wearable, or any other computer system.
  • the computer may also include different bus configurations, networked platforms, multi-processor platforms, and the like.
  • Various operating systems may be used, including UNIX, LINUX, WINDOWS, MAC OS, PALM OS, QNX, ANDROID, IOS, CHROME, and other suitable operating systems.
  • Some of the above-described functions may be composed of instructions that are stored on storage media (e.g., computer-readable medium).
  • the instructions may be retrieved and executed by the processor.
  • Some examples of storage media are memory devices, tapes, disks, and the like.
  • the instructions are operational when executed by the processor to direct the processor to operate in accord with the technology. Those skilled in the art are familiar with instructions, processor(s), and storage media.
  • the computing system 1700 may be implemented as a cloud-based computing environment, such as a virtual machine operating within a computing cloud. In other embodiments, the computing system 1700 may itself include a cloud-based computing environment, where the functionalities of the computing system 1700 are executed in a distributed fashion. Thus, the computing system 1700, when configured as a computing cloud, may include pluralities of computing devices in various forms, as will be described in greater detail below.
  • a cloud-based computing environment is a resource that typically combines the computational power of a large grouping of processors (such as within web servers) and/or that combines the storage capacity of a large grouping of computer memories or storage devices.
  • Systems that provide cloud-based resources may be utilized exclusively by their owners or such systems may be accessible to outside users who deploy applications within the computing infrastructure to obtain the benefit of large computational or storage resources.
  • the cloud is formed, for example, by a network of web servers that comprise a plurality of computing devices, such as the computing system 1700, with each server (or at least a plurality thereof) providing processor and/or storage resources.
  • These servers manage workloads provided by multiple users (e.g., cloud resource customers or other users).
  • each user places workload demands upon the cloud that vary in real-time, sometimes dramatically. The nature and extent of these variations typically depends on the type of business associated with the user.
  • Non-volatile media include, e.g., optical, magnetic, and solid-state disks, such as a fixed disk.
  • Volatile media include dynamic memory, such as system random-access memory (RAM).
  • Transmission media include coaxial cables, copper wire and fiber optics, among others, including the wires that comprise one embodiment of a bus.
  • Transmission media can also take the form of acoustic or light waves, such as those generated during radio frequency (RF) and infrared (IR) data communications.
  • Common forms of computer-readable media include, e.g., a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic medium, a CD-ROM disk, digital video disk (DVD), any other optical medium, any other physical medium with patterns of marks or holes, a RAM, a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a Flash memory, any other memory chip or data exchange adapter, a carrier wave, or any other medium from which a computer can read.
  • a bus carries the data to system RAM, from which a CPU retrieves and executes the instructions.
  • the instructions received by system RAM can optionally be stored on a fixed disk either before or after execution by a CPU.
  • Computer program code for carrying out operations for aspects of the present technology may be written in any combination of one or more programming languages, including an object oriented programming language such as JAVA, SMALLTALK, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (e.g., through the Internet using an Internet Service Provider).
  • FIG. 18 illustrates an example embodiment 1800 showing the moving object polygon (e.g., reachable range) and other information regarding the moving object and report of same, according to various embodiments.
  • the example in FIG. 18 illustrates how various embodiments can use "reachable range" to help determine a moving object's (e.g., a moving hazard's or other object's) proximity and the trajectory required to reach a location or target.
  • the polygons in FIG. 18 would replace the concentric circles shown in the examples in FIG. 3 and FIG. 4 .
  • polygons may be drawn radiating out from a hazard (or other object) report location, with each polygon depicting a timeframe from the report, taking into consideration speed, traffic, roadways, and other variables; the most outward polygon represents the range that an object can reach based on its known maximum speed, road conditions, speed limits, acceleration capabilities, etc.
  • the inner polygon represents a variable potential minimum range of speed based on similar parameters.
  • the estimated encounter time is displayed based on the object's known maximum speed, road conditions, speed limits, acceleration capabilities, etc.
  • the hazard (or other object) report time, reflected as minutes ago, is displayed, along with the map, polygons, routes, and identifying icons for the subject user and the moving object's location or range of predicted locations.
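  • The inner/outer rings and the displayed encounter estimate could be computed along the lines of this sketch (assuming, for simplicity, a stationary user and an object heading directly toward the user at its maximum speed):

```python
def encounter_window(distance_miles: float, minutes_since_report: float,
                     min_speed_mph: float, max_speed_mph: float):
    """Inner/outer reachable radii for the polygon rings described above,
    plus the earliest possible encounter time in minutes."""
    hours = minutes_since_report / 60.0
    inner_radius = min_speed_mph * hours
    outer_radius = max_speed_mph * hours
    remaining = max(distance_miles - outer_radius, 0.0)
    earliest_eta_minutes = remaining / max_speed_mph * 60.0
    return inner_radius, outer_radius, earliest_eta_minutes

# Object reported 10 minutes ago, 20 miles away, moving at 30-60 mph:
print(encounter_window(20.0, 10.0, 30.0, 60.0))  # (5.0, 10.0, 10.0)
```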
  • FIG. 19 illustrates another example embodiment 1900, of a screen 910 in FIG. 9 and FIG. 6, for reporting and viewing hazards (or other objects).
  • FIG. 20 illustrates another example embodiment 2000, of a screen 920 in FIG. 9, for selecting voice recording or manual entering of hazard (or other object) reports.
  • FIG. 21 illustrates another example embodiment 2100, of the screen 1010 in FIG. 10, for audio reporting of hazards (or other objects).
  • screen 2100 has selections to start recording (e.g., microphone symbol), to listen, to re-record, to add more info, and to submit.
  • FIG. 22 illustrates another example embodiment 2200, of the screen 1000 in FIG. 10, for audio reporting of hazards (or other objects).
  • Screen 2200 includes a microphone symbol, a selection to stop recording, and a cancel option.
  • FIG. 23 illustrates another example embodiment 2300, of the screen 1100 in FIG. 11, providing a user interface for selection of an icon for reporting characteristics of a moving object, according to various embodiments.
  • a user may report that a hazard (or other object) was variously speeding, distracted, swerving, aggressive, slow, etc.
  • FIG. 24 illustrates another example embodiment 2400, of screen 1110 in FIG. 11, for providing identification information about a moving hazard (or other object).
  • a user may variously input the license, make, color, speed, and direction of a moving hazard (or other object) vehicle, akin to screen 1110 in the example embodiment in FIG. 11.
  • FIG. 25 illustrates another example embodiment 2500 of a screen confirming a sent moving object hazard report and allowing the user to provide more information about the same moving object hazard (or other object). This example shows confirmation that the hazard (or other object) is reported and allows for the addition of more info.
  • FIG. 26 illustrates an example embodiment of a list of moving hazards (or other objects), along with the type of each moving object hazard, which may include additional available information such as report time, minutes since the report, report location, predicted location, estimated time of encounter, etc., and allows a user to select a moving object hazard in order to view more information and data on a map view.
  • the characteristics to be selected are shown with text and icons, one per row on the screen, which may enable easier access by a driver, for instance.
  • FIG. 27 illustrates an example embodiment 2700, of screen 1400 in FIG. 14, for displaying status information about an aggressive moving hazard (or other object).
  • the status information may include, regarding an aggressive hazard (or other object), the distance from the alert report, the estimated time to encounter, and how long ago the hazard (or other object) report was made.
  • FIG. 28 illustrates an example embodiment of a screen for displaying identifying information about a moving hazard (or other object), in this example an aggressive driver, and enabling viewing of a map or list or providing an update concerning the moving hazard (or other object).
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.

Abstract

Systems and corresponding methods are provided for moving object predictive locating, reporting, and alerting. An example method includes receiving moving object data corresponding to a moving object; receiving sensor data from a sensor; and merging the received moving object data and the received sensor data into a set of merged data. The example method further includes, based on the set of merged data, automatically determining one or more of a predicted location or range of locations for the moving object, a potential path of travel for the moving object, and an alert concerning the moving object, and providing the alert. The automatically determining may be further based on one or more historical traits concerning the object, and the geographic medium the object is moving through. The geographic medium may include one or more of terrain, air, water, and space. The object may be a soldier, vehicle, drone, or ballistic.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. application Ser. No. 16/189,938, filed Nov. 13, 2018, which claims the benefit of U.S. Provisional Application Ser. No. 62/585,776, filed Nov. 14, 2017, which applications are hereby incorporated by reference herein in their entirety, including all references cited therein.
  • FIELD
  • The present technology pertains in general to objects of many types, more specifically, to moving object determination, reporting, predicting, and/or alerting.
  • SUMMARY
  • This summary is provided to introduce a selection of concepts in a simplified form that are further described in the Detailed Description below. This summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
  • The present disclosure provides various embodiments of systems and methods for moving object predictive locating, reporting, and alerting, described herein. For one example, the present technology may be embodied as an application (i.e., an “app”) where the application can project the potential current and future paths and locations of objects and notify individuals and other systems that may be impacted, or that are interested in obtaining this information, as detailed further herein.
  • In various embodiments, an example method includes receiving moving object data corresponding to a moving object; receiving sensor data from a sensor; and merging the received moving object data and the received sensor data into a set of merged data. The example method further includes, based on the set of merged data, automatically determining one or more of a predicted location or range of locations for the moving object, a potential path of travel for the moving object, and an alert concerning the moving object, and providing the alert. The automatically determining may be further based on one or more historical traits concerning the object, and the geographic medium the object is moving through. The geographic medium may include one or more of terrain, air, water, and space. The object may be a soldier, vehicle, or drone. As described further below, the object may also be a ballistic.
  • In other embodiments, the method includes identifying a specific type for the moving object; receiving historical traits and trends associated with the moving object, including statistical movement characteristics of the object, the statistical movement characteristics including acceleration and speed ability of the identified object; and adjusting the maximum acceleration, maximum speed, and maximum reachable range for the object as a function of the geographic mediums that the object will move through in time over the projected course and trajectory of the object. In some embodiments, the method considers average speeds at which a particular object may traverse a particular geographic medium.
  • In some embodiments, the present technology is a system that provides a service where inputs are received from third parties and outputs are provided to those or to other third parties. Inputs could include all types of sensor data pertaining to users and moving objects and outputs provided could be a merged data set as well as additional information generated by the system pertaining to approximation and estimation of future location, proximity, trajectory and routing.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Embodiments are illustrated by way of example, and not by limitation, in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
  • FIG. 1 illustrates an example plan view of a freeway with a moving vehicle type hazard and other vehicles, according to an example embodiment.
  • FIG. 2 illustrates a plan view of an area that includes a freeway along with a street map/grid with adjacent roads and freeway portions, according to an example embodiment.
  • FIG. 3 illustrates an example entry screen for an application for a method operating on a mobile device, according to various embodiments.
  • FIG. 4 illustrates an example screen that can be presented to a user in response to determining an unsafe driver is located, moving, and ahead of the user, according to an example embodiment.
  • FIG. 5 illustrates an example screen that can be presented to a user in response to determining that an unsafe driver in a subject vehicle is located, moving, and is presently behind the user or driver, according to an example embodiment.
  • FIG. 6 illustrates an example screen presented to a user in response to determining that a user is approaching a moving object, according to an example embodiment.
  • FIG. 7 illustrates an example screen presented to a user to route around a potential object in response to determining that the potential object was in the route the user had been taking, according to an example embodiment.
  • FIG. 8 illustrates a block diagram of an operational flow chart according to some embodiments.
  • FIG. 9 illustrates example entry screens for an application, according to other embodiments.
  • FIG. 10 illustrates an example screen providing a user interface to record a description for reporting a moving object, according to various embodiments.
  • FIG. 11 illustrates an example screen providing a user interface for a text input method for reporting a moving object, according to various embodiments.
  • FIG. 12 illustrates three example screens for alerts provided to the user overlaid on a map, according to an example embodiment.
  • FIG. 13 illustrates three more example screens for alerts provided to the user overlaid on a map, according to an example embodiment.
  • FIG. 14 illustrates three additional example screens for alerts provided to the user overlaid on a map, according to an example embodiment.
  • FIG. 15 illustrates two example screens for providing Amber Alert notifications and information to a user, according to an example embodiment.
  • FIG. 16 is a block diagram of an operational flow chart of a method, according to an example embodiment.
  • FIG. 17 is a simplified block diagram of a computing system, according to some embodiments.
  • FIG. 18 illustrates another example embodiment showing the moving object polygon and some information regarding the moving object and report of same, according to various embodiments.
  • FIG. 19 illustrates another example embodiment of a screen in FIG. 9 for reporting and viewing hazards and other objects.
  • FIG. 20 illustrates another example embodiment of a screen in FIG. 9 for selecting voice recording or manual entering of hazard reports.
  • FIG. 21 illustrates another example embodiment of the screens in FIG. 9 for audio reporting of hazards and other objects.
  • FIG. 22 illustrates an additional simpler example embodiment of a screen for audio reporting of hazards.
  • FIG. 23 illustrates an example screen providing a user interface for selection of an icon for reporting characteristics of a moving object, according to various embodiments.
  • FIG. 24 illustrates another example embodiment of a screen in FIG. 11 for providing identification information about a moving object.
  • FIG. 25 illustrates another example embodiment of a screen for providing more information about a moving object.
  • FIG. 26 illustrates another example embodiment of a screen in FIG. 23 for selecting characteristics of the moving object.
  • FIG. 27 illustrates an example embodiment of a screen for displaying status information about a moving object.
  • FIG. 28 illustrates an example embodiment of a screen for displaying identifying information about a moving object (in this case a road hazard driving aggressively) and enabling viewing of a map or list or providing an update concerning the moving hazard.
  • DETAILED DESCRIPTION
  • While this technology is susceptible of embodiment in many different forms, there is shown in the drawings and will herein be described in detail several specific embodiments with the understanding that the present disclosure is to be considered as an exemplification of the principles of the technology and is not intended to limit the technology to the embodiments illustrated. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the technology. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise.
  • It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that like or analogous elements and/or components, referred to herein, may be identified throughout the drawings with like reference characters.
  • It will be further understood that several of the figures are merely schematic representations of the present technology. As such, some of the components may have been distorted from their actual scale for pictorial clarity.
  • Tracking and alerting drivers to traffic accidents, road construction, police presence, and stationary, fixed objects is used to guide drivers to the most time efficient route to get to their destination. Moving hazards present particularly difficult problems for tracking and alerting.
  • It should be appreciated that the present technology is not limited to the objects being hazards; other moving objects may be processed by the present technology. In addition, although some examples herein are described with respect to moving hazards that a driver could encounter on the road, the present technology is not limited to that type of moving hazard.
  • Studies have shown that drinking and driving is a deadly combination, costing an estimated $37 billion per year. The statistics are staggering: one in three accidents involve an alcohol-impaired driver; between midnight and 3 a.m., drunk driving crashes kill someone every 23 minutes; in 2010, 10,228 people died in alcohol-impaired driving crashes (one every 51 minutes); the most frequently recorded blood alcohol content (BAC) level in fatal crashes was 0.17, which is more than twice the legal limit; and every day 29 people are killed in drunk driving crashes and 1,440 people are injured. Clearly, there is a need to address the hazard of drunk drivers.
  • The objects that might be encountered include but are not limited to drunk or otherwise impaired drivers, texting drivers, kidnappers of missing children (e.g., “Amber” alert), battlefield objects, hiking encounters (e.g. predatory animals), in flight obstacles and hazards, maritime obstacles and hazards, and underwater dangers, to name just several possibilities.
  • Various embodiments of the present technology provide systems and methods for moving object predictive locating, reporting, and alerting, described herein. For one example, the present technology in some embodiments may be embodied as an application where the application can project the potential current and future paths and locations of objects such as an individual, a group of individuals, a vehicle, or a group of vehicles (to name some non-limiting examples), and notify individuals and other systems that may be impacted, or that are interested in obtaining this information. In another embodiment, the present technology can be utilized via a web browser, a corporate network server, the cloud, etc.
  • The present technology can also be configured to provide graphics in three dimensions and/or configured for virtual reality or augmented reality interaction.
  • FIG. 1 is an example plan view of an area 100 that includes a freeway 19 with a moving object, in this example a moving vehicle type hazard 30. FIG. 1 also shows on freeway 19 other vehicles in the vicinity of the hazard, identified according to an example embodiment. In the example in FIG. 1, the freeway 19 is a divided roadway with three lanes 16 of traffic in each direction of travel and a shoulder 17. In this example, a primary user 20 observes a subject vehicle 30, which may be variously a slow vehicle, a weaving vehicle, a wrong way vehicle, a vehicle driven on the shoulder, etc. The subject vehicle 30 can potentially cause harm to the primary user 20 and potentially to other vehicles.
  • In assessing the threat posed by the subject vehicle, various embodiments consider several indicia for drunk drivers, which may include: quick acceleration or deceleration; swerving, weaving, or zig-zagging across the road; stopping or turning abruptly or illegally; nearly hitting an object like a curb or another vehicle; drifting in and out of lanes or straddling multiple lanes; driving more than 10 miles per hour below the speed limit; driving with headlights off at night; and driving on the wrong side of the road or anywhere that is not on the road; to name several non-limiting examples. Such indicia are weighted in determining the priority of a moving object that may be a potential hazard. For example, determining that the subject vehicle nearly hits an object on the road may be higher priority, and hence given more weighting, than quick acceleration. Similarly, determining that the subject vehicle is driving on the wrong side of the road is much higher priority, and thus weighted more, than determining that the subject vehicle is accelerating quickly. Moreover, if the subject vehicle is driving on the wrong side of the road and on the same side of the road as the user, it will be given much higher priority, and thus more weight, than if the user were driving on the opposite side of the road from the subject vehicle. Similarly, the number of confirming reports from additional users or sensors can increase the priority and weight for determining the potential hazard and potential interaction with the hazard. In addition, if a moving object is a hazard and is identified and reported by multiple observers or sensors, it can then be confirmed with increasing likelihood and as a high-level priority. For example, if ten users of the technology report a wrong way driver within ten minutes within a twenty mile stretch of road, then the priority/weighting of the hazard could increase.
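  • Purely as an illustrative sketch of the weighting scheme above (the numeric weights are assumptions; the description only fixes their ordering), hazard priority might be scored like this:

```python
# Assumed weights; the description above only establishes the ordering
# (wrong-way driving > near miss > quick acceleration).
INDICIA_WEIGHTS = {
    "quick_acceleration": 1.0,
    "swerving": 2.0,
    "near_miss": 3.0,
    "wrong_way": 5.0,
}

def hazard_priority(observed_indicia, confirming_reports=1,
                    same_side_of_road=False):
    """Weighted priority for a potential moving hazard; confirming reports
    from additional users or sensors scale the score upward."""
    score = sum(INDICIA_WEIGHTS.get(i, 0.0) for i in observed_indicia)
    if same_side_of_road and "wrong_way" in observed_indicia:
        score *= 2.0  # far more urgent for users in the hazard's path
    return score * confirming_reports

# Ten corroborated wrong-way reports on the user's side of the road:
print(hazard_priority({"wrong_way", "swerving"}, confirming_reports=10,
                      same_side_of_road=True))  # 140.0
```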
  • It may be determined in various embodiments that in addition to the primary user 20, the driver 21 and the driver 22 are also in imminent danger from the subject vehicle 30, but other vehicles in the vicinity may not be. It should be appreciated that objects such as vehicles shown in FIG. 1 are identified by their occupant driver; however, certain vehicles could be autonomous self-driving vehicles having no or some passengers therein.
  • In the example in FIG. 1, the motorcycle 23 may be far enough behind the subject vehicle 30 that the rider of motorcycle 23 should be able to presently observe the subject vehicle 30 and not be in immediate peril, but might be imperiled in the future. Driver 24 and driver 25 are traveling in an opposite direction and should not be affected by the unsafe actions of the subject vehicle 30, unless on an undivided roadway, or if the subject vehicle 30 changed directions and veered over the divider 18, for example. Driver 22 is ahead of the subject vehicle 30 and may not be aware of the subject vehicle 30, but if the subject vehicle 30 was speeding, an accident could still occur between the subject vehicle 30 and driver 22.
  • FIG. 2 shows a plan view of an area 200 that includes a freeway 18 along with a street map/grid with adjacent roads and freeway portions, according to an example embodiment. In this example, the subject vehicle 30 has been observed by the primary observer 20. In operation for this example, the primary observer 20 would activate the application that runs in accordance with various embodiments of the methods of the present technology. The primary observer can then enter or select relevant information about the subject vehicle 30 such as, but not limited to, the speed, direction of travel, and type of subject vehicle 30, and optionally the license plate of the subject vehicle 30.
  • In response to the entry/selection, the method can notify other drivers that could be affected by the object, i.e., subject vehicle 30 in this example. The notification can be based upon various factors, which may include the proximity to the subject vehicle 30 and the type or categorization of subject vehicle 30. In the example in FIG. 2, the direction of travel of the subject vehicle 30 is approaching an off-ramp 41 and an interchange 40. In various embodiments, a moving visual representation of this is displayed on the screen for the user, e.g., the primary observer 20 or others. The representation and various embodiments may run variously on a mobile device, a device built into the vehicle, or some Internet of Things (IoT) device, etc. Based on the speed that is determined for the subject vehicle and the user, e.g., primary observer 20, various trajectory information can be updated. The speed may be determined by various means, including but not limited to entry or selection by a user, or sensor data where the sensor may be on a vehicle, on the mobile device, or on some other device. In various embodiments, all known routes that could be potentially traversed by the subject vehicle 30 and the user, primary observer 20 for instance, can be highlighted on the display in a color, and the appearance can be set based upon a selectable range and specification of the notifying application.
  • In the example in FIG. 2, based upon the present direction of travel, if the subject vehicle 30 is speeding, the subject vehicle 30 can potentially rapidly approach drivers 28, 23, and 24 on the freeway 18 and a driver 26 on a side street. Driver 25 is heading in an opposite direction (see FIG. 1) and so would not require notification in various embodiments. The proximity of driver 22 is more distant from the object (e.g., subject vehicle 30), but a potential for danger from this object still exists. Various embodiments consider the speed, routes, direction, and proximity among the various factors. In addition, another factor would be whether the subject vehicle 30 was a drunk driver or a slow driver. In this example, the potential of affecting driver 22 may be very remote based on the distance from the subject vehicle 30. As the subject vehicle 30 passes the interchange 40, various embodiments would no longer notify any of the users/drivers who are no longer in the direction of travel or immediate area of concern. However, subject vehicles or other objects can be monitored by more than one sensor, sensor location, or reporting observer, and hazardous conditions and changes in subject trajectory and speed can be reported and updated by the system, providing updates to the alerted users, newly alerted users, and subscribers to the system.
  • In some embodiments, the primary observer 20 can enter a vehicle license plate, or the primary observer 20's vehicle or device may be equipped with sensor(s) such as, but not limited to, a camera that could detect the subject vehicle 30's license plate, in addition to other automated ways of identifying subject vehicle 30's unique and semi-unique characteristics. In response to determining the vehicle license plate and other characteristics by whatever means, various embodiments utilize the vehicle license plate to obtain data which would provide a potential destination, used to map the subject vehicle 30 to the subject vehicle 30's home or work 50. If the projected travel path from the subject vehicle 30 to the home or work 50 of the subject vehicle 30 is known (based on vehicle license plate databases, etc.), and based on other methods of determining subject vehicle 30's route or destination, vehicles that are less likely to be affected by this projected travel path, such as vehicles 24 and 26, might not be notified (alerted), since it can be determined by the method that these vehicles may likely not be affected by the subject vehicle 30. In addition, in some embodiments, if the home or work location 54, or other destination or route, of the driver 24 is known, the potential of interaction between driver 24 and the subject vehicle 30 may be unlikely because driver 24 will soon exit the freeway as he or she progresses home. Thus, some embodiments can reduce excessive false warnings, e.g., by filtering based on the likelihood of an interaction in the foreseeable future, as sketched below.
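  • The false-warning filtering just described might, as one assumed approach, suppress alerts for users whose projected route never comes near the hazard's projected path:

```python
import math

def should_notify(user_route_points, hazard_path_points,
                  threshold_miles=2.0):
    """True if any point of the user's projected route passes within
    threshold_miles of any point on the hazard's projected path. Uses a
    coarse equirectangular distance; a real system would intersect
    route geometries properly."""
    def miles(a, b):
        mean_lat = math.radians((a[0] + b[0]) / 2)
        dx = (a[1] - b[1]) * math.cos(mean_lat) * 69.0  # ~69 miles/degree
        dy = (a[0] - b[0]) * 69.0
        return math.hypot(dx, dy)
    return any(miles(u, h) <= threshold_miles
               for u in user_route_points for h in hazard_path_points)

# Example: routes that pass within ~2 miles trigger a notification.
print(should_notify([(34.05, -118.25)], [(34.06, -118.24)]))  # True
```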
  • In the example of FIG. 2, as the subject vehicle 30 passes driver 28, various embodiments allow the driver 28 (or other sensors able to identify subject vehicle 30) to update system data regarding the subject vehicle 30, e.g., regarding location. For example, based upon an updated location of the subject vehicle 30 being just past the driver 28, various embodiments determine that the need to notify driver 22 (ahead in the travel path) increases, while it may also be determined that the risk to drivers 23, 24, and 26 is virtually eliminated. The probabilities associated with the risk may be calculated by various embodiments based on the factors mentioned above and below.
  • Continuing with the vehicle hazard example, the method according to various embodiments of the present technology may be configured to run as an application (“app”) on a device.
  • Although apps and various other aspects may be described in examples with reference to a mobile device, other devices including those built into the vehicle or other object, or IoT devices of some kind, to name a few non-limiting examples, may also be used to practice the present technology according to some embodiments.
  • FIG. 3 illustrates an example screen 300 that shows a graphical user interface (GUI) provided by an application, according to various embodiments. The screen 300 is an example only, for reference, as a simplified display to enable a user to quickly enter information (e.g., to avoid the user being overly distracted from the road). In this example, the information may be entered on a mobile device where, using available sensors on the mobile device (and/or proximity to base stations or other known objects), the present location, direction of travel, and speed of the mobile device are known. This use of available data apart from user entry/selection can substantially reduce the information that a user needs to enter and/or select. Although the information may be referred to herein as entered, the entry may comprise selecting information, such as selecting an icon such as "speeding" 62. The entries can be aggregated or merged with existing mapping, Global Positioning Satellite (GPS), and other information, including information from other sensors the vehicle or mobile device may have. The application for performing the method according to various embodiments can allow an observer to notify others that a hazardous vehicle is in the area or may be approaching other people in danger. In one example, the observer can simply open the application, and with a few selections can enter relevant information into the application, as will be explained further below.
  • In the example shown in FIG. 3, on the screen 300 of the mobile device 60 display for the application, the primary observer 20 can select the subject vehicle type 61 of subject vehicle 30. For example, the speeding 62 type may be selected. In addition to speeding 62, other types shown in the example GUI include drunk, wrong way, slow, weaving, and along shoulder. Several of the types may be exhibited by a drunk driver. The screen 300 can have a slider for indicating the relative speed 65. As the slider is moved in response to user action, the speed display 66 can show the perceived or relative speed ("+25" in this example). The user action may be a touch, spoken command, and the like. Based upon this information and other acquired information or determinations, the method for the application can predict the direction of travel (e.g., relative to the primary observer 20 or others) and when the subject vehicle 30 may approach other drivers, off ramps, and freeway interchanges.
  • The example application screen 300 may also include an interface for the user to provide vehicle license plate information, e.g., "license" 67. The example screen 300 shows an optional location for entry 69 of the license plate information 68 of the subject vehicle 30. In response to selection of this area by a user by touch or spoken commands, etc., various embodiments will display a keyboard which the user can use to enter the letters, numbers, and symbols for the license plate, as well as the ability to utilize a device's camera and optical character recognition (OCR) technology to obtain license plate data. In various embodiments, this license plate information can be compared against an accessible state department of motor vehicles database to obtain the owner, color, make, and model of the subject vehicle 30. In some cases, the home address, work address, or route may also be obtained by the system to help determine a possible predicted destination for the subject vehicle 30. In some embodiments, the home address, work address, or route of subject vehicle 30, or other information obtained based on the license plate information and other information, including but not limited to facial recognition, may be used to also obtain a business address and addresses of relatives associated with the subject vehicle 30, or other information useful in determining route or future destination. This information may be referenced to predict potential destinations where the subject vehicle 30 may be heading. In some embodiments, the sensor used may be a camera, and the method can receive a photo of the subject vehicle 30 taken by the primary observer 20 using a mobile device 60's camera(s), or using a camera in the vehicle to capture an image or video of the vehicle license plate number of the subject vehicle 30, and the GUI can provide a portion to support image capture. In other embodiments, a provided user interface (and/or remote sensors) can enable an observer to select descriptive information about the vehicle, including but not limited to make, model, and color, which can also be deduced utilizing image recognition technology. In some embodiments, other descriptors for the subject vehicle can be obtained by sensor(s) including, but not limited to, damage, dents, level of cleanliness, stickers, and other aspects that could, alone or together with other aspects, be helpful to identify the subject vehicle 30.
• In response to actuation of the enter button on the GUI 300 in this example, various embodiments can determine the proximity and potential of another user encountering the subject vehicle 30. In some embodiments, other drivers can set up to be linked to receive alerts (akin to an Amber alert or tornado warning alert) based on inputs from the primary observer 20 and/or remote sensors, and determinations made therefrom and from sensor data, to alert the other drivers of particulars about the subject vehicle 30 and the respective likelihood of an encounter for that particular other driver. Various embodiments are particularly helpful if the subject vehicle 30 approaches another vehicle from behind, where that vehicle or its driver may not be as cognizant of the moving object. Other drivers can also use the application, and autonomous sensors can confirm the presence of, and potential for, an encounter with the object and provide the system with additional data with which to update information related to the subject vehicle 30 (hazard type of moving object).
• FIG. 4 shows an example screen 400, for a device, presented to a user in response to determining that a moving object (hazard) is observed at point 22 and is presently ahead of the user. Various embodiments may present the hazardous object with a moving visual representation of its predicted location along a route between the hazardous object and an alerted user, visible within the application. The method may achieve this by calculating relative speed/trajectory information and determining a distance and relative position, which may be presented at 92 and updated. The example screen 400 can show a series of distance areas 70, in the form of concentric circles or polygons, that become increasingly intense based upon proximity to the object. In some embodiments, a polygon effect is used, which equates to a reachable range in any direction over time based on the deduced capabilities of the identified object. The distance areas 70 may be shown in red, or could be a different color based upon the type of object and the weighting of any potential interaction. The color could be customizable by a user or alert provider (i.e., service subscribing entity) depending on their preference. The example screen 400 also provides a prompt 93 to allow for user input to update information on the object, which is relayed to the system and can be relayed to other application users and entities receiving data from the service.
• FIG. 5 shows an example screen 500, for a device, presented to a user in response to determining that a moving object (hazard) is observed at point 22 and is presently to the east of the user or driver 20. The screen 500 shows a series of distance areas that become increasingly intense based upon proximity to the object. The distance and position of the moving object that is a potential moving hazard may be presented at 92, shown relative to the position of the device that is running the application in accordance with various embodiments. In FIG. 5, the areas 70 may be shown in red for emphasis, but could be a different color based upon the type of object (i.e., type of hazard) and the weighting of any potential interaction, with the color customizable by the user or third party service provider. As an example, a drunk driver may be deemed a greater hazard than a slow-moving vehicle. Both moving objects have a potential for danger and unpredictability, but the potential for the drunk driver to weave, get into an accident, and cause injury can be far greater. The screen 500 also provides a prompt 93 to allow for user input to update information on the object that can be relayed to other application users and entities receiving data from the service.
• FIG. 6 shows an example screen 600 presented to a user in response to determining that a user (e.g., driver 21) is approaching a moving hazard type of object. In various embodiments, the application (per the corresponding method) collects reports received from multiple drivers and integrates the information to update proximate drivers determined to be potentially affected, and also to cancel projected hazard trajectories for users determined to be no longer affected. As other drivers add to the information, some projected routes can be cancelled in response to determining that the subject vehicle has changed from one road to another, e.g., changed freeways or exited a freeway. In various embodiments, the warnings generated for other drivers are localized based on each driver's proximity to the hazard and are determined as a function of characteristics of the hazard, e.g., a slow, weaving drunk driver, a wrong-way driver, a driver on the shoulder of the road in stopped traffic, a motorcycle, a fire truck, an ambulance, or a speeder weaving around drivers to maintain a higher rate of speed, to name just a few examples.
• Screen 600 in the example in FIG. 6 shows multiple example moving potential hazards 30-33 in proximity to the driver 21. In various embodiments, each hazard 30-33 type is moving and is identified with a different color code based upon the potential severity of the hazard. For screen 600, the driver 21 is shown approaching a hazard 30 which is along the guided trajectory generated according to the method in various embodiments, because the method determined that the driver 21 needs to pass through the interchange 40 based on the route driver 21 is taking. Screen 600 provides an alert 95 that shows the hazard type and provides instruction advising the best lane to follow to avoid the hazard, based upon information provided by other users. The alert 95 in this example has a prompt to allow the user to tap for more information. It should be noted that, in this example, the method determined that some of the moving potential hazards 31-33 are not in the determined travel path of the driver 21. Although these moving potential hazards 31-33 may have minimal or no effect on the driver 21, the driver (or a passenger) may be able to observe the hazards 31-33, so various embodiments allow the user to update or append information related to the hazards. Screen 600 also shows trip information 96, which may include the estimated time that the driver will arrive at their destination, the number of miles, and the amount of travel time. In various embodiments, the determination of the user's destination will reduce updates on other moving potential hazards 31-33 because it can be determined that those potential hazards are unlikely to affect the travel trajectory that has been determined for the driver 21. Screen 600 also shows at 94, in this example, the upcoming distance to a turn and identifies the turn along the projected route of travel.
• In various embodiments, the determination of the travel trajectory can be based on the route and destination for the user. The destination can be determined variously: for example, based on navigation system communication or user input/selection, or predicted based on historical data that can show a pattern of having certain destinations at the time and place of the trip, based on past destinations for the route taken, or based on the user's calendar or other information the user has provided. Different weights can be given to the information depending on its source, e.g., less weight to destinations based on distant history and more weight to destinations based on the venue for an event on the user's calendar accessible from the mobile device, for instance.
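• As an illustrative sketch only of such source-weighted destination prediction: the sources, weights, and names below are assumptions chosen to mirror the text (calendar venues outweigh distant trip history), not values from the disclosure:

```python
# Hypothetical source weights for candidate destinations.
SOURCE_WEIGHTS = {
    "calendar_event": 1.0,   # venue for an event on the user's calendar
    "recent_history": 0.6,   # destinations from recent trips on this route
    "distant_history": 0.2,  # destinations from long-ago trips
}

def predict_destination(candidates):
    """candidates: iterable of (destination, source) pairs.
    Returns the destination with the highest accumulated weight."""
    scores = {}
    for destination, source in candidates:
        scores[destination] = scores.get(destination, 0.0) + \
            SOURCE_WEIGHTS.get(source, 0.1)
    return max(scores, key=scores.get)

print(predict_destination([
    ("Office", "distant_history"),
    ("Stadium", "calendar_event"),
    ("Office", "recent_history"),
]))  # "Stadium": the calendar entry outweighs the accumulated history
```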
• For movement prediction in various embodiments, the form of movement needs to be matched with the polygon representing calculated potential locations within a determined timeframe, based on the mobility physics of the object. For example, a person can physically run at a maximum measured speed of nearly 28 MPH (e.g., world's fastest runner Usain Bolt in the 100-meter sprint). The average vehicle has an average maximum speed of between 100-120 miles per hour. In various embodiments, the method calculates the average maximum speed of an identified object/person/vehicle, along with the conditions of the object's terrain and the mobility and speed limited by same (i.e., hills, pavement condition, curves in road, etc.), calculates the maximum acceleration rate of the moving object (which could be a person, vehicle, or other object) along with its average calculated maximum speed, and adjusts the algorithm in real time using the actual geo-location and relative movement to determine the earliest potential opportunity for an encounter (e.g., with the moving hazard, person, vehicle, etc.). The algorithm can also adjust estimations in real time based on all possible locations where the identified object/person/vehicle could have been. This can create a level of awareness for all users that will allow for appropriate preparation for a possible encounter.
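• A minimal sketch of the kinematic bound this paragraph describes (not from the disclosure): given a maximum acceleration and top speed, the farthest the object could travel in a time window follows from accelerate-then-cruise physics. The terrain derating is reduced here to a single assumed factor:

```python
# Hedged sketch of a reachable-range upper bound. All names are assumptions.

def max_reachable_distance(t_seconds: float,
                           v0: float,
                           v_max: float,
                           a_max: float,
                           terrain_factor: float = 1.0) -> float:
    """Upper-bound travel distance (meters) after t_seconds, starting at
    speed v0 (m/s), accelerating at a_max (m/s^2) up to v_max (m/s).
    terrain_factor in (0, 1] derates for hills, curves, pavement, etc."""
    v_cap = v_max * terrain_factor
    t_accel = max(0.0, (v_cap - v0) / a_max)  # time to reach capped speed
    if t_seconds <= t_accel:
        return v0 * t_seconds + 0.5 * a_max * t_seconds ** 2
    d_accel = v0 * t_accel + 0.5 * a_max * t_accel ** 2
    return d_accel + v_cap * (t_seconds - t_accel)

# A runner near the ~28 MPH (~12.5 m/s) human cap, over a 60-second window:
print(round(max_reachable_distance(60, 0.0, 12.5, 3.0)))  # ~724 meters
```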
• FIG. 7 shows an example screen 700 presented to a user to route around a potential hazard (in response to determining that the potential hazard was along the route the user had been taking). In various embodiments, the system is configured to provide a database (or other data structure) that can be quickly cycled through to identify a specific object type with associated predictive historic traits/trends, including statistical movement choices in space as well as the known acceleration and speed ability of the identified object, e.g., to provide the ultimate alert/warning system.
• Additionally, the application can take into consideration, through its predictive algorithm and geographic system, the medium an object will move through over time, including air, water, space, etc., and adjust the maximum acceleration and top speed of the maximum reachable range according to the known substance being traversed by the object.
• In various embodiments, the application uses traffic patterns and travel times on each street, highway, or road and, in addition, tracks moving hazards and moving potential hazards to alert to the potential hazard type and instructions 95 in FIG. 7 and trip information 96. All known routes of the potential hazard can be highlighted to further alert the user. While a recent accident may not generate a present delay in traffic speeds, a moving hazard can present different challenges, so in various embodiments, a driver 21 approaching a hazard 32 along their route will be re-routed or, at a minimum, be alerted before they encounter the hazard 32. On screen 700, the hazard is identified by a drink icon; other appropriate icon warnings may be used to identify various hazards.
• FIG. 8 illustrates a block diagram of an operational flow chart for a method where the hazard is a moving vehicle on a roadway, according to some embodiments. A user selection of the application is received at 80. A warnings-only mode can be provided, whereby the application can run in the background; alerts can be sent and received as notifications, as text messages, or within other applications, providing warnings as determined according to various embodiments described herein. Upon initialization of the program, or within application settings, options may be presented to the user for selecting the sensitivity for notifications, e.g., selecting various setpoints. For example, at a low level the user has the option to be notified only when a speeder is approaching from the rear at a relatively close distance as determined by various embodiments. The methods and system in various embodiments can also provide the option for the user to select a high level of user notification and specify a distance such that a notification will be generated and sent to the user for any reported moving objects of concern, such as, for instance, any reported hazards within that distance in miles (e.g., where the user is driving on an unfamiliar road at night with potential for various unseen objects). In additional embodiments, such settings may be determined by a third party subscriber to the system that is providing data to its users and customers.
• In some embodiments, the application may be provided to the user on the mobile device, or preinstalled on the user's mobile device or within the vehicle, e.g., by an insurance company, the vehicle manufacturer, or a vehicle owner (other than the driver) who owns or otherwise controls the software for the application. The insurance company, for instance, may want the application settings to have broader setpoints and to prevent the driver from adjusting those predetermined setpoints.
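• For illustration only, the sensitivity options and provider lock described in the two preceding paragraphs could be represented by a settings object like the following. The field names, defaults, and eligibility rule are assumptions, not the disclosure's design:

```python
# Hypothetical notification-sensitivity settings and eligibility check.

from dataclasses import dataclass

@dataclass
class AlertSettings:
    level: str = "low"                # "low" or "high" sensitivity
    radius_miles: float = 5.0         # notify for hazards within this range
    rear_approach_only: bool = True   # low level: only speeders closing from behind
    locked_by_provider: bool = False  # e.g., insurer-set and not user-adjustable

def should_consider(report_distance_miles: float,
                    approaching_from_rear: bool,
                    settings: AlertSettings) -> bool:
    """Decide whether a report is eligible for alerting under the user's
    (or third party subscriber's) sensitivity settings."""
    if report_distance_miles > settings.radius_miles:
        return False
    if settings.level == "low" and settings.rear_approach_only:
        return approaching_from_rear
    return True

print(should_consider(3.0, True, AlertSettings()))   # True
print(should_consider(3.0, False, AlertSettings()))  # False at the low level
```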
• After initialization of the program, and based on selections, if any, of options by the user as permitted, or predetermined settings, the application can provide an information entry screen for display to the user, at 81 in FIG. 8. The items on the screen that the user can adjust may vary depending on the options selected. Screen 300 in FIG. 3 shows an example of an information entry screen. At 82 in the flowchart 800, the application allows the user to enter certain data on the information entry screen. The application can receive data entered by the user at 82, as described herein in various embodiments. While not required, a license plate of the moving object (e.g., of subject vehicle 30 in FIG. 2), for example, can be manually entered at 83 by the user. Alternatively, a sensor may detect the license plate, and software, locally or in the cloud, could perform character recognition to ascertain the license plate number. Although a camera is mentioned at 84 as detecting the license plate, the sensor is not limited to a camera; other suitable sensors may be used that are capable of obtaining an image of the license plate or capable of determining the license plate number. The application can allow a driver (e.g., the primary observer 20 in FIG. 2) to take a photo or video of the moving subject vehicle. In addition, images can be taken by remote sensors, dashcams, and the like that are networked and connected to the service.
  • After the data, such as that shown for example at screen 300 in FIG. 3, is entered, the data can be merged, at 85 in FIG. 8, with GPS information from the mobile device or vehicle. In various embodiments, the mobile device can determine location, direction and speed data for the mobile device and the primary observer 20 in FIG. 2 in the same vehicle, using various sensors.
• At 86 in FIG. 8, the data can be merged with mapping programs or applications, locally or in the cloud. In various embodiments, the various mergings provide capabilities that would otherwise not be available to the mapping programs, applications, users, or third party subscribers to the data.
  • At 87 in FIG. 8, the data can be used, locally or in the cloud, to calculate future path(s) of the subject vehicle 30 in FIG. 2, as described further herein in various embodiments.
  • At 88 in FIG. 8, the relative positions between the subject vehicle 30 in FIG. 2 and other drivers are determined according to various embodiments. Based at least in part on the distance, potential travel routes, and travel speed of the subject vehicle 30 in FIG. 2, as described in examples herein, the potential for interaction between the drivers and the subject vehicle 30 is calculated, locally or in the cloud, at 89 in FIG. 8. In some embodiments, other users are provided with an image of the subject vehicle 30 which will reduce the amount of input required from those users and allow for a quicker identification and verification of the subject vehicle.
  • At 90 in the example flowchart 800 in FIG. 8, the application will then compare, locally or in the cloud, the determined potential for interaction with a threshold set for notification to determine the level of notification to provide to the particular user. In response to the potential for interaction being determined to be above the set threshold, at 91, an alert (warning) can be generated for delivery to a subject user, e.g., displayed on the user's mobile device. The application will continue to monitor the moving objects (subject vehicle 30 shown in the examples in FIG. 1 and FIG. 2, and other moving hazards determined to have potential for interaction along the user's route, for example) and compare the potential interaction with the threshold to determine if the alert should continue to be displayed, additional alerts sent, or if the potential for interaction with the object is no longer present.
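• A minimal sketch of the comparison at 90-91 (not from the disclosure): compare a computed interaction potential against the notification setpoint each monitoring cycle, and decide whether to raise, keep, or clear the alert. The 0-1 "potential" scale and names are assumptions:

```python
# Hypothetical one-cycle alert-state update for the FIG. 8 loop.

def update_alert_state(interaction_potential: float,
                       threshold: float,
                       alert_active: bool) -> tuple[bool, str]:
    """Return (new_alert_state, action) for one monitoring cycle."""
    if interaction_potential >= threshold:
        return True, "keep_alert" if alert_active else "send_alert"
    return False, "clear_alert" if alert_active else "no_action"

# The potential rises above a 0.6 setpoint and later falls away.
state = False
for potential in (0.3, 0.7, 0.8, 0.2):
    state, action = update_alert_state(potential, 0.6, state)
    print(potential, action)  # no_action, send_alert, keep_alert, clear_alert
```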
  • While in various embodiments a mobile device may be used, some embodiments of the methods and systems for the application may utilize static locations that include traffic cameras, for example, to determine a subject vehicle 30 from FIG. 2 and send notifications to subject users and data subscribers.
• In other embodiments, the application can also be used to identify both moving and static objects, or to identify a subject vehicle that is emitting moving objects which are hazards, such as released trash or dropped gravel.
• Some embodiments may be used for warning pedestrians of an approaching subject vehicle 30 in FIG. 1 and FIG. 2, or of other approaching objects (including persons). The method and system can provide protection to people walking around a city with a messaging and tracking system. The method and system can connect a tracked network of individuals with others, as well as with police when needed for immediate support. In some embodiments, the method and system tracks a user's location amongst their friends and broadcasts to their friends, or to all people in the immediate surrounding area, if a call for help is initiated, for example, as a networked 911 system.
• In various embodiments, the application uses sensor data to track the speed and direction of the user 20 in FIG. 2 who is inputting the data using the application, and based at least in part on that sensor data (which may variously include accelerometer data, gyroscope data, GPS data, Wi-Fi data, other radio data, and other user-provided data), the application can determine the relative speed of the identified subject vehicle to project and calculate all possible trajectories of the identified subject vehicle 30 in FIG. 2 over an adjustable setpoint of time. As also described above, the license plate or other data identifying the subject vehicle 30 may be used to determine potential destinations for the subject vehicle 30 and to determine projected routes and how such routes might potentially intersect with those of the system users.
• Moving object predictive locating, reporting, and alerting for other types of objects besides unsafe drivers are within the scope of the present technology and are described further herein.
• FIG. 9 illustrates example entry screens 900, 910, and 920 for an application, according to other embodiments. Example screen 900 is an introductory screen shown to a user in response to the application being started. Screen 910 shows options for “Report Hazard” and “Hazards Nearby”, in various embodiments. In response to the selection of “Report Hazard”, screen 920 may be displayed, which provides the option for the user to select “Record Description” using audio and/or video, or “Enter By Hand” for the user to enter the report manually.
  • Some embodiments provide users and other sources with the option to include audio, video, virtual reality systems, inputs from remote networks through sensors and photos as part of their reports; enable voice reports from the user to include in-depth information; and parse audio and video reports using artificial intelligence separately or together with human monitors.
• FIG. 10 illustrates an example screen 1000 providing a user interface to record a description for reporting a hazard, according to various embodiments. In response to selection of the “Record Description” option on screen 920, screen 1000 may be presented to the user to select initiation of an audio recording to report a hazard. In various embodiments, in response to selection of the “Record” button, the user may be prompted with the following audio: “Please speak clearly. Do not take any action that might be dangerous. Please include the type of hazard, including color and model of any vehicles, the speed and direction they are traveling, and what you observe to be the hazard. Please begin recording after the tone.” The present technology is not limited to this particular wording of the prompt; other suitable audio prompts may be used in some embodiments. In response to detecting that the user has completed the recording, screen 1010 may be presented to the user. Screen 1010 provides, after recording, “Submit,” “Playback,” “Re-record”, and “Delete” options, in some embodiments.
• FIG. 11 illustrates three example screens 1100, 1110, and 1120 for alerts provided to the user overlaid on a map, according to an example embodiment. Screen 1100 can provide selections for Speeding, Swerving, Distracted, Aggressive, and Slow, along with a Submit button, to characterize the other driver, where multiple selections may be made. Screen 1110 is presented to the user to provide for entry of more information (“info”) such as, in this example, License, Make/Model, Color, Speed, and Direction for the other vehicle, which can be submitted using the Submit selection. Screen 1120 can be presented to the user to provide a voice audio input default after selecting each of the options shown in screen 1100, for example to speak the license number, and text input if the user desires.
• FIG. 12, FIG. 13, and FIG. 14 illustrate example screens and icons for example reports of alerts provided to a user. For each screen, the user is prompted to confirm “See This Hazard” regarding the hazard identified by one of the icons on the screen, according to various embodiments.
• FIG. 12 illustrates example screens 1200, 1210, 1220, and 1230 for alerts provided to the user overlaid on a map, according to an example embodiment. In example screen 1200, a moving object hazard report location is depicted along with the route the moving object hazard is calculated to take towards a subject user driving in the same direction as, and ahead of, the object hazard. Also, in this embodiment example, a zone or region depicting the most likely current location of the object hazard is shown as distinct, using an alternate or more intense graphic or color, and may include a representative icon. In example embodiment 1200, the most likely current location of the object hazard is behind the subject user's vehicle. In addition, 1200 illustrates example icons that may be used. In example screen 1210, a moving object hazard report location is depicted along with the route the moving object hazard is calculated to take when driving in the same direction as a subject user, when driving ahead of the subject user, and when the subject user is located behind the object hazard's previously reported location. In this embodiment example, zones or regions depicting the most likely current locations of the object hazard are shown as distinct, using an alternate or more intense graphic or color, and may include a representative icon. In example embodiment 1210, the most likely current locations of the object hazard are ahead of the subject user's vehicle. In example screen 1220, a moving object hazard's previously reported location is depicted along with the route the moving object hazard is calculated to take when driving in the same direction as a subject user, when it is predicted to be driving ahead of the subject user, and when the subject user is located ahead of the object hazard's previously reported location. In this embodiment example, zones or regions depicting the most likely current locations of the object hazard are shown as distinct, using an alternate or more intense graphic or color, and may include a representative icon. In example embodiment 1220, the most likely current locations of the object hazard are ahead of the subject user's vehicle. In example screen 1230, a moving object hazard's previously reported location is depicted along with the route the moving object hazard is calculated to take when driving in the same direction as a subject user and if on the same route as the subject user, when it is predicted to be driving ahead of the subject user, and when the subject user is located ahead of the object hazard's previously reported location. In this embodiment example, routes the moving object hazard may have taken, but which the subject user has not traveled, are not depicted; only the routes the subject user may take and which may be taken by the moving object hazard are displayed. Zones or regions depicting the most likely current locations of the object hazard are shown as distinct, using an alternate or more intense graphic or color, and may include a representative icon. In example embodiment 1230, the most likely current locations of the object hazard are ahead of the subject user's vehicle.
• FIG. 13 illustrates two more example screens 1300 and 1310 for alerts provided to the user overlaid on a map, according to an example embodiment. In example screen 1300, a moving object hazard report location is depicted along with the route the moving object hazard is calculated to take if driving towards a subject user that is traveling towards the moving object hazard. In this embodiment example, the zones or regions depicting the most likely current locations of the moving object hazard are not shown, since it is calculated that the moving object hazard would be beyond the subject user's vehicle. In example embodiment 1300, the most likely current locations of the object hazard are behind the subject user's vehicle. In example screen 1310, a moving object hazard report location is depicted along with the route the moving object hazard is calculated to take towards a subject user driving in the same direction as, and ahead of, the object hazard. In this embodiment example, the zones or regions depicting the most likely current locations of the moving object hazard are not shown, since it is calculated that the moving object hazard would be beyond the subject user's vehicle. In example embodiment 1310, the most likely current locations of the object hazard are ahead of the subject user's vehicle.
• FIG. 14 illustrates two additional example screens 1400 and 1410 for alerts provided to the user overlaid on a map, according to an example embodiment. In example screen 1400, a moving object hazard report location is depicted along with the route the moving object hazard is calculated to take towards a subject user driving in the opposite direction and heading towards the moving object hazard (in this example, the moving object hazard is calculated to be headed north and the subject user is calculated to be heading south). Also, in this embodiment example, a zone or region depicting the most likely current location of the object hazard is shown as distinct, using an alternate or more intense graphic or color, and may include a representative icon. In example embodiment 1400, the most likely current location of the object hazard is ahead of the subject user's vehicle and between the subject user's vehicle and the previously reported moving object hazard location. In example screen 1410, a moving object hazard report location is depicted along with the route the moving object hazard is calculated to take if driving in the same direction as a subject user that is traveling towards the moving object hazard's previously reported location. Zones or regions depicting the most likely current location of the object hazard are shown as distinct, using an alternate or more intense graphic or color, and may include a representative icon. In example embodiment 1410, the most likely current location of the object hazard is ahead of the subject user's vehicle.
• Amber alerts may be provided in various embodiments. Such alerts are critical and are directed to locating and reporting abductors of missing children. The U.S. Department of Justice has estimated that nearly 800,000 children are reported missing every year. The National Center for Missing and Exploited Children estimates that 203,000 children are kidnapped each year by family members. As an example of this embodiment, upon the received report of a kidnapped child, a user such as law enforcement could identify the reported location of the abduction and view a timestamped area on a map, including polygons with all potential locations the perpetrator is calculated to be able to travel to based on the current time, as well as into the future. This example embodiment provides potential locations where the child and perpetrator(s) have been, may be, and will be. Additionally, if the reported child or child abductor is identified and reported by another application user, by a person calling into 911 or a similar service, or by sensor(s), automated or manually operated, the data could be updated, and the additional report can aid the system in triangulating and more accurately predicting the abductor's and the child's actual position.
• The map views, for example in FIGS. 4-7 and 12-14, may include a polyline/route drawn from the hazard to the user's current location, reflecting how far an object or hazard can move over time based on its maximum known movement capacity. If the user moves off the route, the current route map may or may not be updated in various embodiments. Alternatively, new routes can be determined, and the map and alerts can be generated based on the new routes and any additional data obtained on objects with respect to the new route.
  • FIG. 15 illustrates two example screens 1500 and 1510 for providing amber alert notifications and information to a user, according to an example embodiment. Screen 1500 is an “Amber Alert Notification” example screen. Similar notification screens for other alert types can utilize this method. Screen 1510 is an example screen that provides an “Amber Alert Info” option for users to report a sighting. The “Report This” selection causes initiation of the report section of the application, as described further herein. Additionally, Amber Alerts may be depicted in any of the forms described and depicted for other moving objects, and in ways not depicted herein. Amber Alert data, predicted locations, and other information also may be consumed by third party subscribers to the data.
• Alerts may be in the form of push notifications, an API alert, server-side alerts, and the like. Users within a range (e.g., an area which may have two or three dimensions in the present technology, referred to herein as a polygon for examples of two-dimensional areas) of a reported object will receive a push notification in some embodiments. The polygon can be global, with the report occurring based on a user's last known location. An example initial range could be 30 miles. A user may only see hazard reports in range. Without adding background services, notifications can be sent based, for example, on a user's location, or most recent known location within a timeframe.
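• As an illustrative sketch only of the range check above: the names below are assumptions, and a great-circle distance against the example 30-mile radius stands in for a true reachable-range polygon test:

```python
# Hypothetical in-range filter using haversine great-circle distance.

from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_MILES = 3958.8

def haversine_miles(lat1, lng1, lat2, lng2):
    """Great-circle distance between two lat/lng points, in miles."""
    p1, p2 = radians(lat1), radians(lat2)
    dlat, dlng = p2 - p1, radians(lng2 - lng1)
    a = sin(dlat / 2) ** 2 + cos(p1) * cos(p2) * sin(dlng / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

def users_in_range(report_latlng, users, radius_miles=30.0):
    """users: iterable of (user_id, lat, lng) last-known locations."""
    lat0, lng0 = report_latlng
    return [uid for uid, lat, lng in users
            if haversine_miles(lat0, lng0, lat, lng) <= radius_miles]

print(users_in_range((34.05, -118.24),
                     [("a", 34.10, -118.30), ("b", 36.17, -115.14)]))
# ["a"] -- user "b" is far outside the example 30-mile range
```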
  • As described herein, reports concerning various objects can be received by many users and sources which can be used by various embodiments as crowdsourcing inputs for the moving object predictive locating, reporting, and alerting aspects.
• Various embodiments provide segmenting of objects by categories, and provide for transmitting appropriate information to law enforcement for particular objects. In various embodiments, the method does not disturb or alert users outside the object range, to avoid being the high-tech equivalent of “the boy who cried wolf”. There are numerous other use cases and embodiments for other types of moving objects and the unique challenges each can present.
• For example, drones or other automated sensing devices can be used to monitor and/or identify a moving object and take various actions. In some embodiments, one or more drones are launched in response to a report of a moving object. Drones can be launched from other moving vehicles, from stationary stored locations, or from roving vehicles tasked to follow tagged objects. The moving object can be detected by the drone(s) (e.g., based on the initial report with descriptors), reported, tagged, and followed. Some embodiments provide a unique identifier for any target moving object; use a network of available sensors from moving vehicles, fixed points, IoT devices, etc.; and deliver data to a central system for later identification and tracking. This system may be cloud-based and could be decentralized for increased security and capability. In some embodiments, reports from users and sensors are monitored to help combat false positives. Additionally, the weighting effect of multiple reports of the same moving object in a limited area over time can help combat false positives. If only one report of the moving object is received, it may be weighted less than multiple reports would be.
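• For illustration of the weighting idea only (the decay constants and names below are assumptions, not from the disclosure): corroborating reports of the same object within a limited window raise confidence, while a single uncorroborated report stays low-weighted:

```python
# Hypothetical confidence score from report recency and multiplicity.

def report_confidence(report_ages_minutes, window_minutes=30.0):
    """report_ages_minutes: ages (minutes) of reports matched to the same
    moving object within the area of interest. Each fresh report adds
    weight; older reports contribute less; stale ones contribute nothing."""
    weight = 0.0
    for age in report_ages_minutes:
        if age <= window_minutes:
            weight += 1.0 - (age / window_minutes)
    return min(1.0, weight / 2.0)  # saturate: ~two fresh reports = full confidence

print(report_confidence([5.0]))             # single report: ~0.42
print(report_confidence([2.0, 8.0, 12.0]))  # corroborated: 1.0
```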
• Exemplary methods include a system for real-time tracking using, for example, drones and/or networked sensor technology. In some embodiments, drones or other sensor-carrying devices (hard-mounted or mobile, including law enforcement vehicles and autonomous vehicles) identify and image targets using sensors such as photo, laser, lidar, infrared, radar, sonic, etc. to identify a target's unique identity (e.g., turning objects into their own QR codes, in essence) so other networked sensor systems can help recognize and update target location and other target deltas to the central system.
  • In some embodiments, the method can calculate the movement of an object in any predictable direction based on the movement range and characteristics of that object, including rate of acceleration and range of speed. Such embodiments may aid, for example, in the tracking of drones that could pose a danger to passenger aircraft, vehicles, or people.
• Various embodiments are based on the speed of the relevant vehicle at the time detection was made. The following may be considerations for the speed determination: real-time traffic patterns; speed limit of the roadway/street; speed of the user reporting the hazard; rate of acceleration; and speed range of the object's make and model. Rate of acceleration and speed can be sensed with updated data from sensors and from users reporting the hazard/object, and can be based on calculations using the known maximum acceleration and maximum top speed of the hazard/object. As used herein, roadway describes any road, which could be a freeway, expressway, highway, ramp, street, passageway, service road, alley, avenue, boulevard, turnpike, autobahn, motorway, dirt road, gravel road, or other type of road.
• In some embodiments, the map drawings and route information, other than the information related to the hazards and other objects, predictive locating, reporting, and alerting described herein, may be based, at least in part, on commercially available map information. In that regard, in various embodiments, a hazard reporter module sends the hazard (or other object) report to a server (or other computer on which a portion of the computing power for executing the method may reside). The report can include one or more of the latitude/longitude (lat/lng) coordinates of the reporter vehicle and, if any was recorded, a voice file. The method may calculate a polygon from the reported location and look up users reporting within that polygon within a certain number of minutes (e.g., 30 minutes, for instance). The method initiates the sending of alerts (e.g., push notifications) to all users meeting the above criteria. The method may generate an application programming interface (API) request to a map server for a route from the hazard (or other object) report lat/lng to the current alerted user lat/lng. The method configures the user interface for the alerted user to display a map with their location, the hazard (or other object) reported location, and an animated polyline running from the hazard (or other object) reported location to the user's current location.
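• An end-to-end sketch of this report-to-alert flow, for illustration only: the map-server call and the distance function are passed in as plain callables, and all names, the 30-minute window, and the radius stand-in for the polygon are assumptions:

```python
# Hypothetical server-side handler: filter users by recency and range,
# fetch a route polyline, and push the alert payload.

import time

def handle_hazard_report(report, users, push, route_api, distance_miles,
                         window_seconds=30 * 60, radius_miles=30.0):
    """report: dict with 'lat', 'lng', and optionally 'voice_file'.
    users: iterable of dicts with 'id', 'lat', 'lng', 'last_seen' (epoch s).
    push: callable(user_id, payload) delivering a push notification.
    route_api: callable(from_latlng, to_latlng) -> polyline for the map.
    distance_miles: callable(lat1, lng1, lat2, lng2) -> miles."""
    now = time.time()
    for user in users:
        if now - user["last_seen"] > window_seconds:
            continue  # last report too old for this alerting window
        if distance_miles(report["lat"], report["lng"],
                          user["lat"], user["lng"]) > radius_miles:
            continue  # outside the alerting polygon (radius stand-in)
        polyline = route_api((report["lat"], report["lng"]),
                             (user["lat"], user["lng"]))
        push(user["id"], {"report": report, "route": polyline})
```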
  • In some embodiments, the method running in the application may make an API request to a map server that includes a request for all primary roadways within a certain travelable timeframe from the hazard (or other object) report. The method initiates a display on the user interface of the alerted user to display highlighted primary roadways for all locations the hazard (or other object) may have traveled within that time frame. In some embodiments, this can include displaying polygons of a potentially traveled area within a timeframe, and displaying polylines for all primary roadways between the polygons.
• In some embodiments, an identification feature is provided within the method that locates the position of wanted persons based on various information. For “Most Wanted” persons or escaped prisoners, for example, the reachable range can be calculated based on information from sensors and other data, to provide alerts of potential threats that could potentially affect a user. For example, someone or sensors may identify a subject and report to the system, and in response, the system determines and generates possible locations/ranges that the subject may have traveled to within time constraints. All known locations and possible past positions can be made available to responding officers in all zones based on time and trajectory. Using the information and/or the application, officers can easily keep track of each other as well as all associated and needed supporting agencies, including fire, hospitals, news agencies, coast guard, and military. In various embodiments, voice recognition capabilities are provided across all needed networks. These agencies will be fortified with the present technology due to additional access to civilian reporting to harness the power of crowdsourcing, according to various embodiments. Such embodiments may be used in cases of assault victims and attempted or actual theft, burglary, robbery, human trafficking, and runaway children reported by parents with last-seen reports, to name several non-limiting examples.
• Some embodiments provide integration with various other apps, for example for hiking or scuba diving, to name just a few, regarding various hazards (or other objects) that may be encountered during those activities and for responding to the same.
• For hiking app integration, for instance, the method may track hikers, mountain bikers, rock climbers, ice climbers, etc. The method may detect a rock climbing fall, general fall, or other hazardous movement and provide a “life alert” two-way communication if a person is disabled from a fall. If there is no response from the climber or hiker to an alert, the method can alert others (where “others” as used herein may include people, robots (e.g., “robot doctors”), autonomous vehicles, drones, and the like) in the area with location information to provide help for the injured person. Other kinds of problems that may occur on the hiking trail, such as a sprained ankle, dehydration, getting lost, etc., could also be reported. In addition, the presence of dangerous animals could be detected by various embodiments or reported by others, including the presence of bears, rattlesnakes, mountain lions, etc. The last known location of such moving hazards (or other objects) could be sent to authorities, rangers, and hikers in the area.
• Internet access might not be readily available along a trail, so in some embodiments, point-to-point app communication capability is provided such that a chain of people using the application can send data from person to person (and ultimately through the Internet or cellular communication if one person in the chain/mesh has a connection to such sources or a satellite phone) in case of emergency, or for convenience when connectivity becomes available. The method may provide an area highlighted on the user interface showing where the hazard (or other object) could have been and presents that information for a timeframe. The amount of time may be user-selectable, or predetermined by the system based on various factors, including type of moving object hazard, or by third party subscribers to the system.
  • In other embodiments, the method provides the option to have people report “harmless” animals such as deer, owls, eagles, etc. for birdwatchers, for fun and/or for photo opportunities. The method could, based on the movement determined from sensors and/or reports, determine how busy a trail is based on monitoring trail traffic. In some embodiments, the method can detect people running in panic and use that information to alert users of possible danger.
  • In some embodiments, alerts can be generated concerning dangerous weather including movement, for example, flash floods, storms, mudslides, tornadoes, wind, snow, rain, earthquakes, etc. Other reports and alerts could concern water contamination due to, for example, red tide, chemical spills, etc. and could include alerting all appropriate authorities.
• In some embodiments, the present technology may be utilized by a pedestrian that observes a hazard (or other object) to warn others, e.g., the hazard (or other object) being a vehicle driving on a sidewalk, heading toward the sidewalk, etc. In other embodiments, the system can utilize built-in pedometer and accelerometer functionality in the mobile device to confirm whether a person is actually walking instead of driving or cycling slowly. The technology can additionally utilize the mobile device's (e.g., cell phone's) or other utilized device's accelerometer and/or gyroscope.
• For scuba diving, for example, the method can be used to detect, based on sensors/reports, that sharks are in the area, along with other dangers, and report the same to users.
  • In other embodiments, the system may predict the outward potential movement of an individual in a vehicle, on foot, using mass transit, or using an electric bike (e-bike), an electric scooter (e-scooter), or any other forms of transportation.
• In some embodiments, military uses may be provided to protect and network soldiers against enemy combatant threats in all possible forms, whether soldiers, vehicles, drones, ballistics, etc. In any location, the method and system may track the outward movement of an identified soldier on foot and how fast they could move in any and all locations based on the fastest rate of speed of a human on foot. Various embodiments can provide multiple projections based on the ability of the object. For example, the system can provide three projections/reachable-range polygons based on the ability of a combatant soldier to change his/her method of movement, as shown in the sketch following this paragraph. For example, a person may change quickly from running to driving to flying. All these scenarios can be calculated and filtered for consideration by the receiver.
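• A minimal sketch of the multiple-projection idea, for illustration only: one reachable-range radius per candidate movement mode, so a receiver can filter the scenarios that matter. The speed and acceleration figures are rough assumptions, not values from this document:

```python
# Hypothetical per-mode reachable-range projections.

MODES = {
    # mode: (v_max m/s, a_max m/s^2) -- illustrative assumptions
    "foot":    (12.5, 3.0),   # near the fastest measured human sprint
    "vehicle": (50.0, 4.0),   # roughly a 112 MPH passenger-vehicle cap
    "air":     (80.0, 6.0),   # small drone / light aircraft class
}

def reachable_radius(t: float, v_max: float, a_max: float) -> float:
    """Accelerate from rest to v_max, then cruise; distance in meters."""
    t_accel = v_max / a_max
    if t <= t_accel:
        return 0.5 * a_max * t ** 2
    return 0.5 * a_max * t_accel ** 2 + v_max * (t - t_accel)

def projections(t_seconds: float) -> dict:
    """One reachable-range radius per mode for a t-second window."""
    return {m: reachable_radius(t_seconds, v, a) for m, (v, a) in MODES.items()}

for mode, meters in projections(300).items():  # five-minute window
    print(mode, round(meters))  # foot ~3724, vehicle ~14688, air ~23467
```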
• The method and system, based on its determinations, can confirm the earliest possible time of encounter as well as all possible locations of the opposing force within a polygon through its outward movement prediction capability, using reachable range technology based on movement through time. This can help all surrounding friendly forces know the earliest possible encounter time frame or location for strategic preparation. In some embodiments, friendly forces can use the information from an analytics perspective for all needed scenarios.
• Insurance companies can see an immediate, measurable reduction in claims once the present technology is paired with their services. Customers could then be more aware of hazards (or other objects) and will have the ability to maneuver around known hazards (or other objects), which should reduce accidents and insurance costs, for example.
• Autonomous self-driving vehicles can benefit from the present technology, as it will allow networked vehicles to better detect moving hazards (or other objects) to increase their ability to protect their passengers. Additionally, passengers can generate hazard (or other object) reports, which can be added to the autonomous self-driving vehicle network.
  • FIG. 16 is a simplified flow diagram of an example method, according to some embodiments, with further details described herein.
  • Step 1602 is optional, as indicated by the dashed line, and includes providing a user interface to a user for information entry on a device, as described further herein.
• Step 1604 includes receiving (optionally via the user interface) moving object data corresponding to a moving object, as described further herein.
• Step 1606 includes receiving sensor data from a sensor, as described further herein.
  • Step 1608 includes merging the received moving object data and the received sensor data into a set of merged data, as described further herein.
• Step 1610 includes, based on the merged data set, automatically determining one or more of: a predicted location for the moving object, a potential path of travel for the moving object, a potential for interaction between the moving object and one or more other objects, and an alert concerning the moving object, as described further herein.
  • Step 1612 includes providing the alert, as described further herein.
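• For illustration only, the FIG. 16 flow (steps 1604-1612) could be skeletonized as below. The helper names, the trivial predicted location, the 0-1 interaction scale, and the 0.5 setpoint are all assumptions standing in for the fuller embodiments described throughout this document:

```python
# Hypothetical skeleton of the FIG. 16 method.

def run_method(moving_object_data, sensor_data, user_latlng, notify,
               threshold=0.5):
    # Step 1608: merge the moving object data with the sensor data.
    merged = {**moving_object_data, **sensor_data}

    # Step 1610: determine a predicted location and interaction potential
    # (here, trivially the last known fix and an inverse-distance score).
    predicted = merged["last_latlng"]
    dx = predicted[0] - user_latlng[0]
    dy = predicted[1] - user_latlng[1]
    separation_deg = (dx * dx + dy * dy) ** 0.5
    potential = max(0.0, 1.0 - separation_deg / 0.5)  # ~0.5 deg horizon

    # Step 1612: provide the alert when the potential crosses the setpoint.
    if potential >= threshold:
        notify({"location": predicted, "potential": round(potential, 2)})
    return potential

print(run_method({"type": "speeding"}, {"last_latlng": (34.05, -118.24)},
                 (34.10, -118.30), print))
```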
• In some embodiments, the present technology is a system (and corresponding method) that provides a service where third parties provide inputs and those third parties or others receive outputs from the system. Inputs could include all types of sensor data pertaining to users and moving objects (e.g., objects that could be classified as hazards), with third party consumption of both that same data and outputs received from the system. The outputs received by the third party provider could include additional information generated by the system pertaining to predictions determined concerning, but not limited to, approximation and estimation of future location, proximity, trajectory, and routing.
  • For one non-limiting example, the method can further include providing the merged data set to a third party provider, e.g., for generating a predicted location for the moving hazard (or other object), at least one potential path of travel for the moving hazard (or other object), and/or a potential for interaction between the first user and the moving hazard (or other object), and for generating and transmitting an alert.
  • FIG. 17 illustrates an exemplary computer system 1700 that may be used to implement some embodiments of the present invention. The computer system 1700 in FIG. 17 may be implemented in the contexts of the likes of computing systems, networks, servers, or combinations thereof. The computer system 1700 in FIG. 17 includes one or more processor unit(s) 1710 and main memory 1720. Main memory 1720 stores, in part, instructions and data for execution by processor unit(s) 1710. Main memory 1720 stores the executable code when in operation, in this example. The computer system 1700 in FIG. 17 further includes a mass data storage 1730, portable storage device 1740, output devices 1750, user input devices 1760, a graphics display system 1770, and peripheral device(s) 1780.
  • The components shown in FIG. 17 are depicted as being connected via a single bus 1790. The components may be connected through one or more data transport means. Processor unit(s) 1710 and main memory 1720 are connected via a local microprocessor bus, and the mass data storage 1730, peripheral device(s) 1780, portable storage device 1740, and graphics display system 1770 are connected via one or more input/output (I/O) buses.
  • Mass data storage 1730, which can be implemented with a magnetic disk drive, solid state drive, or an optical disk drive, is a non-volatile storage device for storing data and instructions for use by processor unit(s) 1710. Mass data storage 1730 stores the system software for implementing embodiments of the present disclosure for purposes of loading that software into main memory 1720.
  • Portable storage device 1740 operates in conjunction with a portable non-volatile storage medium, such as a flash drive, floppy disk, compact disk, digital video disc, or Universal Serial Bus (USB) storage device, to input and output data and code to and from the computer system 1700 in FIG. 17. The system software for implementing embodiments of the present disclosure is stored on such a portable medium and input to the computer system 1700 via the portable storage device 1740.
  • User input devices 1760 can provide a portion of a user interface. User input devices 1760 may include one or more microphones, an alphanumeric keypad, such as a keyboard, for inputting alphanumeric and other information, or a pointing device, such as a mouse, a trackball, stylus, or cursor direction keys. User input devices 1760 can also include a touchscreen. Additionally, the computer system 1700 as shown in FIG. 17 includes output devices 1750. Suitable output devices 1750 include speakers, printers, network interfaces, and monitors.
• Graphics display system 1770 includes a liquid crystal display (LCD) or other suitable display device. Graphics display system 1770 is configurable to receive textual and graphical information and process the information for output to the display device.
  • Peripheral device(s) 1780 may include any type of computer support device to add additional functionality to the computer system.
• Some of the components provided in the computer system 1700 in FIG. 17 can be those typically found in computer systems that may be suitable for use with embodiments of the present disclosure and are intended to represent a broad category of such computer components. Thus, the computer system 1700 in FIG. 17 can be a personal computer (PC), hand-held computer system, telephone, mobile computer system, workstation, tablet, phablet, mobile phone, server, minicomputer, mainframe computer, wearable, or any other computer system. The computer may also include different bus configurations, networked platforms, multi-processor platforms, and the like. Various operating systems may be used, including UNIX, LINUX, WINDOWS, MAC OS, PALM OS, QNX, ANDROID, IOS, CHROME, and other suitable operating systems.
  • Some of the above-described functions may be composed of instructions that are stored on storage media (e.g., computer-readable medium). The instructions may be retrieved and executed by the processor. Some examples of storage media are memory devices, tapes, disks, and the like. The instructions are operational when executed by the processor to direct the processor to operate in accord with the technology. Those skilled in the art are familiar with instructions, processor(s), and storage media.
  • In some embodiments, the computing system 1700 may be implemented as a cloud-based computing environment, such as a virtual machine operating within a computing cloud. In other embodiments, the computing system 1700 may itself include a cloud-based computing environment, where the functionalities of the computing system 1700 are executed in a distributed fashion. Thus, the computing system 1700, when configured as a computing cloud, may include pluralities of computing devices in various forms, as will be described in greater detail below.
  • In general, a cloud-based computing environment is a resource that typically combines the computational power of a large grouping of processors (such as within web servers) and/or that combines the storage capacity of a large grouping of computer memories or storage devices. Systems that provide cloud-based resources may be utilized exclusively by their owners or such systems may be accessible to outside users who deploy applications within the computing infrastructure to obtain the benefit of large computational or storage resources.
  • The cloud is formed, for example, by a network of web servers that comprise a plurality of computing devices, such as the computing system 1700, with each server (or at least a plurality thereof) providing processor and/or storage resources. These servers manage workloads provided by multiple users (e.g., cloud resource customers or other users). Typically, each user places workload demands upon the cloud that vary in real-time, sometimes dramatically. The nature and extent of these variations typically depends on the type of business associated with the user.
  • It is noteworthy that any hardware platform suitable for performing the processing described herein is suitable for use with the technology. The terms “computer-readable storage medium” and “computer-readable storage media” as used herein refer to any medium or media that participate in providing instructions to a CPU for execution. Such media can take many forms, including, but not limited to, non-volatile media, volatile media and transmission media. Non-volatile media include, e.g., optical, magnetic, and solid-state disks, such as a fixed disk. Volatile media include dynamic memory, such as system random-access memory (RAM). Transmission media include coaxial cables, copper wire and fiber optics, among others, including the wires that comprise one embodiment of a bus. Transmission media can also take the form of acoustic or light waves, such as those generated during radio frequency (RF) and infrared (IR) data communications. Common forms of computer-readable media include, e.g., a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic medium, a CD-ROM disk, digital video disk (DVD), any other optical medium, any other physical medium with patterns of marks or holes, a RAM, a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a Flash memory, any other memory chip or data exchange adapter, a carrier wave, or any other medium from which a computer can read.
  • Various forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to a CPU for execution. A bus carries the data to system RAM, from which a CPU retrieves and executes the instructions. The instructions received by system RAM can optionally be stored on a fixed disk either before or after execution by a CPU.
  • Computer program code for carrying out operations for aspects of the present technology may be written in any combination of one or more programming languages, including an object oriented programming language such as JAVA, SMALLTALK, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (e.g., through the Internet using an Internet Service Provider).
• FIG. 18 illustrates an example embodiment 1800 showing the moving object polygon (e.g., reachable range) and other information regarding the moving object and the report of same, according to various embodiments. The example in FIG. 18 illustrates how various embodiments can use “reachable range” to help determine a moving object's (e.g., a moving hazard's or other object's) proximity and the trajectory required to reach a location or target. In some embodiments, the polygons in FIG. 18 would replace the concentric circles shown in the examples in FIG. 4 and FIG. 5. Multiple polygons may be drawn radiating out from a hazard (or other object) report location, with each polygon depicting a timeframe from the report, taking into consideration speed, traffic, roadways, and other variables, the most outward polygon representing the range that an object can reach based on its known maximum speed, road conditions, speed limits, acceleration capabilities, etc. The inner polygon represents a variable potential minimum range of speed based on similar parameters. In this example, the distance from the alert reported location is displayed, the estimated encounter time is displayed based on the object's known maximum speed, road conditions, speed limits, acceleration capabilities, etc., and the hazard (or other object) report time, reflected as minutes ago, is displayed, all of which could be displayed along with the map, polygons, routes, and identifying icons for the subject user and the moving object's location or range of predicted locations.
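• As an illustrative sketch only of the nested reachable-range rings: plain circles stand in for the polygons, and the vertex count, flat-earth conversion, and speed figure are assumptions. A production system would shape the rings with road networks, speed limits, and traffic, as the text describes:

```python
# Hypothetical nested rings, one per timeframe, around a report location.

from math import cos, pi, sin

def ring(lat, lng, radius_m, points=24):
    """Approximate a circle of radius_m meters as a lat/lng polygon
    (flat-earth approximation, adequate at city scale)."""
    m_per_deg_lat = 111_320.0
    m_per_deg_lng = m_per_deg_lat * cos(lat * pi / 180)
    return [(lat + radius_m * sin(2 * pi * k / points) / m_per_deg_lat,
             lng + radius_m * cos(2 * pi * k / points) / m_per_deg_lng)
            for k in range(points)]

def reachable_rings(center, timeframes_s, v_max_mps=50.0):
    """Outer-bound ring per timeframe using a simple v_max * t radius;
    an inner (minimum-range) ring could use a lower assumed speed."""
    lat, lng = center
    return {t: ring(lat, lng, v_max_mps * t) for t in timeframes_s}

rings = reachable_rings((34.05, -118.24), [60, 300, 600])
print({t: len(p) for t, p in rings.items()})  # {60: 24, 300: 24, 600: 24}
```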
• FIG. 19 illustrates another example embodiment 1900 of the screen 910 in FIG. 9, and of FIG. 6, for reporting and viewing hazards (or other objects).
  • FIG. 20 illustrates another example embodiment 2000, of a screen 920 in FIG. 9, for selecting voice recording or manual entry of hazard (or other object) reports.
  • FIG. 21 illustrates another example embodiment 2100, of the screen 1010 in FIG. 10, for audio reporting of hazards (or other objects). In this example, screen 2100 has selections to start recording (e.g., microphone symbol), to listen, to re-record, to add more info, and to submit.
  • FIG. 22 illustrates another example embodiment 2200, of the screen 1000 in FIG. 10, for audio reporting of hazards (or other objects). Screen 2200 includes a microphone symbol, a selection to stop recording, and a cancel option.
  • FIG. 23 illustrates another example embodiment 2300, of the screen 1100 in FIG. 11, providing a user interface for selection of an icon for reporting characteristics of a moving object, according to various embodiments. Using the screen in FIG. 23, a user may report that a hazard (or other object) was variously speeding, distracted, swerving, aggressive, slow, etc.
  • FIG. 24 illustrates another example embodiment 2400, of screen 1110 in FIG. 11, for providing identification information about a moving hazard (or other object). Using this screen, a user may input, variously, the license, make, color, speed, and direction of a moving hazard (or other object) vehicle, akin to the screen 1110 in the example embodiment in FIG. 11.
  • FIG. 25 illustrates another example embodiment 2500 of a screen confirming a sent moving object hazard report and allowing the user to provide more information about the same moving object hazard (or other object). This example shows confirmation that the hazard (or other object) has been reported and allows for the addition of more info.
  • FIG. 26 illustrates an example embodiment of a list of the moving hazards (or other objects), along with the type of each moving object hazard, which may include additional available information such as report time, minutes since the report, report location, predicted location, estimated time of encounter, etc., and which allows a user to select a moving object hazard in order to view more information and data on a map view. In the example in FIG. 26, the characteristics to be selected are shown with text and icons, one per row on the screen, which may enable easier access by a driver, for instance. A sketch of a record that might back one row of such a list follows below.
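Below is a minimal sketch of a record that could back one row of the FIG. 26 list. The field names, types, and sort order are assumptions for illustration, not the data model of the described embodiments.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class HazardListEntry:
    hazard_type: str                                   # e.g., "aggressive", "swerving"
    report_time: str                                   # when the report was made
    minutes_ago: float                                 # report age, for display
    report_location: Tuple[float, float]               # (lat, lon) where reported
    predicted_location: Optional[Tuple[float, float]]  # (lat, lon), if computed
    eta_minutes: Optional[float]                       # estimated time of encounter

def sort_for_display(entries: List[HazardListEntry]) -> List[HazardListEntry]:
    """Show the soonest potential encounters first; unknown ETAs last."""
    return sorted(entries,
                  key=lambda e: e.eta_minutes
                  if e.eta_minutes is not None else float("inf"))
```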
  • FIG. 27 illustrates an example embodiment 2700, of screen 1400 in FIG. 14, for displaying status information about an aggressive moving hazard (or other object). The status information may include, regarding an aggressive hazard (or other object), the distance from an alert report, the estimated time to encounter, and how long ago the hazard (or other object) report was made.
  • FIG. 28 illustrates an example embodiment of a screen for displaying identifying information about a moving hazard (or other object), in this example an aggressive driver, and enabling viewing of a map or list or providing an update concerning the moving hazard (or other object). A sketch of assembling the status fields shown in FIGS. 27 and 28 follows below.
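The following is a minimal sketch of assembling the status fields shown in FIGS. 27 and 28 (distance from the alert report, estimated time to encounter, and report age). The function name and formatting are assumptions for illustration.

```python
from datetime import datetime, timezone
from typing import List

def status_lines(distance_km: float, encounter_eta_min: float,
                 report_time: datetime) -> List[str]:
    """Format the status fields for an aggressive-driver (or other) report."""
    age_min = (datetime.now(timezone.utc) - report_time).total_seconds() / 60.0
    return [
        f"Distance from alert report: {distance_km:.1f} km",
        f"Estimated time to encounter: {encounter_eta_min:.0f} min",
        f"Reported {age_min:.0f} min ago",
    ]
```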
  • The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present technology has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. Exemplary embodiments were chosen and described in order to best explain the principles of the present technology and its practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
  • Aspects of the present technology are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present technology. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Claims (33)

What is claimed is:
1. A computer-implemented method for predicting the potential path and movement of an object, the computer-implemented method comprising:
receiving moving object data corresponding to a moving object;
receiving sensor data from a sensor;
merging the received moving object data and the received sensor data into a set of merged data;
based on the set of merged data, automatically determining one or more of:
a predicted location or range of locations for the moving object, and
a potential path of travel for the moving object.
2. The computer-implemented method of claim 1, wherein the automatically determining is further based on:
one or more historical traits concerning the object; and
the medium that the object is moving through.
3. The computer-implemented method of claim 1, further comprising, based on the set of merged data, automatically determining a potential for interaction between the moving object and one or more other objects.
4. The computer-implemented method of claim 1, further comprising, based on the set of merged data, determining an alert concerning the moving object, and providing the alert.
5. The computer-implemented method of claim 1, wherein the object is a soldier, vehicle, drone, or ballistic.
6. The computer-implemented method of claim 2, wherein, for the automatically determining, different weights are applied to the one or more historical traits and to the medium.
7. The computer-implemented method of claim 2, wherein the medium includes one or more of terrain, air, water, and space.
8. The computer-implemented method of claim 7, wherein the medium changes over the potential path, the automatically determining being adjusted as a function of change in the medium that the moving object will potentially move through over the potential path.
9. The computer-implemented method of claim 2, wherein the one or more historical traits include statistical movement characteristics of the object.
10. The computer-implemented method of claim 9, wherein the statistical movement characteristics include acceleration and speed ability of the moving object.
11. The computer-implemented method of claim 7, wherein the automatically determining of the predicted location or range of locations and the potential path are based on predicting acceleration and speed of the moving object as a function of the medium that the object is moving through.
12. The computer-implemented method of claim 7, wherein the automatically determining of the predicted location or range of locations and the potential path further includes automatically adjusting maximum acceleration and maximum speed of the moving object as a function of the medium that the object is moving through.
13. The computer-implemented method of claim 1, wherein the sensor is included in a drone launched in response to a report of the moving object, wherein the drone is launched from a moving vehicle, stationary location, or roving vehicle.
14. The computer-implemented method of claim 1, wherein based on the set of merged data, the automatically determining is of both the predicted location or range of locations for the moving object, and the potential path of travel for the moving object.
15. The computer-implemented method of claim 1, further comprising providing a user interface for information entry via a mobile device or a built-in display; wherein at least some of the moving object data received is based on input from one or more users using the user interface.
16. The computer-implemented method of claim 1, wherein the sensor is located on a mobile device or a remote device, and the sensor data includes at least location information.
17. The computer-implemented method of claim 2, wherein the automatically determining comprises calculating an area from the predicted location or range of locations for the moving object, and determining whether one or more other objects are located within the area.
18. The computer-implemented method of claim 17, further comprising determining a potential for interaction of the moving object with the one or more other objects, comparing the potential to a threshold, using a result of the comparing to determine a level of notification to provide for an alert, and providing the alert at the determined level of notification.
19. The computer-implemented method of claim 1, further comprising configuring a graphical user interface for displaying to a user at least the predicted location or range of locations for the moving object, and the potential path of travel for the moving object, the configuring of the graphical user interface being for one or more of:
three dimensions;
virtual reality interaction; and
augmented reality interaction.
20. The computer-implemented method of claim 1, wherein the sensor data includes one or more descriptors for the moving object, the method further comprising identifying the moving object based on at least the one or more descriptors.
21. The computer-implemented method of claim 20, wherein at least one of the one or more descriptors comprises a unique identifier for the moving object, the method including tracking the moving object based on the unique identifier.
22. The computer-implemented method of claim 2, wherein the automatically determining of the predicted location or range of locations for the moving object and the potential path of travel for the moving object is based at least in part on a predicted destination for the moving object.
23. The computer-implemented method of claim 22, wherein the destination is predicted based on one or more of:
(a) the one or more historical traits which include historical data showing a pattern of the moving object having certain destinations at the time and the place of the movement;
(b) past destinations for the potential path of travel of the moving object; and
(c) a calendar having entries concerning the moving object;
wherein different weights are given to each of (a), (b), and (c) in predicting the destination, wherein the different weight given to the historical data in (a) varies at least in part as a function of how recent the historical data is.
24. The computer-implemented method of claim 2, further comprising receiving one or more reports from one or more observers concerning observation of the moving object at a particular time;
wherein the automatically determining includes triangulating the predicted location or range of locations and the potential path as a function of:
the one or more reports from the one or more observers;
the received moving object data; and
the received sensor data from the sensor.
25. The computer-implemented method of claim 24, wherein, for the triangulating, greater relative weight is given when the one or more reports include multiple reports in a limited geographic area.
26. The computer-implemented method of claim 23, further comprising receiving additional sensor data from other sensors, wherein the triangulating is further a function of the additional sensor data.
27. The computer-implemented method of claim 1, wherein the method further comprises:
configuring the receiving of the sensor data and the moving object data to enable point-to-point communication in a mesh of the sensor data and the moving object data.
28. The computer-implemented method of claim 1, wherein the sensor data comprises accelerometer data or gyroscope data from a mobile device.
29. The computer-implemented method of claim 1, wherein the sensor data includes Global Positioning System (GPS) data.
30. The computer-implemented method of claim 1, further comprising:
causing display of a plurality of projected reachable-range polygons as a function of potential methods of movement for the moving object.
31. The computer-implemented method of claim 30, wherein the potential methods of movement for the moving object include:
in a vehicle on land;
in a vehicle in air;
in a vehicle in water; and
other than in a vehicle.
32. A computer-implemented method for moving object predictive locating, reporting, and alerting, the computer-implemented method comprising:
receiving moving object data corresponding to a moving object;
receiving sensor data from a sensor;
merging the received moving object data and the received sensor data into a set of merged data;
based on the set of merged data, one or more historical traits concerning the moving object, and the medium that the moving object is moving through, automatically determining:
a predicted location or range of locations for the moving object,
a potential path of travel for the moving object, and
an alert concerning the moving object; and
providing the alert.
33. A system for moving object predictive locating, reporting, and alerting, the system comprising:
a processor; and
a memory communicatively coupled to the processor, the memory storing instructions executable by the processor to perform a method, the method comprising:
receiving moving object data corresponding to a moving object;
receiving sensor data from a sensor;
merging the received moving object data and the received sensor data into a set of merged data;
based on the set of merged data, one or more historical traits concerning the moving object, and the medium that the moving object is moving through, automatically determining:
a predicted location or range of locations for the moving object,
a potential path of travel for the moving object, and
an alert concerning the moving object; and
providing the alert.
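To make the shape of the claimed method concrete, the following is a minimal, non-authoritative sketch of the pipeline recited in claims 1, 4, and 32: receiving moving object data and sensor data, merging them, and automatically determining a predicted location, a potential path of travel, and an alert. The data model, the dead-reckoning motion model, and the alert threshold are all assumptions for illustration; the claims do not prescribe any particular implementation.

```python
import math
from dataclasses import dataclass

EARTH_RADIUS_KM = 6371.0

@dataclass
class MovingObjectReport:      # "moving object data" (e.g., a user report)
    lat: float
    lon: float
    heading_deg: float         # reported direction of travel
    speed_kmh: float           # reported or estimated speed
    minutes_ago: float         # age of the report

@dataclass
class SensorReading:           # "sensor data" (e.g., the observer's GPS fix)
    lat: float
    lon: float

def project(lat, lon, heading_deg, distance_km):
    """Great-circle dead reckoning from (lat, lon) along a heading."""
    d = distance_km / EARTH_RADIUS_KM
    lat1, lon1, b = map(math.radians, (lat, lon, heading_deg))
    lat2 = math.asin(math.sin(lat1) * math.cos(d)
                     + math.cos(lat1) * math.sin(d) * math.cos(b))
    lon2 = lon1 + math.atan2(math.sin(b) * math.sin(d) * math.cos(lat1),
                             math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2.0 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def predict_and_alert(report: MovingObjectReport, sensor: SensorReading,
                      alert_km: float = 5.0):
    """Merge the report and sensor data, predict a location and potential
    path for the moving object, and decide whether to raise an alert."""
    merged = {"report": report, "observer": sensor}   # merging step
    rpt = merged["report"]
    travelled_km = rpt.speed_kmh * rpt.minutes_ago / 60.0
    predicted = project(rpt.lat, rpt.lon, rpt.heading_deg, travelled_km)
    # Potential path of travel: points sampled between the report location
    # and the predicted location.
    path = [project(rpt.lat, rpt.lon, rpt.heading_deg, travelled_km * f)
            for f in (0.25, 0.5, 0.75, 1.0)]
    observer = merged["observer"]
    alert = haversine_km(*predicted, observer.lat, observer.lon) <= alert_km
    return predicted, path, alert
```

A fuller implementation would weight the historical traits and the medium (claims 2 and 6), triangulate across multiple observer reports (claims 24 and 25), and render the result as the reachable-range polygons described with respect to FIG. 18.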
US16/748,165 2017-11-14 2020-01-21 Moving Target of Interest Predictive Locating, Reporting, and Alerting Abandoned US20200175289A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/748,165 US20200175289A1 (en) 2017-11-14 2020-01-21 Moving Target of Interest Predictive Locating, Reporting, and Alerting
US17/180,787 US11876558B2 (en) 2020-01-21 2021-02-20 Secure line-of-sight communication with aircraft

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201762585776P 2017-11-14 2017-11-14
US16/189,938 US11488393B2 (en) 2017-11-14 2018-11-13 Systems and methods for moving object predictive locating, reporting, and alerting
US16/748,165 US20200175289A1 (en) 2017-11-14 2020-01-21 Moving Target of Interest Predictive Locating, Reporting, and Alerting

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/189,938 Continuation US11488393B2 (en) 2017-11-14 2018-11-13 Systems and methods for moving object predictive locating, reporting, and alerting

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/180,787 Continuation-In-Part US11876558B2 (en) 2020-01-21 2021-02-20 Secure line-of-sight communication with aircraft

Publications (1)

Publication Number Publication Date
US20200175289A1 true US20200175289A1 (en) 2020-06-04

Family

ID=66432289

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/189,938 Active 2039-06-22 US11488393B2 (en) 2017-11-14 2018-11-13 Systems and methods for moving object predictive locating, reporting, and alerting
US16/748,165 Abandoned US20200175289A1 (en) 2017-11-14 2020-01-21 Moving Target of Interest Predictive Locating, Reporting, and Alerting

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US16/189,938 Active 2039-06-22 US11488393B2 (en) 2017-11-14 2018-11-13 Systems and methods for moving object predictive locating, reporting, and alerting

Country Status (2)

Country Link
US (2) US11488393B2 (en)
WO (1) WO2019099413A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11367355B2 (en) * 2020-03-04 2022-06-21 International Business Machines Corporation Contextual event awareness via risk analysis and notification delivery system
US20220254250A1 (en) * 2021-02-10 2022-08-11 Lhp, Inc. Streetlight situational awareness system
US20240257643A1 (en) * 2023-01-31 2024-08-01 James P. Bradley Drone Warning System for Preventing Wrong-Way Collisions
US12103523B2 (en) 2021-10-07 2024-10-01 Here Global B.V. Method, apparatus, and computer program product for identifying wrong-way driven vehicles

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018146546A (en) 2017-03-09 2018-09-20 エアロセンス株式会社 Information processing system, information processing device, and information processing method
US11488393B2 (en) 2017-11-14 2022-11-01 AWARE Technologies Systems and methods for moving object predictive locating, reporting, and alerting
US10762785B1 (en) * 2018-01-09 2020-09-01 State Farm Mutual Automobile Insurance Company Vehicle collision alert system and method
US10599152B2 (en) * 2018-02-12 2020-03-24 Ford Global Technologies, Llc Earthquake emergency management system for automotive vehicles
US10573183B1 (en) * 2018-09-27 2020-02-25 Phiar Technologies, Inc. Mobile real-time driving safety systems and methods
DE102018218922A1 (en) * 2018-11-06 2020-05-07 Robert Bosch Gmbh Prediction of expected driving behavior
DE102019200345A1 (en) * 2019-01-14 2020-07-16 Continental Automotive Gmbh Cloud-based detection and warning of danger spots
CN111324680B (en) * 2019-01-25 2021-05-18 北京嘀嘀无限科技发展有限公司 Information display method and device
US11466994B2 (en) * 2019-02-08 2022-10-11 Uber Technologies, Inc. Optimized issue reporting system
US10785634B1 (en) * 2019-03-08 2020-09-22 Telefonaktiebolaget Lm Ericsson (Publ) Method for end-to-end (E2E) user equipment (UE) trajectory network automation based on future UE location
WO2020183602A1 (en) * 2019-03-12 2020-09-17 ソニー株式会社 Information processing device and information processing method
EP3748602A1 (en) 2019-06-03 2020-12-09 Sony Corporation Monitoring vehicle movement for traffic risk mitigation
US11240367B1 (en) 2019-06-05 2022-02-01 Brook S. Parker-Bello System, method, and apparatus for coordinating resources to prevent human trafficking and assist victims of human trafficking
TWI705016B (en) * 2019-07-22 2020-09-21 緯創資通股份有限公司 Driving alarm system, driving alarm method and electronic device using the same
WO2021016596A1 (en) 2019-07-25 2021-01-28 Nvidia Corporation Deep neural network for segmentation of road scenes and animate object instances for autonomous driving applications
CN112305534B (en) * 2019-07-26 2024-03-19 杭州海康威视数字技术股份有限公司 Target detection method, device, equipment and storage medium
US11854277B1 (en) * 2019-09-06 2023-12-26 Ambarella International Lp Advanced number plate recognition implemented on dashcams for automated amber alert vehicle detection
US11958183B2 (en) 2019-09-19 2024-04-16 The Research Foundation For The State University Of New York Negotiation-based human-robot collaboration via augmented reality
KR20190118994A (en) * 2019-10-01 2019-10-21 엘지전자 주식회사 Method and device for focusing sound source
DE102019216074A1 (en) * 2019-10-18 2021-04-22 Robert Bosch Gmbh Method for providing an object message about an object recognized in the surroundings of a road user in a communication network for communication with other road users
US11511666B2 (en) * 2019-10-28 2022-11-29 Verizon Patent And Licensing Inc. Systems and methods for utilizing machine learning to identify vehicle surroundings, route conditions, and points of interest
US20220383748A1 (en) * 2019-10-29 2022-12-01 Sony Group Corporation Vehicle control in geographical control zones
US11885907B2 (en) * 2019-11-21 2024-01-30 Nvidia Corporation Deep neural network for detecting obstacle instances using radar sensors in autonomous machine applications
US11532168B2 (en) 2019-11-15 2022-12-20 Nvidia Corporation Multi-view deep neural network for LiDAR perception
US12080078B2 (en) 2019-11-15 2024-09-03 Nvidia Corporation Multi-view deep neural network for LiDAR perception
US12050285B2 (en) 2019-11-21 2024-07-30 Nvidia Corporation Deep neural network for detecting obstacle instances using radar sensors in autonomous machine applications
US11414088B2 (en) * 2020-01-16 2022-08-16 Toyota Motor Engineering & Manufacturing North America, Inc. Anomalous driver detection system
EP4122787B1 (en) * 2020-03-19 2024-04-24 NISSAN MOTOR Co., Ltd. Vehicle travel assistance method and vehicle travel assistance device
CN111619584B (en) * 2020-05-27 2021-09-21 北京经纬恒润科技股份有限公司 State supervision method and device for unmanned automobile
CN112347993B (en) * 2020-11-30 2023-03-17 吉林大学 Expressway vehicle behavior and track prediction method based on vehicle-unmanned aerial vehicle cooperation
US20220171397A1 (en) * 2020-12-02 2022-06-02 The Boeing Company Computing device and method for tracking movement of objects
US11958422B2 (en) * 2021-02-01 2024-04-16 T-Mobile Usa, Inc. Road condition reporter
AU2022234565A1 (en) * 2021-03-10 2023-08-31 Leonardo Us Cyber And Security Solutions, Llc Systems and methods for vehicle information capture using white light
US12051438B1 (en) * 2021-03-26 2024-07-30 T-Mobile Usa, Inc. Using machine learning to locate mobile device

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR960003444A (en) 1994-06-01 1996-01-26 제임스 디. 튜턴 Vehicle surveillance system
CN103413231B * 2006-03-16 2017-10-27 比凯特有限责任公司 System and method for displaying highly relevant advertising on a moving object and generating revenue
GB2436916B (en) * 2006-03-29 2008-09-24 John Turnbull Warning System
US9963145B2 (en) * 2012-04-22 2018-05-08 Emerging Automotive, Llc Connected vehicle communication with processing alerts related to traffic lights and cloud systems
US20130197736A1 (en) 2012-01-30 2013-08-01 Google Inc. Vehicle control based on perception uncertainty
US20140012494A1 (en) * 2012-07-06 2014-01-09 International Business Machines Corporation Collaborative gps tracking
US9536424B2 (en) 2014-02-10 2017-01-03 Here Global B.V. Adaptive traffic dynamics prediction
US10493996B2 (en) * 2014-09-22 2019-12-03 Future Technology Partners, Llc Method and system for impaired driving detection, monitoring and accident prevention with driving habits
US9573592B2 (en) * 2014-12-23 2017-02-21 Toyota Motor Engineering & Manufacturing North America, Inc. Risk mitigation for autonomous vehicles relative to oncoming objects
US9834224B2 (en) * 2015-10-15 2017-12-05 International Business Machines Corporation Controlling driving modes of self-driving vehicles
US10081357B2 (en) * 2016-06-23 2018-09-25 Honda Motor Co., Ltd. Vehicular communications network and methods of use and manufacture thereof
US10496091B1 (en) * 2016-08-17 2019-12-03 Waymo Llc Behavior and intent estimations of road users for autonomous vehicles
US10650621B1 (en) * 2016-09-13 2020-05-12 Iocurrents, Inc. Interfacing with a vehicular controller area network
DE102017220139A1 (en) * 2017-11-13 2019-05-16 Robert Bosch Gmbh Method and device for providing a position of at least one object
US11488393B2 (en) 2017-11-14 2022-11-01 AWARE Technologies Systems and methods for moving object predictive locating, reporting, and alerting

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11367355B2 (en) * 2020-03-04 2022-06-21 International Business Machines Corporation Contextual event awareness via risk analysis and notification delivery system
US20220254250A1 (en) * 2021-02-10 2022-08-11 Lhp, Inc. Streetlight situational awareness system
US12112620B2 (en) * 2021-02-10 2024-10-08 Lhp, Inc. Streetlight situational awareness system
US12103523B2 (en) 2021-10-07 2024-10-01 Here Global B.V. Method, apparatus, and computer program product for identifying wrong-way driven vehicles
US20240257643A1 (en) * 2023-01-31 2024-08-01 James P. Bradley Drone Warning System for Preventing Wrong-Way Collisions

Also Published As

Publication number Publication date
US11488393B2 (en) 2022-11-01
WO2019099413A1 (en) 2019-05-23
US20190147260A1 (en) 2019-05-16

Similar Documents

Publication Publication Date Title
US11488393B2 (en) Systems and methods for moving object predictive locating, reporting, and alerting
US10922050B2 (en) System and method for providing mobile personal security platform
US10599818B2 (en) Event-based vehicle operation and event remediation
CN112088397B (en) System and method for vehicle geofence management
CN106662458B (en) Wearable sensor data for improving map and navigation data
EP3217244A1 (en) Drive mode switch-over according to autopilot geofencing on a planned route
US8630820B2 (en) Methods and systems for threat assessment, safety management, and monitoring of individuals and groups
US9503860B1 (en) Intelligent pursuit detection
US20170268896A1 (en) Vehicular communications network and methods of use and manufacture thereof
US20140118140A1 (en) Methods and systems for requesting the aid of security volunteers using a security network
US10466691B2 (en) Coordinated control of self-driving vehicles under emergency situations
CN106463054A (en) Adaptive alarm management for Advanced Driver Assistance System (ADAS)
US10553113B2 (en) Method and system for vehicle location
US20080183344A1 (en) Systems and methods for communicating restricted area alerts
US20080033644A1 (en) Navigation Routing System Having Environmentally Triggered Routing
US12087158B1 (en) Traffic control system
US20220114879A1 (en) Vehicle control for user safety and experience
US20150230061A1 (en) Distributed optimization for event traffic control
US20210264792A1 (en) Using Geofences To Restrict Vehicle Operation
US11900804B2 (en) System and method for map-based geofencing for emergency vehicle
US20230078911A1 (en) Method and apparatus for dispersing incident routing
US20220194426A1 (en) Method and apparatus for increasing passenger safety based on accident/road link correlation
EP3935613B1 (en) Systems and method for map-based geofencing
CN109863540A (en) Quick responding method and system based on autopilot facility, storage medium
US20230046309A1 (en) Moving Target of Interest Predictive Locating, Reporting, and Alerting

Legal Events

Code and Description
STPP (Information on status: patent application and granting procedure in general):
NON FINAL ACTION MAILED
RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
FINAL REJECTION MAILED
RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER
ADVISORY ACTION MAILED
DOCKETED NEW CASE - READY FOR EXAMINATION
NON FINAL ACTION MAILED
RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
FINAL REJECTION MAILED
ADVISORY ACTION MAILED
STCB (Information on status: application discontinuation):
ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION