WO2018052322A1 - System and method for fixed camera and unmanned mobile device collaboration to improve identification certainty of an object - Google Patents

System and method for fixed camera and unmanned mobile device collaboration to improve identification certainty of an object

Info

Publication number
WO2018052322A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
certainty
captured
unmanned mobile
interest
Prior art date
Application number
PCT/PL2016/050039
Other languages
English (en)
Inventor
Wojciech Jan Kucharski
Pawel Jurzak
Grzegorz KAPLITA
Original Assignee
Motorola Solutions, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2016-09-16
Filing date
2016-09-16
Publication date
2018-03-22
Application filed by Motorola Solutions, Inc. filed Critical Motorola Solutions, Inc.
Priority to GB1902085.8A priority Critical patent/GB2567587B/en
Priority to US16/308,503 priority patent/US10902267B2/en
Priority to DE112016007236.8T priority patent/DE112016007236T5/de
Priority to PCT/PL2016/050039 priority patent/WO2018052322A1/fr
Publication of WO2018052322A1 publication Critical patent/WO2018052322A1/fr

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64CAEROPLANES; HELICOPTERS
    • B64C39/00Aircraft not otherwise provided for
    • B64C39/02Aircraft not otherwise provided for characterised by special use
    • B64C39/024Aircraft not otherwise provided for characterised by special use of the remote controlled vehicle type, i.e. RPV
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/0094Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10Simultaneous control of position or course in three dimensions
    • G05D1/101Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D1/104Simultaneous control of position or course in three dimensions specially adapted for aircraft involving a plurality of aircrafts, e.g. formation flying
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/12Target-seeking control
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64UUNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U2101/00UAVs specially adapted for particular uses or applications
    • B64U2101/30UAVs specially adapted for particular uses or applications for imaging, photography or videography
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles

Definitions

  • FIG. 1 is a system diagram illustrating an example operating environment for improving object recognition certainty of objects imaged by fixed cameras through intelligent dispatch and coordination between fixed cameras and camera- equipped unmanned mobile vehicles, in accordance with an embodiment.
  • FIG. 2 is a plan diagram of a geographic area diagrammatically illustrating fixed camera and camera-equipped unmanned mobile vehicle positioning and coordination, in accordance with an embodiment.
  • FIG. 3 is a device diagram showing a device structure of a computing device for improving object recognition certainty of objects of interest imaged by fixed cameras through intelligent dispatch and coordination between fixed cameras and camera-equipped unmanned mobile vehicles, in accordance with an embodiment.
  • FIG. 4 illustrates a flow chart setting forth process steps for operating a computing device of FIG. 3 to improve object recognition certainty of objects imaged by fixed cameras through intelligent dispatch and coordination between fixed cameras and camera-equipped unmanned mobile vehicles, in accordance with an embodiment.
  • FIG. 5 illustrates a flow chart setting forth process steps for operating a distributed system of fixed cameras and camera-equipped unmanned mobile vehicles to improve object recognition certainty of objects imaged by fixed cameras, in accordance with an embodiment.
  • FIG. 6 is a diagram illustrating several different possible points of view for capturing and comparing a person's face as a captured object for comparison to another captured person's face as an object of interest, in accordance with an embodiment.
  • a process for fixed camera and unmanned mobile device collaboration to improve identification of an object of interest includes: receiving, at an electronic processing device from a fixed camera, a captured first point of view of a first captured object and determining, with a first level of certainty in a predetermined level of certainty range, that the captured first point of view of the first object matches a first stored object of interest; identifying, by the electronic processing device, one or more camera-equipped unmanned mobile vehicles in a determined direction of travel of the first captured object;
  • transmitting, by the electronic processing device, to the one or more identified camera-equipped unmanned mobile vehicles, a dispatch instruction and intercept information, the intercept information including the determined direction of travel of the first captured object, information sufficient to identify either the first captured object or a vehicle with which the first captured object is travelling, and information identifying a desired second point of view of the first captured object different from the first point of view; receiving, by the electronic processing device, via the identified one or more camera-equipped unmanned mobile vehicles, a captured second point of view of the first captured object; and using, by the electronic processing device, the captured second point of view of the first captured object to determine, with a second level of certainty, that the first captured object matches the stored object of interest.
  • an electronic processing device for fixed camera and unmanned mobile device collaboration to improve identification of an object of interest includes: a fixed-camera interface; an unmanned mobile device interface; a memory; a transceiver; and one or more processors configured to: receive, via the fixed-camera interface and from a fixed camera, a captured first point of view of a first captured object and determine, with a first level of certainty in a predetermined level of certainty range, that the captured first point of view of the first object matches a first stored object of interest; identify one or more camera-equipped unmanned mobile vehicles in a determined direction of travel of the first captured object; transmit, via the transceiver to the one or more identified camera-equipped unmanned mobile vehicles, a dispatch instruction and intercept information, the intercept information including the determined direction of travel of the first captured object, information sufficient to identify either the first captured object or a vehicle with which the first captured object is travelling, and information identifying a desired second point of view of the first captured object different from the first point of view; receive, via the identified one or more camera-equipped unmanned mobile vehicles, a captured second point of view of the first captured object; and use the captured second point of view of the first captured object to determine, with a second level of certainty, that the first captured object matches the stored object of interest.
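  • As a rough, non-authoritative illustration of the claimed receive-match-dispatch-recapture-rematch sequence, the Python sketch below walks through the flow end to end. All class, function, and field names (Capture, Drone, collaborate, and so on) are hypothetical placeholders invented for this sketch, and the certainty range is an example value, not a limit taken from the claims.

```python
"""Toy end-to-end sketch of the claimed flow; all names are hypothetical."""
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class Capture:
    pov: str                   # e.g. "profile", "three-quarters", "straight-on"
    direction_of_travel: str   # e.g. "west"

@dataclass
class Drone:
    drone_id: str
    covers_direction: str

    def dispatch(self, intercept_info: dict) -> Capture:
        # A real drone would fly to an intercept point and image the target;
        # here we simply pretend it returns the desired second point of view.
        return Capture(pov=intercept_info["desired_pov"],
                       direction_of_travel=intercept_info["direction_of_travel"])

CERTAINTY_RANGE = (0.40, 0.80)  # example trigger range, not a value from the claims

def collaborate(first_pov: Capture,
                drones: List[Drone],
                match: Callable[[Capture], float]) -> Optional[Tuple[str, float]]:
    """Receive first POV, match it, dispatch drones along the direction of travel,
    and re-match using the second POV returned by a dispatched drone."""
    first_certainty = match(first_pov)
    if not (CERTAINTY_RANGE[0] <= first_certainty <= CERTAINTY_RANGE[1]):
        return None  # certainty too low or already high enough: no collaboration
    candidates = [d for d in drones
                  if d.covers_direction == first_pov.direction_of_travel]
    for drone in candidates:
        second_pov = drone.dispatch({
            "desired_pov": "straight-on",
            "direction_of_travel": first_pov.direction_of_travel,
        })
        return drone.drone_id, match(second_pov)  # second level of certainty
    return None

if __name__ == "__main__":
    # Pretend the matcher is more confident for a straight-on view than a profile.
    matcher = lambda c: 0.60 if c.pov == "profile" else 0.92
    print(collaborate(Capture("profile", "west"),
                      [Drone("uav-1", "west"), Drone("uav-2", "east")],
                      matcher))
```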
  • an example communication system diagram illustrates a system 100 including a first fixed video camera 102, a first camera-equipped unmanned mobile vehicle 104, and a first object 106 for capture in a first vehicle 108.
  • Each of the first fixed video camera 102 and the first camera-equipped unmanned mobile vehicle 104 may be capable of directly wirelessly communicating via a direct-mode wireless link 142 or a wired link, and/or may be capable of wirelessly communicating via a wireless infrastructure radio access network (RAN) 152 over respective wireless infrastructure links 140, 144.
  • the fixed video camera 102 may be any imaging device capable of taking still or moving-image captures in a corresponding area of interest, illustrated in FIG. 1 as a road, but in other embodiments, may include a building entry-way, a bridge, a sidewalk, or some other area of interest.
  • the fixed video camera 102 is fixed in the sense that it cannot physically move itself in any significant direction (e.g., more than one foot or one inch in any horizontal or vertical direction).
  • the fixed video camera 102 may be continuously on, may periodically take images at a regular cadence, or may be triggered to begin capturing images and/or video as a result of some other action, such as motion detected in the corresponding area of interest by a separate motion detector device.
  • the fixed video camera 102 may include a CMOS or CCD imager, for example, for digitally capturing images and/or video of a corresponding area of interest. Images and/or video captured at the fixed video camera 102 may be stored at the fixed video camera 102 itself, and/or may be transmitted to a separate storage or processing device via direct-mode wireless link 142 and/or RAN 152. While fixed video camera 102 is illustrated in FIG. 1 as affixed to a street light or street pole, in other embodiments, the fixed video camera 102 may be affixed to a building, a stop light, a street sign, or some other structure.
  • the first camera-equipped unmanned mobile vehicle 104 may be a camera-equipped flight-capable airborne drone having an electro-mechanical drive element, an imaging camera, and a microprocessor that is capable of taking flight under its own control, under control of a remote operator, or some combination thereof, and taking images and/or video of a region of interest prior to, during, or after flight.
  • the imaging camera attached to the unmanned mobile vehicle 104 may be fixed in its direction (and thus rely upon repositioning of the mobile vehicle 104 it is attached to for camera positioning) or may include a pan, tilt, zoom motor for independently controlling pan, tilt, and zoom features of the imaging camera.
  • the first camera-equipped unmanned mobile vehicle 104, while depicted in FIG. 1 as a flight-capable airborne drone, may in other embodiments be another type of air-based or land-based unmanned mobile vehicle.
  • the imaging camera attached to the unmanned mobile vehicle 104 may be continuously on, may periodically take images at a regular cadence, or may be triggered to begin capturing images and/or video as a result of some other action, such as the unmanned mobile vehicle 104 being dispatched to a particular area of interest or dispatched with instructions to intercept a certain type of person, vehicle, or object.
  • the imaging camera may include a CMOS or CCD imager, for example, for digitally capturing images and/or video of the corresponding region of interest, person, vehicle, or object of interest.
  • Images and/or video captured at the imaging camera may be stored at the unmanned mobile vehicle 104 itself and/or may be transmitted to a separate storage or processing device via direct-mode wireless link 142 and/or RAN 152. While unmanned mobile vehicle 104 is illustrated in FIG. 1 as being temporarily positioned at a street light or street pole (perhaps functioning as a charging station to charge a battery in the unmanned mobile vehicle 104 while it is not in flight), in other embodiments, the unmanned mobile vehicle 104 may be positioned atop a building, atop a stop light, or some other structure.
  • Infrastructure RAN 152 may implement over wireless links 140, 144 a conventional or trunked land mobile radio (LMR) standard or protocol such as ETSI Digital Mobile Radio (DMR), a Project 25 (P25) standard defined by the Association of Public Safety Communications Officials International (APCO), Terrestrial Trunked Radio (TETRA), or other LMR radio protocols or standards.
  • infrastructure RAN 152 may additionally or alternatively implement over wireless links 140, 144 a Long Term Evolution (LTE) protocol including multimedia broadcast multicast services (MBMS), an open mobile alliance (OMA) push to talk (PTT) over cellular (OMA-PoC) standard, a voice over IP (VoIP) standard, or a PTT over IP (PoIP) standard.
  • infrastructure RAN 152 may additionally or alternatively implement over wireless links 140, 144 a Wi-Fi protocol perhaps in accordance with an IEEE 802.11 standard (e.g., 802.11a, 802.11b, 802.11g) or a WiMAX protocol perhaps operating in accordance with an IEEE 802.16 standard. Other types of wireless protocols could be implemented as well.
  • the infrastructure RAN 152 is illustrated in FIG. 1 as including a controller 156 (e.g., radio controller, call controller, PTT server, zone controller, MME, BSC, MSC, site controller, Push-to-Talk controller, or other network device) and a dispatch console 158 operated by a dispatcher. In other embodiments, more or different types of fixed terminals may provide RAN services to the fixed camera 102 and the unmanned mobile vehicle 104 and may or may not contain a separate controller 156 and/or dispatch console 158.
  • the first object 106 for capture is illustrated in FIG. 1 as a facial profile of a driver of vehicle 108.
  • the fixed camera 102 may obtain a first image and/or video capture of the first object 106 as the vehicle 108 passes by the fixed camera 102.
  • the first image and/or video capture may be only one point of view (POV, such as profile, straight-on, or three-quarters) and may be only a partial capture at that (e.g., due to interfering objects such as other cars, people, or traffic items, or due to reflections or weather, among other potential interferers).
  • a computing device processing the first image and/or video capture may match an object in the first image and/or video capture against a first stored object of interest with a lower level of certainty than absolute certainty (e.g., less than 100%, less than 90%, or less than 80%, 70%, or 60%).
  • the computing device processing the first image and/or video capture may match the first object in the first image and/or video capture against the first stored object of interest with a higher level of certainty than absolute uncertainty (e.g., greater than 0%, greater than 10% or 20%, or greater than 30% or 40%).
  • the fixed camera 102 and the unmanned mobile vehicle 104 may collaborate or coordinate to obtain a second POV image and/or video capture of the object and obtain a corresponding second certainty of a match.
  • FIG. 2 sets forth an example plan diagram 200 illustrating fixed camera 202 and camera-equipped unmanned mobile vehicle 204 positioning and coordination relative to an underlying cartographic street map for capture of a second object 206 in a second vehicle 208.
  • the fixed camera 202 may be the same or similar to the fixed camera 102 described with respect to FIG. 1 above
  • the unmanned mobile vehicles 204 may be the same or similar to the unmanned mobile vehicle 104 described with respect to FIG. 1 above.
  • the vehicle 208 is illustrated in FIG. 2 as traveling west along a street 209 when the fixed camera 202 captures a first image and/or video of the second object 206 in the second vehicle 208 and matches it against a stored object of interest with a first certainty less than absolute certainty but greater than absolute uncertainty.
  • the first image or video of the second object 206 may be a side- profile facial image capture of a driver.
  • the first certainty may be a certainty of less than 90%, or less than 80 or 70%, but greater than 10%, or greater than 20%, 30% or 40%.
  • the level of certainty range within which the herein described collaboration or coordination may be triggered may vary based on metadata associated with the object of interest and stored accompanying the object of interest or separately retrievable from another database but linked to the object of interest.
  • a facial image object of interest associated with a person accused or convicted of a felony infraction may have an associated level of certainty range of 10% (or 20%) to 80% (or 70%) to trigger the herein described collaboration or coordination processes
  • a facial image object of interest associated with a person accused or convicted of a misdemeanor infraction may have an associated level of certainty range of 35% (or 45%) to 85% (or 75%).
  • the disclosed technical processes may only be employed for certain types of infractions, such as felony infractions, and not at all for other types of infractions, such as misdemeanor infractions.
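  • Purely as an illustration of the category-dependent trigger ranges and infraction-type gating described above, the short sketch below encodes them as a lookup table; the dictionary structure, function name, and the behavior for unlisted categories are assumptions, while the numeric ranges echo the example percentages given in the text.

```python
# Hypothetical mapping from object-of-interest category to the certainty range
# that triggers fixed-camera / drone collaboration (values echo the examples above).
TRIGGER_RANGES = {
    "felony":      (0.10, 0.80),   # wider range, shifted lower
    "misdemeanor": (0.35, 0.85),   # narrower range, shifted higher
}

def should_collaborate(category: str, first_certainty: float) -> bool:
    """Return True when the first match certainty falls inside the range
    configured for this object-of-interest category (if any)."""
    low, high = TRIGGER_RANGES.get(category, (None, None))
    if low is None:
        return False               # e.g. no collaboration at all for this category
    return low <= first_certainty <= high

# Example: a 60% profile match against a felony object of interest triggers collaboration.
assert should_collaborate("felony", 0.60) is True
assert should_collaborate("misdemeanor", 0.20) is False
```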
  • an officer or other responder may be automatically dispatched.
  • the fixed camera 202 or some other computing device processing the first image and/or video may cause a nearby camera-equipped unmanned mobile vehicle 204 to be dispatched to take a second point of view image capture of the second object 206 driver (preferably different than the first, such as three-quarters view or head-on, or perhaps un-obstructed side-profile compared to an obstructed first image or video).
  • the computing device may access a cartographic database of streets and, based on a direction of motion detected from the first image and/or video or reported by some sensor other than the fixed camera 202, identify one or more possible future paths 210, 212 that the second object 206 (and/or the vehicle 208 carrying the second object 206) is likely to take.
  • the computing device may identify all possible paths and transmit, via a transceiver, a dispatch instruction (instructing the unmanned mobile vehicle to position itself to take a second image and/or video capture of the second object of interest, including any necessary power-up and/or movement off of its temporary charging platform) and intercept information (including information sufficient to identify the second object 206 and/or the vehicle 208 in which the second object 206 is traveling, such as, but not limited to, the first image and/or video of the second object, a location of the fixed camera 202, a make, model, and/or color of the vehicle in which the second object is traveling, a license plate associated with the vehicle in which the second object is traveling, a lane in which the vehicle is traveling, a status of any detected turn indicators associated with the vehicle, a desired type of second point of view (e.g., full-on, three-quarters, and/or profile), and/or other information necessary for or to aid in identifying the second object 206 and/or the vehicle 208).
  • a computing device in the fixed camera 202 may sub-select from all possible future paths using intercept information provided by the fixed camera, such as a current driving lane or turn indicator of the vehicle 208, and sub-select less than all unmanned mobile vehicles 204 for dispatch. For example, if the vehicle's 208 left turn signal is on, the computing device may transmit the dispatch instruction and the intercept information to only the lower unmanned mobile vehicle 204 in FIG. 2 and not the upper unmanned mobile vehicle 204.
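  • The sketch below shows one hedged way such path and drone sub-selection could be coded; the field names, the mapping from a turn indicator to a remaining path, and the path identifiers are illustrative assumptions rather than details taken from the patent.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DroneInfo:
    drone_id: str
    covered_path: str  # which candidate future path this drone can intercept

def sub_select_drones(drones: List[DroneInfo],
                      possible_paths: List[str],
                      turn_indicator: Optional[str]) -> List[DroneInfo]:
    """Narrow the candidate future paths using the vehicle's turn indicator,
    then keep only the drones positioned along the remaining paths."""
    if turn_indicator == "left" and len(possible_paths) > 1:
        paths = [possible_paths[0]]   # assume the first listed path is the left branch
    elif turn_indicator == "right" and len(possible_paths) > 1:
        paths = [possible_paths[-1]]  # assume the last listed path is the right branch
    else:
        paths = possible_paths        # no hint: keep drones along all possible paths
    return [d for d in drones if d.covered_path in paths]

# Example: with the left turn signal on, only the drone covering path 210 is dispatched.
drones = [DroneInfo("uav-lower", "path-210"), DroneInfo("uav-upper", "path-212")]
print(sub_select_drones(drones, ["path-210", "path-212"], "left"))
```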
  • a computing device in or coupled to fixed camera 202 may wirelessly broadcast the dispatch instruction and intercept information for direct wireless receipt by one or more of the unmanned mobile vehicles 204 within broadcast range of the fixed camera 202.
  • Those unmanned mobile vehicles 204 receiving the dispatch instruction and intercept information may then individually, or as a group, determine whether and which ones can intercept the second object and/or vehicle, and may transmit an acknowledgment and response to the fixed camera including their current location and/or imaging parameters of their associated cameras (e.g., resolution, field of view, focal length, light sensitivity, etc.).
  • the computing device could then determine which ones, or perhaps all, of the acknowledging unmanned mobile vehicles to confirm dispatch to intercept and take a second image and/or video of the second object based on geographic proximity (preferring those closer to the travel direction of the second object and/or second vehicle) and/or imaging parameters (preferring, for example, higher resolution or zoom capability).
  • Referring to FIG. 3, a schematic diagram illustrates a computing device 300 according to some embodiments of the present disclosure.
  • Computing device 300 may be, for example, embedded in fixed video camera 102, in a processing unit adjacent to fixed video camera 102 but communicatively coupled to fixed video camera 102, at a remote server device in the RAN 152 (such as at controller 156) accessible to fixed video camera 102 and unmanned mobile vehicle 104 via the RAN 152, at the unmanned mobile vehicle 104, or at some other network location.
  • computing device 300 includes a communications unit 302 coupled to a common data and address bus 317 of a processing unit 303.
  • the computing device 300 may also include an input unit (e.g., keypad, pointing device, touch-sensitive surface, etc.) 306 and a display screen 305, each coupled to be in communication with the processing unit 303.
  • a microphone 320 may be present for capturing audio at a same time as an image or video, which is further encoded by processing unit 303 and transmitted as audio/video stream data by communication unit 302 to other devices.
  • a communications speaker 322 may be present for reproducing audio that is sent to the computing device 300 via the communication unit 302, or may be used to play back alert tones or other types of pre-recorded audio when a match is found to an object of interest, so as to alert nearby officers.
  • the processing unit 303 may include a code Read Only Memory (ROM) 312 coupled to the common data and address bus 317 for storing data for initializing system components.
  • the processing unit 303 may further include a microprocessor 313 coupled, by the common data and address bus 317, to a Random Access Memory (RAM) 304 and a static memory 316.
  • the communications unit 302 may include one or more wired or wireless input/output (I/O) interfaces 309 that are configurable to communicate with other devices, such as a portable radio, tablet, wireless RAN, and/or vehicular transceiver.
  • the communications unit 302 may include one or more wireless transceivers 308, such as a DMR transceiver, a P25 transceiver, a Bluetooth transceiver, a Wi-Fi transceiver perhaps operating in accordance with an IEEE 802.11 standard (e.g., 802.11a, 802.11b, 802.11g), an LTE transceiver, a WiMAX transceiver perhaps operating in accordance with an IEEE 802.16 standard, and/or another similar type of wireless transceiver configurable to communicate via a wireless radio network.
  • the communications unit 302 may additionally or alternatively include one or more wireline transceivers 308, such as an Ethernet transceiver, a Universal Serial Bus (USB) transceiver, or similar transceiver configurable to communicate via a twisted pair wire, a coaxial cable, a fiber-optic link, or a similar physical connection to a wireline network.
  • the transceiver 308 is also coupled to a combined modulator/demodulator 310.
  • the microprocessor 313 has ports for coupling to the input unit 306 and the microphone unit 320, and to the display screen 305 and speaker 322.
  • Static memory 316 may store operating code 325 for the microprocessor 313 that, when executed, performs one or more of the computing device steps set forth in FIG. 4 and accompanying text and/or FIG. 5 and accompanying text.
  • Static memory 316 may comprise, for example, a hard-disk drive (HDD), an optical disk drive such as a compact disk (CD) drive or digital versatile disk (DVD) drive, a solid state drive (SSD), a tape drive, or a flash memory drive, to name a few.
  • Turning now to FIGs. 4 and 5, flow chart diagrams illustrate methods 400 and 500 for improving the object recognition level of certainty of objects compared to objects of interest via fixed camera and camera-equipped unmanned mobile vehicle coordination. While a particular order of processing steps, message receptions, and/or message transmissions is indicated in FIGs. 4 and 5 for exemplary purposes, timing and ordering of such steps, receptions, and transmissions may vary where appropriate without negating the purpose and advantages of the examples set forth in detail throughout the remainder of this disclosure.
  • A corresponding computing device, such as that set forth in FIG. 3, may execute method 400 and/or 500 at power-on, at some predetermined time period thereafter, in response to a trigger raised locally at the device via an internal process, or in response to a trigger generated external to the computing device and received via an input interface, among other possibilities.
  • method 400 describes a process for improving object recognition certainty of objects compared to objects of interest by fixed and camera-equipped unmanned mobile vehicle coordination from the perspective of a centralized computing device
  • method 500 describes a similar process for improving object recognition certainty of objects compared to objects of interest by fixed and camera-equipped unmanned mobile vehicle coordination from the perspective of a distributed wireless network of devices.
  • Method 400 begins at step 402 where a computing device receives a captured first point of view (POV) of a first object from a fixed camera.
  • the fixed camera may be, for example, a fixed camera coupled to a utility pole, a camera equipped to an ATM machine, a camera coupled to a traffic light, or some other fixed camera having a field of view that covers the first object.
  • the computing device compares the first POV of the first object to an object of interest stored at the computing device or stored remote from the computing device but made accessible to the computing device.
  • the stored object of interest may be an image of a person of interest (inside or outside of a vehicle, such as a facial capture of the person of interest, or particular characteristics such as a tattoo or scar of the person of interest) or a vehicle (such as an image of the vehicle, a particular make and model of the vehicle, or a particular license plate of the vehicle), or may be an image of some other object (such as a particular type of firearm, a particular word or phrase on a bumper sticker, a particular hat worn by a person, a particular warning sticker associated with a flammable, explosive, or biological substance, and/or other types of objects of interest).
  • the computing device determines whether the first POV image and/or video of the first object matches an object of interest with a first determined level of certainty within a predetermined level of certainty range.
  • this predetermined level of certainty range extends lower than absolute certainty (e.g., less than 100%, less than 90%, or less than 80%, 70%, or 60%) and higher than absolute uncertainty (e.g., greater than 0%, greater than 10%, greater than 20%, or greater than 30% or 40%), and may vary based on an identity or category associated with the object of interest (e.g., a larger range but potentially shifted lower overall, such as 10% to 60% certainty, for more serious or dangerous objects of interest such as persons convicted of felonies or graphics associated with biohazards, and a smaller range but potentially shifted higher overall, such as 40% to 70%, for less serious or dangerous objects of interest, such as persons convicted of misdemeanors or graphics associated with criminal organizations).
  • Various text, image, and/or object recognition algorithms may be used to match the captured first POV of the first object to the stored object of interest, including but not limited to geometric hashing, edge detection, scale-invariant feature transform (SIFT), speeded-up robust features (SURF), neural networks, deep learning, genetic algorithms, optical character recognition (OCR), gradient-based and derivative-based matching approaches, the Viola-Jones algorithm, template matching, or image segmentation and blob analysis.
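  • As a concrete but deliberately simplified illustration of turning one of the listed techniques into a numeric certainty, the sketch below uses OpenCV ORB features with brute-force matching and normalizes the count of good matches into a 0-1 score. The distance threshold and normalization constant are arbitrary assumptions, and a deployed recognizer would be substantially more robust than this.

```python
# Illustrative only: derive a crude 0-1 "certainty" from ORB feature matches.
# Requires: pip install opencv-python
import cv2

def match_certainty(capture_path: str, object_of_interest_path: str,
                    norm: int = 100) -> float:
    """Return a rough confidence that the captured image matches the stored
    object of interest, based on the number of close ORB descriptor matches."""
    img1 = cv2.imread(capture_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(object_of_interest_path, cv2.IMREAD_GRAYSCALE)
    if img1 is None or img2 is None:
        raise FileNotFoundError("could not read one of the input images")

    orb = cv2.ORB_create(nfeatures=500)
    _, des1 = orb.detectAndCompute(img1, None)
    _, des2 = orb.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return 0.0  # no usable features in one of the images

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    good = [m for m in matches if m.distance < 40]  # arbitrary distance threshold
    return min(len(good) / float(norm), 1.0)        # clamp to a 0-1 certainty
```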
  • the certainty of a match is provided by the matching algorithm and sets forth a numerical representation of how certain, or how confident, the algorithm is that the first object (from the first captured image and/or video of the first POV) matches the object of interest.
  • This numerical representation may be a percentage value (between 0 and 100%, as set forth above), a decimal value (between 0 and 1), or some other predetermined ranged numerical value having an upper and lower bound.
  • One or more additional points of view of the first object may be necessary for the algorithm to become more certain, or confident, that there is a match between the first object and the object of interest stored in a text, image, or object database. If the match at step 404 does not fall into the predetermined level of certainty range, processing proceeds back to step 404 again and the first object is compared to another stored object of interest. Once all objects of interest have been compared, method 400 may stop and the system may ultimately refrain from dispatching a first responder or otherwise taking any action on the first object.
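  • A minimal sketch of the step 404 gating loop just described might look like the following; the matcher callable, function name, and default range bounds are assumptions used only for illustration.

```python
from typing import Callable, Iterable, Optional, Tuple

def find_candidate_match(first_pov,
                         objects_of_interest: Iterable,
                         matcher: Callable[[object, object], float],
                         certainty_range: Tuple[float, float] = (0.30, 0.80)
                         ) -> Optional[Tuple[object, float]]:
    """Compare the first POV against each stored object of interest and return
    the first one whose match certainty falls inside the trigger range.
    Returning None means nothing landed in the range, so no drone is dispatched
    and no further action is taken on the first object."""
    low, high = certainty_range
    for target in objects_of_interest:
        certainty = matcher(first_pov, target)
        if low <= certainty <= high:
            return target, certainty  # proceed to identify drones (step 406)
    return None
```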
  • processing proceeds to step 406, where the computing device identifies one or more camera-equipped unmanned mobile vehicles in a direction of travel of the first object.
  • the computing device may maintain a pre-determined list of camera-equipped unmanned mobile vehicle locations, or may dynamically update a maintained list of camera-equipped unmanned mobile vehicle locations as they report their current locations.
  • Each camera-equipped unmanned mobile vehicle in the list may be uniquely identified via a unique alpha-numeric identifier, and may also be associated with a status, e.g., whether it is available for dispatch, its battery power levels, its estimated flight times based on the remaining power level, imaging characteristics associated with the unmanned mobile vehicle's camera, and other information useful in determining, by the computing device, which one or more camera- equipped unmanned mobile vehicles to dispatch to intercept the first object for additional imaging purposes.
  • the computing device may select an available camera-equipped unmanned mobile vehicle from the list having a location closest to the fixed camera, a location closest to an expected intercept point of the first object (or the vehicle carrying the first object) considering speed, direction, street lane, and/or cartographic information of streets and/or other paths on which the first object or vehicle is traveling, or a highest imaging parameter relative to the first object (e.g., preferring a better shutter speed if the first object/vehicle is traveling at a high rate of speed, preferring a higher resolution if the first object/vehicle is traveling at a low or nominal rate of speed under, for example, 25 or 35 miles per hour, preferring a higher zoom capability and/or optical image stabilization capability if the object of interest is relatively small, for example, less than 25 cm in area or 25 cm in volume, or preferring an air-based or land-based mobile vehicle dependent upon an altitude at which the first object exists and a second POV desired of the first object), or some combination of the foregoing.
  • Parameters such as the foregoing may be pre-populated in the list maintained at the computing device or made accessible to the computing device, or may be transmitted to the computing device by each respective camera-equipped unmanned mobile vehicle and stored in the list.
  • the computing device may identify only a single best or most capable camera-equipped unmanned mobile vehicle at step 406, while in other embodiments, the computing device may identify two or more camera-equipped unmanned mobile vehicles at step 406 using one or more of the parameters set forth above.
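  • One hedged way to encode the selection preferences above is a simple weighted score over the maintained list, as sketched below; the weights, thresholds, and field names are invented for illustration and are not prescribed by the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DroneStatus:
    drone_id: str
    available: bool
    distance_to_intercept_m: float
    battery_pct: float
    resolution_mp: float
    shutter_speed_rank: int  # higher = faster shutter, better for fast-moving targets

def rank_drones(drones: List[DroneStatus], target_speed_mph: float) -> List[DroneStatus]:
    """Order available drones by a toy suitability score: prefer closer drones,
    and weight shutter speed for fast targets versus resolution for slow ones."""
    def score(d: DroneStatus) -> float:
        proximity = 1.0 / (1.0 + d.distance_to_intercept_m / 100.0)
        imaging = d.shutter_speed_rank if target_speed_mph > 35 else d.resolution_mp
        return proximity * 2.0 + imaging * 0.5 + d.battery_pct * 0.01
    return sorted((d for d in drones if d.available), key=score, reverse=True)

# Example: take the single best drone, or the top two, per the embodiments above.
ranked = rank_drones([DroneStatus("uav-1", True, 250, 80, 12, 3),
                      DroneStatus("uav-2", True, 600, 95, 20, 5)],
                     target_speed_mph=25)
print([d.drone_id for d in ranked[:1]])
```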
  • the computing device transmits, via a transceiver, a dispatch instruction and intercept information to the identified one or more camera-equipped unmanned mobile vehicles.
  • the dispatch instruction and intercept information may be transmitted to the one or more preferred camera-equipped unmanned mobile vehicles via a RAN such as RAN 152. Additionally or alternatively, a wired link may be used to communicatively couple the computing device to the identified one or more preferred camera-equipped unmanned mobile vehicles, ultimately reaching the identified one or more preferred camera-equipped unmanned mobile vehicles via a near-field or wired coupling at a charging point at which the mobile vehicle is resting.
  • the dispatch instruction may be incorporated into the intercept information message, or may be sent as a separate message requesting that the receiving identified one or more preferred camera-equipped unmanned mobile vehicles take action to intercept the first object and take a second POV image and/or video of the first object to improve a certainty that the first object matches the object of interest, alone or in combination with the first POV of the first captured object from the fixed camera obtained at step 402.
  • the intercept information includes information sufficient for the identified one or more preferred camera-equipped unmanned mobile vehicles to identify the first object and take a second POV image / video capture of the first object.
  • the intercept information may include a copy of the captured first POV image and/or video of the first object taken by the fixed camera at step 402, which the identified one or more preferred camera-equipped unmanned mobile vehicles may use to monitor an area around them for the approaching first object.
  • the intercept information may include information identifying a vehicle in which the first object is traveling (which may or may not be included in the first POV image and/or video provided by the fixed camera, but which may have been separately imaged and/or processed by the fixed camera), such as a make and model of a vehicle on or in which the first object is traveling or a license plate of a vehicle on or in which the first object is traveling.
  • the intercept information may contain information identifying a direction and/or speed of travel of the first object or vehicle on or in which the first object is traveling, may also include a lane out of a plurality of available lanes in which the vehicle is traveling, and may also include a status of a turn indicator such as a turn signal or hand gesture indicating a pending turn of the vehicle. Still further, the intercept information may include a location of the fixed camera and a time at which the first object was captured at the fixed camera. Other information could be included as well. This information may then be used by the receiving identified one or more preferred camera-equipped unmanned mobile vehicles to more quickly and accurately locate the first object.
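  • Purely as an illustrative sketch, the intercept information described above could be serialized as a small message payload such as the following; every field name and example value here is an assumption chosen for readability, not a format defined by the patent.

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class InterceptInfo:
    """Hypothetical intercept-information payload; fields mirror the examples above."""
    first_pov_image_ref: str                 # reference to the fixed camera's capture
    fixed_camera_location: str
    capture_time_utc: str
    direction_of_travel: str
    speed_mph: Optional[float] = None
    vehicle_make_model: Optional[str] = None
    vehicle_color: Optional[str] = None
    license_plate: Optional[str] = None
    lane: Optional[int] = None
    turn_indicator: Optional[str] = None     # e.g. "left", "right", or None
    desired_second_pov: str = "straight-on"  # e.g. "full-on", "three-quarters", "profile"

info = InterceptInfo(first_pov_image_ref="capture-0001.jpg",
                     fixed_camera_location="52.2297N,21.0122E",
                     capture_time_utc="2016-09-16T12:00:00Z",
                     direction_of_travel="west",
                     vehicle_make_model="example sedan",
                     license_plate="EXAMPLE1",
                     lane=1,
                     turn_indicator="left")
print(json.dumps(asdict(info)))  # what would be sent alongside the dispatch instruction
```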
  • the dispatch instruction or intercept information may also include information sufficient to identify a desired POV of the first object that would maximize an increase in certainty.
  • a certainty of a match between the first POV profile view 604 / three-quarters view 606 and a stored object of interest may be 60% and may fall in a predetermined level of certainty range where it is not entirely certain that there is a match but it is also not entirely certain that there is not a match.
  • the computing device may identify, as a function of the first POV already provided, a particular desired POV that is one or both of an improvement over the first POV and complementary to the first POV so as to maximize a second level of certainty, such as the straight-on view 602 of the first object that the computing device desires to receive from the identified one or more preferred camera-equipped unmanned mobile vehicles, and may transmit such a desire or instruction in the dispatch instruction or intercept information.
  • the receiving identified one or more preferred camera-equipped unmanned mobile vehicles may then use this additional information to position itself or themselves to obtain the particularly desired POV of the first object.
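  • A toy lookup illustrating the idea of requesting a complementary POV is shown below; the mapping itself is an assumption, loosely based on the straight-on 602, profile 604, and three-quarters 606 views of FIG. 6.

```python
# Hypothetical mapping from the POV already captured to a POV likely to add
# complementary detail (cf. views 602, 604, 606 of FIG. 6).
COMPLEMENTARY_POV = {
    "profile": "straight-on",
    "three-quarters": "straight-on",
    "straight-on": "three-quarters",
}

def desired_second_pov(first_pov: str) -> str:
    return COMPLEMENTARY_POV.get(first_pov, "straight-on")

print(desired_second_pov("profile"))  # -> "straight-on"
```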
  • the computing device receives, in response to causing the transmission of the dispatch instruction and intercept information to the identified one or more preferred camera-equipped unmanned mobile vehicles at step 408, a captured second POV image and/or video of the first object.
  • the second POV image and/or video may be taken from a same or different POV than the first POV image and/or video of the first object, may be taken at a same or different distance than the first POV image and/or video, or may be taken using a camera having different, preferably higher, imaging parameters than the first POV image and/or video.
  • multiple captured second POV images and/or video of the first object may be received by the computing device at step 410.
  • the captured second POV image and/or video (or images and/or videos if multiple) is fed back into the same or similar object identification/matching algorithm as set forth in step 404 above, accompanying the first POV image and/or video or not, and a second level of certainty that the first object matches the object of interest from step 404 is determined.
  • the second determined level of certainty will be different than the first determined level of certainty and, preferably, will be significantly greater than the first determined level of certainty (such as twenty or more percentage points greater, and in some embodiments greater than the upper bound of the predetermined level of certainty range, meaning the second POV of the first object helped confirm that the first object matches the object of interest) or significantly less than the first determined level of certainty (such as twenty or more percentage points lower, and in some embodiments less than the lower bound of the predetermined level of certainty range, meaning the second POV of the first object helped confirm that the first object does not match the object of interest).
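  • The confirm-or-rule-out decision just described could be sketched as follows; the twenty-percentage-point margin and the range bounds reuse the example numbers from the text, while the function itself and its return labels are illustrative assumptions.

```python
def interpret_second_certainty(first: float, second: float,
                               certainty_range=(0.30, 0.80),
                               margin: float = 0.20) -> str:
    """Classify the effect of the second POV on the match decision: a second
    certainty at least `margin` above the first and above the range confirms a
    match, one at least `margin` below the first and below the range rules it out."""
    low, high = certainty_range
    if second >= first + margin and second > high:
        return "match-confirmed"
    if second <= first - margin and second < low:
        return "match-ruled-out"
    return "still-uncertain"  # e.g. request yet another POV or defer to a human

print(interpret_second_certainty(0.60, 0.90))  # -> match-confirmed
print(interpret_second_certainty(0.60, 0.25))  # -> match-ruled-out
```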
  • the computing device may cause an alert to be transmitted to one of a dispatch console and a mobile computing device determined to be located in an area within a threshold distance, such as 0.5, 1, or 2 miles, of the one or more identified camera-equipped unmanned mobile vehicles.
  • the alert may include the captured first and/or second POV images and/or video, or links thereto, of the first object and/or one or both of the determined first and second levels of certainty.
  • the alert may further include a current location (or locations) of the one or more identified camera-equipped unmanned mobile vehicles if they are still tracking the first object and/or locations of the fixed camera and the one or more identified camera-equipped unmanned mobile vehicles when the first and second POV images and/or videos were taken.
  • a live video stream (or streams) from the one or more identified camera-equipped unmanned mobile vehicles of the first object (assuming the first object is still being tracked) may be provided subsequent to the alert or in response to a receiving device activating a link in the alert requesting such a live video stream.
  • the type of alert provided may depend on one or both of the second level of certainty and an identity or category associated with the object of interest (e.g., a more noticeable alert such as a haptic alert and/or a flashing full-screen visual alert for more serious or dangerous objects of interest such as persons convicted of felonies or graphics associated with biohazards, and a more subtle alert such as an audio tone or text message for less serious or dangerous objects of interest such as persons convicted of misdemeanors or graphics associated with known criminal organizations).
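  • One illustrative way to realize the alert selection just described is sketched below; the category names, certainty thresholds, and distance threshold are assumptions that simply echo the examples given above.

```python
from typing import List

def build_alert(second_certainty: float, category: str,
                responder_distances_miles: List[float],
                threshold_miles: float = 1.0) -> dict:
    """Choose an alert style from the object-of-interest category and the second
    level of certainty, and target only responders within the distance threshold."""
    if category in ("felony", "biohazard"):
        style = "haptic+full-screen" if second_certainty > 0.8 else "full-screen"
    else:
        style = "audio-tone" if second_certainty > 0.8 else "text-message"
    recipients = [i for i, d in enumerate(responder_distances_miles)
                  if d <= threshold_miles]
    return {"style": style,
            "recipients": recipients,
            "certainty": second_certainty,
            "include_live_stream_link": True}

print(build_alert(0.92, "felony", [0.4, 1.8, 0.9]))
```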
  • method 500 describes a similar process for improving object recognition certainty of objects compared to objects of interest by fixed and camera-equipped unmanned mobile vehicle coordination from the perspective of a distributed wireless network of devices.
  • Method 500 begins at step 502, similar to step 402 of FIG. 4, where a second computing device receives a first point of view (POV) of a second object from a fixed camera. The second computing device compares the first POV of the second object to an object of interest stored at the second computing device or stored remote from the second computing device but made accessible to the second computing device.
  • the second computing device determines whether the first POV image and/or video of the second object matches an object of interest with a first determined level of certainty within a predetermined level of certainty range.
  • If the match at step 504 does not fall into the predetermined level of certainty range, processing proceeds back to step 504 again and the second object is compared to another stored object of interest. Once all objects of interest have been compared, method 500 may stop and the system may ultimately refrain from dispatching a first responder or otherwise taking any action on the second object.
  • processing proceeds to step 508, similar to step 408 of FIG. 4, where the second computing device this time causes a dispatch instruction and intercept information to be wirelessly broadcast, via a transceiver, for receipt by one or more camera-equipped unmanned mobile vehicles in a vicinity (e.g., within direct-mode wireless transmit range) of the second computing device and/or transceiver.
  • the dispatch instruction and intercept information may be formed in a manner, and contain a same or similar information, as that already set forth above with respect to step 408 of FIG. 4.
  • the second computing device may rely upon the receiving one or more camera-equipped unmanned mobile vehicles to determine, individually or via additional wireless communications amongst themselves, which one or more of the camera-equipped unmanned mobile vehicles that receive the broadcast should actually be dispatched to intercept the second object and take second POV images and/or video of the second object.
  • the individual or group determination may be made using similar decision trees and parameters as that already set forth above with respect to FIGs. 1 and 4.
  • the second computing device may wirelessly receive, via the transceiver from two or more camera-equipped unmanned mobile vehicles that received the broadcast(s), respective acknowledgments and camera-equipped unmanned mobile vehicle parameter information, such as their current locations and/or imaging parameters of their associated cameras.
  • the second computing device may then use the respectively received camera-equipped unmanned mobile vehicle parameter information to identify a particular one of the two or more camera- equipped unmanned mobile vehicles for capturing the second POV image and/or video of the second object in a similar manner to that already set forth above (e.g., preferring one or more of a location closest to an expected intercept point of the first object, preferring imaging parameters of the mobile vehicle's camera, preferring a type of mobile vehicle, or some combination of the above).
  • the second computing device may cause an additional electronic instruction to be broadcast or directly transmitted to the particular one of the two or more camera-equipped unmanned mobile vehicles further instructing the particular one of the two or more camera-equipped unmanned mobile vehicles to capture the second POV image and/or video of the second object.
  • the second computing device may identify only a single best or most capable particular one of the camera-equipped unmanned mobile vehicles, while in other embodiments, the second computing device may identify two or more camera-equipped unmanned mobile vehicles using one or more of the parameters set forth above.
  • the second computing device wirelessly receives, in response to causing the broadcast of the dispatch instruction and intercept information to the one or more camera-equipped unmanned mobile vehicles at step 508, a captured second POV image and/or video of the second object.
  • In some embodiments, multiple captured second POV images and/or videos of the second object may be received by the second computing device at step 510.
  • At step 512, the captured second POV image and/or video (or images and/or videos if multiple) is fed back into the same or similar object identification/matching algorithm as set forth in step 504 above, accompanying the first POV image and/or video or not, and a second level of certainty that the second object matches the object of interest from step 504 is determined.
  • the second computing device may cause an alert to be transmitted to one of a dispatch console and a mobile computing device determined to be located in an area within a threshold distance, such as 0.5, 1, or 2 miles, of the one or more identified camera-equipped unmanned mobile vehicles.
  • the alert may include the captured first and/or second POV images and/or video, or links thereto, of the second object and/or one or both of the determined first and second levels of certainty.
  • the alert may further include a current location (or locations) of the one or more identified camera-equipped unmanned mobile vehicles if they are still tracking the second object (and sending back location information to the second computing device) and/or locations of the fixed camera and the one or more identified camera-equipped unmanned mobile vehicles when the first and second POV images and/or videos were taken.
  • a live video stream (or streams) from the one or more identified camera-equipped unmanned mobile vehicles of the second object may be provided subsequent to the alert or in response to a receiving device activating a link in the alert.
  • Disclosed herein is an improved device, method, and system for improving object recognition certainty of objects imaged by fixed cameras through intelligent collaboration and coordination between the fixed cameras and one or more identified camera-equipped unmanned mobile vehicles.
  • As a result, false positives and false negatives can both be substantially reduced and/or eliminated, improving reliance on the imaging systems and reducing the number of officers or other types of responders unnecessarily dispatched simply to confirm an initial low- or mid-level certainty match.
  • Some embodiments may be comprised of one or more generic or specialized processors (or "processing devices") such as microprocessors, digital signal processors, customized processors, and field programmable gate arrays (FPGAs), and unique stored program instructions (including both software and firmware) that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the method and/or apparatus described herein.
  • Alternatively, some or all functions could be implemented by a state machine that has no stored program instructions, or in one or more application specific integrated circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic.
  • an embodiment can be implemented as a computer-readable storage medium having computer readable code stored thereon for programming a computer (e.g., comprising a processor) to perform a method as described and claimed herein.
  • Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory) and a Flash memory.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Traffic Control Systems (AREA)
  • Alarm Systems (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention relates to a method for fixed camera and unmanned mobile device collaboration to improve identification of an object of interest. A first point of view (POV) of a captured object is obtained and it is determined, with a first level of certainty, that the captured first POV of the object matches a stored object of interest. One or more camera-equipped unmanned mobile vehicles in a determined direction of travel of the first captured object are then identified, and a dispatch instruction and intercept information are then transmitted to the one or more camera-equipped unmanned mobile vehicles. A captured second POV of the first captured object is then received via the one or more camera-equipped unmanned mobile vehicles. The captured second POV of the captured object is used to determine, with a second level of certainty, that the captured object matches the stored object of interest.
PCT/PL2016/050039 2016-09-16 2016-09-16 System and method for fixed camera and unmanned mobile device collaboration to improve identification certainty of an object WO2018052322A1 (fr)

Priority Applications (4)

Application Number Priority Date Filing Date Title
GB1902085.8A GB2567587B (en) 2016-09-16 2016-09-16 System and method for fixed camera and unmanned mobile device collaboration to improve identification certainty of an object
US16/308,503 US10902267B2 (en) 2016-09-16 2016-09-16 System and method for fixed camera and unmanned mobile device collaboration to improve identification certainty of an object
DE112016007236.8T DE112016007236T5 (de) 2016-09-16 2016-09-16 System and method for collaboration of a fixed camera and an unmanned mobile device to improve the identification certainty of an object
PCT/PL2016/050039 WO2018052322A1 (fr) 2016-09-16 2016-09-16 System and method for fixed camera and unmanned mobile device collaboration to improve identification certainty of an object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/PL2016/050039 WO2018052322A1 (fr) 2016-09-16 2016-09-16 Système et procédé pour une caméra fixe et collaboration de dispositif mobile sans pilote pour améliorer la certitude d'identification d'un objet

Publications (1)

Publication Number Publication Date
WO2018052322A1 (fr) 2018-03-22

Family

ID=57227058

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/PL2016/050039 WO2018052322A1 (fr) 2016-09-16 2016-09-16 System and method for fixed camera and unmanned mobile device collaboration to improve identification certainty of an object

Country Status (4)

Country Link
US (1) US10902267B2 (fr)
DE (1) DE112016007236T5 (fr)
GB (1) GB2567587B (fr)
WO (1) WO2018052322A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019209795A1 (fr) * 2018-04-26 2019-10-31 Zoox, Inc Data segmentation using masks

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10769439B2 (en) * 2016-09-16 2020-09-08 Motorola Solutions, Inc. System and method for fixed camera and unmanned mobile device collaboration to improve identification certainty of an object
US10332515B2 (en) * 2017-03-14 2019-06-25 Google Llc Query endpointing based on lip detection
US10555152B2 (en) * 2017-09-28 2020-02-04 At&T Intellectual Property I, L.P. Drone-to-drone information exchange
US10691968B2 (en) 2018-02-08 2020-06-23 Genetec Inc. Systems and methods for locating a retroreflective object in a digital image
US20200361452A1 (en) * 2019-05-13 2020-11-19 Toyota Research Institute, Inc. Vehicles and methods for performing tasks based on confidence in accuracy of module output
AU2019454248B2 (en) 2019-06-25 2023-07-20 Motorola Solutions, Inc System and method for saving bandwidth in performing facial recognition
CA3155551C (fr) * 2019-10-26 2023-09-26 Louis-Antoine Blais-Morin Automated license plate recognition system and associated method
CN113408364B (zh) * 2021-05-26 2022-11-11 深圳市捷顺科技实业股份有限公司 Temporary license plate recognition method, system, apparatus, and storage medium
US11743580B1 (en) 2022-05-16 2023-08-29 Motorola Solutions, Inc. Method and system for controlling operation of a fixed position camera

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150268058A1 (en) * 2014-03-18 2015-09-24 Sri International Real-time system for multi-modal 3d geospatial mapping, object recognition, scene annotation and analytics
US9237307B1 (en) * 2015-01-30 2016-01-12 Ringcentral, Inc. System and method for dynamically selecting networked cameras in a video conference

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7346184B1 (en) 2000-05-02 2008-03-18 Digimarc Corporation Processing methods combining multiple frames of image data
CN101414348A (zh) 2007-10-19 2009-04-22 三星电子株式会社 Multi-angle face recognition method and system
US9930298B2 (en) 2011-04-19 2018-03-27 JoeBen Bevirt Tracking of dynamic object of interest and active stabilization of an autonomous airborne platform mounted camera
TW201249713A (en) 2011-06-02 2012-12-16 Hon Hai Prec Ind Co Ltd Unmanned aerial vehicle control system and method
US9471838B2 (en) 2012-09-05 2016-10-18 Motorola Solutions, Inc. Method, apparatus and system for performing facial recognition
US9165369B1 (en) 2013-03-14 2015-10-20 Hrl Laboratories, Llc Multi-object detection and recognition using exclusive non-maximum suppression (eNMS) and classification in cluttered scenes
US10088549B2 (en) * 2015-06-25 2018-10-02 Appropolis Inc. System and a method for tracking mobile objects using cameras and tag devices
US11147257B2 (en) * 2018-10-11 2021-10-19 Kenneth T. Warren, JR. Software process for tending crops using a UAV
US10005555B2 (en) * 2016-05-02 2018-06-26 Qualcomm Incorporated Imaging using multiple unmanned aerial vehicles
KR20180020043A (ko) * 2016-08-17 2018-02-27 삼성전자주식회사 Method for controlling multi-view images and electronic device supporting the same
US10989791B2 (en) * 2016-12-05 2021-04-27 Trackman A/S Device, system, and method for tracking an object using radar data and imager data
KR20180075191A (ko) * 2016-12-26 2018-07-04 삼성전자주식회사 Method and electronic device for controlling an unmanned mobile vehicle
US10529241B2 (en) * 2017-01-23 2020-01-07 Digital Global Systems, Inc. Unmanned vehicle recognition and threat management
US11064184B2 (en) * 2017-08-25 2021-07-13 Aurora Flight Sciences Corporation Aerial vehicle imaging and targeting system
US10356307B2 (en) * 2017-09-13 2019-07-16 Trw Automotive U.S. Llc Vehicle camera system
EP3750301B1 (fr) * 2018-02-06 2023-06-07 Phenix Real Time Solutions, Inc. Simulating a local experience by live streaming sharable viewpoints of a live event
US11472550B2 (en) * 2018-10-03 2022-10-18 Sarcos Corp. Close proximity countermeasures for neutralizing target aerial vehicles

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150268058A1 (en) * 2014-03-18 2015-09-24 Sri International Real-time system for multi-modal 3d geospatial mapping, object recognition, scene annotation and analytics
US9237307B1 (en) * 2015-01-30 2016-01-12 Ringcentral, Inc. System and method for dynamically selecting networked cameras in a video conference

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG XIAOGANG ED - CALDERARA SIMONE ET AL: "Intelligent multi-camera video surveillance: A review", PATTERN RECOGNITION LETTERS, vol. 34, no. 1, 2013, pages 3 - 19, XP028955937, ISSN: 0167-8655, DOI: 10.1016/J.PATREC.2012.07.005 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019209795A1 (fr) * 2018-04-26 2019-10-31 Zoox, Inc Data segmentation using masks
US10649459B2 (en) 2018-04-26 2020-05-12 Zoox, Inc. Data segmentation using masks
CN112041633A (zh) * 2018-04-26 2020-12-04 祖克斯有限公司 Data segmentation using masks
US11195282B2 (en) 2018-04-26 2021-12-07 Zoox, Inc. Data segmentation using masks
US11620753B2 (en) 2018-04-26 2023-04-04 Zoox, Inc. Data segmentation using masks

Also Published As

Publication number Publication date
DE112016007236T5 (de) 2019-07-04
US10902267B2 (en) 2021-01-26
GB201902085D0 (en) 2019-04-03
GB2567587B (en) 2021-12-29
US20200311435A1 (en) 2020-10-01
GB2567587A (en) 2019-04-17

Similar Documents

Publication Publication Date Title
US10902267B2 (en) System and method for fixed camera and unmanned mobile device collaboration to improve identification certainty of an object
US11170223B2 (en) System and method for fixed camera and unmanned mobile device collaboration to improve identification certainty of an object
EP3918584B1 (fr) System and method for dynamically modifying a vehicle threat detection perimeter for an occupant who has exited a vehicle
US9767675B2 (en) Mobile autonomous surveillance
CN108399792B (zh) Driverless vehicle avoidance method, apparatus, and electronic device
US10198954B2 (en) Method and apparatus for positioning an unmanned robotic vehicle
US10477343B2 (en) Device, method, and system for maintaining geofences associated with criminal organizations
US11226624B2 (en) System and method for enabling a 360-degree threat detection sensor system to monitor an area of interest surrounding a vehicle
US10455353B2 (en) Device, method, and system for electronically detecting an out-of-boundary condition for a criminal organization
CN109421715A (zh) Detection of lane conditions in an adaptive cruise control system
WO2019133235A1 (fr) Device, system, and method for autonomous tactical vehicle control
US20210188311A1 (en) Artificial intelligence mobility device control method and intelligent computing device controlling ai mobility
US10859693B1 (en) Intelligent beam forming for a range detection device of a vehicle
CN115361653A (zh) Providing security via vehicle-based monitoring of adjacent vehicles
US10388132B2 (en) Systems and methods for surveillance-assisted patrol
KR102203292B1 (ko) CCTV surveillance system using a drone that also serves as a CCTV camera
KR101613501B1 (ko) Mobile or fixed integrated license plate recognizer and enforcement system using the same
US11989796B2 (en) Parking seeker detection system and method for updating parking spot database using same
CN114913712A (zh) System and method for preventing vehicle accidents
US11975739B2 (en) Device and method for validating a public safety agency command issued to a vehicle
KR102025354B1 (ko) Dangerous vehicle warning device and operating method thereof
CN114640794A (zh) Camera, camera processing method, server, server processing method, and information processing device
JP2020135650A (ja) Information processing device, information processing system, and information processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16790748

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 201902085

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20160916

122 Ep: pct application non-entry in european phase

Ref document number: 16790748

Country of ref document: EP

Kind code of ref document: A1