WO2020084288A1 - Method and apparatus for controlling a mobile camera - Google Patents

Method and apparatus for controlling a mobile camera

Info

Publication number
WO2020084288A1
Authority
WO
WIPO (PCT)
Prior art keywords
video data
server
location
time
mobile camera
Prior art date
Application number
PCT/GB2019/052995
Other languages
English (en)
Inventor
Kavaljeet Singh DIGVA
Original Assignee
Digva Kavaljeet Singh
Priority date
Filing date
Publication date
Application filed by Digva Kavaljeet Singh filed Critical Digva Kavaljeet Singh
Priority to GB2107356.4A priority Critical patent/GB2593368B/en
Publication of WO2020084288A1 publication Critical patent/WO2020084288A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223Cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/48Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/487Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19654Details concerning communication with a camera
    • G08B13/19656Network used to communicate with a camera, e.g. WAN, LAN, Internet
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/21805Source of audio or video content, e.g. local disk arrays enabling multiple viewpoints, e.g. using a plurality of cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/27Server based end-user applications
    • H04N21/274Storing end-user multimedia data in response to end-user request, e.g. network recorder
    • H04N21/2743Video hosting of uploaded data from client
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/414Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/41422Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance located in transportation means, e.g. personal vehicle
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/66Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/661Transmitting camera control signals through networks, e.g. control via the Internet
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • H04N7/185Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source from a mobile camera, e.g. for remote control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/21Intermediate information storage
    • H04N1/2166Intermediate information storage for mass storage, e.g. in document filing systems
    • H04N1/2179Interfaces allowing access to a plurality of users, e.g. connection to electronic image libraries
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/765Interface circuits between an apparatus for recording and another apparatus
    • H04N5/77Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/18Information format or content conversion, e.g. adaptation by the network of the transmitted or received information for the purpose of wireless delivery to users or terminals
    • H04W4/185Information format or content conversion, e.g. adaptation by the network of the transmitted or received information for the purpose of wireless delivery to users or terminals by embedding added-value information into content, e.g. geo-tagging

Definitions

  • the present disclosure relates to methods and apparatuses for controlling a mobile camera.
  • the disclosure relates to controlling a plurality of mobile cameras by a server to obtain video data recorded at a particular time and location.
  • the footage may be footage of an incident that occurred in the vicinity of the vehicle, such as an accident, a traffic violation, a crime committed on the road, etc. If the footage is not stored within a predetermined amount of time, the footage may be overwritten by newly recorded footage.
  • the present disclosure provides a method of controlling a mobile camera, the method being performed at a server and comprising: receiving a plurality of live video data streams from one or more mobile cameras; storing data indicative of a time and a location at which each live video data stream was recorded; receiving a query for video data, the query specifying a time and a location; identifying a mobile camera that recorded a video data stream at a time and a location satisfying the query; and sending one or more instructions to the identified mobile camera, the instructions being configured to control the identified mobile camera.
  • the method may further comprise: storing the received plurality of live video data streams, wherein each of the stored video data streams is associated with the data indicative of the time and location at which it was recorded; in response to receiving the query for video data, identifying at least one stored video data stream recorded at a time and a location satisfying the query; and outputting the identified stored video data stream.
  • the query specifying time and location may specify one or more of a period of time and a range of locations.
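The server-side flow described above (storing a time and location record for each live stream, then identifying cameras whose recordings satisfy a query) can be sketched as follows. This is a minimal illustration only; the names `StreamRecord`, `CameraIndex` and `find_cameras` are hypothetical, not from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class StreamRecord:
    camera_id: str
    timestamp: float  # seconds since epoch, time of recording
    lat: float
    lon: float

class CameraIndex:
    """Stores the time and location at which each live stream was recorded."""
    def __init__(self):
        self.records = []

    def record(self, camera_id, timestamp, lat, lon):
        self.records.append(StreamRecord(camera_id, timestamp, lat, lon))

    def find_cameras(self, t_start, t_end, lat_min, lat_max, lon_min, lon_max):
        """Identify cameras that recorded within the queried period and area."""
        return sorted({
            r.camera_id for r in self.records
            if t_start <= r.timestamp <= t_end
            and lat_min <= r.lat <= lat_max
            and lon_min <= r.lon <= lon_max
        })

index = CameraIndex()
index.record("cam-a", 1000.0, 51.50, -0.12)  # in area, within period
index.record("cam-b", 2000.0, 51.50, -0.12)  # in area, outside period
index.record("cam-c", 1500.0, 48.85, 2.35)   # outside area
matches = index.find_cameras(900.0, 1600.0, 51.0, 52.0, -1.0, 0.0)
print(matches)  # -> ['cam-a']
```

Having identified the matching cameras, the server would then send control instructions to each of them, as the method describes.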
  • the one or more instructions may comprise an instruction configured to cause the identified mobile camera to transmit video data at a second resolution, the second resolution being higher than a resolution of a live video data stream received from the identified mobile camera.
  • the method may further comprise receiving, in response to the instruction, video data at the second resolution from the identified mobile camera.
  • the one or more instructions may comprise an instruction configured to cause the identified mobile camera to store video data recorded by that camera in a video data file. The video data stored in the video data file may be associated with the time specified in the query.
  • the one or more instructions may comprise an instruction configured to cause the identified mobile camera to transmit a video data file to the server.
  • the method may further comprise receiving, in response to the instruction, the video data file from the identified mobile camera.
  • the one or more instructions may comprise an instruction configured to cause the identified mobile camera to begin transmitting a live video data stream to the server. This instruction may be sent in response to determining that the identified mobile camera is not currently transmitting a live video data stream.
  • the method may further comprise identifying a plurality of mobile cameras, and sending an instruction to each of the identified mobile cameras to control the identified mobile cameras.
  • the method may further comprise: storing data indicative of one or more image features of the live video data stream. At least some of the one or more image features may be received by the server in response to an instruction from the server.
  • the one or more image features may comprise at least one of number plate data, road sign data, facial feature data, position data of a vehicle, direction data of a vehicle and speed data of a vehicle.
  • Another aspect of the present disclosure provides a method of controlling a mobile camera, the method being performed at the mobile camera and comprising: transmitting a live video data stream to a server; transmitting, to the server, data indicative of a time and a location at which the live video stream was recorded; receiving an instruction from the server, the instruction being configured to control the mobile camera; and in response to receiving the instruction, performing one or more operations in accordance with the received instruction to control the mobile camera.
  • the method may further comprise: detecting an event using one or more camera sensors; and in response to detection of the event, performing one or more operations to control the camera.
  • the operations may comprise one or more of: initiating a recording of video data; and/or transmitting a live video data stream at a first resolution; and/or transmitting video data at a second resolution, wherein the second resolution is higher than the first resolution of the live video data stream; and/or storing video data in a file; and/or transmitting a video data file to the server.
  • the method may further comprise: receiving an instruction from the server to identify one or more image features of the live video data stream.
  • the method may further comprise analysing video data corresponding to the live video data stream, and identifying one or more image features of the video data.
  • the method may further comprise transmitting to the server data indicative of one or more image features.
  • the mobile camera may further receive one or more selection criteria from the server.
  • the mobile camera may check the one or more image features against the one or more selection criteria.
  • the mobile camera may transmit to the server data indicative of image features matching the one or more selection criteria.
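The camera-side handling of server instructions could be organised as a simple dispatch table mapping instruction names to operations. This is an illustrative sketch; the instruction names and handler functions below are hypothetical, not defined by the disclosure.

```python
# Record of operations performed, for demonstration purposes.
performed = []

def start_high_res_upload():
    performed.append("transmit video at higher resolution")

def save_to_file_store():
    performed.append("store video data in a file")

def send_file_to_server():
    performed.append("transmit video data file to server")

# Hypothetical instruction names mapped to camera operations.
HANDLERS = {
    "TRANSMIT_HIGH_RES": start_high_res_upload,
    "STORE_FILE": save_to_file_store,
    "SEND_FILE": send_file_to_server,
}

def on_instruction(name):
    """Perform the operation named by a received server instruction."""
    handler = HANDLERS.get(name)
    if handler is None:
        raise ValueError(f"unknown instruction: {name}")
    handler()

on_instruction("STORE_FILE")
on_instruction("SEND_FILE")
print(performed)
```

A dispatch table like this makes it straightforward to add further instructions (for example, an instruction to identify image features) without changing the receive loop.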
  • Another aspect of the present disclosure provides a method of providing video data, the method being performed at a server and comprising: receiving a plurality of live video data streams from one or more mobile cameras, wherein each of the live video data streams is associated with data indicative of a location at which it was recorded; receiving a query for video data, the query specifying a location; identifying at least one live video data stream recorded at a location satisfying the query; and outputting the identified video data stream.
  • Each of the stored video data streams may be associated with the data indicative of the time at which it was recorded, the received query may specify a time, and identifying at least one live video data stream may comprise identifying a live video data stream recorded at a time and a location satisfying the query.
  • the method may further comprise storing the received plurality of live video data streams, and outputting the identified video data stream may comprise outputting a stored video data stream. Alternatively or additionally, outputting the identified video data stream may comprise outputting a live video data stream.
  • the query may specify one or more of a period of time and a range of locations.
  • the present disclosure provides an apparatus configured to perform any of the methods disclosed herein.
  • the present disclosure provides a server and/or a mobile camera configured to perform the respective methods disclosed herein.
  • the present disclosure also provides a system including one or more such servers and one or more such mobile cameras.
  • Another aspect of the present disclosure provides a processor-readable medium comprising instructions which, when executed by a processor, cause the processor to perform any of the methods disclosed herein.
  • Another aspect of the present disclosure provides a computer program product comprising instructions which, when executed by a computer, cause the computer to perform any of the methods disclosed herein.
  • Figure 1 is a schematic representation of a network comprising a server and a plurality of mobile cameras;
  • Figure 2 is a schematic representation of the server shown in Figure 1;
  • Figure 3 is a schematic representation of one of the mobile cameras shown in Figure 1;
  • Figure 4 is a flow diagram of a method of controlling a mobile camera, as performed by a server;
  • Figure 5 is a flow diagram of a method of controlling a mobile camera, as performed by the mobile camera;
  • Figure 6 is a flow diagram of a method of controlling a mobile camera and responding to a query, as performed by a server;
  • Figure 7 is a flow diagram of a method of analysing video data and transmitting identified features, as performed by a mobile camera.
  • Figure 1 depicts a network 100 comprising one or more servers 200 and a plurality of mobile cameras 300, illustrated as cameras 300(a)-(d), wherein the one or more servers are able to communicate with each of the mobile cameras via a respective communication link 500.
  • the server 200 may be implemented in any suitable configuration, e.g. as a load-balanced server cluster, as a distributed cloud server arrangement, or any other suitable configuration.
  • the mobile cameras may send data over communication link 500.
  • the mobile cameras 300(a)-(d) may send a live video data stream and/or one or more video data files to the server 200.
  • the mobile cameras 300(a)-(d) may also send, to the server 200, information indicating the time and location at which the live video data stream or video data file was recorded.
  • the mobile cameras 300(a)-(d) may also send other sensor information (which is described in more detail below) to the server 200.
  • Video data, or other data, with corresponding location data may be referred to as geotagged data.
  • Video data, or other data, with corresponding time data may be referred to as timestamped data.
  • the server 200 may store the time and location data received from the cameras 300(a)-(d) in a memory.
  • the server 200 may further store the one or more video data files and/or live video data stream in a memory, accessible by the server 200.
  • the server 200 is further configured to receive and process one or more queries 110.
  • a query 110 may request video data and may specify a time and location.
  • the server 200 may be configured to search the stored data to find time and location data satisfying the time and location data of the query 110.
  • the server 200 may identify one or more mobile cameras 300 linked to (in other words, associated with) the time and location data satisfying the query 110. Having identified a mobile camera 300, the server 200 may send one or more instructions to the identified camera in order to control that mobile camera 300. In response to the instructions, the server 200 may receive further data from the mobile camera 300. This further data may also be stored by the server 200.
  • the server 200 may provide a response 120 to the query 110.
  • the server 200 may transmit or otherwise provide access to the data stored on the server 200 and/or the further data received from an identified camera 300.
  • An advantage of receiving a plurality of live video data streams and corresponding location and time data is that the server has a live, up-to-date overview of the locations of the plurality of mobile cameras 300 and of the video data those cameras are recording. This live data can be used for determining a response 120 to a query 110.
  • Another advantage of this method is that a plurality of cameras can be controlled centrally by the server 200, for example for the collection of video and other data, and does not rely on local control of a mobile camera 300, for example by a user or owner of the mobile camera 300.
  • FIG. 2 depicts the server 200.
  • the server 200 comprises a processor 210 and a memory 220.
  • the processor 210 is able to execute instructions, stored in the memory 220, to cause the server 200 to perform actions that implement the methods described herein.
  • Memory 220 may comprise a data store 230 in which data received by the server 200 may be stored.
  • the data store 230 may alternatively or additionally comprise memory which is physically separate from, but connected to, the server 200. This separate memory may be physically remote from the server 200, and may be connected via a wireless and/or wired connection.
  • the server 200 may further comprise a receiver 240 and a transmitter 250 for receiving and transmitting data, respectively.
  • the server 200 is a cloud-based server, and comprises an Elasticsearch search engine to facilitate the search and retrieval of information (such as video data and metadata) stored in the data store 230.
  • FIG. 3 depicts a mobile camera 300, sometimes referred to in this description as a camera 300, which comprises a processor 310 and a memory 320.
  • a mobile camera 300 is a camera which may be moved during normal operation of the camera. Specifically, the mobile camera 300 may be moved between different geographical locations, and may record imaging data during this movement.
  • a mobile camera may be placed in a vehicle, such that it can record imaging data while the vehicle is in motion.
  • a dash cam is an example of a mobile camera that can be placed in a vehicle and used to implement the present disclosure.
  • a mobile camera 300 may also record imaging data while it is stationary.
  • the processor 310 is able to execute instructions, stored in the memory 320, to cause the mobile camera 300 to perform actions that implement the methods described herein.
  • the mobile camera 300 comprises imaging hardware 370 able to record data, such as video data and image stills.
  • Imaging hardware 370 may comprise optical elements (e.g. a lens and an image sensor), electronic elements, connectivity elements, signal processing elements, other processing elements, and/or software elements for capturing, processing, and transmitting imaging data. Imaging data may comprise for example video data, a live video data stream, or image stills.
  • the mobile camera 300 further comprises a receiver 340 and a transmitter 350, for receiving and transmitting data, respectively.
  • the data may be transmitted and/or received over the communication link 500 and may comprise, for example, imaging data, or instructions for controlling the mobile camera 300.
  • the memory 320 of the mobile camera 300 is not sufficiently large to store imaging data indefinitely.
  • the memory 320 of the mobile camera 300 may store imaging data in a loop buffer 330, which has a predetermined amount of storage space available for storing data.
  • a loop buffer is a type of first-in first-out (FIFO) data structure, and may be known as a circular buffer, circular queue, ring buffer or cyclic buffer.
  • the memory structure in which the loop buffer 330 is stored may use data files, or may be implemented as raw storage on a storage medium.
  • the storage medium may comprise a Secure Digital (SD) card or a flash memory (e.g. a NAND flash memory).
  • the loop buffer 330 is able to retain a set amount of data history, while overwriting the oldest saved data with newer recorded data.
  • the length of the time period of historical imaging data that can be saved in the loop buffer 330 depends on the size (i.e. data storage capacity) of the loop buffer 330, the resolution of the imaging data saved in the buffer 330, and the frequency or rate at which new imaging data is produced for storage in loop buffer 330, and may further depend on other factors.
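As a worked example of the dependency described above, the retention period of a loop buffer can be estimated from its storage capacity and the bitrate at which imaging data is produced. The figures below are illustrative only, not taken from the disclosure.

```python
def retention_seconds(buffer_bytes, bitrate_bits_per_s):
    """Seconds of video history a loop buffer of a given size can retain,
    for video recorded at a given (assumed constant) bitrate."""
    return buffer_bytes * 8 / bitrate_bits_per_s

# e.g. an 8 GB loop buffer holding video recorded at 8 Mbit/s:
seconds = retention_seconds(8 * 10**9, 8 * 10**6)
print(seconds)         # -> 8000.0 seconds
print(seconds / 3600)  # -> about 2.2 hours of history
```

Doubling the resolution (and hence, roughly, the bitrate) would halve the retention period, which is why a camera may keep separate loop buffers for different resolutions.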
  • the size of the loop buffer 330 may be changed (i.e., increased or decreased) by the processor 310.
  • the processor 310 may change the size of the loop buffer in response to a command from the server 200.
  • the processor 310 may change the size of the loop buffer 330 to allow more or less storage space in the memory 320 to be allocated to a file store 360 (described below).
  • a mobile camera 300 may comprise multiple loop buffers 330, which may store different types of data. For example, a first loop buffer may store video data at a first resolution, and a second loop buffer may store video data at a second resolution.
  • the memory 320 of the camera 300 may further comprise a file store 360, in which video data files may be saved separate from the loop buffer 330. Unlike imaging data stored in the loop buffer 330, video data files stored in the file store 360 are not automatically overwritten by new recorded data.
  • video data may be stored in the loop buffer 330 in MPEG-2 transport stream (“MPEG-TS”) format.
  • Video data files may be stored in the file store 360 in MPEG-4 Part 14 (“MP4”) format.
  • Other suitable formats, including proprietary formats, may also be used to store imaging data in the loop buffer 330 and the file store 360. Imaging data in the loop buffer 330 may be converted into a different format for storage in the file store 360.
  • a mobile camera 300 may further comprise one or more sensors for detecting or measuring one or more characteristics of the camera 300, the camera’s environment and/or the imaging data it records.
  • the sensors may include one or more of a location sensor, a gyroscopic sensor, an accelerometer, an infrared sensor, a magnetometer, a thermometer, and a barometer, for example.
  • the location sensor is configured to determine the location of mobile camera 300.
  • the location sensor may be, for example, a Global Navigation Satellite System (GNSS) sensor.
  • the GNSS sensor may be configured to use any suitable navigation satellite system including, but not limited to, the Global Positioning System, Galileo, GLONASS and/or BeiDou.
  • the location sensor need not necessarily be a GNSS sensor.
  • the location sensor may be configured to determine the location of the mobile camera 300 using a land-based positioning system, e.g. by triangulating a position based on a plurality of network access points, such as cellular telephone base stations.
  • the location sensor may be used to determine the velocity of the mobile camera 300, by determining the difference between the location of the camera at two or more points in time.
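Deriving velocity from two location fixes, as described above, amounts to dividing the distance between the fixes by the time between them. The sketch below uses the haversine great-circle distance; the function names are illustrative, not from the disclosure.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GNSS fixes, in metres."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speed_m_s(fix1, fix2):
    """Approximate speed from two (timestamp, lat, lon) fixes."""
    (t1, lat1, lon1), (t2, lat2, lon2) = fix1, fix2
    return haversine_m(lat1, lon1, lat2, lon2) / (t2 - t1)

# Two fixes 10 seconds apart, roughly 139 m apart along a meridian:
v = speed_m_s((0.0, 51.0, 0.0), (10.0, 51.00125, 0.0))
print(round(v, 1))  # roughly 13.9 m/s (about 50 km/h)
```

In practice a camera would smooth over several fixes, since individual GNSS readings are noisy.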
  • FIG 4 is a flow diagram of a method that can be executed by a server 200 to control a mobile camera 300.
  • the server 200 receives a plurality of live video data streams from a plurality of mobile cameras 300, for example cameras 300(a)-(d) shown in Figure 1.
  • Each mobile camera 300 may provide a separate live video data stream independently from the other cameras and/or data streams.
  • Each mobile camera 300 also provides data indicating the time and location at which the live video stream was recorded by that camera 300.
  • the data indicating the time and location may be encoded in the live video stream itself.
  • each frame of the live video stream may include metadata that indicates the time and location at which that frame was recorded.
  • the data indicating the time and location at which the live video stream was recorded may be provided to the server in a communication that is separate from the live video stream itself.
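As an illustration of per-frame metadata of the kind described above, a camera might attach a small structured record to each frame; the field names here are hypothetical, not specified by the disclosure.

```python
import json

# Illustrative geotagged, timestamped metadata for a single frame.
frame_metadata = {
    "camera_id": "cam-300a",
    "timestamp": "2019-10-22T14:03:07.250Z",        # time of recording
    "location": {"lat": 51.5074, "lon": -0.1278},   # GNSS fix
    "frame_number": 48211,
}

# Encoded for transmission alongside (or embedded within) the stream,
# then decoded by the server before indexing.
encoded = json.dumps(frame_metadata)
decoded = json.loads(encoded)
print(decoded["location"]["lat"])  # -> 51.5074
```

The server only needs the timestamp and location fields to build its searchable time-and-location index.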
  • a live video data stream may be a transmission of video data, by a mobile camera 300, occurring at substantially the same time as the video data is recorded.
  • a live stream of video data may be received by the server 200 with a slight delay relative to when the data was recorded, for example a delay of 5 seconds, 10 seconds, 30 seconds, or 1 minute.
  • the delay between the video data being recorded and the live video data stream being received by the server 200 may be caused by a plurality of factors. These factors may be caused by the mobile camera 300 and/or the communication link 500.
  • the live video data stream must be prepared for transmission to the server 200 by a mobile camera 300. This process may include signal processing of the recorded video data, for example to change the resolution of the video data and/or to change the data format of the video data.
  • the video data stream may be transmitted across the communication link 500 in data packets, wherein each data packet may comprise video data of a predetermined length of time, for example 1, 5, or 10 seconds.
  • the processing of video data may further comprise preparing these data packets of the video stream.
  • the delay caused by preparation of the data packets may depend on the length of time of video data included in a packet.
  • a mobile camera 300 may wait to collect an amount of video data to fill a data packet, before sending the next data packet.
  • the delay caused by the data packet creation may be at least as long as the duration of video data comprised in a data packet.
  • the live video data stream will take a finite (non-zero) amount of time to be transmitted over the communication link 500.
  • the capacity and data transfer speed of the communication link 500 may influence the delay with which the live video data stream is received by the server 200.
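The delay contributions discussed above (waiting to fill a packet's worth of video, signal processing, and transmission over the link) can be combined into a rough lower-bound estimate. The figures and the fixed processing allowance below are illustrative assumptions, not values from the disclosure.

```python
def stream_delay_s(packet_video_s, packet_bytes, link_bits_per_s,
                   processing_s=0.5):
    """Rough lower bound on live-stream delay: the camera waits to fill
    a packet's worth of video, processes it, then transmits the packet
    over the communication link."""
    transmission_s = packet_bytes * 8 / link_bits_per_s
    return packet_video_s + processing_s + transmission_s

# e.g. 5-second packets of about 1.25 MB sent over a 2 Mbit/s uplink:
delay = stream_delay_s(5.0, 1_250_000, 2_000_000)
print(delay)  # -> 10.5 seconds
```

This shows why the delay is at least the packet duration: even with an instantaneous link, a 5-second packet cannot be sent until 5 seconds of video have been recorded.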
  • the server 200 stores the time and location data received from the mobile camera 300, in its data store 230.
  • the server 200 also stores data indicating which mobile camera 300 provided the time and location data.
  • the data is stored in such a way that it is searchable by the server 200 based on one or both of location and time.
  • the data store 230 may comprise a database configured to store the time and location data.
  • the database may also be configured to store the live video data streams that were received in operation 402.
  • the server 200 receives a query 110.
  • the query 110 specifies a time and a location.
  • the time may be expressed as a specific time, multiple specific times, a period of time, or multiple periods of time.
  • the time may optionally comprise a date.
  • a time may be specified as a number of hours and minutes (and, optionally, seconds) on a particular date. It will be appreciated that other suitable methods for specifying a time may be used, such as a Unix timestamp.
  • the location may be expressed as a specific location, multiple specific locations, a distance range around one or more specific locations, or may specify an area such as for example a street, a neighbourhood, a postcode, a range of geographic coordinates, etc.
  • a location may be specified by the geographic coordinates of that location, e.g. GNSS coordinates. More complex queries are also possible.
  • a query can be formulated to request recordings within an area defined by a travel time, a speed of travel, a starting location and a starting time (e.g. “show me all recordings reachable by travelling for W minutes at X kilometres per hour, when starting from location Y at time Z”).
  • a query of this form can be used to track persons involved in an incident (e.g. a perpetrator, a victim and/or a witness) by retrieving video footage from cameras located at any point along all of their possible routes away from the incident.
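The reachable-area query described above reduces to simple arithmetic: the maximum distance coverable is speed × time, and a recording satisfies the query if it was made after the starting time and within that radius of the starting location. The sketch below illustrates this under assumed names and record layout (dicts with `lat`, `lon`, and `time` keys); none of these are fixed by the description.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def reachable_recordings(records, start, start_time, minutes, speed_kmh):
    """Select records reachable by travelling for `minutes` at `speed_kmh`
    from `start` (a (lat, lon) tuple), recorded at or after `start_time`."""
    max_km = speed_kmh * (minutes / 60.0)
    return [
        rec for rec in records
        if rec["time"] >= start_time
        and haversine_km(start[0], start[1], rec["lat"], rec["lon"]) <= max_km
    ]
```

For example, 10 minutes at 6 km/h gives a 1 km radius, which would include a camera a few streets away but exclude one 11 km distant.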
  • a query may be formulated to request recordings at a particular location at a future time.
  • This type of query can be used to schedule recordings, by causing mobile cameras 300 that would otherwise be inactive to begin recording and/or by causing mobile cameras 300 to record data in a different manner (e.g. at a higher resolution and/or by storing video data files in the file store 360) if they are in that particular location.
  • the server 200 searches the data store 230 for data satisfying the time and location of the query 110.
  • the server 200 may compare the stored data to the specified time and location using ranges and/or limits set by the query 110. Additionally or alternatively, the server 200 may use a predetermined range around the specified time and location to determine whether or not the time and location data are considered to satisfy the query 110.
  • the predetermined range may be 5, 10, 20, or 30 seconds around a specified time and/or 10, 20, 50, or 100 metres around a specified location.
  • the predetermined range may be set during configuration of the server 200, may be set by a user of the server 200, or may be included as part of a received query 110.
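A minimal sketch of the range check described above, assuming a flat-earth distance approximation (adequate over the tens of metres discussed) and illustrative default ranges of 10 seconds and 50 metres; the function name and signature are assumptions.

```python
import math

def satisfies_query(entry_time, entry_lat, entry_lon,
                    q_time, q_lat, q_lon,
                    time_range_s=10, dist_range_m=50):
    """Check whether stored time and location data falls within a
    predetermined range around the time and location of a query."""
    if abs(entry_time - q_time) > time_range_s:
        return False
    # metres per degree of latitude; longitude scaled by cos(latitude)
    dy = (entry_lat - q_lat) * 111_320.0
    dx = (entry_lon - q_lon) * 111_320.0 * math.cos(math.radians(q_lat))
    return math.hypot(dx, dy) <= dist_range_m
```

An entry five seconds and roughly 30 metres from the queried point would match; one a minute late, or 100 metres away, would not.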
  • the server 200 identifies one or more mobile cameras 300 that recorded a video stream at the time and location satisfying the query 110.
  • Information identifying the mobile camera 300, for example a camera reference or other unique identifier, may be stored in the data store 230 with the time and location data.
  • Information identifying the mobile camera 300 that recorded the video stream may be saved separately from the time and location data itself.
  • the camera 300 that recorded the video stream may be determined from the time and location data itself, for example by being encoded in the data structure (e.g. using a particular file format) in which this time and location is stored.
  • the data structure may contain metadata identifying the source of the data, which may be an identifier of the mobile camera 300 that recorded the data.
  • the mobile camera 300 that recorded a video stream may be determined from the video stream, for example from metadata contained in the video stream, wherein that metadata identifies the camera.
  • the server 200 sends one or more instructions to the camera 300 identified at operation 410 for controlling the identified camera 300.
  • the nature of the instructions may depend, for example, on the time of the data satisfying the query 110, the nature of the query 110, or one or more properties of the identified camera 300. If more than one mobile camera 300 was identified at operation 410, the server 200 may send one or more instructions to each identified camera 300.
  • the server 200 may store video data received in the live video data stream in the data store 230.
  • An advantage of storing video data at the server 200 is that the server 200 is able to provide video data in response to a query 110 directly, without requesting further data from a mobile camera 300.
  • the stored video data may be linked to (or, in other words, associated with) the time and location data stored in operation 404.
  • the video data may be stored in the format in which it was received, or may be converted to another format and/or another resolution for storage.
  • Example formats for video data include MP4 or MPEG-TS formats.
  • a query 110 may comprise a request for video data related to a time and location specified in the query 110.
  • the server 200 may identify stored video data that was part of a live video data stream linked to a time and location satisfying the query 110.
  • the identified video data may be only a portion of a video data stream spanning a period of time, for example a portion of stored video data corresponding to a period of time within a live video data stream covering a longer time period.
  • the server 200 may output the identified stored video data as part of a response 120 to the query 110.
  • Outputting stored video data may comprise sending a video data file comprising a copy of the stored video data as part of a response 120 to the query 110.
  • outputting stored video data may comprise providing information (e.g., a uniform resource locator and/or login data) in the response 120, whereby the information in the response enables a receiver of the response 120 to access the stored video data in the data store 230.
  • the server 200 is able to control an identified mobile camera 300, which may be in response to a query 110, by sending one or more instructions to the camera 300. These instructions may include an instruction to transmit video data to the server 200, wherein the video data has a second resolution that is higher than the resolution of the live video data stream.
  • the resolution of the live video data stream may be referred to as a first resolution, and may be set by the mobile camera 300 to be sufficiently low so that live streaming over the communication link 500 is possible.
  • a mobile camera 300 may for example record video data at a resolution determined by the imaging hardware 370. This resolution may be equal to or higher than the second resolution mentioned above, for example 1080p resolution, ultra-HD resolution, or 4K resolution.
  • the camera 300 may convert the video data to a lower resolution, for example 340 x 480, 600 x 800, or 720p.
  • the mobile camera 300 may use the first resolution (i.e. the lower resolution) to transmit a live video data stream to the server 200 over the communication link 500.
  • the server 200 may receive second resolution video data in response to the instruction sent to the camera 300.
  • the response 120 to query 110 can provide more detailed video of an event of interest to a party making the query 110, without having to transmit all second resolution data to the server 200.
  • the quantity of data transmitted over the communication link 500 can thus be reduced (which may, in turn, reduce the costs of transmitting data), and the quantity of storage space required at the server 200 may also be reduced. Only video data of interest for a query 110 is transmitted over link 500 and potentially stored at server 200.
  • the one or more instructions sent to the identified mobile camera 300 may comprise an instruction to store video data in a file.
  • the instruction may specify the resolution at which the video data is to be stored, or the resolution may be the default resolution at which camera 300 stores recorded video data.
  • the instruction may specify one or more periods of time and/or one or more locations for which corresponding video data is to be stored in the video data file.
  • the video data file may be stored in the file store 360, where it may be protected from being overwritten by newly recorded data (unless overwriting is specifically instructed).
  • the mobile camera 300 may generate a reference to the video data file that can be used to identify the video data file, and link it to the instruction.
  • the reference to the created video data file may be transmitted and provided to the server 200. Time and location data corresponding to the video data may further be stored or otherwise linked to (i.e. associated with) the stored video data file.
  • the one or more instructions sent to the identified mobile camera 300 may comprise an instruction to transmit one or more video data files to the server 200.
  • the instruction may specify a time and/or a location, or a period/range of one or both of these, to identify one or more video data files to be sent.
  • the instruction may comprise a reference to a video data file to be sent.
  • the camera 300 may identify the requested video data file(s), and transmit them over the communication link 500 to the server 200.
  • the server 200 may determine that an identified camera 300, corresponding to time and location data satisfying query 110, is not currently providing a live video data stream.
  • the server 200 may send an instruction to the identified camera 300, to cause the identified camera 300 to initiate a live video data stream to the server 200.
  • Camera 300 receiving the instruction may be recording, but not transmitting, video data.
  • the camera 300 may begin a transmission of a live video data stream to the server 200.
  • the camera 300 may not be recording video data, in which case the instruction can initiate a recording of video data as well as begin a transmission of a live video data stream to the server 200.
  • the time and location specified in query 110 may be satisfied by a current or recent time and location of an identified camera 300.
  • recent can be taken to mean within a period of time for which recorded imaging data is still stored in the memory 320 of the camera 300, for example in the loop buffer 330.
  • the server may send an instruction to the camera 300 to send video data at a second resolution alongside (i.e. at substantially the same time as) the live video data stream.
  • the second resolution may be higher than the first resolution of the live stream, and may not be suitable to be sent as a live stream.
  • the camera 300 may implement the instruction by storing the second resolution video data in a file, and transmitting the video data file to server 200 at a rate which may be slower than a live stream.
  • the instruction may be executed by the camera 300 by storing at least a portion of the loop buffer 330 data in a video data file at the second resolution, and sending the stored video data file over the communication link 500. Once the video data is sent, the corresponding video data file at the camera 300 may be deleted or overwritten.
  • the video data file may be separate from the loop buffer 330, and may for example be stored in the file store 360 of the camera 300. Alternatively, the video data may also be sent straight from the loop buffer 330 to the server 200.
  • the camera may send second resolution video data to the server 200 for a predetermined period of time (which may be specified in the instruction received from the server 200), or may send second resolution video data until an instruction is received to stop doing so.
  • the server 200 may send an instruction to the identified camera 300 to send second resolution video data corresponding to the time and location.
  • the mobile camera 300 may respond to this request by retrieving the video data from the loop buffer 330, storing the video data in a file, and transmitting the file. If the video data corresponding to that time is no longer stored in the loop buffer 330 or elsewhere in the memory 320 of the camera 300, the camera 300 may notify the server 200 that the requested video data is no longer available.
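The loop buffer behaviour described here — a bounded store in which the oldest video data is overwritten by new recordings, and from which a requested period can be retrieved only if it has not yet been overwritten — might be sketched as below. The segment-based layout and class name are assumptions; returning `None` stands in for the camera notifying the server 200 that the requested video data is no longer available.

```python
from collections import deque

class LoopBuffer:
    def __init__(self, capacity_segments):
        # deque with maxlen: appending beyond capacity drops the oldest segment
        self._buf = deque(maxlen=capacity_segments)

    def record(self, timestamp, segment):
        self._buf.append((timestamp, segment))

    def retrieve(self, start, end):
        """Return the segments recorded in [start, end], or None if the
        start of that period has already been overwritten."""
        oldest = self._buf[0][0] if self._buf else None
        if oldest is None or start < oldest:
            return None  # requested period no longer held in the buffer
        return [seg for ts, seg in self._buf if start <= ts <= end]
```

With a five-segment buffer and ten one-second segments recorded, only the last five remain retrievable; a request for earlier data yields the "no longer available" result.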
  • Figure 5 is a flow diagram of a method for controlling a mobile camera 300.
  • the method is performed at a mobile camera 300.
  • the method involves the mobile camera 300 transmitting 502 a live video data stream to the server 200, which may be done over the communication link 500.
  • the camera further transmits 504 to the server 200 data indicative of a time and a location at which a live video data stream was recorded.
  • the data may, optionally, be sent separately to the live video data stream itself.
  • the time and location data sent by the camera 300 may also include further data indicating the identity of the camera that recorded this data.
  • the camera 300 may receive 506 an instruction from server 200 as set out above, and may perform 508 one or more operations to control the camera 300 in response to receipt of the instruction from the server 200.
  • the mobile camera 300 may comprise one or more sensors for measuring properties relating to the camera 300.
  • a location sensor may be used to determine the location of the mobile camera 300, stored as location data. This location data may be linked to the time at which the location was determined (that is to say, the location data is timestamped), and provided to the server 200.
  • the mobile camera 300 may comprise a gyroscopic sensor, which may be used for determining orientation of the mobile camera 300, stored as orientation data. This orientation data may be provided to the server 200, and may be used to determine what can be seen in imaging data (such as video data) captured by camera 300.
  • Mobile camera 300 may comprise an accelerometer, which may be used to determine acceleration of the mobile camera 300.
  • Mobile camera 300 may further comprise one or more of a thermometer to measure temperature, or a barometer to measure pressure.
  • Mobile camera 300 may further comprise an infrared sensor to measure activity in the vicinity of the camera (for example, to detect an approaching person or vehicle).
  • One or more of the sensors 380 of the mobile camera 300 may be used for detection and identification of potential incidents. Potential incidents may be identified through detection of events. Examples of events include a sudden acceleration, a sudden deceleration, or a change in angular direction, which may be detected by an accelerometer or a gyroscope. These examples of events may indicate an incident involving an impact affecting movement of mobile camera 300 (e.g. a collision involving a vehicle in which the mobile camera 300 is located). An infrared sensor may be used to detect an approaching individual or vehicle, whose actions and/or movements may be of interest, and may constitute an event. If an event is detected by a sensor 380, this may trigger the camera 300 to begin recording video data.
  • the camera 300 may store the video data recorded in the period of time around this moment in a separate video data file to avoid it being overwritten or deleted by newly recorded footage.
  • Mobile camera 300 may receive and/or store information to allow it to determine when an event detected by a sensor is a notable event that warrants recording and/or storing of video data.
  • the mobile camera 300 may be configured to notify the server 200 if an event is detected. In some implementations, the mobile camera 300 may contact the server only if certain events occur, if more than a threshold number of events occur, or if more than a threshold number of events occur within a predetermined period of time.
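The threshold-based notification rule above might look like the following in outline; the default values (more than three events within sixty seconds) are purely illustrative.

```python
def should_notify(event_times, now, threshold=3, window_s=60):
    """Notify the server 200 only if more than `threshold` events were
    detected within the last `window_s` seconds."""
    recent = [t for t in event_times if now - t <= window_s]
    return len(recent) > threshold
```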
  • the mobile camera 300 may send video data to the server 200 continuously, as a live video data stream.
  • the mobile camera 300 may send video data to the server 200 in response to the detection of an event by a sensor 380 of the mobile camera 300 or in response to an instruction received over the communication link 500 from the server 200.
  • the mobile camera 300 may send imaging data in the form of a still image, which may be sent periodically. For example, when a mobile camera is not recording video data, it may record a still image periodically, e.g. every 5, 10 or 30 minutes, and provide the still image to the server 200 with corresponding time and location data. This allows the server 200 to remain aware of the presence of mobile camera 300, even when the camera is not sending a live video data stream.
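The periodic still-image behaviour above can be sketched as two small helpers: one deciding when a still image is due, one packaging it with time and location data. The dict message layout and the 10-minute default interval are assumptions for illustration.

```python
import time

def heartbeat_due(last_sent, now, interval_s=600):
    """True when a periodic still image should be sent: either none has
    been sent yet, or the interval (10 minutes here) has elapsed."""
    return last_sent is None or (now - last_sent) >= interval_s

def make_presence_update(camera_id, location, image_bytes, timestamp=None):
    """Package a still image with corresponding time and location data, so
    the server 200 remains aware of the camera's presence even when no
    live video data stream is being sent."""
    return {
        "camera": camera_id,
        "time": timestamp if timestamp is not None else time.time(),
        "lat": location[0],
        "lon": location[1],
        "image": image_bytes,
    }
```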
  • the mobile camera 300 may be located in a vehicle. Specifically, the mobile camera 300 may be a dash cam. The mobile camera 300 may be powered by an external source connected to the vehicle. Alternatively or additionally, the mobile camera 300 may be powered by an internal battery, which may be rechargeable. For example, a mobile camera 300 located in a vehicle may be powered by the power supply of the vehicle while the vehicle engine is turned on, and may be powered by an internal battery while the vehicle engine is turned off. The internal battery may be charged by the vehicle while the engine is on and/or may be charged separately from the vehicle.
  • the mobile camera 300 may be connected to the server 200 at least in part by a wireless link forming a communication link 500.
  • a wireless link may for example comprise Wi-Fi™ (IEEE 802.11), GSM, GPRS, 3G, LTE, and/or 5G connectivity.
  • the communication link may also comprise a wired connection, for example an Ethernet (IEEE 802.3) connection, to a connectivity hub of the vehicle.
  • the vehicle connectivity hub may then provide a communication link 500 to the server 200, for example using Wi-Fi™ (IEEE 802.11), GSM, GPRS, 3G, LTE, 5G, or other connectivity channels.
  • the mobile camera 300 may comprise both wired and wireless connections, so that it can form a communication link 500 independently or make use of other available connections.
  • the communication link 500 might fail.
  • the communication link 500 might be unavailable in an area where there is no wireless network connectivity, or due to a hardware failure.
  • the mobile camera 300 might be notified internally of an unsuccessful data transmission and may save the first resolution live video data stream data in the loop buffer 330 and/or video data files in the mobile camera memory 320, for sending upon restoration of the communication link 500.
  • the mobile camera 300 may reduce the amount of second (i.e. higher) resolution video data stored in order to increase the amount of first (i.e. lower) resolution data stored by mobile camera 300, to avoid video data loss for a period of time.
  • Mobile camera 300 may continue to timestamp and geotag video data at the time of recording, for transmitting to the server 200 upon restoration of the communication link 500. If the communication link 500 fails for a prolonged period of time, and the mobile camera 300 runs out of memory for storing video data, older video data may be overwritten, and may be lost.
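The store-and-forward behaviour on link failure might be sketched as below: a bounded queue models the camera's limited memory, with the oldest buffered data overwritten first, and everything pending is flushed once the communication link 500 is restored. The class and method names are assumptions.

```python
from collections import deque

class StoreAndForward:
    """Buffer timestamped, geotagged data while the link is down; send it
    (oldest first) when the link is restored; overwrite the oldest data
    when memory runs out, as described above."""

    def __init__(self, max_items):
        self._pending = deque(maxlen=max_items)  # oldest entries are lost

    def submit(self, item, link_up, send):
        if link_up:
            self.flush(send)  # backlog first, preserving recording order
            send(item)
        else:
            self._pending.append(item)

    def flush(self, send):
        while self._pending:
            send(self._pending.popleft())
```

With room for two items, buffering three while offline loses the first; restoring the link then delivers the surviving backlog followed by new data.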
  • the communication link 500 may be present, but may be unable to transmit the desired amount of data as part of a live video data stream.
  • the server 200 may communicate over one or more wired or wireless communication links (including, but not necessarily limited to, the communication link 500).
  • the server 200 may communicate using Ethernet (IEEE 802.3), Wi-Fi™ (IEEE 802.11), GSM, GPRS, 3G, LTE, and/or 5G connections.
  • Abbreviations: Ethernet (IEEE 802.3); Wi-Fi™ (IEEE 802.11); GSM (Global System for Mobile communications); GPRS (General Packet Radio Service); 3G (third generation); LTE (Long Term Evolution); 5G (fifth generation).
  • the data sent over the communication links may be secured using the encryption of the respective connectivity standard.
  • the data transferred over the connection may further be encrypted using data encryption mechanisms known in the art.
  • the server 200 may receive a query 110 from an authorised party.
  • the server 200 may have (or have access to) a list of authorised parties from which it accepts and handles queries.
  • the list of authorised parties may include a law enforcement agency, such as the police.
  • An authorised party may have secure authentication credentials, which can be added to a query 110 so that the server 200 can authenticate the query 110.
  • the authentication credentials may further indicate to the server 200 the identity of the party presenting the query 110.
  • the query 110 may be sent by the authorised party to the server 200 over a communication link.
  • the communication link may be the same communication link 500 used by the server 200 to communicate with mobile cameras 300, or may be a separate communication link, for example a wired communication link.
  • the communication link used for transmitting one or both of queries 110 and query responses 120 may be a secure communication link.
  • Authentication credentials may be required to authorise a party. Parties which are not authorised may be prevented from presenting a query 110 to the server 200. Alternatively or additionally, a query 110 from a non-authorised party may lack authentication credentials, and the server 200 may as a result refuse to handle and respond to such a query 110.
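The description does not fix an authentication mechanism, so the following is one possible (assumed) scheme: an HMAC signature over the query body, checked against shared secrets held for each party on the authorised list. A valid signature both authorises the query and identifies the presenting party; anything else is refused.

```python
import hashlib
import hmac

# Illustrative list of authorised parties and their shared secrets.
AUTHORISED_KEYS = {"police": b"shared-secret"}

def authenticate_query(query):
    """Return the identity of the authorised party that signed the query,
    or None if the query should be refused."""
    party = query.get("party")
    key = AUTHORISED_KEYS.get(party)
    if key is None:
        return None  # not an authorised party
    expected = hmac.new(key, query["body"].encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, query.get("signature", "")):
        return party
    return None  # credentials missing or invalid
```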
  • a query 110 may further comprise information regarding the nature of the required response 120 to the query 110.
  • a query 110 may specify one or more destinations, or the format of the response.
  • Example queries may include a request to send video data corresponding to a specified time and location, a request to initiate and send a recording by a camera 300 in a particular location at a current or future time, a request to retrieve data from server 200, or a request to receive data stored on one or more mobile cameras 300.
  • a query 110 may specify further requirements. Data relating to the specified further requirements may be sent, to the server 200, along with time and location data by a mobile camera 300.
  • a query may specify an orientation of a camera 300 in one direction or a range of directions, which may be determined from data provided by a magnetometer. The instantaneous orientation of the camera 300 may be sent alongside time and location data by the mobile camera 300.
  • a mobile camera 300 may be owned by a camera-owning party, which may or may not be an authorised party. Different mobile cameras 300 may be owned by different parties.
  • a mobile camera 300 may have local controls which may allow the camera to be controlled by a party other than the server 200. For example, a mobile camera 300 may be controlled by the camera itself based on input and information received from one or more sensors 380, or may be controlled by a party owning and/or using mobile camera 300.
  • An authorised party may be separate from a party owning and/or using a mobile camera 300.
  • a party using a mobile camera 300 may be able to control the mobile camera 300 manually. For example, a user may be able to initiate or stop a live video data stream or a video data recording.
  • Figure 6 illustrates a flow chart of an example query 110 and response 120 by a server 200.
  • the server receives a plurality of live video data streams from a plurality of mobile cameras, wherein each live video data stream is timestamped and geotagged, thereby providing time and location data.
  • the server 200 stores the time and location data, and video data of the live video data stream, respectively, in a searchable data store 230.
  • the server 200 receives 610 a query 1 10 from an authorised party.
  • the query 110 specifies a time and location of interest.
  • the server 200 searches the data store 230 for time and location data corresponding to the specified time and location.
  • the server 200 identifies all entries in the data store 230 that have time and location data falling within a predetermined range around the specified time and location of the query 110.
  • For each identified entry of time and location data, the server transmits 612 the corresponding video data as part of a response 120 to the query 110.
  • In operation 614, the server 200 identifies a mobile camera 300 responsible for obtaining the video data satisfying the query 110.
  • Server 200 may determine that the identified mobile camera 300 has access to further information relevant to the query 110, for example video data at a higher resolution than the video data stored in the data store 230.
  • In operation 616, the server 200 sends instructions to the identified mobile camera 300, controlling it to transmit the further relevant information to the server.
  • this may involve the mobile camera 300 storing, in a video data file, the relevant video data at a higher resolution than the streamed video data.
  • the video data file may then be transmitted to the server 200 over a communication link 500, along with corresponding time and location data (for example, by timestamping and geotagging the video data).
  • the server 200 receives the further relevant information from mobile camera 300, which in this example may be the further video data.
  • the further received video data is then stored by server 200 in data store 230. Based on its corresponding time and location data, this further video data may be linked to other data corresponding to this video data, for example the video data of the live video data stream, and its corresponding time and location data.
  • the server 200 sends the further relevant data as part of a response 120 to the query 110, to the authorised party and/or to a destination specified in the query 110.
  • a mobile camera 300 may be configured to analyse video data.
  • the processor 310 of the mobile camera 300 may execute instructions that perform video data analysis.
  • Video data analysis may comprise processing one or more images in the video data recorded by the mobile camera 300.
  • Video data analysis can be performed using a suitable computer vision and/or machine learning technique.
  • the processor 310 may implement a trained neural network (e.g., a convolutional neural network) or a support vector machine in order to analyse video data.
  • Analysing video data may comprise identifying one or more image features that are present in the video data. Examples of image features that may be identified through video data analysis include vehicle registration plates, road signs, facial features of people visible in the video data, and speeds of other vehicles.
  • An advantage of analysing data at the mobile camera 300 instead of at the server 200 is that the mobile camera 300 has access to the recorded video data in a form that has not been compressed for transmission to the server 200.
  • the mobile camera 300 may thus be able to identify image features that would not be identifiable following lossy compression of the video data.
  • the image features that are identified by video data analysis at the mobile camera 300 may be sent to the server 200.
  • a video data stream may comprise metadata, which may comprise data identifying the mobile camera 300 recording the video stream, and data relating to the time and/or location of the captured video data.
  • the metadata may further include information indicative of image features identified by video data analysis.
  • a frame of the video data stream may be associated with metadata that indicates one or more image features identified in that frame.
  • the mobile cameras 300 may thus generate metadata relating to the identified image features, for providing to the server 200.
  • the server 200 may store the received metadata, including the information indicative of image features identified by video data analysis, in the data store 230.
  • the data store 230 can store the received metadata in such a way that it is searchable (e.g. in a searchable database) in order to allow a specific frame of video data to be retrieved based on the metadata associated with the frame.
  • where the metadata includes the time and/or location of the video data, the image features can be timestamped and/or geotagged. It is thus possible to search the data store 230 to identify the times and/or locations at which a particular image feature was identified in the video data and, optionally, to retrieve frames of the video data containing that image feature.
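The searchable, timestamped and geotagged feature store might be sketched as an in-memory index mapping each image feature to the frames, times, and locations at which it was identified. The class name and record layout are assumptions made for illustration.

```python
from collections import defaultdict

class FeatureIndex:
    """Index image-feature metadata per frame so that frames can be
    retrieved by feature, with each occurrence timestamped and geotagged."""

    def __init__(self):
        self._by_feature = defaultdict(list)

    def add_frame(self, frame_id, time, location, features):
        for feature in features:
            self._by_feature[feature].append(
                {"frame": frame_id, "time": time, "location": location}
            )

    def find(self, feature):
        """All times, locations and frame ids at which a feature was seen."""
        return list(self._by_feature.get(feature, []))
```

Looking up a registration plate then returns every indexed frame (with its time and location) in which that plate was identified.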
  • a server 200 may receive a query 110 for video data that specifies one or more image features.
  • the query 110 may specify image feature(s) alternatively or additionally to specifying a time and/or location.
  • the one or more specified image features may be used to identify video data that satisfies the query 110.
  • the server 200 may send the video data in which the image feature is present and/or metadata relating to the identified image feature.
  • the server 200 may also provide a location and/or a time at which the image feature was identified or recorded, which may be achieved using timestamps and/or geotags of the image feature and related video data. Parameters of the query 110 may be used to determine which data is provided in response to the query 110.
  • a query 110 may specify a number plate, and request any footage in which this identified feature is present, as well as the time and location of the video footage.
  • the server 200 may send video data (e.g. one or more frames), as well as timestamp data and geotag data corresponding to the image feature in the video data.
  • Analysing video data may comprise analysing a plurality of recorded images separately (i.e. individually) or together. A set of consecutive images may be analysed together to determine changes in the field of view of a mobile camera over time, for example to detect a moving vehicle or pedestrian.
  • a server 200 may transmit an instruction to a mobile camera 300, which instructs the mobile camera 300 to analyse the video data that it captures.
  • the mobile camera 300 receives the instruction to analyse video data.
  • the server 200 may transmit an instruction to turn video data analysis at a mobile camera 300 on or off.
  • the instruction may specify a time and/or location at which the mobile camera 300 should analyse data. Analysis may be performed at times specified by the server 200. For example, analysis may be performed continuously, periodically, at a specific time, over a specified time range, etc. Analysis may be performed at locations specified by the server 200.
  • the server 200 may instruct the mobile camera 300 to analyse video data when it is located in a specified area.
  • the instructions sent by the server 200 may specify a time or time range, and a location or area in which video data analysis should be performed by mobile camera 300. Alternatively or additionally, the instructions may instruct the mobile camera 300 to perform one or more specific types of video data analysis, thereby to identify one or more specific types of image feature in the video data. Operation 702 is optional, and the mobile camera 300 may perform video data analysis by default, without the need to be instructed to do so by the server 200.
  • the mobile camera 300 analyses video data. As a result of analysing the video data, at operation 706 the mobile camera 300 identifies one or more image features present in the video data. Examples of identified image features include a vehicle registration plate, a vehicle make (i.e. the name of the vehicle manufacturer), a vehicle model, a vehicle colour, a facial feature, a road sign, the position of another vehicle, the speed of another vehicle, and the direction of travel of another vehicle.
  • the mobile camera 300 may generate metadata for each of the identified image features.
  • the server 200 may control the mobile camera 300 regarding the actions taken in relation to identified image features.
  • the control may be achieved by transmitting instructions to the mobile camera 300, for example at operation 702 and/or in a separate request.
  • the server 200 may instruct a mobile camera 300 to perform analysis on video data to identify one or more image features.
  • the server 200 may further instruct the mobile camera 300 to transmit the one or more identified image features to the server 200.
  • the identified image features may be transmitted to the server in the form of metadata, which may be transmitted along with video data with which the metadata is associated.
  • the mobile camera 300 may be instructed to transmit all metadata relating to an identified image feature to the server 200, shown in operation 708.
  • the image feature metadata received by the server 200 may be stored in the data store 230, so that it is linked to the received video data and other received metadata.
  • the server 200 may also process the received metadata relating to an identified image feature. For example, the server 200 may compare the received metadata to one or more image features on a predetermined list. If the identified image feature matches a feature on the list, the metadata and/or video data associated with the metadata may be processed further. Specific implementations of processing identified image features will be described in more detail below.
  • the server 200 may send one or more selection criteria to the mobile camera 300.
  • the selection criteria allow the server 200 to request information relating to specific image features and/or to request video data in which those image features are present.
  • a selection criterion may include a type of image feature and, optionally, a corresponding value for that feature.
  • a selection criterion may specify that the image feature is a vehicle registration plate, and the value of the vehicle registration plate is “ABC 123”.
  • the mobile camera 300 checks identified image features against the selection criteria. If an identified image feature matches 712 the selection criteria, in operation 714 the mobile camera 300 transmits video data (e.g. one or more frames) in which the identified image feature is present to the server 200.
  • the mobile camera 300 transmits metadata comprising a value of the identified image feature to the server 200. If an identified image feature does not match 716 the selection criteria, video data in which the identified image feature is present and/or metadata comprising a value of the identified image feature may be stored locally at the mobile camera 300, in operation 718. Alternatively or in addition to being stored locally, video data and/or metadata not meeting the selection criteria may be deleted, or other actions may be taken (for example, the video data may be sent at lower resolution, or when there is a surplus in transmission capacity).
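The transmit-or-store decision described above (operations 710 to 718) can be sketched as follows. The criterion structure, field names and callback interface are illustrative assumptions, not part of the disclosure:

```python
# Sketch of camera-side handling of an identified image feature against
# server-supplied selection criteria. Field names ('type', 'value') and
# the transmit/store callbacks are illustrative assumptions.

def handle_identified_feature(feature, criteria, transmit, store_locally):
    """feature: dict such as {'type': 'registration_plate', 'value': 'ABC 123'}.
    criteria: list of dicts with a 'type' and an optional 'value'."""
    for criterion in criteria:
        if feature['type'] != criterion['type']:
            continue
        # A criterion may optionally pin a specific value (operation 712).
        if criterion.get('value') is None or criterion['value'] == feature['value']:
            transmit(feature)      # operation 714: send to the server 200
            return 'transmitted'
    store_locally(feature)         # operation 718: keep at the mobile camera 300
    return 'stored'
```

A criterion with no `value` acts as a wildcard for its feature type, which matches the "send all road sign information" behaviour described below.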
  • the mobile camera 300 may send the identified image features to the server 200 together with the video data stream (e.g. as part of the video data stream metadata), or separately from the video data stream.
  • An advantage of sending all identified features without using selection criteria is that less computing power is needed at the side of the mobile camera 300. This enables the use of cheaper, less complex mobile camera 300 devices.
  • Another advantage of not using selection criteria at the mobile camera 300 is that latency in providing data to the server 200 may be reduced.
  • an advantage of using selection criteria is that the amount of image feature metadata sent to the server 200 can be reduced. This may reduce the data transfer requirements for transmitting the video data stream and related metadata from the mobile camera 300 to the server 200.
  • a mobile camera 300 can receive different selection criteria for different types of image features. For example, a mobile camera 300 may be instructed to send all road sign information, but to only send facial feature data that matches a facial feature on a predetermined list provided to the mobile camera 300.
  • the type of image feature identified through video data analysis is a vehicle registration plate, also known as a number plate.
  • This identification process may be referred to as Automatic Number Plate Recognition (ANPR).
  • Identified image features that correspond to number plates may be referred to as identified number plates.
  • the server 200 may instruct the plurality of mobile cameras 300 to identify number plates (e.g. at operation 702).
  • the mobile cameras can be configured to identify number plates through the use of known computer vision techniques to detect the presence of a number plate in an image, and to extract alphanumeric characters therefrom.
  • the mobile cameras 300 may return all identified number plates to the server 200 (e.g. at operation 708), or may return only specific number plates matching a selection criterion to the server 200 (e.g. at operation 714).
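As one small illustration of the character-extraction step, OCR output can be post-filtered so that only plausible plate strings are reported. The current UK registration format used below is an assumption for illustration; real ANPR systems support many plate formats:

```python
import re

# Illustrative post-processing for ANPR: after computer vision techniques
# extract candidate character strings, keep only those matching a
# plausible plate format. The UK current-format pattern (two letters,
# two digits, optional space, three letters) is an assumed example.
UK_PLATE = re.compile(r'^[A-Z]{2}[0-9]{2}\s?[A-Z]{3}$')

def filter_plate_candidates(candidates):
    """Normalise OCR candidates and drop any that do not look like plates."""
    return [c for c in (s.strip().upper() for s in candidates)
            if UK_PLATE.match(c)]
```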
  • the server 200 may use the plurality of mobile cameras 300 to locate one or more predetermined vehicles.
  • the server 200 may have access to a list of vehicles of interest, which may include untaxed vehicles, stolen or missing vehicles, vehicles linked to criminal activities, or otherwise wanted vehicles.
  • the list of vehicles of interest may include the number plate of each vehicle of interest.
  • the server 200 may instruct one or more mobile cameras 300 to analyse video data to identify number plates.
  • the server 200 may instruct the mobile cameras 300 to transmit all identified number plates to the server 200.
  • the server may then check received identified number plates against the predetermined list.
  • the predetermined list may comprise sensitive or confidential data. As a result, it may be advantageous to check the identified number plate against the predetermined list at the server side, so that the predetermined list does not need to be provided to a mobile camera 300.
  • the server 200 may provide one or more selection criteria to the plurality of mobile cameras.
  • the selection criteria may, for example, comprise the list of vehicles of interest.
  • the mobile camera 300 may then check identified number plates against this list, and may transmit any identified number plates that match the predetermined list to the server 200.
  • the pre-transmission selection may reduce data transmission and storage capacity requirements.
  • the server 200 may also implement a combination of the two approaches described above, checking identified number plates at the server 200 side for some of the plurality of mobile cameras 300, and providing selection criteria to other mobile cameras 300.
  • image features such as identified number plates can be timestamped and/or geotagged. It is thus possible for the server 200 to identify the time and/or location at which a particular vehicle of interest was seen by the mobile cameras 300.
  • the location of a vehicle of interest can be tracked over a prolonged period of time because, even if the vehicle of interest moves out of the field of vision of one mobile camera, it may enter the field of vision of another mobile camera.
  • the server 200 may be used to track a vehicle of interest using a plurality of mobile cameras 300 in the vicinity of the vehicle of interest.
  • the server 200 may send instructions to the plurality of mobile cameras 300 to detect the vehicle of interest.
  • the instructions may comprise a selection criterion that includes the number plate of the vehicle of interest.
  • the instructions may further comprise a specified location, around a known or suspected location of the vehicle of interest; alternatively or in addition, the instructions may be sent to mobile cameras 300 that are in the known or suspected location of the vehicle of interest.
  • checking identified number plates against that of the vehicle of interest may be performed by the mobile camera 300 or by the server 200.
  • the server 200 may send updates to the instructions for tracking the vehicle of interest. Updates to the instructions may include an update to the location of the vehicle of interest, for example based on location information provided by one or more of the mobile cameras 300.
  • the server 200 may use the plurality of mobile cameras 300 to identify cases of vehicle cloning.
  • Vehicle cloning in this context refers to the copying of a number plate, such that one vehicle illicitly bears the number plate of another vehicle.
  • the server 200 may detect vehicle cloning if a number plate is identified by multiple mobile cameras 300 within a short time period, in multiple locations that would not be reachable by the same vehicle in that time period.
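The feasibility check just described can be sketched as a speed test between two geotagged, timestamped sightings of the same plate. The 200 km/h plausibility threshold is an illustrative assumption:

```python
import math

# Sketch of the cloning check: if the same number plate is sighted at two
# locations whose separation implies an implausible speed, flag a
# possible clone. The max_kmh threshold is an illustrative assumption.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def possible_clone(sighting_a, sighting_b, max_kmh=200.0):
    """Each sighting is (lat, lon, unix_time). True when the implied
    travel speed between the two sightings exceeds max_kmh."""
    dist = haversine_km(sighting_a[0], sighting_a[1], sighting_b[0], sighting_b[1])
    hours = abs(sighting_b[2] - sighting_a[2]) / 3600.0
    if hours == 0:
        return dist > 0  # same instant, different places
    return dist / hours > max_kmh
```

For example, the same plate seen in London and Manchester (roughly 260 km apart) thirty minutes apart implies over 500 km/h and would be flagged.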
  • a mobile camera 300 may be equipped to determine one or more further characteristics of a vehicle, such as the model of the vehicle, the make of the vehicle, the colour of the vehicle, or any other visible characteristics of the vehicle (e.g., damage to the vehicle, stickers or decorations on the vehicle, etc.).
  • a mobile camera 300 may determine such further characteristics for all vehicles, or the server 200 may provide one or more selection criteria regarding further characteristics.
  • the server 200 may request further characteristics for the vehicles having identified number plates matching a predetermined list of number plates of interest.
  • the server 200 may instruct the mobile cameras 300 to identify all white vehicles manufactured by Ford.
  • the network of mobile cameras 300 can be used to locate a vehicle that is suspected of being involved in an incident, even where the number plate of the vehicle is unknown.
  • the server 200 can use the number plate of a vehicle in combination with the further characteristics of the vehicle to detect when a number plate has been affixed to an incorrect vehicle.
  • the server 200 can query an authoritative data source (such as a database operated by a vehicle licensing or registration authority, or an insurance provider) to determine the make, model and/or colour of the vehicle for a given number plate.
  • the make, model and/or colour provided by the authoritative data source can be compared with the make, model and/or colour identified by a mobile camera 300. A discrepancy between the make, model and/or colour identified by a mobile camera 300 with that provided by the authoritative data source may indicate that the number plate has been affixed to an incorrect vehicle.
  • the server 200 can alert a law enforcement authority to the possibility of criminal activity.
  • the alert can include video data (e.g. one or more frames) that includes the vehicle bearing an incorrect number plate, and can optionally further include the time and/or location at which that vehicle was identified.
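The discrepancy check against the authoritative data source can be sketched as below; `registry_lookup` stands in for the query to a licensing or registration authority, and its interface is an assumption:

```python
# Sketch of the plate-consistency check: compare the make, model and
# colour identified by a mobile camera with the record held by an
# authoritative data source. `registry_lookup` is a stand-in whose
# interface (plate -> record dict or None) is assumed for illustration.

def check_plate_consistency(plate, observed, registry_lookup):
    """observed: dict with any of 'make', 'model', 'colour' identified by
    a mobile camera. Returns the list of fields that disagree with the
    registry record; an empty list means no discrepancy detected."""
    record = registry_lookup(plate)
    if record is None:
        return ['unknown_plate']
    return [field for field in ('make', 'model', 'colour')
            if field in observed and observed[field].lower() != record[field].lower()]
```

Any non-empty result could then trigger the alert to a law enforcement authority described above, accompanied by the relevant frames, time and location.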
  • the type of image feature identified through video data analysis is a road sign.
  • a mobile camera 300 is configured to detect the presence of a road sign.
  • the mobile camera 300 may further be configured to interpret the meaning of an identified road sign.
  • the mobile camera 300 may be configured to determine a speed limit indicated by the road sign.
  • the camera may be configured to determine a warning, prohibition, instruction or other information conveyed by the road sign.
  • road sign interpretation may be performed by the server 200. Interpretation of road signs can be achieved through the use of known computer vision techniques to analyse the visual content (e.g. alphanumeric characters or drawings) in the road sign.
  • Data relating to road sign detection and/or interpretation may be collected by a server 200 by sending instructions to one or more mobile cameras 300 to analyse video data to identify road signs.
  • a mobile camera 300 may transmit all identified road sign data to the server 200.
  • the server 200 may use one or more selection criteria to set conditions for which road sign information to receive from the mobile cameras 300.
  • a server 200 may send a selection criterion to cause the mobile cameras 300 to only transmit road sign data related to speed limits.
  • a server 200 may control a plurality of mobile cameras 300 to build and maintain an up-to-date road sign library.
  • the server 200 builds and maintains an up-to-date speed limit library, based on identified speed limit signs.
  • image features such as road signs can be timestamped and/or geotagged.
  • the geotags can be used to associate an identified road sign with a particular road.
  • the timestamps can be used to update the library, by identifying old road sign information and replacing it with newly-identified road sign information.
  • Mobile cameras 300 may also capture up-to-date information from smart road systems, such as smart motorways. For example, the mobile cameras 300 may provide frequent updates to the road sign library by interpreting variable message road signs (e.g. road signs setting a variable speed limit) on a smart road system.
  • a server 200 may determine the frequency and locations at which mobile cameras 300 identify road signs. For example, in locations with variable message road signs, a server 200 may instruct mobile cameras 300 to always analyse road sign data, in order to maintain up to date road sign information. In other locations, server 200 may instruct mobile cameras 300 to analyse data periodically, for example once a day or once a week.
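The timestamp-based replacement rule for the library can be sketched as follows. Keying the library by a rounded (lat, lon) cell is an illustrative assumption about how a geotag is associated with a particular sign:

```python
# Sketch of the road sign library update rule: a newly identified sign
# replaces the stored entry for the same geotagged position only when its
# timestamp is newer. Rounding the geotag to 4 decimal places (~11 m
# cells) to form the key is an illustrative assumption.

def update_sign_library(library, sign):
    """library: dict keyed by rounded (lat, lon).
    sign: dict with 'lat', 'lon', 'timestamp' and 'meaning' keys."""
    key = (round(sign['lat'], 4), round(sign['lon'], 4))
    current = library.get(key)
    if current is None or sign['timestamp'] > current['timestamp']:
        library[key] = sign
        return True   # old road sign information replaced
    return False      # stored entry is newer; keep it
```

On a smart motorway, frequent re-identification of a variable message sign at the same cell would simply keep overwriting the entry with the latest value.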
  • the type of image feature identified through video data analysis is a facial feature of a person visible in the video data captured by a mobile camera.
  • the term “facial feature” refers to any information derived from image analysis of a human face in the video data.
  • a facial feature may include video data (e.g. a frame, or a portion of a frame) in which a face has been detected.
  • a facial feature may include biometric information that can be used to identify a person whose face is present in the video data.
  • a mobile camera 300 is configured to detect the presence of a face in the video, using known computer vision techniques. After detecting a face, the mobile camera 300 may send a frame of video data containing the face (or a portion of the frame that has been cropped to the boundaries of the face) to the server 200 for further analysis of the detected face. Alternatively or in addition, the mobile camera 300 itself may perform further analysis of the detected face. Further analysis of the detected face may include determining biometric information from the image of the face, using known computer vision techniques. As mentioned above, image features such as facial features can be timestamped and/or geotagged, thus allowing the location of a particular person at a particular time to be determined. The server 200 can instruct one or more mobile cameras 300 to identify facial features.
  • the instructions to identify facial features may be location specific and/or time specific. For example, an area-based selection criterion for a particular facial feature may be sent to the plurality of mobile cameras 300. In response, mobile cameras 300 in the specified area may analyse video data and check identified facial features against the particular facial feature specified by the selection criterion. A mobile camera 300 may send identified facial features to the server 200.
  • the selection criteria may also specify a time range in which to analyse video data in an area.
  • the server 200 and plurality of cameras 300 may be used to verify or follow up on a reported sighting of a specified person in a particular area, e.g. a wanted person or a missing person. A sighting may be reported to the server 200.
  • the server 200 may send instructions to a plurality of mobile cameras 300 to analyse video data for facial features in the particular area, starting immediately, either for a specified duration or until a stop instruction is sent.
  • the plurality of mobile cameras 300 may transmit identified facial features to the server 200.
  • the server 200 may use this data to locate and/or track the specified person.
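The area- and time-limited instruction described above can be sketched as a simple gate that a camera evaluates before analysing video. Modelling the area as a latitude/longitude bounding box is an illustrative assumption:

```python
# Sketch of the location- and time-specific instruction check: a mobile
# camera analyses video for facial features only while it is inside the
# specified area and within the specified time window. The bounding-box
# area model and field names are illustrative assumptions.

def should_analyse(camera_lat, camera_lon, now, instruction):
    """instruction: dict with 'min_lat', 'max_lat', 'min_lon', 'max_lon',
    'start_time' and optional 'end_time' (None = analyse until a stop
    instruction is received from the server)."""
    in_area = (instruction['min_lat'] <= camera_lat <= instruction['max_lat']
               and instruction['min_lon'] <= camera_lon <= instruction['max_lon'])
    started = now >= instruction['start_time']
    end = instruction.get('end_time')
    not_ended = end is None or now <= end
    return in_area and started and not_ended
```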
  • the type of image feature identified through video data analysis is the speed of other vehicles.
  • the term “other vehicle” refers to a different vehicle from that in which a mobile camera is located, although it should be understood that the mobile camera does not need to be located in a vehicle in order to identify the speed of other vehicles.
  • a mobile camera 300 may be configured to infer speed of other vehicles using computer vision techniques.
  • One or more mobile cameras 300 may analyse video data to identify another vehicle, for example using ANPR as described above.
  • the one or more mobile cameras 300 may track the movement of the identified vehicle over time, for example by estimating a position and change in position over time relative to each of the one or more mobile cameras 300.
  • the one or more mobile cameras 300 may make use of other data, for example the speed and/or location of the vehicle in which a mobile camera 300 is located, in order to estimate the speed of the identified vehicle. This may, for example, be used to detect potential speeding offences.
  • a mobile camera 300 may send the number plate and the location of a speeding vehicle to the server 200, whereupon a law enforcement agency can be alerted to the offence.
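A one-dimensional sketch of the speed estimate described above: the camera tracks the other vehicle's distance ahead over an interval and adds the rate of change of that gap to its own vehicle's speed. Treating motion as along a single axis, and the 5 km/h tolerance margin, are simplifying assumptions:

```python
# Sketch of estimating another vehicle's speed from the camera vehicle's
# own speed plus the rate of change of the estimated gap between them.
# The single-axis motion model is a simplifying assumption.

def estimate_other_speed(own_speed_kmh, gap_start_m, gap_end_m, dt_s):
    """gap_*: estimated distance to the other vehicle (metres) at the
    start and end of the interval; dt_s: interval length (seconds)."""
    gap_rate_ms = (gap_end_m - gap_start_m) / dt_s  # +ve: pulling away
    return own_speed_kmh + gap_rate_ms * 3.6        # m/s -> km/h

def is_speeding(estimated_kmh, limit_kmh, tolerance_kmh=5.0):
    """Tolerance margin before flagging is an illustrative assumption."""
    return estimated_kmh > limit_kmh + tolerance_kmh
```

The speed limit itself could come from the road speed library described earlier, keyed by the camera's current geotag.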
  • the mobile camera 300 may be configured to infer the direction of travel of other vehicles.
  • the direction of travel of other vehicles can be inferred using similar computer vision techniques to those used to infer the speed of other vehicles, in combination with knowledge of the mobile camera’s own direction of travel (which can be derived from changes to its location over a period of time).
  • the mobile camera 300 can send the direction of travel of another vehicle to the server 200 (optionally along with the number plate, location and/or speed of the other vehicle).
  • image features other than those listed above may be identified through video data analysis, and that the methods described above for handling and processing identified features also apply. It will also be appreciated that the various examples of image features described above can be combined.
  • the network may also comprise one or more stationary cameras, for example one or more pan-tilt-zoom cameras.
  • a stationary camera can be understood to be a camera that remains substantially in the same location during normal use, but may be moveable, for example to be brought from one location to another.
  • the methods disclosed herein can be performed by instructions stored on a processor-readable medium.
  • the processor-readable medium may be: a read-only memory (including a PROM, EPROM or EEPROM); random access memory; a flash memory; an electrical, electromagnetic or optical signal; a magnetic, optical or magneto-optical storage medium; one or more registers of a processor; or any other type of processor-readable medium.
  • the present disclosure can be implemented as control logic in hardware, firmware, software or any combination thereof.
  • the apparatuses disclosed herein may be implemented by dedicated hardware, such as one or more application-specific integrated circuits (ASICs) or appropriately connected discrete logic gates.
  • a suitable hardware description language can be used to implement the methods described herein with dedicated hardware.

Abstract

Disclosed is a method of controlling a mobile camera (300), performed at a server (200). The server (200) receives a plurality of live video data streams from one or more mobile cameras (300), and stores data indicative of a time and location at which each live video data stream was recorded. The server (200) receives a request (110) for video data, the request specifying a time and a location. The server identifies a mobile camera (300) that recorded a video data stream at a time and location satisfying the request, and sends one or more instructions to the identified mobile camera (300). The instructions are configured to control the identified mobile camera (300).
PCT/GB2019/052995 2018-10-22 2019-10-21 Method and apparatus for controlling a mobile camera WO2020084288A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB2107356.4A GB2593368B (en) 2018-10-22 2019-10-21 Method and apparatus for controlling a mobile camera

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
GB1817173.6 2018-10-22
GB1817173.6A GB2578311A (en) 2018-10-22 2018-10-22 Method and apparatus for controlling a mobile camera
GBGB1901720.1A GB201901720D0 (en) 2018-10-22 2019-02-07 Method and apparatus for controlling a mobile camera
GB1901720.1 2019-02-07

Publications (1)

Publication Number Publication Date
WO2020084288A1 true WO2020084288A1 (fr) 2020-04-30

Family

ID=64453740

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2019/052995 WO2020084288A1 (fr) 2019-10-21 Method and apparatus for controlling a mobile camera

Country Status (2)

Country Link
GB (3) GB2578311A (fr)
WO (1) WO2020084288A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130039542A1 (en) * 2009-04-28 2013-02-14 Whp Workflow Solutions, Llc Situational awareness
US20150189466A1 (en) * 2014-01-01 2015-07-02 Ouri Shifman Method for providing on-demand digital representations to mobile devices in other geographic locations
US20170054948A1 (en) * 2015-08-21 2017-02-23 Trimble Navigation Limited On-demand system and method for retrieving video from a commercial vehicle

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4926400B2 (ja) * 2004-12-27 2012-05-09 Kyocera Corporation Mobile camera system
US20170019581A1 (en) * 2015-07-16 2017-01-19 Symbol Technologies, Llc Arrangement for, and method of, reducing server processing requirements of a host server
KR101689621B1 (ko) * 2015-07-22 2016-12-27 Pittasoft Co., Ltd. Vehicle video recording device with dual Wi-Fi connectivity and video sharing system using the same
GB2577689B (en) * 2018-10-01 2023-03-22 Digital Barriers Services Ltd Video surveillance and object recognition

Also Published As

Publication number Publication date
GB2593368B (en) 2023-03-01
GB201901720D0 (en) 2019-03-27
GB2578311A (en) 2020-05-06
GB202107356D0 (en) 2021-07-07
GB201817173D0 (en) 2018-12-05
GB2593368A (en) 2021-09-22

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19791353; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 202107356; Country of ref document: GB; Kind code of ref document: A; Free format text: PCT FILING DATE = 20191021)
122 Ep: pct application non-entry in european phase (Ref document number: 19791353; Country of ref document: EP; Kind code of ref document: A1)