WO2019136107A1 - Scrub and playback of video buffer over wireless - Google Patents


Info

Publication number
WO2019136107A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
mobile device
playlists
client device
camera
Prior art date
Application number
PCT/US2019/012109
Other languages
French (fr)
Inventor
Andrew Hodge
Nathan Ackerman
Original Assignee
Owl Cameras, Inc.
Priority date
Filing date
Publication date
Application filed by Owl Cameras, Inc.
Priority to CA3087335A (CA3087335A1)
Priority to EP19736083.7A (EP3714605B1)
Priority to US16/957,062 (US11303967B2)
Publication of WO2019136107A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43 Querying
    • G06F16/438 Presentation of query results
    • G06F16/4387 Presentation of query results by the use of playlists
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223 Cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/432 Content retrieval operation from a local storage medium, e.g. hard-disk
    • H04N21/4325 Content retrieval operation from a local storage medium, e.g. hard-disk by playing back content from the storage medium
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/436 Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N21/43615 Interfacing a Home Network, e.g. for connecting the client to a plurality of peripherals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/436 Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N21/4363 Adapting the video stream to a specific local network, e.g. a Bluetooth® network
    • H04N21/43637 Adapting the video stream to a specific local network, e.g. a Bluetooth® network involving a wireless protocol, e.g. Bluetooth, RF or wireless LAN [IEEE 802.11]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44004 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving video buffer management, e.g. video decoder buffer or video display buffer
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/482 End-user interface for program selection
    • H04N21/4825 End-user interface for program selection using a list of items to be played back in a given order, e.g. playlists
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/63 Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/643 Communication protocols
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456 Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W12/00 Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W12/04 Key management, e.g. using generic bootstrapping architecture [GBA]
    • H04W12/047 Key management, e.g. using generic bootstrapping architecture [GBA] without using a trusted network node as an anchor
    • H04W12/0471 Key exchange
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W12/00 Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W12/50 Secure pairing of devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W12/00 Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W12/60 Context-dependent security
    • H04W12/69 Identity-dependent
    • H04W12/77 Graphical identity
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H04W4/025 Services making use of location information using location based information parameters
    • H04W4/027 Services making use of location information using location based information parameters using movement velocity, acceleration information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30 Services specially adapted for particular environments, situations or purposes
    • H04W4/40 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/80 Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W84/00 Network topologies
    • H04W84/02 Hierarchically pre-organised networks, e.g. paging networks, cellular networks, WLAN [Wireless Local Area Network] or WLL [Wireless Local Loop]
    • H04W84/10 Small scale networks; Flat hierarchical networks
    • H04W84/12 WLAN [Wireless Local Area Networks]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W84/00 Network topologies
    • H04W84/18 Self-organising networks, e.g. ad-hoc networks or sensor networks

Definitions

  • This disclosure generally relates to video-based data collection systems, and more specifically to securely pairing video capturing devices with mobile devices.
  • A feature of the Ring system is called “Ring Neighborhoods” (described at https://ring.com/neighborhoods).
  • a user can set a radius around the user’s home equipped with Ring cameras and automatically get notified when other users within that radius share videos on the Ring platform. Users can share any video they believe may be interesting to other users in the neighborhood.
  • these systems do not provide a simple and secure approach to pair the cameras with the mobile device running an app to access the cameras’ recordings.
  • Safety cameras for backing up and side-view cameras are becoming commonplace.
  • security camera systems record video from both inside and outside the vehicle for safety and management purposes.
  • Safety Track of Belleville, Michigan provides a 2-channel dash camera system equipped with a 3G/4G cellular dongle that connects to the camera system via USB for streaming video from the vehicle in real time (described at
  • a video data collection and sharing platform is provided.
  • a method for wirelessly accessing video objects stored in a buffer of a video capturing device includes detecting an authorized mobile device in physical proximity to the video capturing device and activating a wireless hotspot mode in the video capturing device.
  • the wireless hotspot mode includes network parameters to identify the access point of a wireless network and to provide secure access to the wireless network.
  • the access parameters are provided to the authorized mobile device, for example during the authorization or pairing process. Alternatively, the access parameters may be provided at a different time.
  • the access point receives a request to join the wireless network from the authorized mobile device. The request may be based, at least in part, on the network parameters.
  • a plurality of playlists are generated, each playlist listing a plurality of video objects from the video objects stored in the buffer of the video capturing device for enabling access to the plurality of video objects, and each playlist configured to enable playback of each video object listed in sequence and without significant interruption.
  • the playlists may be provided to the authorized mobile device via the wireless network, for example, upon request.
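  • For illustration only, the following minimal Python sketch (names such as build_playlists and SEGMENT_SECONDS are hypothetical, not from the disclosure) shows one way such playlists could be generated from a buffer of fixed-duration video objects, with each playlist covering roughly 15 minutes:

```python
# Illustrative sketch: group buffered video-object URIs into HLS-style media
# playlists. All names and constants are hypothetical.
SEGMENT_SECONDS = 2          # each buffered video object holds 2 s of video
SEGMENTS_PER_PLAYLIST = 450  # 450 x 2 s = 15 minutes per playlist

def build_playlists(segment_uris):
    """Group buffered segment URIs into media playlists."""
    playlists = []
    for i in range(0, len(segment_uris), SEGMENTS_PER_PLAYLIST):
        chunk = segment_uris[i:i + SEGMENTS_PER_PLAYLIST]
        lines = ["#EXTM3U", "#EXT-X-VERSION:3",
                 f"#EXT-X-TARGETDURATION:{SEGMENT_SECONDS}"]
        for uri in chunk:
            lines.append(f"#EXTINF:{SEGMENT_SECONDS:.1f},")
            lines.append(uri)
        lines.append("#EXT-X-ENDLIST")  # each playlist covers a closed range
        playlists.append("\n".join(lines))
    return playlists
```

Because every playlist lists its video objects in capture order, a standard streaming client can play each listed object in sequence without significant interruption, as described above.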
  • a method for wirelessly accessing video objects stored in a buffer of a video capturing device includes receiving access parameters for accessing a video capturing device in a wireless hotspot mode.
  • the access parameters may include for example network parameters to identify an access point of a wireless network and to securely access the wireless network.
  • the method includes sending a request to join the wireless network to the access point in the video capturing device based, at least in part, on the network parameters.
  • Several playlists are received from the video capturing device via the wireless network. Each playlist may list a set of video objects from the video objects stored in the buffer of the video capturing device to enable access to the video objects.
  • each playlist may be configured to enable playback of each video object listed in sequence and without significant interruption.
  • the method further includes requesting access to a video object stored in the buffer of the video capturing device based on user scrubbing input and a playlist of the plurality of playlists.
  • a master playlist is also generated and may be sent and received.
  • the master playlist lists the other playlists, each of which lists a subset of video objects identified in a list as an advertisement according to a streaming protocol.
  • Each of the playlists may be organized according to the HTTP Live Streaming protocol and each video object may be identified using a #tag.
  • each video object identified in the list may consist of between 1 and 10 seconds of recorded video. Further, the list in each of the playlists may identify video objects corresponding to between 10 and 20 minutes of video.
  • the master playlist may be sent to the authorized mobile device via the wireless network.
  • the playlists are sent upon receiving requests from the authorized mobile device for two or more of the playlists listed in the master playlist.
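  • To make the master-playlist idea concrete, here is a hedged sketch using standard HLS variant-stream entries (the disclosure's exact tagging of each sub-playlist, e.g., as an advertisement, may differ; build_master_playlist is a hypothetical name):

```python
# Sketch: a master playlist that points at the per-interval media playlists.
def build_master_playlist(playlist_uris, bandwidth=2_000_000):
    lines = ["#EXTM3U"]
    for uri in playlist_uris:
        # Each buffered interval appears as one entry the client can request.
        lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth}")
        lines.append(uri)
    return "\n".join(lines)

# e.g., build_master_playlist(["buffer_000.m3u8", "buffer_001.m3u8"])
```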
  • receiving the plurality of playlists may include sending a plurality of requests to the video capturing device. These requests may be based on user scrubbing input on a video playback control on the mobile device. For example, in one embodiment the scrubbing input may be based on a timeline-based user interface related to the time when the video objects were captured by the video capturing device. In another embodiment, the scrubbing input is based on a map-based user interface related to the location where the video objects were captured by the video capturing device.
  • a method may also include displaying a map-based user interface on a display of the mobile device.
  • the map-based user interface may provide a map with indications representative of locations where the video objects stored in the buffer of the video capturing device were captured.
  • video objects that are sent in response to a request may include video data, time data, and location data.
  • a method may also include displaying the video data on the screen of the mobile device along with a map comprising an indicator corresponding to the location data.
  • a first video object is provided in response to a request from the authorized mobile device, and a second video object listed in sequence after the first video object in a playlist is also automatically sent after the first video object in response to the request.
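  • As a concrete illustration of the scrubbing step, a small sketch (hypothetical helper, not from the disclosure) that maps a scrub position on the buffered timeline to the playlist and video object to request:

```python
# Sketch: translate a scrub position (0.0-1.0 along the buffered timeline)
# into (playlist index, segment index) to request from the camera device.
def scrub_to_target(position, total_segments, segments_per_playlist=450):
    seg = min(int(position * total_segments), total_segments - 1)
    return seg // segments_per_playlist, seg % segments_per_playlist

# e.g., scrubbing to 50% of a 24 h buffer of 2 s objects (43,200 segments)
# yields playlist 48, segment 0 with the constants above.
```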
  • FIG. 1 illustrates an exemplary video-based data capture and analysis system according to one embodiment of the disclosure.
  • FIG. 2 is a functional block diagram of a client device according to one embodiment of the disclosure.
  • FIG. 3 is a block diagram of a dash camera client device according to one embodiment.
  • FIG. 4a shows a graphical user interface (GUI) for a “clips pane” in a mobile app on a mobile device according to one embodiment.
  • FIG. 4b shows a graphical user interface (GUI) for a “camera pane” in a mobile app on a mobile device according to one embodiment.
  • FIG. 4c shows a graphical user interface (GUI) for a “news pane” in a mobile app on a mobile device according to one embodiment.
  • FIG. 4d shows a graphical user interface (GUI) for a “camera pane” in a mobile app on a mobile device according to another embodiment.
  • FIG. 4e shows a graphical user interface (GUI) for a “camera pane” in a mobile app on a mobile device according to another embodiment.
  • FIG. 5 is a flow chart illustrating a method of video data collection according to one embodiment.
  • FIG. 6a is a flow chart illustrating a method for cloud-based data collection and analysis of event-based data according to one embodiment.
  • FIG. 6b illustrates a data model for capturing metadata associated with a given video data object or file according to one embodiment.
  • FIG. 6c illustrates a data model for capturing metadata associated with a given event-based video clip according to one embodiment.
  • FIG. 7 is a flow chart illustrating a method for generating event-based video clips according to one embodiment.
  • FIG. 8 is a flow chart illustrating a method for setting up a client device according to one embodiment.
  • FIG. 9 is a flow chart illustrating a method for obtaining a mobile app from a mobile device according to one embodiment.
  • FIG. 10 is a flow chart illustrating a method for setting up and pairing a client device and a mobile device according to another embodiment.
  • FIG. 11 is a flow chart illustrating a method for scrubbing or playing back high-quality video data stored in a buffer over wireless communication according to one embodiment.
  • Client device 101 is a dedicated data capture and recording system suitable for installation in a vehicle.
  • client device 101 is a video-based dash camera system designed for installation on the dashboard or windshield of a car.
  • Client device 101 is connected to cloud-based system 103.
  • cloud-based system 103 includes a server system 102 and network connections, such as, for example, Internet connections.
  • cloud-based system 103 is a set of software services and programs operating in a public data center, such as an Amazon Web Services (AWS) data center, a Google Cloud Platform data center, or the like.
  • Cloud-based system 103 is accessible via mobile device 104 and web-based system 105.
  • mobile device 104 includes a mobile device, such as an Apple iOS based device, including iPhones, iPads, or iPods, or an Android based device, like a Samsung Galaxy smartphone, a tablet, or the like. Any such mobile device includes an application program or app running on a processor.
  • Web-based system 105 can be any computing device capable of running a Web browser, such as, for example, a Windows™ PC or tablet, Mac computer, or the like.
  • Web-based system 105 may provide access to information or marketing materials of a system operator for new or potential users.
  • Web-based system 105 may also optionally provide access to users via a software program or application similar to the mobile app further described below.
  • system 100 may also include one or more auxiliary camera modules 106.
  • auxiliary camera module 106 may be implemented as a client device 101 and operate the same way.
  • auxiliary camera module 106 is a version of client device 101 with a subset of components and functionality.
  • auxiliary camera module 106 is a single camera client device 101.
  • Connection 107 is a cellular-based wireless packet data connection, such as a 3G, 4G, LTE, 5G, or similar connection.
  • Connections 108a- 108c between other system components and cloud-based system 103 are Internet-based connections, either wired or wireless.
  • mobile device 104 may at different times connect to cloud-based system 103 via Wi-Fi (i.e., any IEEE 802.11-based connection or similar technology) and cellular data (e.g., using 4G, LTE, or the like).
  • Web-based system 105 is connected to cloud-based system 103 over the World Wide Web using a wired Internet connection, such as DSL, cable modem, or the like.
  • auxiliary camera module 106 is connected to cloud-based system 103 via a Wi-Fi connection to a home router connected to the Internet via cable modem, DSL, or the like. Any combination of available connections can be used to connect any of the system components to cloud-based system 103 via the Internet or similar networks.
  • Referring to FIG. 2, a functional system diagram for a client device 101 according to one embodiment is shown. Different embodiments may include a subset of the components shown in FIG. 2 and/or other components not shown. In alternative embodiments, the components shown in FIG. 2 (as well as additional components not shown, such as, for example, HDMI modules, battery charger and/or power supply modules, and the like) may be part of a System-on-Chip (SoC) device, multiple chips on a board, ASICs, or the like.
  • the client device 101 includes a microprocessor 201 connected to a data bus 202 and to a memory device 203 and additional functional modules.
  • microprocessor 201 is a Qualcomm Snapdragon MSM8953, but other microprocessors may be used to implement the invention, such as, for example, other Qualcomm processors, ARM Cortex A8/9 processors, Nvidia’s Tegra processors, Texas Instruments OMAP processors, or the like.
  • the microprocessor 201 executes operating system software, such as Linux, Android, iOS, or the like, firmware, drivers, and application software.
  • the client device 101 in this exemplary embodiment includes a location module 204, a wireless transceiver module 205, an audio I/O module 206, a video module 207, a touchscreen module 208, a sensor module 209, and an I/O module 210.
  • the different modules are implemented in hardware and software modules. In alternative embodiments, these modules can be hardware, software, or a combination of both.
  • alternative embodiments may be provided with one or more central processor (“CPU”) cores on an SoC also including a wireless modem, multimedia processor, security and optionally other signal co-processors, such as for example, one or more graphics processor unit (“GPU”) cores, one or more holographic processing unit (“HPU”) cores, and/or one or more vision processing units (“VPU”).
  • one or more SoC processors used to embody the invention may encompass CPUs, GPUs, VPUs, HPUs, and other co-processors, motherboard buses, memory controllers, screen controllers, sound chipsets, camera modules, on-board memory, and several peripheral devices, including for example cellular, Wi-Fi, and Bluetooth transceivers, as further described below.
  • Alternative embodiments include modules as discrete components on a circuit board interconnected by bus 202, or a combination of discrete components and one or more SoC modules with at least some of the functional modules built in.
  • location module 204 may include one or more satellite receivers to receive and decode signals from location satellite systems, such as Global Positioning System (“GPS”), Global Navigation Satellite System (“GLONASS”), and/or BeiDou satellite systems.
  • location module 204 is a Qualcomm QTR2965 or Qualcomm QGR7640 receiver that connects to a GPS antenna for receiving GPS satellite signals and providing geographical coordinates (latitude and longitude) of the location of the client device 101.
  • the wireless transceiver module 205 includes a cellular modem, e.g., compliant with 3G/UMTS, 4G/LTE, 5G or similar wireless cellular standards, a Wi-Fi transceiver, e.g., compliant with IEEE 802.11 standards or similar wireless local area networking standards, and a Bluetooth transceiver, e.g., compliant with the IEEE 802.15 standards or similar short-range wireless communication standards.
  • the wireless transceiver module 205 is a Sierra Wireless HL7588.
  • the audio I/O module 206 includes an audio codec chipset with one or more analog and/or digital audio input and output ports and one or more digital-to-analog converters and analog-to-digital converters and may include one or more filters, sample rate converters, mixers, multiplexers, and the like.
  • a Qualcomm WCD9326 chipset is used, but alternative audio codecs may be used.
  • video module 207 includes a DSP core for video image processing with video accelerator hardware for processing various video compression formats and standards, including for example, MPEG-2, MPEG-4, H.264, H.265, and the like.
  • video module 207 is integrated into an SoC
  • client device 101 includes an integrated GPU inside the Qualcomm MSM8953 but alternative embodiments may include different implementations of video module 207.
  • the touchscreen module 208 is a low-power touchscreen sensor integrated circuit with a capacitive touchscreen controller as is known in the art. Other embodiments may implement touchscreen module 208 with different components, such as single-touch sensors, multi-touch sensors, capacitive sensors, resistive sensors, and the like.
  • the touchscreen module 208 includes an LCD controller for controlling video output to the client device’s LCD screen.
  • the LCD controller may be integrated into touchscreen module 208 or, in alternative embodiments, be provided as part of video module 207, as a separate module on its own, or distributed among various other modules.
  • sensor module 209 includes controllers for multiple hardware and/or software-based sensors, including, accelerometers, gyroscopes, magnetometers, light sensors, gravity sensors, geomagnetic field sensors, linear acceleration sensors, rotation vector sensors, significant motion sensors, step counter sensors, step detector sensors, and the like.
  • sensor module 209 is an InvenSense ICM-20608.
  • Alternative implementations of sensor module 209 may be provided in different embodiments.
  • sensor module 209 is an integrated motion sensor MEMS device that includes one or more multi-axis accelerometers and one or more multi-axis gyroscopes.
  • Client device 101 may also include one or more I/O modules 210.
  • I/O module 210 includes a Universal Serial Bus (USB) controller, a Controller Area Network (CAN bus) and/or a LIN (Local Interconnect Network) controller.
  • client device 101 also includes a touchscreen 211.
  • touchscreen 211 may be a capacitive touch array controlled by touchscreen module 208 to receive touch input from a user.
  • Other touchscreen technology may be used in alternative embodiments of touchscreen 211, such as for example, force sensing touch screens, resistive touchscreens, electric-field tomography touch sensors, radio-frequency (RF) touch sensors, or the like.
  • user input may be received through one or more microphones 212.
  • microphone 212 is a digital microphone connected to audio module 206 to receive user spoken input, such as user instructions or commands.
  • Microphone 212 may also be used for other functions, such as user communications, audio component of video recordings, or the like.
  • Client device 101 may also include one or more audio output devices 213, such as speakers or speaker arrays.
  • audio output devices 213 may include other components, such as an automotive speaker system, headphones, stand-alone “smart” speakers, or the like.
  • Client device 101 can also include one or more cameras 214, one or more sensors 215, and a screen 216.
  • client device 101 includes two cameras 214a and 214b.
  • Each camera 214 is a high-definition CMOS-based imaging sensor camera capable of recording video in one or more video modes, including, for example, high-definition formats, such as 1440p, 1080p, 720p, and/or ultra-high-definition formats, such as 2K (e.g., 2048x1080 or similar), 4K or 2160p, 2540p, 4000p, 8K or 4320p, or similar video modes.
  • Cameras 214 record video using variable frame rates, such as, for example, frame rates between 1 and 300 frames per second.
  • cameras 214a and 214b are Omnivision OV-4688 cameras.
  • Alternative cameras 214 may be provided in different embodiments capable of recording video in any combinations of these and other video modes.
  • CMOS sensors or CCD image sensors may be used.
  • Cameras 214 are controlled by video module 207 to record video input as further described below.
  • a single client device 101 may include multiple cameras to cover different views and angles.
  • client device 101 may include a front camera, side cameras, back cameras, inside cameras, etc.
  • Client device 101 can include one or more sensors 215.
  • sensors 215 may include one or more hardware and/or software-based sensors, including, accelerometers, gyroscopes, magnetometers, light sensors, gravity sensors, geomagnetic field sensors, linear acceleration sensors, rotation vector sensors, significant motion sensors, step counter sensors, step detector sensors, and the like.
  • client device 101 includes an accelerometer 215a, gyroscope 215b, and light sensor 215c.
  • FIG. 3 provides an illustrative embodiment of a client device implemented as a dash camera system according to the invention.
  • Mobile device 104 may be an Apple iOS based device, such as an iPhone, iPad, or iPod, or an Android based device, such as for example, a Samsung Galaxy smartphone, a tablet, a PDA, or the like.
  • mobile device 104 is a smartphone with one or more cameras, microphone, speakers, wireless communication capabilities, and sensors.
  • mobile device 104 may be an Apple iPhone 5, 6, 7, 8 or X.
  • the wireless communication capabilities of mobile device 104 preferably include wireless local area networking communications, such as 802.11 compatible communications or Wi-Fi, short-range low-power wireless communications, such as 802.15 compatible communications or Bluetooth, and cellular communications (e.g., 4G/LTE, 5G, or the like).
  • mobile device 104 preferably includes an application program or app running on a processor.
  • One of ordinary skill in the art is familiar with mobile operating systems and mobile apps.
  • Mobile apps are typically made available and distributed through electronic means, such as, for example, via electronic “stores” such as the Apple App Store or the Google Play Store, or directly from apps providers via their own websites.
  • a mobile device app is not required for operation of the system; for example, camera device 101/106 may include a voice-enabled interface, a chat-bot interface, or the like. However, several embodiments include the use of a mobile app.
  • a mobile app on mobile device 104 provides a user interface to a user account on cloud system 103 and to client device 101.
  • mobile app includes functionality similar to auxiliary camera 106.
  • the mobile app uses one or more cameras on mobile device 104 to record video events in accordance with one embodiment of the disclosure.
  • the video recording, buffer management, and other methods and techniques described herein may also be implemented on mobile device 104.
  • Referring to FIGS. 4a-4e, a user interface for an app on mobile device 104 according to one embodiment is described.
  • the mobile app includes one or more panes 401.
  • FIG. 4a shows a graphical user interface (GUI) for a clips pane 401a in a mobile app on mobile device 104 according to one embodiment.
  • the mobile app can receive video clips from multiple sources and store them locally.
  • video clips can be received from cloud system 103.
  • Client devices 101, auxiliary cameras 106, and mobile devices 104 of the user and other users can upload video clips to cloud system 103.
  • Video clips can also be directly sent to mobile device 104, for example from a client device 101 or an auxiliary camera 106.
  • Video clips can also be locally generated on mobile device 104.
  • only metadata for a clip is provided to the mobile app while the video data for the clip is stored remotely.
  • video data objects (such as for example files, data records, data objects, or the like) may be stored on cloud servers 102 or in local memory of client devices 101, auxiliary cameras 106, or other mobile devices 104 and remotely accessible over the Internet.
  • one or more types of video clips from one or more of these sources can be made available through the clips pane 401a of the mobile app as illustrated in FIG. 4a.
  • Clips pane 401a includes a listing of video clips that can be accessed by the user via mobile device 104.
  • clips are added to the clips pane 401a along with an alert to the user on the mobile device 104.
  • For example, every time a clip is generated by a client device 101, the client device causes a clip alert to be displayed on the user’s mobile device 104 and the generated clip is listed on clips pane 401a, available for access by the user.
  • a descriptor 402a-n and a clip type icon 403a-n are provided.
  • clip type icon 403 provides a visual indicator of the source of the video clip.
  • clip type icons 403a-b indicate that those clips were automatically generated via the auto-tagging method (as further described below) and clip type icon 403c indicates that the clip was user-generated. In additional embodiments, these and other clip types may be used.
  • a multi-clip type icon may be used to indicate availability of multiple clips related to the same event, such as for example, multiple clips generated from different camera devices providing different viewpoints of the same event as further described below.
  • Descriptors 402 provide text associated with the video clip, such as, for example, a user-generated description or an auto-tag descriptor as further described below.
  • other icons 403 for different clip types and descriptors 402 may be used in a clips pane 401a in accordance with this disclosure.
  • a user of the mobile app can cause the mobile device to play back a video clip listed in the clips pane 401a by clicking on or touching the video clip listing on the clips pane 401a.
  • the mobile app causes a media player, either built-in or provided through the operating system of the mobile device 104, to play the selected video clip.
  • live camera feeds from multiple sources can be displayed on the mobile device 104 through the camera pane 401b of the mobile app as illustrated in FIG. 4b.
  • the camera pane 401b includes a camera feed window 410, a camera control interface 411, and a camera selection interface 412.
  • Alternative embodiments may include a subset or additional elements in camera pane 401b.
  • camera selection interface 412 may not be included in a single-camera embodiment. Additional embodiments are described below with references to FIG. 4d and FIG. 4e.
  • Camera feed window 410 displays the video feed from the currently selected camera.
  • the camera feed window 410 is enlarged to take the entire screen upon user selection, becoming for example the window of an associated media player app.
  • For example, upon tilting of the mobile device 104 to a side, the camera feed window 410 may be automatically enlarged to take up the full mobile device display (in “landscape” orientation).
  • Cameras may be selected using the camera selection interface 412.
  • camera selection interface 412 may display a selection option 412a-n for each of 1-n available cameras.
  • icons are used to depict each of the available cameras, such as a home camera (e.g., an auxiliary camera 106), a vehicle camera (e.g., from a client device 101), and a phone camera (e.g., the camera on the mobile device 104). Any number of additional cameras may be made available and the selection interface 412 modified to allow selection, such as via a drop-down menu, a pop-up “edit” menu, a picker menu, a rolling menu, or the like.
  • real time camera feeds are provided to the mobile app with the same approach used for providing video clips based on a playlist file or manifest file as further described below.
  • the playlist files are dynamically updated to include each newly generated video data object or file captured by the relevant camera.
  • the file location is provided in the updated playlist and the playlist file is updated via the cloud system 103 or directly from the source of the video feed.
  • playlist files for streaming video are dynamically updated as described in the HTTP Live Streaming specification (as, for example, described in Internet Draft draft-pantos-http-live-streaming-23 submitted by Apple, Inc. to IETF on May 22, 2017), incorporated herein by reference in its entirety.
  • Alternative streaming techniques may be used in other embodiments, including, for example, MPEG-DASH (ISO/IEC 23009-1), Adobe’s HTTP Dynamic Streaming, Microsoft’s Smooth Streaming, or the like.
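  • A hedged sketch of this dynamic-update behavior in the HLS style (hypothetical class; rolling-window semantics follow the HLS specification cited above):

```python
from collections import deque

# Sketch: a rolling live playlist. As each new video object is captured, the
# oldest entry slides out and #EXT-X-MEDIA-SEQUENCE advances; no
# #EXT-X-ENDLIST is written, so players keep polling for updates.
class LivePlaylist:
    def __init__(self, window_size=5, segment_seconds=2):
        self.window = deque(maxlen=window_size)
        self.segment_seconds = segment_seconds
        self.media_sequence = 0

    def add_segment(self, uri):
        if len(self.window) == self.window.maxlen:
            self.media_sequence += 1  # oldest segment rotated out
        self.window.append(uri)

    def render(self):
        lines = ["#EXTM3U",
                 f"#EXT-X-TARGETDURATION:{self.segment_seconds}",
                 f"#EXT-X-MEDIA-SEQUENCE:{self.media_sequence}"]
        for uri in self.window:
            lines += [f"#EXTINF:{self.segment_seconds:.1f},", uri]
        return "\n".join(lines)
```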
  • camera pane 401b includes camera control elements 411.
  • a recording or manual tagging control element 411a is provided for the user to instruct the currently selected camera to generate a clip for the currently displayed video (as further described below).
  • mobile device 104 can also be used to generate additional video clips for the given event from a different angle or perspective.
  • any time the mobile app is running on the mobile device 104, one or more cameras on the mobile device 104 are recording video data, and manual tagging control element 411a is used to generate a manually-tagged video clip as further described below.
  • mobile device 104 can be used as client device 101 or auxiliary camera device 106 according to this embodiment.
  • camera pane 401b may also include additional control elements 411, such as buttons, icons, or other selection elements or menus, to access non-live video stored in the buffer of the currently selected camera.
  • a user may remotely access an entire set of video data objects or files stored in the buffer of the user’s client device 101 (e.g., video files for the preceding 24 hours) through user control elements 411.
  • the source camera device (e.g., client device 101, auxiliary camera 106, or other camera device) generates a dynamic playlist or manifest file including the video files for the next preset time period, for example, one minute, and it is progressively and dynamically updated in increments of the same amount of time (e.g., every minute) with the next set of video files.
  • FIG. 11 provides an illustrative method according to one embodiment for accessing the entire set of video data object or files stored in a client device buffer using this playlist approach.
  • the playlist or manifest files are generated as further described below with reference to video clip generation methods.
  • a graphical user interface for an alternative “camera pane” 1401b in a mobile app on mobile device 104 according to another embodiment is provided.
  • the camera selection interface 1412 allows the user to select a source device, such as a camera in a client device 101/106, and the video for the selected camera stored in the buffer of the selected device is displayed in the camera feed window 1410.
  • the camera control interface 1411 includes a time slider interface 1411b.
  • the control interface 1411 displays an indicator of the available buffered video 1411a, such as, for example, a color bar, a video timeline bar with snapshots of the video at periodic intervals, or the like.
  • the time slider interface 1411b includes a time stamp 1411c representative of the time metadata in the buffered video.
  • a user may move the slider along the video indicator bar 1411a to select the appropriate starting point for playback of the video.
  • the video indicator bar 1411a represents the entirety of the available buffered video for the selected camera device.
  • playback of the selected video takes place using manifest files as further described with reference to FIG. 11.
  • FIG. 4e shows a graphical user interface (GUI) for yet another “camera pane” embodiment in the mobile app.
  • the camera selection interface 1512 also allows the user to select a source device, such as a camera in a client device 101/106, and the video for the selected camera stored in the buffer of the selected device is displayed in the camera feed window 1510.
  • the camera feed window 1510 shows a thumbnail (lower-resolution) version of the video corresponding to the video stored in the buffer at the selected location or time.
  • a location slider 1511b is provided instead of a time slider.
  • the camera control interface 1511 provides a map representative of the area where the video stored in the buffer was captured.
  • the metadata captured along with the video data is used to define the geo-coordinates for the stored video data, and a path of travel 1511a is plotted on a map to indicate the locations represented in the stored video.
  • the location slider 1511b can be moved by the user along the path of travel 1511a to select the video from the buffer that was captured at the selected location.
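  • A minimal sketch (hypothetical names; simple squared-degree distance, adequate over short paths) of how a location-slider position could be resolved to the buffered video object captured nearest the selected point:

```python
# Sketch: given the (lat, lon) the user selected on the path of travel, find
# the buffered video object whose location metadata is closest.
def nearest_segment(selected, segments):
    """segments: list of dicts with 'lat', 'lon', and 'uri' keys."""
    def sq_dist(seg):
        return (seg["lat"] - selected[0]) ** 2 + (seg["lon"] - selected[1]) ** 2
    return min(segments, key=sq_dist)
```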
  • the location slider 1511b also provides a time stamp.
  • the video is played in a larger camera feed window (not shown) such as those illustrated in FIG. 4b and FIG. 4d.
  • the camera control interface 1511 map and the camera feed window 1510 are swapped upon the video playback start point selection.
  • the video thumbnail is replaced with the full-resolution video in the larger window while the map is reduced in size, and the location slider 1511b is replaced with a location indicator that is updated with the location metadata 624/625 (as, for example, described with reference to FIG. 6b).
  • the relative size and position of the camera feed window 1510 and synchronized map 1511 may be changed in different embodiments without departing from the teachings of the invention.
  • other synchronized displays of video and corresponding metadata are provided, such as time stamp 622, speed 627, heading 628, altitude 626, or the like.
  • the metadata may be displayed in a separate window or overlaid on the video and the items of metadata displayed may be based on user selection or automatically determined, based for example on availability, screen size, intended application, or the like.
  • a mobile app on mobile device 104 may also include a news pane 401c.
  • News pane 401c provides information from a cloud service provider to users.
  • news pane 401c may provide the user with links to video clips on cloud service 103 that are related to video clips generated by the user’s device or devices. For example, links to videos from nearby camera devices generated around the same time as an event video clip of the user (e.g., a car crash, break-in, or the like) and available from other users may be provided to the user on the news pane 401c.
  • the features described above with respect to the mobile app may also be provided via Web-based system 105 using conventional website programming techniques to implement the functionality described for the mobile app.
  • Referring back to FIG. 1, the operation of client device 101 is described in more detail.
  • client device 101 includes two or more cameras 214.
  • For example, in one embodiment, a first “IN” camera 214a is directed at the inside of a vehicle, i.e., the cabin, driver, and passengers, and a second “OUT” camera 214b is directed at the road in front of the vehicle.
  • additional cameras 214 may be used, for example facing the back and/or sides of the vehicle, multiple interior areas of the vehicle, one or more top cameras with a wide-angle lens providing a 360° view around the vehicle, or the like.
  • client device 101 is always turned on as long as it has sufficient power to operate. Cameras 214a and 214b are always turned on and recording video. The video recorded by the cameras 214 is buffered in the memory device 203.
  • memory device 203 is configured as a circular buffer.
  • memory device 203 may be a 32 GB FLASH memory device.
  • Client device 101 manages the buffer in memory device 203 to store video data for a predetermined and programmable set amount of time. For example, in one embodiment, memory device 203 buffers video data from two cameras 214a and 214b for the preceding 24 hours.
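  • A minimal sketch (hypothetical names) of such a circular-buffer policy, deleting video objects once they age out of the retention window:

```python
import os
from collections import deque

RETENTION_SECONDS = 24 * 3600  # keep the preceding 24 hours, as above

# Sketch: index of (timestamp, path) entries for buffered video objects.
buffer_index = deque()

def store_segment(path, timestamp):
    buffer_index.append((timestamp, path))
    cutoff = timestamp - RETENTION_SECONDS
    while buffer_index and buffer_index[0][0] < cutoff:
        _, old_path = buffer_index.popleft()
        os.remove(old_path)  # reclaim FLASH space for newly recorded video
```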
  • client device 101 includes software to manage the cameras 214 to control the amount of data, e.g., bytes, generated by the cameras 214 and buffered in memory 203.
  • cameras 214 record data at various selectable video modes and rates.
  • cameras 214a and 214b can be set by client device 101 to capture video at various resolutions, including for example 1440p, 1080p, 720p, 360p, 240p, and the like.
  • the frame rate for the video collected by each camera 214 can be set by client device 101.
  • each camera 214 can independently change its video capture rate from 0 to 30 frames per second.
  • processor 201 receives location and/or motion data from location module 204, acceleration data from accelerometer sensor 215a, vehicle status data, such as, for example, the revolutions per minute (“RPM”) of a vehicle’s engine, vehicle battery charge level, and the like, from I/O module 210 connected to a CAN bus, time from wireless module 205 (e.g., LTE network time), image processing inputs from video module 207 (e.g., face detection), and the like.
  • the inputs are used to determine the relevant features affecting the operation mode of the vehicle, such as for example, motion or lack of motion, presence of a user, presence of a person but not the user, or the like.
  • an operational mode is determined 503.
  • the possible operational modes of a vehicle incorporating client device 101 may include: default, driving, recently parked, parked, armed, low battery, and very low battery. Different embodiments can provide a subset or additional modes of operation, which may also vary depending on the vehicle or other location where the client device 101 (or auxiliary camera) may be located.
  • a status change is determined at step 504. For example, after powering up, input data is received and the operational mode is no longer in “Default” mode.
  • the camera settings (e.g., resolution and frame rate) for the determined operational mode are applied 505, and recording of the video is done 506 using those camera settings.
  • additional actions may be associated with the various operational modes.
  • the method checks 507 if the operational mode requires additional actions. If so, the actions are performed at step 508.
  • different actions may be associated with different modes to provide additional functionality to the system within the scope of the invention. If one of the actions does not turn off the system, then recording can continue at step 506 as described above.
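  • To illustrate how camera settings could follow the operational mode (step 505), a sketch with purely illustrative values (the disclosure does not specify these numbers; the camera API is hypothetical):

```python
# Sketch: per-mode camera settings; resolutions and frame rates are examples
# only, chosen within the ranges the disclosure mentions.
CAMERA_SETTINGS = {
    "driving":          {"resolution": "1440p", "fps": 30},
    "recently_parked":  {"resolution": "1080p", "fps": 15},
    "parked":           {"resolution": "720p",  "fps": 5},
    "armed":            {"resolution": "1080p", "fps": 30},
    "low_battery":      {"resolution": "360p",  "fps": 1},
    "very_low_battery": {"resolution": "240p",  "fps": 0},  # recording paused
}

def apply_mode(camera, mode):
    s = CAMERA_SETTINGS.get(mode, {"resolution": "720p", "fps": 15})  # default
    camera.set_resolution(s["resolution"])  # hypothetical camera API
    camera.set_frame_rate(s["fps"])
```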
  • Referring to FIG. 6a, a method for capturing and storing video according to one embodiment is described.
  • video cameras in the various devices are preferably always on and recording video. Once video is being recorded, the method begins 601 and continues until the device is turned off or, in the case of a mobile device 104, until the mobile app stops running.
  • For each camera, the image sensor generates video data according to the camera settings for the current operational mode as described above with reference to FIG. 5.
  • the video data is received 602 and the video for each preset time period is encoded 603 according to a video compression and encoding standard, such as for example, MPEG-4, H.264, H.265, or any other video compression and encoding standard.
  • the time period for each block of video may be predetermined or variable (e.g., based on user settings) and may be, for example, 2, 4, 6, or 10 seconds. In one embodiment, every two seconds of video is encoded together into a video data object, record, or file. Other embodiments may use different time periods depending, for example, on the intended use of the video, the purpose for the system, the location where the system is deployed, the amount of memory available, the processing power available, or other relevant factors. Metadata for the same time period is also captured 604 as information associated with the captured video data. As part of the metadata capture 604, a globally unique ID (“GUID”) is generated to uniquely identify the video data and metadata for the time period.
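  • A sketch (hypothetical structure) of the per-period capture record, pairing the encoded video with its metadata under the GUID generated in step 604:

```python
import uuid
from dataclasses import dataclass

# Sketch: one record per encoded time period (e.g., 2 s of H.264 video).
@dataclass
class CapturedObject:
    guid: str          # globally unique ID for the period (step 604)
    video_data: bytes  # encoded video for the time period (step 603)
    metadata: dict     # time, location, etc. (see FIG. 6b)

def capture_period(video_data, metadata):
    return CapturedObject(guid=str(uuid.uuid4()),
                          video_data=video_data, metadata=metadata)
```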
  • the video data is encrypted 605.
  • Any encryption algorithm may be used, such as, for example encryption algorithms compliant with the Advanced Encryption Standard (AES), Blowfish, Twofish, Data Encryption Standard (DES) (e.g., Triple-DES), RSA, or the like.
  • the encrypted video data and associated metadata for the given time period are stored 606 in the buffer.
  • the resulting video data object or file will be of varying size based on the camera settings (e.g., resolution, frame rate, etc.) applied as well as any other factors, such as applied compression format and encoding.
  • the video data object is then hashed 607 using a one-way hash function, such as SHA, MD5, or similar algorithm, to generate a unique hash for the captured video, i.e., the video data hash.
  • the hashing function may be applied to a file that includes both the video data and metadata.
  • the metadata may be stored separately but in association with the video data and it is not included in the generation of the hash 607.
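  • For step 607, a one-function sketch using SHA-256 (one member of the SHA family named above):

```python
import hashlib

def video_data_hash(video_bytes):
    # One-way hash uniquely identifying the captured video object (step 607).
    return hashlib.sha256(video_bytes).hexdigest()
```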
  • a message is generated 608 including the metadata for each time period and the corresponding video data hash.
  • the message is then cryptographically signed 609 to guarantee the message payload originates from an authorized device.
  • a private key associated with a system-authorized device may be used to generate a one-way hash of the message payload.
  • the private key is used to encrypt the payload of the message.
  • each client device 101, auxiliary camera 106, and mobile device 104 is associated with a unique cryptographic key-pair.
  • the device securely stores the private key.
  • the cloud system 103 retains access to the public keys for each device so it can verify that messages it receives come from authorized devices.
  • cloud system 103 maintains a set of records uniquely associating a device ID for each authorized device in the system with a corresponding public key that is applied to messages received from the device.
  • private-public-key cryptographic signature methodologies may be used to verify that each received message includes a signature or encrypted payload encrypted with a private key from an authorized device.
  • the client device uses its private cryptographic key to cryptographically sign or otherwise encrypt the video data object itself, for example, if the actual video data object is to be sent or otherwise uploaded to another device, such as cloud system 103. This could optionally be done in conjunction with step 609 as described above.
  • the message is sent 610 to the cloud system.
  • the message is sent using a secured connection, such as for example, an SSL/HTTPS connection over TCP/IP or the like.
  • the process then repeats for the video data and metadata captured in the subsequent time period.
  • the time required to perform the process of FIG. 6a is less than the selected time period. For example, a device capturing video data in two-second increments (the time period) sends the metadata and video hash message to the cloud system 103 every two seconds. If at some point the data connection to the cloud is interrupted or otherwise becomes unavailable, the system may locally cache the messages for transmission upon reconnection to the cloud system 103.
  • the message signing step 609 is omitted.
  • a device establishes a secured connection with the cloud system 103, such as an SSL/HTTPS connection, and authenticates itself to the server 102.
  • a device provides its device ID and a cryptographically signed version of its device ID, signed with the device’s private key.
  • the server 102 retrieves the public key corresponding to the device ID provided and verifies the signed device ID for a match.
  • the server provides the device with a session token that uniquely identifies communications from that device for a given session. Thereafter messages are sent 610 over the secured connection with the metadata and video hash and also including the server-provided token.
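A minimal sketch of this session set-up, under the same Ed25519 assumption as the previous example; the `DEVICE_KEYS` registry and the token format are illustrative, not from the specification.

```python
import secrets

from cryptography.exceptions import InvalidSignature

DEVICE_KEYS = {}  # hypothetical registry: device_id -> Ed25519PublicKey
SESSIONS = {}     # session_token -> device_id


def authenticate_device(device_id: str, signed_device_id: bytes) -> str | None:
    """Verify a cryptographically signed device ID and, on a match,
    issue a session token identifying the device for this session."""
    public_key = DEVICE_KEYS.get(device_id)
    if public_key is None:
        return None
    try:
        public_key.verify(signed_device_id, device_id.encode())
    except InvalidSignature:
        return None
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = device_id
    return token
```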
  • the video-object metadata 620 is periodically sent to cloud system 103 as device telemetry information.
  • the telemetry information 620 is sent after the recording of each video object, e.g., every 2 seconds, 6 seconds, 8 seconds, 10 seconds, or the like.
  • the video-object metadata 620 may include one or more metadata items including, for example, a device ID 621, an atomic clock time stamp 622, a GPS timestamp 623, a latitude value 624, a longitude value 625, an altitude 626, a speed 627, a compass heading 628, a horizontal accuracy value 629, a vertical accuracy value 630, a software version 631, a location string value (e.g., a "geohash") 632, a connection type identifier (e.g., 2G, 3G, 4G, WiFi, etc.) 633, a wireless signal strength value 634, and/or a carrier identifier 635.
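For illustration, the telemetry record could be modeled as a plain data structure serialized to JSON; the field names below simply mirror items 621-635 and are assumptions, not dictated by the specification.

```python
import json
from dataclasses import asdict, dataclass


@dataclass
class VideoObjectMetadata:
    device_id: str              # 621
    atomic_clock_ts: float      # 622
    gps_ts: float               # 623
    latitude: float             # 624
    longitude: float            # 625
    altitude: float             # 626
    speed: float                # 627
    compass_heading: float      # 628
    horizontal_accuracy: float  # 629
    vertical_accuracy: float    # 630
    software_version: str       # 631
    geohash: str                # 632
    connection_type: str        # 633 (e.g., "4G", "WiFi")
    signal_strength: int        # 634
    carrier: str                # 635


def to_telemetry_json(record: VideoObjectMetadata) -> str:
    # Serialized form suitable for the periodic telemetry message.
    return json.dumps(asdict(record))
```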
  • a data model for capturing metadata associated with a given event-based video clip, such as an automatically generated video clip, a user-generated video clip, or the like, is provided according to one embodiment.
  • the event metadata 650 is generated and stored with each video clip.
  • the event metadata 650 may include one or more metadata items including, for example, device ID 651, an atomic clock time stamp 652, a location string value (e.g., geohash) 653, an event or tag type 654, an event or tag type 655, an event or tag title 656, an event or tag latitude value 657, an event or tag longitude value 658, an event or tag altitude 659, an event or tag speed 660, an event or tag compass heading 661, an event or tag horizontal accuracy value 662, an event or tag vertical accuracy value 663, the full file name for the event or tag clip file (e.g., manifest file) 664, a software version 665, a device type ID 664, and one or more Boolean variables to indicate whether the event or tag clip has been viewed 665a, shared 665b, deleted 665c, etc.
  • the method starts 700.
  • the various inputs are monitored 701 while video is continuously captured. If no tagging event is detected 702, the system keeps monitoring. If a tagging event is detected 702, the relevant video data in the buffer is identified and selected 703. For example, once an event is detected 702, the video files for a predefined period of time before and after the event are identified in the buffer. In one example, 15 seconds before and after the event time is used. The amount of time, preferably between 10 and 30 seconds, may be pre-programmed or user selectable. Further, two different time periods may be used, one for time before the event and the other for time after the event. In one embodiment, the time periods may be different depending on the event detected. For example, for some events the time periods may be 30 seconds before the event and 1 or 2 minutes after, while other events may use 15 seconds before and 15 seconds after.
  • the selected video data is marked for buffering 704 for a longer period of time.
  • the video files for the selected time period are copied over to a second system buffer with a different buffering policy that retains the video for a longer period of time.
  • for example, selected video data in a buffer storing video for 24 hours may be moved over to a second buffer storing video for 72 hours.
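A sketch of selection step 703 under stated assumptions: each buffered video object is assumed to carry `start` and `end` timestamps, and the pre/post windows default to 15 seconds; the helper name and buffer layout are illustrative.

```python
from datetime import datetime, timedelta


def select_event_video(buffer: list[dict], event_time: datetime,
                       pre: timedelta = timedelta(seconds=15),
                       post: timedelta = timedelta(seconds=15)) -> list[dict]:
    """Pick the buffered video objects whose time span overlaps the
    window [event_time - pre, event_time + post] (step 703). The
    result can then be copied to a longer-retention buffer (step 704)."""
    window_start, window_end = event_time - pre, event_time + post
    return [obj for obj in buffer
            if obj["end"] >= window_start and obj["start"] <= window_end]
```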
  • a video clip is then generated 705 with the selected video data.
  • every video clip generated is associated with a globally unique identifier (GUID).
  • video clips are generated using a playlist file or manifest file as is known in the art.
  • Each playlist or manifest file includes a GUID.
  • an m3u8 playlist file is generated according to the HTTP Live Streaming specification (as for example described in Internet Draft draft-pantos-http-live-streaming-23 submitted by Apple, Inc. to IETF on May 22, 2017).
  • Alternative video clip generating techniques may be used in other embodiments, including, for example, MPEG-DASH (ISO/IEC 23009-1), Adobe’s HTTP Dynamic Streaming, Microsoft’s Smooth Streaming, or the like.
  • the playlist or manifest file provides a network-based location for the video data objects selected 703. For example, a Uniform Resource Locator (URL) may be provided for each of a set of video files.
  • the video data can be stored in any network accessible storage.
  • video files identified in a given playlist can be stored on a camera device (e.g., client device 101, auxiliary camera 106, or mobile device 104) and network address locators are provided for each file at that location.
  • other video clip generation approaches may be used.
  • the selected 703 video data is used to generate a single video file, such as an MPEG video file, that may be uploaded and downloaded as needed.
  • video data objects are stored on the network-accessible buffer of the camera device and the playlist or manifest files for the generated event-based video clips identify the network addresses for the buffer memory locations storing the video data objects or files.
  • the video data may be uploaded to the cloud system 103, or alternatively transferred to a paired mobile device 104 upon establishment of a direct data connection, such as a WiFi or cellular connection.
  • the clip generation 705 then identifies in the playlist or manifest file the network addresses for the video data stored in the cloud system 103.
  • a combination of these approaches may be used depending on storage capacity and network capabilities for the camera devices used in the system or according to other design choices of the various possible implementations.
  • a message including the GUID for the generated video clip is sent to the cloud system in a cryptographically signed message (as discussed above).
  • the playlist or manifest file may also be sent in the message.
  • the playlist or manifest files are maintained in the local memory of the camera device until requested.
  • the cloud system may request the clip playlist or manifest file.
  • the cloud system may notify 706 other system components and/or other users of the clip and other system components or users may request the clip either from the cloud system 103 or directly from the camera device.
  • the clips pane 401a in the user’s mobile app may display the clip information upon receiving the notification 706.
  • because the clip metadata is not a large amount of data, e.g., a few kilobytes, the user app can be notified almost instantaneously after the tag event is generated.
  • the larger amount of data associated with the video data for the clip can be transferred later, for example, via the cloud system or directly to the mobile device 104, for example, via a cellular or WiFi connection.
  • the user’s mobile device 104 may be immediately notified of the tag event using only tag metadata.
  • video clips are automatically transferred to the user’s authorized and authenticated mobile device 104 upon detection of the mobile device 104 in proximity of the client device 101, for example after establishing a WiFi connection.
  • a lower-cost data communication approach is preferable for transferring video data, avoiding expensive cellular data charges.
  • a video clip may be shared with other devices owned by the same user or, if authorized, the video clip may be shared with other users of the system.
  • the GUIDs for every video clip generated by a camera device of a given user may be stored in a user clip table in the cloud system 103.
  • GUIDs for the clips from all the cameras on a multi-camera client device 101, for the clips from any auxiliary camera device 106, and for the clips generated by the mobile app on the user’s mobile device 104 may all be stored in the user clip table.
  • the user may access the user clip table via mobile device 104.
  • the mobile app may maintain a user clip table that is synchronized with the user clip table in the cloud system. Every time a new clip notification is received, the mobile app and cloud-based user clip tables are updated and/or synchronized.
  • Alternative synchronization approaches may be used, such as for example a periodic synchronization approach.
  • the user clip tables may also include other information or metadata for each clip of the user, such as for example, a name or descriptor, device ID where the video was captured, time and date information, tag or event information, thumbnail images, or the like.
  • the playlist or manifest file may also be stored or identified in the user clip table.
  • a user may access video clips through the mobile app on the mobile device 104 through the clip pane 401a. Upon selection of a clip through the clip pane 401a, the mobile app uses the clip GUID to request the corresponding playlist or manifest file from the cloud system 103, directly from a camera device (e.g., client device 101 or auxiliary camera 106), or, if previously transferred, from its local storage.
  • the mobile app can playback the video clip by requesting the relevant video objects using their network address identifiers, which may point to a remote storage (e.g., cloud 103 or client device 101/106) or local storage on the mobile device 104.
  • the user may provide an identification (e.g., biometric ID, face recognition, user ID and password, or the like) to access the decryption key.
  • FIG. 11 shows a flow chart for a method to access the buffer of a client device 101/106 from a mobile device 104 according to one embodiment.
  • the user may access the video data stored in the buffer of a client device 101/106 from the mobile device app, for example, from the cameras pane 401b.
  • in one embodiment, the mobile device 104 and client device 101/106 are paired and able to communicate with each other. The pairing process described below with reference to FIG. 10 may be used to securely pair the client device 101/106 with a mobile device 104 running one embodiment of the mobile device app.
  • the client device 101 is able to recognize when the mobile device 104 is nearby 1300, such as when it is within range of its short-range communication radio.
  • the client device 101 may receive the Bluetooth ID from the mobile device 104, which may be recognized as a “bonded” device based on its Bluetooth ID or may otherwise be notified of its proximity via cloud system 103 upon detection of its BLE beacon as described below.
  • client devices will only communicate with recognized trusted devices that have gone through a pairing process.
  • Once the client device 101/106 has detected a known or trusted mobile device 104 in proximity, it enters into a WiFi hotspot mode 1301. The mobile device 104 then gets the network parameters 1302 to connect to the WiFi hotspot. In one embodiment, the mobile device 104 requests the network parameters from the client device 101/106 wirelessly, for example, through the Bluetooth connection or cellular messaging via cloud system 103. Alternatively, the client device 101/106 may provide the network parameters to the mobile device 104 wirelessly, for example via Bluetooth, via cellular messaging through cloud system 103, or the like, once it detects the proximity of the mobile device 104, for example using Bluetooth, based on its BLE beacon and proximity messages from the mobile device 104, or the like.
  • In another embodiment, the network parameters may be provided to a mobile device 104 paired with client device 101/106 (either the primary paired device, a subset of paired devices, or all paired devices) during the initial mobile device association process, as for example described with reference to FIG. 8.
  • the network parameters include the hotspot name (“SSID”), an access password or token (e.g., based on WEP or WPA/2/Enterprise security modes), and other information.
  • the communication of the network parameters to the mobile device is end-to-end encrypted, for example, using a shared key or asymmetric encryption as further described below.
  • the client device’s hotspot SSID does not need to be advertised or broadcasted since the network parameters, including the SSID, are directly provided to the mobile device 104.
  • the mobile device 104 uses the network parameters to join the client device’s WiFi hotspot 1303.
  • the mobile device programmatically switches to the WiFi network with the provided SSID and password. For example, on an Apple iOS device, this switch would prompt the user to select “join” to authorize the network connection, but no additional user input would be required as the password information can be programmatically provided.
  • the programmatic switch can be accomplished without any user input.
  • the mobile device app can request access to the video data 1304 stored in the client device’s buffer using any of the techniques described above.
  • the client device generates playlists or manifest files 1305 for each of the contiguous video objects stored in the buffer, e.g., 2-second, 4-second, 6-second, etc. video objects.
  • the playlists or manifest files are provided to the mobile device 104 over the established WiFi connection 1306, identifying the network addresses of the video object files stored in the client device’s buffer.
  • while in this embodiment the connection between the mobile device 104 and client device 101/106 is a WiFi connection, the same functionality is available through other wireless connections, including cellular, Bluetooth™, and the like.
  • the mobile app (or other video player app) allows the user to playback and scrub back and forth 1307 through the set of video object files stored in the buffer.
  • the “cameras” pane 401b in the mobile app may include a “live” window 410 and an additional display area for buffered video (e.g., last 24 hours, a pre-set number of days, or the like).
  • HTTP Live Streaming (“HLS”) playlists are used for the method of FIG. 11.
  • Each playlist is a combination of the 2-6 second clips, back-to-back, without significant interruption, for example, without stopping the playback between files or otherwise interrupting the playback in a way that is noticeable to the user.
  • the #tag typically used to insert advertisements in a playlist may be placed in front of the name/location of each video object file in the buffer, essentially designating every video object to be an advertisement. While HLS playlists typically provide different streams from a single file, using the #tag approach allows the streaming of the same video feed from separate but continuous files (of a few seconds each).
  • the HLS playlists are nested to allow for access to longer time periods worth of video.
  • the client device 101/106 generates hierarchical playlists of playlists, to avoid overly long playlists.
  • a master playlist may list a plurality of other playlists, each in turn listing a plurality of video objects with corresponding network locations.
  • the mobile app switches which HLS playlist is in use at any given time.
  • Each playlist may include, for example, a few minutes to a few hours of video from the video object files in the buffer.
  • each playlist includes video objects or files for 10-20 minutes of video. This enables the user to see long periods of video, possibly spanning multiple days, and scrub instantly across the entire time. The same approach may be used for accessing event-generated clips or other video accessed using playlists described herein.
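The hierarchical playlist scheme might look like the following sketch. Representing the playlist-of-playlists as a variant-style master playlist, and using #EXT-X-DISCONTINUITY as the ad-insertion-style "#tag" described above, are both illustrative assumptions rather than the specification's exact tags.

```python
def media_playlist(video_objects: list[dict]) -> str:
    """One HLS media playlist covering a run of short buffered video
    objects (e.g., 10-20 minutes of 2-6 second files). The tag before
    each entry marks a segment boundary so the player plays the
    separate files back-to-back without noticeable interruption."""
    lines = ["#EXTM3U", "#EXT-X-VERSION:3", "#EXT-X-TARGETDURATION:6"]
    for obj in video_objects:
        lines.append("#EXT-X-DISCONTINUITY")
        lines.append(f"#EXTINF:{obj['duration']:.1f},")
        lines.append(obj["url"])  # network address of the file in the buffer
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines)


def master_playlist(child_playlist_urls: list[str]) -> str:
    """Master playlist listing the child playlists; the mobile app can
    switch among them to scrub across days of buffered video."""
    lines = ["#EXTM3U"]
    for url in child_playlist_urls:
        lines.append("#EXT-X-STREAM-INF:BANDWIDTH=4000000")
        lines.append(url)
    return "\n".join(lines)
```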
  • a method for setting up a camera device, such as a client device 101, for operation in the system is described according to one embodiment.
  • camera devices, such as client device 101, include cellular connectivity; this provides a data connection 107/108 between the camera device and the cloud system 103 that can be used during the set-up process.
  • the set-up process begins 1000. While the following set up steps are provided in order, no particular order is required for these steps. For example, in one embodiment, a user set up step 1001 is performed.
  • the user set up step 1001 allows the camera device to recognize the user.
  • a client device 101 provides instructions to a user to pose in different orientations while facing one of the cameras to record different angles of the user’s face.
  • a similar process may be used to recognize other user biometrics, including for example, fingerprints, voice, and irises.
  • a touch sensor may be used to record a series of images of a user’s fingerprint.
  • Voice recognition software may be trained by having the user repeat pre-defined commands, statements, or sentences one or more times.
  • a user’s iris is recorded from multiple angles to derive a biometric optical signature. Other embodiments may include a combination of these biometrics identifications and may further include others.
  • the user’s biometric signature or signatures are stored in the camera device.
  • a cryptographic key is also generated based on a random input and stored in association with the biometric identification of the user.
  • the user set up process 1001 is repeated for each user.
  • mobile device 104 may itself be a camera device, and thus some of the set-up steps, such as user set up step 1001 may be applicable.
  • mobile device 104 includes a mobile app installed on the device as described above with reference to FIG. 4a-4e.
  • in one embodiment, mobile device 104 and the camera device (e.g., client device 101) include short range wireless modules, such as Bluetooth transceivers. The short range wireless modules may transmit a unique ID that can be received by other short range wireless modules as a form of identification of devices in forming a piconet or otherwise pairing with each other.
  • Bluetooth transceivers can provide a unique 12-digit hexadecimal address (“BD ADDR”) for identification and pairing.
  • a user may prompt the camera device to pair with the user’s mobile device 104.
  • the user may utter a voice pairing command, provide a pairing command through a touchscreen, or through any other user input device available in the camera device.
  • the pairing process involves a Bluetooth pairing process.
  • the camera device displays a unique pattern that is captured by the mobile device and sent back to the camera device via the connection to the cloud system 103.
  • the camera device may display a randomly generated alphanumeric code, a QR code, a series of black and white screens in a random order, or some other random output.
  • the random output is captured or entered into the mobile device by the mobile app and transmitted via a secured Internet connection to cloud system 103 along with a unique identifier of the mobile device, such as, for example a Bluetooth address, a MAC address, or the like.
  • the random output and the mobile device input are compared. If they match, the camera device authenticates the mobile device unique identifier (e.g., Bluetooth address or MAC address) and from that point on is associated with the mobile device.
  • in one embodiment, instead of comparing the output of the client device with the input captured by the mobile device, both devices generate an output that is compared at the server. For example, each device uses a camera to perform face recognition of the user during the set-up process and their face recognition results are sent to the server for comparison to match the same user.
  • a QR code is displayed on the display of the client device 101.
  • the QR code encodes a device ID for the client device 101 and an encryption key (or seed for generation of an encryption key) for communicating with the client device 101.
  • the mobile app on the mobile device 104 captures and interprets the QR code to obtain the device ID and encryption key.
  • the device ID may for example include a telephone number, email address, or other means for electronic messaging with the client device 101.
  • the mobile device 104 can send encrypted communications to the client device 101 as further described below to associate the mobile device with the client device, including for example, sending to the client device 101 a unique identifier for the mobile device 104, for example, telephone number, email address, Bluetooth address, MAC address, or the like. While described with the client device 101 being the device that displays the QR code, the same approach may be used with the mobile device 104 displaying the QR code and the client device 101 initiating the encrypted messaging using the encryption key provided by the mobile device 104.
  • Other “shared secret” approaches may be used for mobile device association 1002, including, for example, a series of instructions to cause the user to move the mobile device while the mobile app records the outputs of one or more mobile device sensors to be matched with the provided instructions. For example, the user may raise or lower the device, shake the device, etc. in a random series causing accelerometer and/or gyroscope changes that match the requested motions.
  • the series of sensor-detected motions can be provided via Internet connection for matching with the camera device instructions for association.
  • a user may provide a telephone number for the mobile device during a registration process, for example through the mobile device app.
  • camera device may display a device ID on its screen.
  • the user inputs the device ID on the mobile app and it is transmitted to the cloud system 103.
  • the cloud system identifies the device ID and sends a message to the camera device 101/106 via Internet connection 107/108 including the telephone number for mobile device 104.
  • the camera device sends a text message to mobile device 104 with a random code.
  • the user inputs the random code via the mobile app for verification by cloud system 103 or camera device 101/106. If the random code matches the texted code, the mobile device is authenticated. Once the camera device and the mobile device are associated 1002, the camera device can trust the mobile device for subsequent interactions, based on a unique ID for the mobile device (e.g., Bluetooth address, MAC address, or the like).
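A sketch of the texted-code verification; the six-digit format and the constant-time comparison are implementation assumptions, not requirements of the specification.

```python
import secrets


def make_pairing_code(n_digits: int = 6) -> str:
    # Random code texted to the mobile device.
    return "".join(str(secrets.randbelow(10)) for _ in range(n_digits))


def verify_pairing_code(texted: str, entered: str) -> bool:
    # Constant-time comparison of the code entered in the mobile app
    # against the code that was texted.
    return secrets.compare_digest(texted, entered)
```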
  • the set-up process optionally includes the step of provisioning the mobile device 104 with a mobile app.
  • FIG. 9 provides an exemplary flow diagram for an initial set-up process according to one embodiment.
  • camera device 101/106 includes a wireless cellular connection to the Internet and is configured to communicate with cloud system 103 out of the box.
  • in one embodiment, the camera device’s screen displays a QR code 1101.
  • a mobile device can use one of its existing apps to capture the QR code with its camera and interpret the code 1102.
  • the QR code provides a link or URL to a web-server, for example in cloud system 103.
  • the link or URL may include an IP address or a domain (e.g., www.owlcam.com) and a set of parameters encoded therein as is known in the art.
  • One of the parameters may include, for example, a unique ID for the camera device 101/106 being set up, such as for example, a mobile device number, a telephone number, a serial number, or the like.
  • the link parameters may also include a randomly generated number that is different for different times the set-up process is run.
  • the same process may be performed providing the link and parameters in alternative forms, including for example, by displaying them on the screen as text/image, encoding them in an audio signal, transmitting them via short range communication (IR, AirDrop, Bluetooth, etc.) or the like.
  • Upon interpreting the QR code, the mobile device uses its existing software (e.g., a web browser) to send 1103 an HTTP request to the web server identified through the link or URL, including the parameters encoded into the link.
  • the cloud system 103 receives the request and creates 1104 a record for the request, including the link-encoded parameters and additional metadata and network information derived from the HTTP requesting process, including information for uniquely identifying the mobile device 104 (e.g., a combination of HTTP header metadata, TCP/IP header information, or the like).
  • cloud system 103 redirects 1105 the mobile device to a location from where the appropriate mobile app may be obtained.
  • cloud system 103, using, for example, the “User-Agent” data from the HTTP request and/or the unique device ID for the camera device 101/106, redirects the mobile device 104 to either the Apple App Store when the User-Agent indicates the mobile device to be an iOS device, or to the Google Play Store if the mobile device is determined to be an Android-based device, or alternatively to other servers capable of providing the mobile app to the mobile device over a network.
  • the cloud system 103 may include parameters in the redirection link to the appropriate version of the mobile app determined using the device ID of the camera device 101/106.
  • the mobile device 104 obtains 1106 the proper mobile app, e.g., the app for interaction with camera device 101/106 and cloud system 103. After the downloading and installation of the mobile app on mobile device, when executed, the mobile app contacts the cloud system 103 to access 1107 the record previously generated at step 1104. For example, the mobile app may derive a unique ID for the mobile device 104 using the same parameters, metadata, or other information available from the mobile device 104 when making an HTTP request like the one made at step 1103. In one embodiment, a time limit (e.g., 2-15 minutes) may be used between the HTTP request step 1103 and the record access step 1107 to facilitate the mobile device 104 identification.
  • Cloud system 103 determines that the same mobile device 104 is accessing the system based on that information and provides 1108 access to the previously generated record and any other additional set up parameters that may be necessary to complete the set-up process. For example, if provided, the randomly generated number may be provided as a “shared secret” for the device association process described above. Alternatively, encryption information and/or messaging information for the camera device may be provided. Referring back to FIG. 8, another aspect of the disclosure involves setting up a direct connection between a camera device 101/106 and a mobile device 104.
  • camera device 101/106 includes wireless local area network connectivity.
  • a client device 101 may optionally operate as an access point (AP) for a local area network, such as a Wi-Fi network.
  • the mobile device 104 can establish a connection 109 to the client device 101 as a Wi-Fi station (STA). While a specific wireless local area network connection is described, it is understood that the present invention can be applied to a wide variety of wireless connection modes, such as, for example, Peer-to-Peer connections (e.g., “Wi-Fi Direct,” ad hoc network, or the like).
  • the camera device can use the MAC address authenticated through a mobile device association process 1002 to determine whether the associated mobile device is the one making the connection.
  • the direct camera device to mobile device connection 109 may then be used to transfer settings, video data objects, video clips, biometric signatures, and the like, in a secured way between the devices.
  • FIG. 10 provides an exemplary flow diagram for an alternative initial set-up process according to one embodiment.
  • in this embodiment, the mobile app is already on the mobile device 104, having been obtained as shown in FIG. 9, downloaded directly from a mobile application repository (e.g., the Apple App Store or Google Play Store), or the like.
  • the mobile app is used to securely pair to a client device 101/106 as shown in FIG. 10.
  • when a client device 101/106 is activated 1200 for the first time, it is not paired to any mobile device 104.
  • the mobile device 104 broadcasts an identifying signal, such as its Bluetooth ID, WiFi identifier, or the like.
  • the client device 101/106 detects the presence 1201 of mobile device 104, for example, by scanning Bluetooth signals and recording the Bluetooth IDs of all the devices transmitting within range of the client device 101/106. Other means of wirelessly detecting mobile devices in proximity may be used.
  • the client device 101/106 generates a GUID 1202 and establishes communication with the cloud system 103, for example, using its integrated cellular connection.
  • a GUID may for example be generated from a mobile device number (e.g., International Mobile Subscriber Identity (“IMSI”), Integrated Circuit Card ID (“ICCID”), International Mobile Equipment Identity (“IMEI”), or similar), a telephone number, a serial number, or the like.
  • the client device 101/106 using its GUID creates a communication record or “topic” 1203 on the cloud system 103 that can be used to post messages intended for pairing with the client device 101/106.
  • The “topic” may be uniquely identified by the GUID of the client device 101/106 or by an additional or alternative topic identifier or “subject.” According to one embodiment, other system elements can post messages, including any form of data, by identifying the proper topic with the associated GUID of the recipient device.
  • “topics” are implemented as secured electronic mail messaging accounts (e.g., to:GUID@cloudsystemtopics.com).
  • topics may be treated as “conversations” or “threads” of related messages (e.g., with the same “subject”) for a given GUID-identified device (e.g., to:GUID@cloudsystemtopics.com; subject: topicID).
  • messaging via topics may use a standard messaging protocol, such as, for example, Message Queuing Telemetry Transport (MQTT), Advanced Message Queuing Protocol (AMQP), Simple Text Oriented Messaging Protocol (STOMP), the IETF Constrained Application Protocol (CoAP), Extensible Messaging and Presence Protocol (XMPP), Web Application Messaging Protocol (WAMP), or the like.
  • client device 101/106 may display on its screen a code 1204.
  • the mobile device 104 can use the app associated with the cloud system 103 to obtain the code 1205, for example, capturing a QR code with its camera, and interpret the code using the cloud-system-specific app software.
  • the QR code provides information for the mobile device 104 to message the client device 101/106 via the cloud system 103, for example, the client device’s GUID and may also include an identifier for the pairing“topic” generated at step 1203. This information is encoded into the QR code as is known in the art.
  • the code may also include a randomly generated number that is different for different times the set-up process is run.
  • the code may also include encryption information to allow encrypted messaging between mobile device 104 and the client device 101/106, such as for example, a public key of the client device for the mobile device to encrypt at least part of the messages it will send to the client device.
  • the same process may be performed providing a hyperlink and parameters in alternative forms, including for example, by displaying them on the screen as text/image, encoding them in an audio signal, transmitting them via short range communication (IR, AirDrop, Bluetooth, etc.) or the like.
  • short range communication IR, AirDrop, Bluetooth, etc.
  • the mobile device 104 sends a pairing request 1206 via cloud system 103 to the client device 101/106, for example, identified by the GUID and/or pairing topic created at step 1203.
  • the pairing request also includes other pairing information for the client device 101/106 to communicate with the mobile device 104.
  • the pairing request includes one or more of a GUID of the mobile device (e.g., IMSI, IMEI, or similar), the Bluetooth ID for the mobile device, and encryption information, such as for example, a public key for the mobile device 104.
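One way the pairing request payload could be protected, assuming the QR code carries an RSA public key for the client device and hybrid (asymmetric plus symmetric) encryption via the `cryptography` package; the field names are illustrative.

```python
import json

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa


def build_pairing_request(client_public_key: rsa.RSAPublicKey,
                          pairing_info: dict) -> dict:
    """Encrypt the pairing information (e.g., mobile GUID, Bluetooth ID,
    mobile public key) so only the client device can read it: the
    payload is sealed with a fresh symmetric key, which is in turn
    wrapped with the client device's public key from the QR code."""
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(
        json.dumps(pairing_info).encode())
    wrapped_key = client_public_key.encrypt(
        session_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    return {"wrapped_key": wrapped_key, "payload": ciphertext}
```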
  • the cloud system 103 checks 1207 whether to allow a new mobile device 104 to pair with the client device 101/106 identified in the pairing request. For example, in one embodiment, the cloud system maintains a pairing record for each client device 101/106 (e.g., based on the devices’ GUIDs) storing pairing information for each device, such as for example the GUID for each mobile device 104 paired with each client device 101/106. If the system determines that this client device GUID has an existing paired mobile device 104, an authorization request is sent to the existing or “primary” mobile device 104.
  • Multiple mobile devices 104 may be allowed to pair with the same client device 101/106 if approved by the first or user-designated “primary” mobile device 104 that is already paired with the client device 101/106.
  • the pairing record for the client device is used by the cloud system to keep track of the paired devices.
  • the pairing request is conveyed 1210 to the client device 101/106.
  • the client device receiving the pairing request verifies 1211 the pairing request. For example, in one embodiment, the client device 101/106 checks that it is in “pairing mode” and/or that it can otherwise pair with a mobile device 104.
  • the client device 101/106 checks the pairing information, including whether the Bluetooth ID for the mobile device 104 is found in the list of Bluetooth IDs scanned at step 1201, indicating that the mobile device 104 is located near the client device, within range of short-range wireless communication.
  • the client device 101/106 stores the pairing information and, once the pairing request is verified 1211, it notifies the cloud system 103 that pairing has succeeded 1212.
  • the cloud system 103 updates its pairing record for the client device 101/106 to include the newly paired mobile device 104 and notifies the mobile device 104 of the pairing success.
  • the mobile device 104 and client device 101/106 can now, for example, post messages to each other on their respective “topics” via cloud system 103 using a messaging protocol, such as MQTT, for example. All messaging between the two devices enabling the functionality described above can be exchanged via the cloud-based messaging approach established according to this embodiment.
  • the mobile device 104 can request a secured real-time feed of the video in the buffer memory of client device 101/106 using this approach and send and receive commands, for example, using RTC technologies as is known in the art.
  • the messaging between the two end-point devices can further be end-to-end encrypted.
  • the two devices can use the cloud messaging to do a Diffie-Hellman key exchange and use each other’s key to encrypt the payloads of the messages sent via the cloud messaging system.
  • the QR code may include a public key for the client device 101/106 that the mobile device can use to encrypt its own public key and include it in the pairing information with the pairing request. Having each other’s public keys, the two devices can then encrypt the payload of all the messages they exchange from that point on.
  • the QR code can include a “shared secret” as described above with reference to FIG. 8 which can be used as a symmetric encryption key or to generate a symmetric encryption key that is then used by both devices to communicate securely.
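A sketch of the Diffie-Hellman exchange described above, using X25519 and HKDF from the `cryptography` package; the `info` label is arbitrary. Each device would publish its public key to the other via the cloud messaging topics, after which both sides derive the same symmetric key.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey,
    X25519PublicKey,
)
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def derive_shared_key(own_private: X25519PrivateKey,
                      peer_public: X25519PublicKey) -> bytes:
    # Diffie-Hellman exchange followed by key derivation; both devices
    # compute the same key for end-to-end payload encryption.
    shared_secret = own_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"pairing-e2e").derive(shared_secret)


# Example: the two endpoints derive identical keys.
device = X25519PrivateKey.generate()
mobile = X25519PrivateKey.generate()
assert (derive_shared_key(device, mobile.public_key())
        == derive_shared_key(mobile, device.public_key()))
```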
  • the client device 101/106 may detect the proximity of the mobile devices using an alternative approach.
  • the client device 101/106 implements a Bluetooth beacon protocol (e.g., iBeacon, AltBeacon, URIBeacon, Eddystone, or the like) using Bluetooth Low Energy (“BLE”) to determine proximity of mobile devices 104.
  • Any mobile device 104 with a cloud-system-enabled app (as for example described above with reference to FIGS. 4a-4e), upon detecting the client device’s beacon, wirelessly sends a message to the client device announcing its presence and, for example, providing its mobile device GUID.
  • Mobile devices 104 that have undergone a pairing process as described above will recognize the beacon for the paired client device 101/106.
  • the message may be sent via Bluetooth, WiFi, cellular, or using other wireless communications.
  • each paired mobile device 104 detecting the beacon sends a presence message to client device 101/106 via cloud system 103, either using a cellular or WiFi transmission.
  • the cloud system 103 delivers the message to client device 101/106 such that the client device can determine which mobile devices 104 are in close proximity.
  • client device 101/106 continues to periodically broadcast its beacon (the period may vary depending on conditions, such as for example battery status, presence of recognized mobile devices, operational mode, or the like).
  • if a mobile device 104 stops receiving the beacon at the expected reception time, it determines that it is no longer in proximity of the client device 101/106 and sends another proximity message to client device 101/106 via cloud system 103 to inform the client device 101/106 that it is no longer in its proximity.
  • Proximity messages may be time stamped so that the client device can determine periods when mobile devices are in proximity or not, for example, should the proximity messages be delayed (e.g., no cellular coverage).
  • Examples of computer-readable storage mediums include a read only memory (ROM), a random-access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks.
  • Suitable processors include, by way of example, a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) circuits, any other type of integrated circuit (IC), and/or a state machine.
  • One or more processors in association with software in a computer-based system may be used to implement methods of video data collection, cloud-based data collection and analysis of event- based data, generating event-based video clips, sharing event-based video, verifying authenticity of event-based video data files, and setting up client devices according to various embodiments, as well as data models for capturing metadata associated with a given video data object or file or for capturing metadata associated with a given event-based video clip according to various embodiments, all of which improves the operation of the processor and its interactions with other components of a computer-based system.
  • the camera devices may be used in conjunction with modules, implemented in hardware and/or software, such as a camera, a video camera module, a videophone, a speakerphone, a vibration device, a speaker, a microphone, a television transceiver, a hands-free headset, a keyboard, a Bluetooth module, a frequency modulated (FM) radio unit, a liquid crystal display (LCD) display unit, an organic light-emitting diode (OLED) display unit, a digital music player, a media player, a video game player module, an Internet browser, and/or any wireless local area network (WLAN) module, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Mining & Analysis (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

An exemplary method and system is provided for wirelessly accessing video data files stored in the buffer of a client device for playback and scrubbing. When the client device detects a known or trusted mobile device in proximity, it enters a hotspot mode. The hotspot network parameters are provided to the mobile device. In one embodiment, the network parameters include a network ID and password. Using the network parameters, the mobile device joins the hotspot provided by the client device. Once wirelessly networked, the mobile device can request video data from the client device. The client device generates hierarchical playlists and provides them to the mobile device. The mobile device uses the playlists to access video files stored in the buffer of the client device continuously and without interruption. The video files or objects can be listed in a playlist as advertisements to allow continuous playback.

Description

Scrub and Playback of Video Buffer over Wireless
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation-in-part of and claims priority to PCT Patent Application No. PCT/US2017/50991, entitled "Video-Based Data Collection, Image Capture and Analysis Configuration," filed September 11, 2017, which claims the benefit of U.S. Provisional Application No. 62/412,764, filed October 25, 2016, the contents of which applications are hereby incorporated by reference in their entirety. This application also claims priority to U.S. Provisional Application No. 62/614,170, filed on January 5, 2018, the contents of which are hereby incorporated by reference in its entirety.
BACKGROUND
This disclosure generally relates to video-based data collection systems, and more specifically to securely pairing video capturing devices with mobile devices.
With the wide adoption of smartphones and our ubiquitous connectivity to the Internet and social networks, software apps and cameras have become commonplace in our daily lives for personal applications. We take pictures and videos with our smartphones of all sorts of events, items, and situations, and easily upload them to cloud services and share them with friends, family, and other people who subscribe to or follow our shared content.
Many products and services also exist in the smart home or automated home market segment. Security cameras that record either constantly or with event-based triggers, like motion sensors, are widely used around the home or business place; they store the recorded video locally on video servers or upload the video to cloud services, either via wired connections through a home router or using Wi-Fi to connect to a home network. The recorded video is typically available to the user for a period of time and accessible in real time from software apps in smartphones or via websites. Multi-camera systems store video feeds from various cameras around the home and make the various feeds available to the user through a common user interface. Some services provide the ability to share these videos with other users, not only via social networks, but also based on other factors. For example, Bot Home Automation, Inc. of Santa Monica, California, provides camera-equipped doorbell systems called Ring. Customers get access to the video from the Ring cameras via a website, ring.com. One feature of the Ring system is called “Ring Neighborhoods” (described at https://ring.com/neighborhoods). A user can set a radius around the user’s home equipped with Ring cameras and automatically get notified when other users within that radius share videos on the Ring platform. Users can share any video they think may be interesting to other users in the neighborhood. However, these systems do not provide a simple and secure approach to pair the cameras with the mobile device running an app to access the cameras’ recordings.
Another area where cameras are being used is in vehicles. Safety cameras for backing up or side view cameras are becoming commonplace. For commercial vehicles, like taxis or other vehicle fleets, security camera systems record video from both inside and outside the vehicle for safety and management purposes. For example, Safety Track of Belleville, Michigan, provides a 2-channel dash camera system equipped with a 3G/4G cellular dongle that connects to the camera system via USB for streaming video from the vehicle in real time (described at www.safetytrack.net/dual-lens-in-vehicle-fleet-camera-system/). However, these in-vehicle systems are not simple to install for an average consumer and do not provide simple and efficient ways to store, retrieve, and playback the captured video wirelessly on associated mobile devices.
What is needed is a video collection and sharing platform that addresses the deficiencies of the prior art.
BRIEF SUMMARY
According to various embodiments of the present invention, a video data collection and sharing platform is provided.
In one embodiment, a method for wirelessly accessing video objects stored in a buffer of a video capturing device is provided. The method includes detecting an authorized mobile device in physical proximity to the video capturing device and activating a wireless hotspot mode in the video capturing device. The wireless hotspot mode includes network parameters to identify the access point of a wireless network and to provide secure access to the wireless network. The access parameters are provided to the authorized mobile device, for example during the authorization or pairing process. Alternatively, the access parameters may be provided at a different time. The access point receives a request to join the wireless network from the authorized mobile device. The request may be based, at least in part, on the network parameters. A plurality of playlists are generated, each playlist listing a plurality of video objects from the video objects stored in the buffer of the video capturing device for enabling access to the plurality of video objects, and each playlist configured to enable playback of each video object listed in sequence and without significant interruption. The playlists may be provided to the authorized mobile device via the wireless network, for example, upon request.
According to another embodiment, a method for wirelessly accessing video objects stored in a buffer of a video capturing device includes receiving access parameters for accessing a video capturing device in a wireless hotspot mode. In this embodiment, the access parameters may include for example network parameters to identify an access point of a wireless network and to securely access the wireless network. The method includes sending a request to join the wireless network to the access point in the video capturing device based, at least in part, on the network parameters. Several playlists are received from the video capturing device via the wireless network. Each playlist may list a set of video objects from the video objects stored in the buffer of the video capturing device to enable access to the video objects. In addition, each playlist may be configured to enable playback of each video object listed in sequence and without significant interruption. The method further includes requesting access to a video object stored in the buffer of the video capturing device based on user scrubbing input and a playlist of the plurality of playlists.
In various embodiments, a master playlist is also generated and may be sent and received. The master playlist lists the other playlists, each of which lists a subset of video objects identified in a list as an advertisement according to a streaming protocol. Each of the playlists may be organized according to the HTTP Live Streaming protocol and each video object may be identified using a #tag. In addition, in various embodiments, each video object identified in the list may consist of between 1 and 10 seconds of recorded video. Further, the list in each of the playlists may identify video objects corresponding to between 10 and 20 minutes of video.
According to another aspect of various embodiments, the master playlist may be sent to the authorized mobile device via the wireless network. In some embodiments, the playlists are sent upon receiving requests from the authorized mobile device for two or more of the playlists listed in the master playlist. In other embodiments, receiving the plurality of playlists may include sending a plurality of requests to the video capturing device. These requests may be based on user scrubbing input on a video playback control on the mobile device. For example, in one embodiment the scrubbing input may be based on a timeline-based user interface related to the time when the video objects were captured by the video capturing device. In another embodiment, the scrubbing input is based on a map-based user interface related to the location where the video objects were captured by the video capturing device.
According to one aspect of some embodiments, a method may also include displaying a map-based user interface on a display of the mobile device. The map-based user interface may provide a map with indications representative of locations where the video objects stored in the buffer of the video capturing device were captured.
According to another aspect of various embodiments, video objects that are sent in response to a request may include video data, time data, and location data. According to another aspect of some embodiments, a method may also include displaying the video data on the screen of the mobile device along with a map comprising an indicator corresponding to the location data. In one embodiment, a first video object is provided in response to a request from the authorized mobile device, and a second video object listed in sequence after the first video object in a playlist is also automatically sent after the first video object in response to the request.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
FIG. 1 illustrates an exemplary video-based data capture and analysis system according to one embodiment of the disclosure.
FIG. 2 is a functional block diagram of a client device according to one embodiment of the disclosure.
FIG. 3 is a block diagram of a dash camera client device according to one embodiment.
FIG. 4a shows a graphical user interface (GUI) for a “clips pane” in a mobile app in a mobile device according to one embodiment.
FIG. 4b shows a graphical user interface (GUI) for a “camera pane” in a mobile app in a mobile device according to one embodiment.
FIG. 4c shows a graphical user interface (GUI) for a “news pane” in a mobile app in a mobile device according to one embodiment.
FIG. 4d shows a graphical user interface (GUI) for a “camera pane” in a mobile app in a mobile device according to another embodiment.
FIG. 4e shows a graphical user interface (GUI) for a “camera pane” in a mobile app in a mobile device according to another embodiment.
FIG. 5 is a flow chart illustrating a method of video data collection according to one embodiment.
FIG. 6a is a flow chart illustrating a method for cloud-based data collection and analysis of event-based data according to one embodiment.
FIG. 6b illustrates a data model for capturing metadata associated with a given video data object or file according to one embodiment.
FIG. 6c illustrates a data model for capturing metadata associated with a given event-based video clip according to one embodiment.
FIG. 7 is a flow chart illustrating a method for generating event-based video clips according to one embodiment.
FIG. 8 is a flow chart illustrating a method for setting up a client device according to one embodiment.
FIG. 9 is a flow chart illustrating a method for obtaining a mobile app on a mobile device according to one embodiment.
FIG. 10 is a flow chart illustrating a method for setting up and pairing a client device and a mobile device according to another embodiment.
FIG. 11 is a flow chart illustrating a method for scrubbing or playing back high-quality video data stored in a buffer over wireless communication according to one embodiment.
The figures depict various example embodiments of the present disclosure for purposes of illustration only. One of ordinary skill in the art will readily recognize from the following discussion that other example embodiments based on alternative structures and methods may be implemented without departing from the principles of this disclosure, and are encompassed within the scope of this disclosure.
DETAILED DESCRIPTION
The Figures and the following description describe certain embodiments by way of illustration only. One of ordinary skill in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein. Reference will now be made in detail to several embodiments, examples of which are illustrated in the accompanying figures.
The above and other needs are met by the disclosed methods, a non-transitory computer-readable storage medium storing executable code, and systems for streaming and playing back immersive video content.
Referring now to FIG. 1, an exemplary vehicular video-based data capture and analysis system 100 according to one embodiment of the disclosure is provided. Client device 101 is a dedicated data capture and recording system suitable for installation in a vehicle. In one embodiment, client device 101 is a video-based dash camera system designed for installation on the dashboard or windshield of a car. Client device 101 is connected to cloud-based system 103. In one embodiment, cloud-based system 103 includes a server system 102 and network connections, such as, for example, Internet connections. In one embodiment, cloud-based system 103 is a set of software services and programs operating in a public data center, such as an Amazon Web Services (AWS) data center, a Google Cloud Platform data center, or the like. Cloud-based system 103 is accessible via mobile device 104 and web-based system 105. In one embodiment, mobile device 104 includes a mobile device, such as an Apple iOS-based device, including iPhones, iPads, or iPods, or an Android-based device, like a Samsung Galaxy smartphone, a tablet, or the like. Any such mobile device includes an application program or app running on a processor. Web-based system 105 can be any computing device capable of running a Web browser, such as for example, a Windows™ PC or tablet, Mac Computer, or the like. Web-based system 105 may provide access to information or marketing materials of a system operator for new or potential users. In addition, Web-based system 105 may also optionally provide access to users via a software program or application similar to the mobile app further described below. In one embodiment, system 100 may also include one or more auxiliary camera modules 106. For example, one or more camera modules on a user’s home, vacation home, or place of business. Auxiliary camera module 106 may be implemented as a client device 101 and operate the same way. In one embodiment, auxiliary camera module 106 is a version of client device 101 with a subset of components and functionality. For example, in one embodiment, auxiliary camera module 106 is a single camera client device 101.
Client device 101 is connected to cloud-based system 103 via connection 107. In one embodiment, connection 107 is a cellular-based wireless packet data connection, such as a 3G, 4G, LTE, 5G, or similar connection. Connections 108a-108c between other system components and cloud-based system 103 are Internet-based connections, either wired or wireless. For example, in one embodiment, mobile device 104 may at different times connect to cloud-based system 103 via Wi-Fi (i.e., any IEEE 802.11-based connection or similar technology) and cellular data (e.g., using 4G, LTE, or the like). In one embodiment, Web-based system 105 is connected to cloud-based system 103 over the World Wide Web using a wired Internet connection, such as DSL, cable modem, or the like. Similarly, in one embodiment, auxiliary camera module 106 is connected to cloud-based system 103 via a Wi-Fi connection to a home router connected to the Internet via cable modem, DSL, or the like. Any combination of available connections can be used to connect any of the system components to cloud-based system 103 via the Internet or similar networks.
Referring now to FIG. 2, a functional system diagram for a client device 101 according to one embodiment is shown. Different embodiments may include a subset of the components shown in FIG. 2 and/or other components not shown. In alternative embodiments, the components shown in FIG. 2 (as well as additional components not shown, such as, for example, HDMI modules, battery charger and/or power supply modules, and the like) may be part of a System-on-Chip (SoC) device, multiple chips on a board, ASICs, or the like. The physical implementation of the components, whether in silicon-based integrated circuits or in software, is left as a design choice of the person of ordinary skill in the art without departing from the invention. The client device 101 includes a microprocessor 201 connected to a data bus 202 and to a memory device 203 and additional functional modules. In one embodiment, microprocessor 201 is a Qualcomm Snapdragon MSM8953, but other microprocessors may be used to implement the invention, such as, for example, other Qualcomm Snapdragon processors, ARM Cortex A8/9 processors, Nvidia Tegra processors, Texas Instruments OMAP processors, or the like. The microprocessor 201 executes operating system software, such as Linux, Android, iOS, or the like, firmware, drivers, and application software.
The client device 101 in this exemplary embodiment includes a location module 204, a wireless transceiver module 205, an audio I/O module 206, a video module 207, a touchscreen module 208, a sensor module 209, and an I/O module 216. In this embodiment, the different modules are implemented in hardware and software modules. In alternative embodiments, these modules can be hardware, software, or a combination of both. For example, alternative embodiments may be provided with one or more central processing unit ("CPU") cores on an SoC also including a wireless modem, multimedia processor, security and optionally other signal co-processors, such as, for example, one or more graphics processing unit ("GPU") cores, one or more holographic processing unit ("HPU") cores, and/or one or more vision processing unit ("VPU") cores. In one embodiment, one or more SoC processors used to embody the invention may encompass CPUs, GPUs, VPUs, HPUs, and other co-processors, motherboard buses, memory controllers, screen controllers, sound chipsets, camera modules, on-board memory, and several peripheral devices, including, for example, cellular, Wi-Fi, and Bluetooth transceivers, as further described below. Alternative embodiments include modules as discrete components on a circuit board interconnected by bus 202, or a combination of discrete components and one or more SoC modules with at least some of the functional modules built in.
In one embodiment, location module 204 may include one or more satellite receivers to receive and decode signals from location satellite systems, such as the Global Positioning System ("GPS"), Global Navigation Satellite System ("GLONASS"), and/or BeiDou satellite systems. In one embodiment, location module 204 is a Qualcomm QTR2965 or Qualcomm QGR7640 receiver that connects to a GPS antenna for receiving GPS satellite signals and providing geographical coordinates (latitude and longitude) of the location of the client device 101. The wireless transceiver module 205 includes a cellular modem, e.g., compliant with 3G/UMTS, 4G/LTE, 5G, or similar wireless cellular standards, a Wi-Fi transceiver, e.g., compliant with IEEE 802.11 standards or similar wireless local area networking standards, and a Bluetooth transceiver, e.g., compliant with IEEE 802.15 standards or similar short-range wireless communication standards. In one embodiment, the wireless transceiver module 205 is a Sierra Wireless HL-7588.
The audio I/O module 206 includes an audio codec chipset with one or more analog and/or digital audio input and output ports and one or more digital-to-analog converters and analog-to-digital converters, and may include one or more filters, sample rate converters, mixers, multiplexers, and the like. For example, in one embodiment, a Qualcomm WCD9326 chipset is used, but alternative audio codecs may be used. In one embodiment, video module 207 includes a DSP core for video image processing with video accelerator hardware for processing various video compression formats and standards, including, for example, MPEG-2, MPEG-4, H.264, H.265, and the like. In one embodiment, video module 207 is integrated into an SoC "multimedia processor" along with processor 201. For example, in one embodiment, client device 101 includes an integrated GPU inside the Qualcomm MSM8953, but alternative embodiments may include different implementations of video module 207.
In one embodiment, the touchscreen module 208 is a low-power touchscreen sensor integrated circuit with a capacitive touchscreen controller as is known in the art. Other embodiments may implement touchscreen module 208 with different components, such as single-touch sensors, multi-touch sensors, capacitive sensors, resistive sensors, and the like. In one embodiment, the touchscreen module 208 includes an LCD controller for controlling video output to the client device's LCD screen. The LCD controller may be integrated into the touchscreen module 208 or, in alternative embodiments, be provided as part of video module 207, as a separate module on its own, or distributed among various other modules.
In one embodiment, sensor module 209 includes controllers for multiple hardware and/or software-based sensors, including accelerometers, gyroscopes, magnetometers, light sensors, gravity sensors, geomagnetic field sensors, linear acceleration sensors, rotation vector sensors, significant motion sensors, step counter sensors, step detector sensors, and the like. For example, in one embodiment, sensor module 209 is an Invensense ICM-20608. Alternative implementations of sensor module 209 may be provided in different embodiments. For example, in one embodiment, sensor module 209 is an integrated motion sensor MEMS device that includes one or more multi-axis accelerometers and one or more multi-axis gyroscopes. Client device 101 may also include one or more I/O modules 210. In one embodiment, I/O module 210 includes a Universal Serial Bus (USB) controller, a Controller Area Network (CAN bus) controller, and/or a LIN (Local Interconnect Network) controller.
In one embodiment, client device 101 also includes a touchscreen 211. In alternative embodiments, other user input devices (not shown) may be used, such as a keyboard, mouse, stylus, or the like. Touchscreen 211 may be a capacitive touch array controlled by touchscreen module 208 to receive touch input from a user. Other touchscreen technology may be used in alternative embodiments of touchscreen 211, such as, for example, force sensing touch screens, resistive touchscreens, electric-field tomography touch sensors, radio-frequency (RF) touch sensors, or the like. In addition, user input may be received through one or more microphones 212. In one embodiment, microphone 212 is a digital microphone connected to audio module 206 to receive user spoken input, such as user instructions or commands. Microphone 212 may also be used for other functions, such as user communications, the audio component of video recordings, or the like. Client device may also include one or more audio output devices 213, such as speakers or speaker arrays. In alternative embodiments, audio output devices 213 may include other components, such as an automotive speaker system, headphones, stand-alone "smart" speakers, or the like.
Client device 101 can also include one or more cameras 214, one or more sensors 215, and a screen 216. In one embodiment, client device 101 includes two cameras 214a and 214b. Each camera 214 is a high-definition CMOS-based imaging sensor camera capable of recording video in one or more video modes, including, for example, high-definition formats, such as 1440p, 1080p, 720p, and/or ultra-high-definition formats, such as 2K (e.g., 2048x1080 or similar), 4K or 2160p, 2540p, 4000p, 8K or 4320p, or similar video modes. Cameras 214 record video using variable frame rates, such as, for example, frame rates between 1 and 300 frames per second. For example, in one embodiment, cameras 214a and 214b are Omnivision OV-4688 cameras.
Alternative cameras 214 may be provided in different embodiments, capable of recording video in any combination of these and other video modes. For example, other CMOS sensors or CCD image sensors may be used. Cameras 214 are controlled by video module 207 to record video input as further described below. A single client device 101 may include multiple cameras to cover different views and angles. For example, in a vehicle-based system, client device 101 may include a front camera, side cameras, back cameras, inside cameras, etc.
Client device 101 can include one or more sensors 215. For example, sensors 215 may include one or more hardware and/or software-based sensors, including accelerometers, gyroscopes, magnetometers, light sensors, gravity sensors, geomagnetic field sensors, linear acceleration sensors, rotation vector sensors, significant motion sensors, step counter sensors, step detector sensors, and the like. In one embodiment, client device 101 includes an accelerometer 215a, gyroscope 215b, and light sensor 215c. FIG. 3 provides an illustrative embodiment of a client device implemented as a dash camera system according to the invention.
Referring back to FIG. 1, another component of system 100 is a mobile device 104. Mobile device 104 may be an Apple iOS based device, such as an iPhone, iPad, or iPod, or an Android based device, such as, for example, a Samsung Galaxy smartphone, a tablet, a PDA, or the like.
In one embodiment, mobile device 104 is a smartphone with one or more cameras, microphone, speakers, wireless communication capabilities, and sensors. For example, mobile device 104 may be an Apple iPhone 5, 6, 7, 8, or X. The wireless communication capabilities of mobile device 104 preferably include wireless local area networking communications, such as 802.11 compatible communications or Wi-Fi, short-range low-power wireless communications, such as 802.15 compatible communications or Bluetooth, and cellular communications (e.g., 4G/LTE, 5G, or the like). In addition, mobile device 104 preferably includes an application program or app running on a processor. One of ordinary skill in the art is familiar with mobile operating systems and mobile apps. Mobile apps are typically made available and distributed through electronic means, such as, for example, via electronic "stores" such as the Apple App Store or the Google Play Store, or directly from apps providers via their own websites. It should be noted that a mobile device app is not required for operation of the system; for example, camera device 101/106 may include a voice-enabled interface, a chat-bot interface, or the like. However, several embodiments include the use of a mobile app.
A mobile app on mobile device 104 provides a user interface to a user account on cloud system 103 and to client device 101. In one embodiment, the mobile app includes functionality similar to auxiliary camera 106. For example, the mobile app uses one or more cameras on mobile device 104 to record video events in accordance with one embodiment of the disclosure. The video recording, buffer management, and other methods and techniques described herein may also be incorporated into the mobile app in one or more embodiments of the invention.
Now referring to FIG. 4a-4e, a user interface for an app in mobile device 104 according to one embodiment is described. In one embodiment, the mobile app includes one or more panes 401. For example, FIG. 4a shows a graphical user interface (GUI) for a clips pane 401a in a mobile app in mobile device 104 according to one embodiment. The mobile app can receive video clips from multiple sources and store them locally. For example, video clips can be received from cloud system 103. Client devices 101, auxiliary cameras 106, and mobile devices 104 of the user and other users can upload video clips to cloud system 103. Video clips can also be directly sent to mobile device 104, for example from a client device 101 or an auxiliary camera 106. Video clips can also be locally generated on mobile device 104. In an alternative embodiment, only metadata for a clip is provided to the mobile app while the video data for the clip is stored remotely. For example, video data objects (such as, for example, files, data records, data objects, or the like) may be stored on cloud servers 102 or in local memory of client devices 101, auxiliary cameras 106, or other mobile devices 104 and remotely accessible over the Internet. According to one embodiment, one or more types of video clips from one or more of these sources can be made available through the clips pane 401a of the mobile app as illustrated in FIG. 4a. Clips pane 401a includes a listing of video clips that can be accessed by the user via mobile device 104. In one embodiment, clips are added to the clips pane 401a along with an alert to the user on the mobile device 104. For example, every time a clip is generated by a client device 101, the client device causes a clip alert to be displayed on the user's mobile device 104 and the generated clip is listed on clips pane 401a, available for access by the user. For each available video clip, a descriptor 402a-n and a clip type icon 403a-n are provided. In one embodiment, clip type icon 403 provides a visual indicator of the source of the video clip. For example, clip type icons 403a-b indicate that those clips were automatically generated via the auto-tagging method (as further described below) and clip type icon 403c indicates that that clip was user-generated. In additional embodiments, these and other clip types may be used. For example, in one embodiment, a multi-clip type icon may be used to indicate availability of multiple clips related to the same event, such as, for example, multiple clips generated from different camera devices providing different viewpoints of the same event as further described below. Descriptors 402 provide text associated with the video clip, such as, for example, a user-generated description or an auto-tag descriptor as further described below. As one of ordinary skill in the art would understand, other icons 403 for different clip types and descriptors 402 may be used in a clips pane 401a in accordance with this disclosure. A user of the mobile app can cause the mobile device to play back a video clip listed in the clips pane 401a by clicking on or touching the video clip listing on the clips pane 401a. The mobile app causes a media player, either built in or provided through the operating system of the mobile device 104, to play the selected video clip.
According to one embodiment, live camera feeds from multiple sources can be displayed on the mobile device 104 through the camera pane 401b of the mobile app as illustrated in FIG. 4b. In one embodiment, the camera pane 401b includes a camera feed window 410, a camera control interface 411, and a camera selection interface 412. Alternative embodiments may include a subset of or additional elements in camera pane 401b. For example, camera selection interface 412 may not be included in a single-camera embodiment. Additional embodiments are described below with references to FIG. 4D and FIG. 4E. Camera feed window 410 displays the video feed from the currently selected camera. In one embodiment, the camera feed window 410 is enlarged to take the entire screen upon user selection, becoming, for example, the window of an associated media player app. For example, once the appropriate video playback is selected, the screen may be automatically enlarged, or, upon tilting of the mobile device 104 to a side, the camera feed window 410 (in "landscape" orientation) takes up the full mobile device display. Cameras may be selected using the camera selection interface 412. For example, camera selection interface 412 may display a selection option 412a-n for each of 1-n available cameras. In one embodiment, icons are used to depict each of the available cameras, such as a home camera (e.g., an auxiliary camera 106), a vehicle camera (e.g., from a client device 101), and a phone camera (e.g., the camera on the mobile device 104). Any number of additional cameras may be made available and the selection interface 412 modified to allow selection, such as via a drop-down menu, a pop-up "edit" menu, a picker menu, a rolling menu, or the like.
In one embodiment, real-time camera feeds are provided to the mobile app with the same approach used for providing video clips based on a playlist file or manifest file as further described below. For real-time feeds, the playlist files are dynamically updated to include each newly generated video data object or file captured by the relevant camera. For each new video file, the file location is provided in the updated playlist and the playlist file is updated via the cloud system 103 or directly from the source of the video feed. For example, in one embodiment, playlist files for streaming video are dynamically updated as described in the HTTP Live Streaming specification (as for example described in Internet Draft draft-pantos-http-live-streaming-23 submitted by Apple, Inc. to IETF on May 22, 2017), incorporated herein by reference in its entirety. Alternative streaming techniques may be used in other embodiments, including, for example, MPEG-DASH (ISO/IEC 23009-1), Adobe's HTTP Dynamic Streaming, Microsoft's Smooth Streaming, or the like.
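By way of illustration only, the dynamically updated playlist described above might be generated as in the following sketch; the sliding-window size, segment duration, and URL naming are assumptions for illustration, not details from the specification.

```python
# Minimal sketch of a dynamically updated live playlist, assuming
# 2-second video objects; WINDOW and the URL scheme are illustrative.
from collections import deque

WINDOW = 5            # number of segments advertised at a time (assumption)
SEGMENT_SECONDS = 2   # duration of each buffered video object

segments = deque(maxlen=WINDOW)  # sliding window of (sequence, url)

def on_new_video_object(seq: int, url: str) -> str:
    """Called each time the camera finishes a video object; returns
    the updated m3u8 playlist text to serve to the player."""
    segments.append((seq, url))
    first_seq = segments[0][0]
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{SEGMENT_SECONDS}",
        f"#EXT-X-MEDIA-SEQUENCE:{first_seq}",
    ]
    for _, seg_url in segments:
        lines.append(f"#EXTINF:{SEGMENT_SECONDS:.1f},")
        lines.append(seg_url)
    # No #EXT-X-ENDLIST tag: the feed is live, so the player keeps
    # re-fetching the playlist to discover newer segments.
    return "\n".join(lines) + "\n"
```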
In one embodiment, camera pane 401b includes camera control elements 411. For example, a recording or manual tagging control element 411a is provided for the user to instruct the currently selected camera to generate a clip for the currently displayed video (as further described below). For example, if a user is involved in a video-clip-generating event, e.g., a car accident, police stop, break-in, or the like, in addition to any video clips generated through client device 101, either manually or automatically, mobile device 104 can also be used to generate additional video clips for the given event from a different angle or perspective. Further, in one embodiment, any time the mobile app is running on the mobile device 104, one or more cameras on the mobile device 104 are recording video data, and manual tagging control element 411a is used to generate a manually tagged video clip as further described below. Thus, mobile device 104 can be used as a client device 101 or auxiliary camera device 106 according to this embodiment.
In one embodiment, camera pane 401b may also include additional control elements 411, such as buttons, icons, or other selection elements or menus, to access non-live video stored in the buffer of the currently selected camera. For example, a user may remotely access an entire set of video data objects or files stored in the buffer of the user's client device 101 (e.g., video files for the preceding 24 hours) through user control elements 411. In one embodiment, based on the user input selecting a point in time from which to begin streaming buffered video, the source camera device (e.g., client 101, auxiliary camera 106, or other camera device) generates a dynamic playlist or manifest file including the video files for the next preset time period, for example, one minute, and the playlist is progressively and dynamically updated in increments of the same amount of time (e.g., every minute) with the next set of video files. FIG. 11, further described below, provides an illustrative method according to one embodiment for accessing the entire set of video data objects or files stored in a client device buffer using this playlist approach. The playlist or manifest files are generated as further described below with reference to video clip generation methods.
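For illustration, resolving the user-selected starting point into the first one-minute window of buffered video objects might look like the following sketch; the index structure and helper name are assumptions.

```python
# Hypothetical helper: given a user-selected start timestamp, pick
# the buffered video objects covering the next one-minute window.
# The same call is repeated each minute with the advanced timestamp.
from bisect import bisect_right

def window_segments(index, start_ts, window_s=60):
    """index: list of (start_timestamp, url), sorted by timestamp."""
    times = [t for t, _ in index]
    # Start with the object whose interval contains start_ts.
    i = max(bisect_right(times, start_ts) - 1, 0)
    out = []
    while i < len(index) and index[i][0] < start_ts + window_s:
        out.append(index[i][1])
        i += 1
    return out  # these URLs feed an m3u8 builder like the one above
```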
Now referring to FIG. 4D, a graphical user interface (GUI) for an alternative "camera pane" 1401b in a mobile app in mobile device 104 according to another embodiment is provided. In this embodiment, the camera selection interface 1412 allows the user to select a source device, such as a camera in a client device 101/106, and the video for the selected camera stored in the buffer of the selected device is displayed in the camera feed window 1410. However, the camera control interface 1411 includes a time slider interface 1411b. The control interface 1411 displays an indicator of the available buffered video 1411a, such as, for example, a color bar, a video timeline bar with snapshots of the video at periodic intervals, or the like. The time slider interface 1411b includes a time stamp 1411c representative of the time metadata in the buffered video. A user may move the slider along the video indicator bar 1411a to select the appropriate starting point for playback of the video. In one embodiment, the video indicator bar 1411a represents the entirety of the available buffered video for the selected camera device. In one embodiment, playback of the selected video takes place using manifest files as further described with reference to FIG. 11.
Now referring to FIG. 4e, another graphical user interface (GUI) for an alternative "camera pane" 1501b in a mobile app in mobile device 104 according to yet another embodiment is provided. In this embodiment, the camera selection interface 1512 also allows the user to select a source device, such as a camera in a client device 101/106, and the video for the selected camera stored in the buffer of the selected device is displayed in the camera feed window 1510. In this embodiment, the camera feed window 1510 shows a thumbnail, lower-resolution version of the video corresponding to the video stored in the buffer at the selected location or time. In this embodiment, instead of a time slider, a location slider 1511b is provided. The camera control interface 1511 provides a map representative of the area where the video stored in the buffer was captured. For example, the metadata captured along with the video data is used to define the geo-coordinates for the stored video data, and a path of travel 1511a is plotted on a map to indicate the locations represented in the stored video. The location slider 1511b can be moved by the user along the path of travel 1511a to select the video from the buffer that was captured at the selected location.
In addition, in one embodiment, the location slider 1511b also provides a time stamp corresponding to the time when the selected video was captured. This allows the user to uniquely select the appropriate video when the buffer stores more than one video data object for a given location. In one embodiment, once the playback start point in the buffer is selected by the user through the camera control interface 1511, the video is played in a larger camera feed window (not shown) such as those illustrated in FIG. 4b and FIG. 4d. For example, in one embodiment, the camera control interface 1511 map and the camera feed window 1510 are swapped upon the video playback start point selection. The video thumbnail is replaced with the full-resolution video in the larger window, while the map is reduced in size and the location slider 1511b is replaced with a location indicator that is updated with the location metadata 624/625 (as for example described with reference to FIG. 6b) to indicate the location of the camera device when the displayed video was captured. As those of ordinary skill will appreciate, the relative size and position of the camera feed window 1510 and synchronized map 1511 may be changed in different embodiments without departing from the teachings of the invention. Further, in alternative embodiments, other synchronized displays of video and corresponding metadata are provided, such as time stamp 622, speed 627, heading 628, altitude 626, or the like. The metadata may be displayed in a separate window or overlaid on the video, and the items of metadata displayed may be based on user selection or automatically determined, based for example on availability, screen size, intended application, or the like.
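A possible implementation of mapping a location-slider position to a buffered video object is sketched below; the record layout is an assumption, while the latitude/longitude fields mirror metadata items 624/625.

```python
# Illustrative lookup of the video object captured nearest to the
# slider position on the path of travel; the index layout is assumed.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def segment_at_location(index, lat, lon):
    """index: list of dicts with 'lat' (624), 'lon' (625), 'timestamp',
    and 'url'. The timestamp lets the UI disambiguate when the path
    passes the same point more than once."""
    return min(index, key=lambda r: haversine_m(lat, lon, r["lat"], r["lon"]))
```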
Now referring to FIG. 4c, in one embodiment, a mobile app on mobile device 104 may also include a news pane 401c. News pane 401c provides information from a cloud service provider to users. In one embodiment, news pane 401c may provide the user with links to video clips on cloud service 103 that are related to video clips generated by the user's device or devices. For example, links to videos from nearby camera devices generated around the same time as an event video clip of the user (e.g., a car crash, break-in, or the like) and available from other users may be provided to the user on the news pane 401c.
As noted above, the features described above with respect to the mobile app may also be provided via Web-based system 105 using conventional website programming techniques to implement the functionality described for the mobile app.
Referring back to FIG. 1, the operation of client device 101 is described in more detail.
Preferably, client device 101 includes two or more cameras 214. For example, in one embodiment, a first "IN" camera 214a is directed at the inside of a vehicle, i.e., the cabin, driver, and passengers, and a second "OUT" camera 214b is directed at the road in front of the vehicle. In alternative embodiments, additional cameras 214 may be used, for example facing the back and/or sides of the vehicle, multiple interior areas of the vehicle, one or more top cameras with a wide-angle lens providing a 360° view around the vehicle, or the like.
According to one embodiment, client device 101 is always turned on as long as it has sufficient power to operate. Cameras 214a and 214b are always turned on and recording video. The video recorded by the cameras 214 is buffered in the memory device 203. In one embodiment, memory device 203 is configured as a circular buffer. For example, in one embodiment, memory device 203 may be a 32 GB FLASH memory device. Client device 101 manages the buffer in memory device 203 to store video data for a predetermined and programmable amount of time. For example, in one embodiment, memory device 203 buffers video data from the two cameras 214a and 214b for the preceding 24 hours. In one embodiment, client device 101 includes software to manage the cameras 214 to control the amount of data, e.g., bytes, generated by the cameras 214 and buffered in memory 203. In one embodiment, cameras 214 record data at various selectable video modes and rates. For example, cameras 214a and 214b can be set by client device 101 to capture video at various resolutions, including, for example, 1440p, 1080p, 720p, 360p, 240p, and the like. In addition, the frame rate for the video collected by each camera 214 can be set by client device 101. For example, in one embodiment, each camera 214 can independently change its video capture rate from 0 to 30 frames per second.
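A minimal sketch of the time-based circular buffering described above follows; the 24-hour retention figure comes from this embodiment, while the data structure and eviction details are assumptions.

```python
# Sketch of time-based circular buffering of video objects.
import time
from collections import deque

RETENTION_S = 24 * 3600   # retain the preceding 24 hours

buffer = deque()          # (capture_ts, path), oldest first

def store(capture_ts: float, path: str) -> None:
    buffer.append((capture_ts, path))
    # Evict objects that have aged out of the retention window; on a
    # real device the underlying file would also be deleted or reused.
    cutoff = time.time() - RETENTION_S
    while buffer and buffer[0][0] < cutoff:
        buffer.popleft()
```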
Now referring to FIG. 5, a method for collecting video for managing video buffering according to one embodiment is described. In one embodiment, various inputs are used to change the resolution and frame rate for each available camera. Upon powering up, cameras are set to default recording settings 501. Multiple inputs are received 502 from various sources. For example, in one embodiment, processor 201 receives location and/or motion data from location module 204, acceleration data from accelerometer sensor 215a, vehicle status data, such as, for example, the revolutions per minute ("RPM") of a vehicle's engine, vehicle battery charge level, and the like, from I/O module 210 connected to a CAN bus, time from wireless module 205 (e.g., LTE network time), image processing inputs from video module 207 (e.g., face recognition, human body recognition, etc.), and the like. The inputs are used to determine the relevant features affecting the operation mode of the vehicle, such as, for example, motion or lack of motion, presence of a user, presence of a person but not the user, or the like.
Based on the inputs received, an operational mode is determined 503. For example, the possible operational modes of a vehicle incorporating client device 101 according to one embodiment may include: default, driving, recently parked, parked, armed, low battery, and very low battery. Different embodiments can provide a subset or additional modes of operation, which may also vary depending on the vehicle or other location where the client device 101 (or auxiliary camera) may be located. A status change is determined at step 504. For example, after powering up, input data is received and the operational mode is no longer in "Default" mode. Based on the determined operational mode, the camera settings (e.g., resolution and frame rate) are changed 505 to produce more or less data for the video being recorded. Once the camera settings have been changed, recording of the video is done 506 using the new camera settings. This results in video data objects, records, or files of varying size to manage the buffer, storing higher-quality data with more bits during operational modes with a higher likelihood of capturing video of events of interest, while using lower-quality data with fewer bits during operational modes with a lower likelihood of capturing video of interest. In an alternative embodiment, as illustrated in FIG. 5, additional actions may be associated with the various operational modes. In this embodiment, the method checks 507 whether the operational mode requires additional actions. If so, the actions are performed at step 508. As one of ordinary skill in the art will understand, different actions may be associated with different modes to provide additional functionality to the system within the scope of the invention. If one of the actions does not turn off the system, recording continues at step 506 as described above.
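The mode-dependent settings selection might be sketched as follows; the mode names follow the examples above, but every resolution and frame-rate value, as well as the camera interface, is an assumption for illustration.

```python
# Hypothetical mapping from operational mode to camera settings;
# all values are illustrative, and camera.configure()/camera.stop()
# are assumed device APIs, not part of the specification.
CAMERA_SETTINGS = {
    "driving":          ("1440p", 30),  # higher likelihood of events of interest
    "recently_parked":  ("1080p", 15),
    "armed":            ("1080p", 15),
    "parked":           ("720p", 5),
    "low_battery":      ("360p", 2),
    "very_low_battery": (None, 0),      # suspend recording to save power
    "default":          ("720p", 15),
}

def apply_mode(camera, mode: str) -> None:
    resolution, fps = CAMERA_SETTINGS.get(mode, CAMERA_SETTINGS["default"])
    if fps == 0:
        camera.stop()
    else:
        camera.configure(resolution=resolution, frame_rate=fps)
```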
Now referring to FIG. 6a, a method for capturing and storing video according to one embodiment is provided. As noted above, video cameras in the various devices are preferably always on and recording video. Once video is being recorded, the method begins 601 and continues until the device is turned off or, in the case of a mobile device 104, until the mobile app stops running. For each camera, the image sensor generates video data according to the camera settings for the current operational mode as described above with reference to FIG. 5. The video data is received 602 and the video for each preset time period is encoded 603 according to a video compression and encoding standard, such as, for example, MPEG-4, H.264, H.265, or any other video compression and encoding standard. The time period for each block of video may be predetermined or variable (e.g., based on user settings) and may be, for example, 2, 4, 6, or 10 seconds. In one embodiment, every two seconds of video is encoded together into a video data object, record, or file. Other embodiments may use different time periods depending, for example, on the intended use of the video, the purpose for the system, the location where the system is deployed, the amount of memory available, the processing power available, or other relevant factors. Metadata for the same time period is also captured 604 as information associated with the captured video data. As part of the metadata capture 604, a globally unique ID ("GUID") is generated to uniquely identify the video data and metadata for the time period.
In one embodiment, the video data is encrypted 605. Any encryption algorithm may be used, such as, for example, encryption algorithms compliant with the Advanced Encryption Standard (AES), Blowfish, Twofish, Data Encryption Standard (DES) (e.g., Triple-DES), RSA, or the like.
Referring back to the method of FIG. 6a, the encrypted video data and associated metadata for the given time period are stored 606 in the buffer. The resulting video data object or file will be of varying size based on the camera settings (e.g., resolution, frame rate, etc.) applied, as well as any other factors, such as the applied compression format and encoding. The video data object is then hashed 607 using a one-way hash function, such as SHA, MD5, or a similar algorithm, to generate a unique hash for the captured video, i.e., the video data hash. Optionally, the hashing function may be applied to a file that includes both the video data and metadata. Alternatively, the metadata may be stored separately, but in association with the video data, and not included in the generation of the hash 607.
In one embodiment, a message is generated 608 including the metadata for each time period and the corresponding video data hash. Preferably, the message is then cryptographically signed 609 to guarantee that the message payload originates from an authorized device. For example, a private key associated with a system-authorized device may be used to generate a one-way hash of the message payload. In an alternative embodiment, the private key is used to encrypt the payload of the message. In one embodiment, each client device 101, auxiliary camera 106, and mobile device 104 is associated with a unique cryptographic key-pair. The device securely stores the private key. The cloud system 103 retains access to the public keys for each device so it can verify that messages it receives come from authorized devices. For example, cloud system 103 maintains a set of records uniquely associating a device ID for each authorized device in the system with a corresponding public key that is applied to messages received from the device.
For example, private-public-key cryptographic signature methodologies may be used to verify that each received message includes a signature or encrypted payload encrypted with a private key from an authorized device.
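Steps 607-609 might be realized as in the following sketch, with SHA-256 and Ed25519 standing in for the unspecified hash and signature algorithms; the message layout is illustrative.

```python
# Sketch of hash-and-sign (steps 607-609): hash the video object,
# then sign a message carrying the metadata and the video data hash.
# SHA-256/Ed25519 and the payload layout are assumptions.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()  # in practice stored securely on the device

def build_signed_message(video_bytes: bytes, metadata: dict) -> dict:
    video_hash = hashlib.sha256(video_bytes).hexdigest()
    payload = json.dumps(
        {"metadata": metadata, "video_hash": video_hash},
        sort_keys=True,
    ).encode()
    signature = device_key.sign(payload)  # verified server-side with the public key
    return {"payload": payload, "signature": signature.hex()}
```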
In yet another embodiment, at step 607, optionally, instead of hashing the video data object, the client device uses its private cryptographic key to cryptographically sign or otherwise encrypt the video data object itself, for example, if the actual video data object is to be sent or otherwise uploaded to another device, such as cloud system 103. This could optionally be done in conjunction with step 609 as described above.
Finally, the message is sent 610 to the cloud system. Preferably, the message is sent using a secured connection, such as for example, an SSL/HTTPS connection over TCP/IP or the like. The process then repeats for the video data and metadata captured in the subsequent time period. Preferably, the time required to perform the process of FIG. 6a is less than the selected time period. For example, a device capturing video data in two-second increments (the time period) sends the metadata and video hash message to the cloud system 103 every two seconds. If at some point the data connection to the cloud is interrupted or otherwise becomes unavailable, the system may locally cache the messages for transmission upon reconnection to the cloud system 103.
In an alternative embodiment, the message signing step 609 is omitted. Instead, a device establishes a secured connection with the cloud system 103, such as an SSL/HTTPS connection, and authenticates itself to the server 102. For example, a device provides its device ID and a cryptographically signed version of its device ID, signed with the device's private key. The server 102 retrieves the public key corresponding to the device ID provided and verifies the signed device ID for a match. Upon authorization, the server provides the device with a session token that uniquely identifies communications from that device for a given session. Thereafter, messages are sent 610 over the secured connection with the metadata and video hash, and also include the server-provided token.
Now referring to FIG. 6b, a data model for capturing metadata associated with a given video data object or file is provided according to one embodiment. In one embodiment, the video-object metadata 620 is periodically sent to cloud system 103 as device telemetry information. In one embodiment, the telemetry information 620 is sent after the recording of each video object, e.g., every 2 seconds, 6 seconds, 8 seconds, 10 seconds, or the like. The video-object metadata 620 may include one or more metadata items including, for example, a device ID 621, an atomic clock time stamp 622, a GPS timestamp 623, a latitude value 624, a longitude value 625, an altitude 626, a speed 627, a compass heading 628, a horizontal accuracy value 629, a vertical accuracy value 630, a software version 631, a location string value (e.g., a "geohash") 632, a connection type identifier (e.g., 2G, 3G, 4G, WiFi, etc.) 633, a wireless signal strength value 634, and/or a carrier identifier 635. One of ordinary skill in the art would understand that any combination of these metadata values may be used depending on the implementation and intended use of the metadata.
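For illustration, the telemetry record of FIG. 6b might be modeled as the following structure; the field types are assumptions inferred from the descriptions above.

```python
# Illustrative model of the video-object metadata 620 (fields 621-635);
# types are assumptions based on the field descriptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class VideoObjectMetadata:
    device_id: str                 # 621
    atomic_ts: float               # 622  atomic clock time stamp
    gps_ts: float                  # 623
    latitude: float                # 624
    longitude: float               # 625
    altitude: float                # 626
    speed: float                   # 627
    heading: float                 # 628  compass heading
    horizontal_accuracy: float     # 629
    vertical_accuracy: float       # 630
    software_version: str          # 631
    geohash: str                   # 632  location string value
    connection_type: str           # 633  e.g., "2G", "3G", "4G", "WiFi"
    signal_strength: int           # 634
    carrier: Optional[str] = None  # 635
```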
Now referring to FIG. 6c, a data model for capturing metadata associated with a given event-based video clip, such as an automatically generated video clip, a user-generated video clip, or the like, is provided according to one embodiment. In one embodiment, the event metadata 650 is generated and stored with each video clip. The event metadata 650 may include one or more metadata items including, for example, a device ID 651, an atomic clock time stamp 652, a location string value (e.g., geohash) 653, an event or tag type 654, an event or tag type 655, an event or tag title 656, an event or tag latitude value 657, an event or tag longitude value 658, an event or tag altitude 659, an event or tag speed 660, an event or tag compass heading 661, an event or tag horizontal accuracy value 662, an event or tag vertical accuracy value 663, the full file name for the event or tag clip file (e.g., manifest file) 664, a software version 665, a device type ID 664, and one or more Boolean variables to indicate whether the event or tag clip has been viewed 665a, shared 665b, deleted 665c, etc.
Now referring to FIG. 7, a method for generating event-based video clips according to one embodiment is described. Upon activation of the system, the method starts 700. The various inputs are monitored 701 while video is continuously captured. If no tagging event is detected 702, the system keeps monitoring. If a tagging event is detected 702, the relevant video data in the buffer is identified and selected 703. For example, once an event is detected 702, the video files for a predefined period of time before and after the event are identified in the buffer. In one example, 15 seconds before and after the event time is used. The amount of time, preferably between 10 and 30 seconds, may be pre-programmed or user-selectable. Further, two different time periods may be used, one for the time before the event and the other for the time after the event. In one embodiment, the time periods may be different depending on the event detected. For example, for some events the time periods may be 30 seconds before the event and 1 or 2 minutes after, while other events may use 15 seconds before and 15 seconds after.
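Step 703 might be sketched as follows; the asymmetric window values echo the examples above, while the index structure and event-type keys are assumptions.

```python
# Sketch of step 703: select the buffered video objects overlapping
# a pre/post window around the detected event. Window values follow
# the examples above; the helper and index layout are assumptions.
EVENT_WINDOWS = {                     # (seconds before, seconds after)
    "default": (15, 15),
    "baby_animal_in_parked_car": (30, 120),
}

def select_event_segments(index, event_ts, event_type="default"):
    """index: list of (start_ts, duration_s, path) for buffered objects."""
    before, after = EVENT_WINDOWS.get(event_type, EVENT_WINDOWS["default"])
    lo, hi = event_ts - before, event_ts + after
    # Keep every object whose time span overlaps [lo, hi].
    return [p for (t, d, p) in index if t + d > lo and t < hi]
```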
The selected video data is marked for buffering 704 for a longer period of time. For example, the video files for the selected time period are copied over to a second system buffer with a different buffering policy that retains the video for a longer period of time. In one embodiment, the selected video data, initially in a buffer storing video for 24 hours, is moved over to a second buffer storing video for 72 hours.
Referring back to FIG. 7, a video clip is then generated 705 with the selected video data. Like every video data object, every video clip generated is associated with a globally unique identifier (GUID). In one embodiment, video clips are generated using a playlist file or manifest file as is known in the art. Each playlist or manifest file includes a GUID. For example, in one embodiment, an m3u8 playlist file is generated according to the HTTP Live Streaming specification (as for example described in Internet Draft draft-pantos-http-live-streaming-23 submitted by Apple, Inc. to IETF on May 22, 2017). Alternative video clip generating techniques may be used in other embodiments, including, for example, MPEG-DASH (ISO/IEC 23009-1), Adobe's HTTP Dynamic Streaming, Microsoft's Smooth Streaming, or the like. The playlist or manifest file provides a network-based location for each of the video data objects selected 703. For example, a Universal Resource Locator (URL) may be provided for each of a set of video files. Using this approach, the video data can be stored in any network-accessible storage. For example, video files identified in a given playlist can be stored on a camera device (e.g., client device 101, auxiliary camera 106, or mobile device 104) and network address locators are provided for each file at that location. In alternative embodiments, other video clip generation approaches may be used. For example, in one embodiment, the selected 703 video data is used to generate a single video file, such as an MPEG video file, that may be uploaded and downloaded as needed.
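Generating the clip playlist of step 705 might look like the sketch below; how the GUID is embedded (here, an m3u8 comment line) and the URL scheme are assumptions.

```python
# Sketch of clip generation 705: an m3u8 playlist listing the network
# location of each selected video object. The GUID comment line is an
# assumption about how the identifier travels with the playlist.
import uuid

def build_clip_playlist(segment_urls, seg_s=2.0):
    guid = uuid.uuid4()
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{int(seg_s)}",
        "#EXT-X-MEDIA-SEQUENCE:0",
        f"# clip-guid: {guid}",  # plain comment; non-#EXT '#' lines are ignored by players
    ]
    for url in segment_urls:
        lines.append(f"#EXTINF:{seg_s:.1f},")
        lines.append(url)        # e.g., a URL into the camera buffer or cloud storage
    lines.append("#EXT-X-ENDLIST")  # the clip is a fixed, finished sequence
    return str(guid), "\n".join(lines) + "\n"
```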
In one embodiment, video data objects are stored on the network-accessible buffer of the camera device and the playlist or manifest files for the generated event-based video clips identify the network addresses for the memory buffer memory locations storing the video data objects or files. Alternatively, upon identifying and selecting 703 the relevant video data objects, in addition to or as an alternative to moving the video data to the longer buffer 704, the video data may be uploaded to the cloud system 103, or alternatively transferred to a paired mobile device 104 upon establishment of a direct data connection, such as a WiFi or cellular connection. The clip generation 705 then identifies in the playlist or manifest file the network addresses for the video data stored in the cloud system 103. A combination of these approaches may be used depending on storage capacity and network capabilities for the camera devices used in the system or according to other design choices of the various possible implementations.
In one embodiment, other system components, such as the cloud system 103 or mobile device 104, are notified 706 of the event or event-based video clip. For example, in one embodiment, a message including the GUID for the generated video clip is sent to the cloud system in a cryptographically signed message (as discussed above). Optionally, the playlist or manifest file may also be sent in the message. In one embodiment, the playlist or manifest files are maintained in the local memory of the camera device until requested. For example, upon notification 706 of the clip generation, the cloud system may request the clip playlist or manifest file. Optionally, the cloud system may notify 706 other system components and/or other users of the clip, and other system components or users may request the clip either from the cloud system 103 or directly from the camera device. For example, the clips pane 401a in the user's mobile app may display the clip information upon receiving the notification 706. Given that the clip metadata is not a large amount of data, e.g., a few kilobytes, the user app can be notified almost instantaneously after the tag event is generated. The larger amount of data associated with the video data for the clip can be transferred later, for example, via the cloud system or directly to the mobile device 104, for example, via a cellular or WiFi connection. For example, upon detection of a "Baby/Animal in Parked Car" event or a "Location Discontinuity" event, the user's mobile device 104 may be immediately notified of the tag event using only tag metadata. Subsequently, the user can use the video clip playlist to access the video data stored remotely, for example, for verification purposes. In one embodiment, video clips are automatically transferred to the user's authorized and authenticated mobile device 104 upon detection of the mobile device 104 in proximity of the client device 101, for example after establishing a WiFi connection. A lower-cost data communication approach is preferable for transferring video data, avoiding expensive cellular data charges.
Once a video clip is generated 705, it may be shared with other devices owned by the same user or, if authorized, the video clip may be shared with other users of the system. For example, the GUIDs for every video clip generated by a camera device of a given user may be stored in a user clip table in the cloud system 103. For example, GUIDs for the clips from all the cameras on a multi-camera client device 101, for the clips from any auxiliary camera device 106, and for the clips generated by the mobile app on the user's mobile device 104, may all be stored in the user clip table. The user may access the user clip table via mobile device 104. For example, the mobile app may maintain a user clip table that is synchronized with the user clip table in the cloud system. Every time a new clip notification is received, the mobile app and cloud-based user clip tables are updated and/or synchronized. Alternative synchronization approaches may be used, such as, for example, a periodic synchronization approach.
In addition to the GUID, in one embodiment, the user clip tables may also include other information or metadata for each clip of the user, such as, for example, a name or descriptor, the device ID where the video was captured, time and date information, tag or event information, thumbnail images, or the like. Further, the playlist or manifest file may also be stored or identified in the user clip table. In one embodiment, a user may access video clips through the mobile app on the mobile device 104 through the clips pane 401a. Upon selection of a clip through the clips pane 401a, the mobile app uses the clip GUID to request the corresponding playlist or manifest file from the cloud system 103, directly from a camera device (e.g., client device 101 or auxiliary camera 106), or, if previously transferred, from its local storage. Using the playlist or manifest file, the mobile app can play back the video clip by requesting the relevant video objects using their network address identifiers, which may point to remote storage (e.g., cloud 103 or client device 101/106) or local storage on the mobile device 104. In one embodiment, if the video data objects are encrypted, the user may provide an identification (e.g., biometric ID, face recognition, user ID and password, or the like) to access the decryption key.
Now referring to FIG. 11, a flow chart for a method to access the buffer of a client device 101/106 from a mobile device 104 according to one embodiment is provided. As described above with reference to FIG. 4b, the user may access the video data stored in the buffer of a client device 101/106 from the mobile device app, for example, from the cameras pane 401b. In one embodiment, the mobile device 104 and client device 101/106 are paired and able to communicate with each other. For example, the pairing process described below, with reference to FIG. 10, may be used to securely pair the client device 101/106 with a mobile device 104 running one embodiment of the mobile device app. Through the pairing process, the client device 101 is able to recognize when the mobile device 104 is nearby 1300, such as when it is within range of its short-range communication radio. For example, the client device 101 may receive the Bluetooth ID from the mobile device 104, which may be recognized as a "bonded" device based on its Bluetooth ID, or may otherwise be notified of its proximity via cloud system 103 upon detection of its BLE beacon as described below. According to one embodiment, as a measure of security, client devices will only communicate with recognized trusted devices that have gone through a pairing process. Once the client device 101/106 has detected a known or trusted mobile device 104 in proximity, it enters into a WiFi hotspot mode 1301. The mobile device 104 then gets the network parameters 1302 to connect to the WiFi hotspot. In one embodiment, the mobile device 104 requests the network parameters from the client device 101/106 wirelessly, for example, through the Bluetooth connection or cellular messaging via cloud system 103. Alternatively, the client device 101/106 may provide the network parameters to the mobile device 104 wirelessly, for example via Bluetooth, via cellular messaging through cloud system 103, or the like, once it detects the proximity of the mobile device 104, for example using Bluetooth, based on its BLE beacon and proximity messages from the mobile device 104, or the like. In another embodiment, the network parameters may be provided to a mobile device 104 paired with client device 101/106 (either the primary paired device, a subset of paired devices, or all paired devices) during the initial mobile device association process, as for example described with reference to FIG. 8. In one embodiment, the network parameters include the hotspot name ("SSID"), an access password or token (e.g., based on WEP or WPA/2/Enterprise security modes), and other information. Preferably, the communication of the network parameters to the mobile device is end-to-end encrypted, for example, using a shared key or asymmetric encryption as further described below. For added security, the client device's hotspot SSID does not need to be advertised or broadcast, since the network parameters, including the SSID, are directly provided to the mobile device 104.
The mobile device 104 uses the network parameters to join the client device's WiFi hotspot 1303. In one embodiment, the mobile device programmatically switches to the WiFi network with the provided SSID and password. For example, on an Apple iOS device, this switch would prompt the user to select "join" to authorize the network connection, but no additional user input would be required, as the password information can be programmatically provided. On an Android OS device, the programmatic switch can be accomplished without any user input. Once connected, the mobile device app can request access to the video data 1304 stored in the client device's buffer using any of the techniques described above. In one embodiment, the client device generates playlists or manifest files 1305 for each of the contiguous video objects stored in the buffer, e.g., 2-second, 4-second, 6-second, etc. video objects. The playlists or manifest files are provided to the mobile device 104 over the established WiFi connection 1306, identifying the network addresses of the video object files stored in the client device's buffer. While in the embodiment described above the connection between the mobile device 104 and client device 101/106 is a WiFi connection, in other embodiments the same functionality is available through other wireless connections, including cellular, Bluetooth™, and the like. The mobile app (or other video player app) allows the user to play back and scrub back and forth 1307 through the set of video object files stored in the buffer. For example, the "cameras" pane 401b in the mobile app may include a "live" window 410 and an additional display area for buffered video (e.g., the last 24 hours, a pre-set number of days, or the like).
In one embodiment, for example, HTTP Live Stream (“HLS”) playlists are used for the method of FIG. 11. Each playlist is a combination of the 2-6 second clips, back-to-back, without significant interruption, for example, without stopping the playback between files or otherwise interrupting the playback in a way that is noticeable to the user. To include multiple different files back-to-back in the same playlist, in one embodiment, the #tag, typically used to insert advertisements in a playlist, may be placed in front of the name/location of each video object file in the buffer, essentially designating every video object to be an advertisement. While HLS playlists typically provide different streams from a single file, using the #tag approach allows the streaming of the same video feed from separate but continuous files (of a few seconds each).
This approach allows for continuous, smooth playback of high-quality video, without significant interruption, and provides high-performance seeking through the entire buffer of video objects stored in the client device 101/106. Further, since the video objects are already stored in few-second increments (e.g., 2, 4, 6, etc.), the video objects in the client device buffer require no additional processing, avoiding any additional delay. In addition, in one embodiment, the HLS playlists are nested to allow access to longer time periods' worth of video. In this embodiment, the client device 101/106 generates hierarchical playlists of playlists, to avoid overly long playlists. For example, in one embodiment, a master playlist may list a plurality of other playlists, each in turn listing a plurality of video objects with corresponding network locations. When scrubbing from the mobile app through time (or location as described in FIG. 4e), based on the master playlist, the mobile app switches which HLS playlist is in use at any given time. Each playlist may include, for example, a few minutes to a few hours of video from the video object files in the buffer. In one embodiment, each playlist includes video objects or files for 10-20 minutes of video. This enables the user to see long periods of time worth of video, possibly multiple days, and scrub instantly across the entire time. The same approach may be used for accessing event-generated clips or other video accessed using playlists as described herein.
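The hierarchical playlist-of-playlists might be sketched as below. The use of #EXT-X-DISCONTINUITY is one plausible reading of the "#tag" described above, and the master playlist follows the nested scheme of this disclosure rather than a standard HLS variant playlist; chunk sizes and file names are assumptions.

```python
# Sketch of nested playlists: child playlists of roughly 10-20 minutes
# of back-to-back video objects, referenced by a master playlist the
# app switches between while scrubbing.

def build_child_playlist(segment_urls, seg_s=2.0):
    lines = ["#EXTM3U", "#EXT-X-VERSION:3",
             f"#EXT-X-TARGETDURATION:{int(seg_s)}"]
    for url in segment_urls:
        lines.append("#EXT-X-DISCONTINUITY")  # lets separate files play back-to-back
        lines.append(f"#EXTINF:{seg_s:.1f},")
        lines.append(url)
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines) + "\n"

def build_master_playlist(child_urls):
    # One entry per child playlist, e.g., buffer-000.m3u8, buffer-001.m3u8;
    # the mobile app picks the child covering the scrubbed-to time.
    return "\n".join(["#EXTM3U", *child_urls]) + "\n"
```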
According to another aspect of the disclosure, a process for setting up a camera device, such as a client device 101, is provided. Referring to FIG. 8, a method for setting up a camera device for operation in the system according to one embodiment is described. In one embodiment, camera devices, such as client device 101, include cellular connectivity that is operational as soon as the device is powered up. Cellular connectivity provides a data connection 107/108 between the camera device and the cloud system 103 that can be used during the set-up process. When the camera device 101/106 is powered up, the set-up process begins 1000. While the following set-up steps are provided in order, no particular order is required for these steps. For example, in one embodiment, a user set-up step 1001 is performed. In one embodiment, the user set-up step 1001 allows the camera device to recognize the user. For example, in one embodiment, a client device 101 provides instructions to a user to pose in different orientations while facing one of the cameras to record different angles of the user's face. Optionally, a similar process may be used to recognize other user biometrics, including, for example, fingerprints, voice, and irises. For example, a touch sensor may be used to record a series of images of a user's fingerprint. Voice recognition software may be trained by having the user repeat pre-defined commands, statements, or sentences one or more times. In one embodiment, a user's iris is recorded from multiple angles to derive a biometric optical signature. Other embodiments may include a combination of these biometric identifications and may further include others.
The user’s biometric signature or signatures are stored in the camera device. In one
embodiment, a cryptographic key is also generated based on a random input and stored in association with the biometric identification of the user. Optionally, if more than one user is required, for example for a vehicle with multiple possible drivers, the user set-up process 1001 is repeated for each user.
Referring back to FIG. 8, another set-up step involves the association of the camera device with one or more mobile devices 104. It should be noted that mobile device 104 may itself be a camera device, and thus some of the set-up steps, such as the user set-up step 1001, may be applicable. In one embodiment, mobile device 104 includes a mobile app installed on the device as described above with reference to FIG. 4a-4e. In one embodiment, mobile device 104 and the camera device (e.g., client device 101) include short range wireless modules, such as Bluetooth transceivers. As is known in the art, short range wireless modules may transmit a unique ID that can be received by other short range wireless modules as a form of identification of devices in forming a piconet or otherwise pairing with each other. For example, Bluetooth transceivers can provide a unique 12-digit hexadecimal address (“BD ADDR”) for identification and pairing.
In one embodiment, a user may prompt the camera device to pair with the user’s mobile device 104. For example, in one embodiment, the user may utter a voice pairing command, or provide a pairing command through a touchscreen or any other user input device available in the camera device. In one embodiment, the pairing process involves a Bluetooth pairing process. In another embodiment, the camera device displays a unique pattern that is captured by the mobile device and sent back to the camera device via the connection to the cloud system 103. For example, the camera device may display a randomly generated alphanumeric code, a QR code, a series of black and white screens in a random order, or some other random output. The random output is captured or entered into the mobile device by the mobile app and transmitted via a secured Internet connection to cloud system 103 along with a unique identifier of the mobile device, such as, for example, a Bluetooth address, a MAC address, or the like. The random output and the mobile device input are compared. If they match, the camera device authenticates the mobile device unique identifier (e.g., Bluetooth address or MAC address) and from that point on is associated with the mobile device. In an alternative embodiment, instead of comparing the output of the client device with the input captured by the mobile device, both devices generate an output that is compared at the server. For example, each device uses a camera to perform face recognition of the user during the set-up process, and their face recognition results are sent to the server for comparison to match the same user.
In one embodiment, a QR code is displayed on the display of the client device 101. The QR code encodes a device ID for the client device 101 and an encryption key (or seed for generation of an encryption key) for communicating with the client device 101. The mobile app on the mobile device 104 captures and interprets the QR code to obtain the device ID and encryption key. The device ID may for example include a telephone number, email address, or other means for electronic messaging with the client device 101. Using the encryption key, the mobile device 104 can send encrypted communications to the client device 101 as further described below to associate the mobile device with the client device, including for example, sending to the client device 101 a unique identifier for the mobile device 104, for example, telephone number, email address, Bluetooth address, MAC address, or the like. While described with the client device 101 being the device that displays the QR code, the same approach may be used with the mobile device 104 displaying the QR code and the client device 101 initiating the encrypted messaging using the encryption key provided by the mobile device 104.
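For illustration only, the following sketch shows one possible payload such a QR code could carry and how the mobile app might recover it after scanning. The JSON field names are assumptions; rendering and scanning the QR image itself (e.g., with libraries such as qrcode or pyzbar) is omitted.

```python
# Hypothetical QR payload carrying the device ID and encryption key described
# above. The field names and JSON encoding are illustrative assumptions.
import base64
import json

def make_qr_payload(device_id, key_bytes):
    """Client-device side: serialize the device ID and key for QR display."""
    return json.dumps({
        "device_id": device_id,  # e.g., a telephone number or email address
        "key": base64.b64encode(key_bytes).decode("ascii"),
    })

def parse_qr_payload(payload):
    """Mobile-app side: recover the device ID and key after scanning."""
    data = json.loads(payload)
    return data["device_id"], base64.b64decode(data["key"])
```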
Other “shared secret” approaches may be used for mobile device association 1002, including, for example, a series of instructions to cause the user to move the mobile device while the mobile app records the outputs of one or more mobile device sensors to be matched with the provided instructions. For example, the user may raise or lower the device, shake the device, etc., in a random series causing accelerometer and/or gyroscope changes that match the requested motions. The series of sensor-detected motions can be provided via Internet connection for matching with the camera device instructions for association. Alternatively, in one embodiment, a user may provide a telephone number for the mobile device during a registration process, for example through the mobile device app. For the mobile device association step 1002, the camera device may display a device ID on its screen. The user inputs the device ID on the mobile app and it is transmitted to the cloud system 103. The cloud system identifies the device ID and sends a message to the camera device 101/106 via Internet connection 107/108 including the telephone number for mobile device 104. The camera device sends a text message to mobile device 104 with a random code. The user inputs the random code via the mobile app for verification by cloud system 103 or camera device 101/106. If the random code matches the texted code, the mobile device is authenticated. Once the camera device and the mobile device are associated 1002, the camera device can trust the mobile device for subsequent interactions, based on a unique ID for the mobile device (e.g., Bluetooth address, MAC address, or the like). According to another aspect of the disclosure, in one embodiment, the set-up process optionally includes the step of provisioning the mobile device 104 with a mobile app. FIG. 9 provides an exemplary flow diagram for an initial set-up process according to one embodiment. As described above, camera device 101/106 includes a wireless cellular connection to the Internet and is configured to communicate with cloud system 103 out of the box. When the camera device is first turned on, the screen displays a QR code 1101. A mobile device can use one of its existing apps to capture the QR code with its camera and interpret the code 1102. In this embodiment, the QR code provides a link or URL to a web server, for example in cloud system 103. The link or URL may include an IP address or a domain (e.g., www.owlcam.com) and a set of parameters encoded therein as is known in the art. One of the parameters may include, for example, a unique ID for the camera device 101/106 being set up, such as, for example, a mobile device number, a telephone number, a serial number, or the like. Optionally, the link parameters may also include a randomly generated number that is different each time the set-up process is run. Alternatively, instead of displaying a QR code, the same process may be performed providing the link and parameters in alternative forms, including, for example, displaying them on the screen as text/image, encoding them in an audio signal, transmitting them via short range communication (IR, AirDrop, Bluetooth, etc.), or the like.
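As an illustrative sketch of the set-up link just described, the following builds a URL carrying a camera device ID and a per-run random nonce as encoded parameters, and shows the server-side parse. Only the example domain appears in the description above; the /setup path and parameter names are assumptions.

```python
# Hedged sketch of the set-up link with encoded parameters. Path and
# parameter names ("device", "nonce") are hypothetical.
import secrets
from urllib.parse import urlencode, urlparse, parse_qs

def make_setup_link(device_id, base="https://www.owlcam.com/setup"):
    """Camera side: build the link encoded into the QR code."""
    params = {"device": device_id, "nonce": secrets.token_urlsafe(16)}
    return f"{base}?{urlencode(params)}"

def read_setup_link(url):
    """Server side: recover the parameters from the incoming HTTP request URL."""
    qs = parse_qs(urlparse(url).query)
    return qs["device"][0], qs["nonce"][0]
```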
Upon interpreting the QR code, the mobile device uses its existing software (e.g., a web browser) to send 1103 an HTTP request to the web server identified through the link or URL, including the parameters encoded into the link. The cloud system 103 receives the request and creates 1104 a record for the request, including the link-encoded parameters and additional metadata and network information derived from the HTTP requesting process, including information for uniquely identifying the mobile device 104 (e.g., a combination of HTTP header metadata, TCP/IP header information, or the like). In addition, cloud system 103 redirects 1105 the mobile device to a location from where the appropriate mobile app may be obtained. For example, cloud system 103, using, for example, the “User-Agent” data from the HTTP request and/or the unique device ID for the camera device 101/106, redirects the mobile device 104 either to the Apple App Store when the User-Agent indicates the mobile device is an iOS device, to the Google Play Store if the mobile device is determined to be an Android-based device, or alternatively to other servers capable of providing the mobile app to the mobile device over a network. Similarly, the cloud system 103 may include parameters in the redirection link identifying the appropriate version of the mobile app, determined using the device ID of the camera device 101/106.
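A minimal sketch of this redirection logic follows, assuming simple User-Agent substring matching. The store addresses are the public app-store URLs; the fallback download location is hypothetical.

```python
# Illustrative User-Agent based routing for the redirect step 1105. Real
# deployments would return this URL in an HTTP 302 response.
def app_store_redirect(user_agent: str) -> str:
    ua = user_agent.lower()
    if "iphone" in ua or "ipad" in ua:
        return "https://apps.apple.com/"         # Apple App Store
    if "android" in ua:
        return "https://play.google.com/store"   # Google Play Store
    return "https://www.owlcam.com/app"          # assumed fallback download page
```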
Once redirected, the mobile device 104 obtains 1106 the proper mobile app, e.g., the app for interaction with camera device 101/106 and cloud system 103. After the mobile app is downloaded and installed on the mobile device, when executed, the mobile app contacts the cloud system 103 to access 1107 the record previously generated at step 1104. For example, the mobile app may derive a unique ID for the mobile device 104 using the same parameters, metadata, or other information available from the mobile device 104 when making an HTTP request like the one made at step 1103. In one embodiment, a time limit (e.g., 2-15 minutes) may be imposed between the HTTP request step 1103 and the record access step 1107 to facilitate identification of the mobile device 104. Cloud system 103 determines that the same mobile device 104 is accessing the system based on that information and provides 1108 access to the previously generated record and any other additional set-up parameters that may be necessary to complete the set-up process. For example, if provided, the randomly generated number may be provided as a “shared secret” for the device association process described above. Alternatively, encryption information and/or messaging information for the camera device may be provided. Referring back to FIG. 8, another aspect of the disclosure involves setting up a direct connection between a camera device 101/106 and a mobile device 104. In one embodiment, camera device 101/106 includes wireless local area network connectivity. In this embodiment, for example, a client device 101 may optionally operate as an access point (AP) for a local area network, such as a Wi-Fi network. The mobile device 104 can establish a connection 109 to the client device 101 as a Wi-Fi station (STA). While a specific wireless local area network connection is described, it is understood that the present invention can be applied to a wide variety of wireless connection modes, such as, for example, Peer-to-Peer connections (e.g., “Wi-Fi Direct,” ad hoc network, or the like). The camera device can use the MAC address authenticated through a mobile device association process 1002 to determine whether the associated mobile device is the one making the connection. The direct camera device to mobile device connection 109 may then be used to transfer settings, video data objects, video clips, biometric signatures, and the like, in a secured way between the devices. For example, in one embodiment, once the direct camera device to mobile device connection 109 is established, a data synchronization process can automatically transfer video clips and other data, such as settings, user preferences, etc., from the camera device to the mobile device and vice versa. FIG. 10 provides an exemplary flow diagram for an alternative initial set-up process according to one embodiment. According to this embodiment, the mobile app is already on the mobile device 104, whether obtained as shown in FIG. 9, downloaded directly from a mobile application repository (e.g.,
App Store or Google Play Store), downloaded from another website or Internet-accessible location, or otherwise. Once installed on the mobile device 104, the mobile app is used to securely pair to a client device 101/106 as shown in FIG. 10. When a client device 101/106 is activated 1200 for the first time, it is not paired to any mobile device 104. To pair a mobile device 104 according to this embodiment, the mobile device 104 broadcasts an identifying signal, such as its Bluetooth ID, WiFi identifier, or the like. The client device 101/106 detects the presence 1201 of mobile device 104, for example, by scanning Bluetooth signals and recording the Bluetooth IDs of all the devices transmitting within range of the client device 101/106. Other means of wirelessly detecting mobile devices in proximity may be used. The client device 101/106 generates a GUID 1202 and establishes communication with the cloud system 103, for example, using its integrated cellular connection. A GUID may, for example, be generated from a mobile device number (e.g., International Mobile Subscriber Identity (“IMSI”), Integrated Circuit Card ID (“ICCID”), International Mobile Equipment Identity (“IMEI”), or similar), a telephone number, a serial number, or the like. The client device 101/106, using its GUID, creates a communication record or “topic” 1203 on the cloud system 103 that can be used to post messages intended for pairing with the client device 101/106. The “topic” may be uniquely identified by the GUID of the client device 101/106 or by an additional or alternative topic identifier or “subject.” According to one embodiment, other system elements can post messages, including any form of data, by identifying the proper topic with the associated GUID of the recipient device. In one embodiment, “topics” are implemented as secured electronic mail messaging accounts (e.g., to:GUID@cloudsystemtopics.com). Alternatively, topics may be treated as “conversations” or “threads” of related messages (e.g., with the same “subject”) for a given GUID-identified device (e.g., to:GUID@cloudsystemtopics.com; subject:topicID). In another embodiment, the Message Queuing Telemetry Transport (“MQTT”) protocol is used with the cloud system 103 providing a message broker function. However, any other suitable messaging approach may be used within the scope of the invention, such as the Advanced Message Queuing Protocol (“AMQP”), Streaming Text Oriented Messaging Protocol
(“STOMP”), the IETF Constrained Application Protocol (“CoAP”), the Extensible Messaging and Presence Protocol (“XMPP”), OPC-UA from the OPC Foundation, or the Web Application Messaging Protocol (“WAMP”), to name some examples. Their respective specifications are hereby incorporated by reference. Similar to step 1101 in FIG. 9, after creating the “topic” 1203, the client device 101/106 may display on its screen a code 1204. The mobile device 104 can use the app associated with the cloud system 103 to obtain the code 1205, for example, by capturing a QR code with its camera, and interpret the code using the cloud-system-specific app software. In this embodiment, the QR code provides information for the mobile device 104 to message the client device 101/106 via the cloud system 103, for example, the client device’s GUID, and may also include an identifier for the pairing “topic” generated at step 1203. This information is encoded into the QR code as is known in the art. Optionally, the code may also include a randomly generated number that is different each time the set-up process is run. As further described below, according to another embodiment, the code may also include encryption information to allow encrypted messaging between the mobile device 104 and the client device 101/106, such as, for example, a public key of the client device for the mobile device to encrypt at least part of the messages it will send to the client device. Alternatively, instead of displaying a QR code, the same process may be performed providing a hyperlink and parameters in alternative forms, including, for example, displaying them on the screen as text/image, encoding them in an audio signal, transmitting them via short range communication (IR, AirDrop, Bluetooth, etc.), or the like.
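For illustration, a hedged sketch of the MQTT variant of the topic-based messaging described above, written against the paho-mqtt package (v1.x API). The broker hostname and topic layout are assumptions; the description above does not prescribe them.

```python
# Sketch: GUID-derived pairing "topic" on a cloud message broker. The camera
# subscribes to its topic; a mobile device posts a pairing request to it.
import paho.mqtt.client as mqtt

BROKER = "broker.cloudsystem.example"   # assumed cloud-system message broker

def on_message(client, userdata, msg):
    print(f"pairing message on {msg.topic}: {msg.payload!r}")

def listen_for_pairing(guid: str):
    """Client device: create/subscribe to its pairing topic (step 1203)."""
    client = mqtt.Client(client_id=f"camera-{guid}")
    client.on_message = on_message
    client.connect(BROKER, 1883)
    client.subscribe(f"devices/{guid}/pairing", qos=1)
    client.loop_start()
    return client

def send_pairing_request(guid: str, payload: bytes):
    """Mobile device: post a pairing request to the camera's topic."""
    client = mqtt.Client()
    client.connect(BROKER, 1883)
    client.publish(f"devices/{guid}/pairing", payload, qos=1)
    client.disconnect()
```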
With the information provided by the code, the mobile device 104 sends a pairing request 1206 via cloud system 103 to the client device 101/106, for example, identified by the GUID and/or pairing topic created at step 1203. The pairing request also includes other pairing information for the client device 101/106 to communicate with the mobile device 104. For example, according to one embodiment, the pairing request includes one or more of a GUID of the mobile device (e.g., IMSI, IMEI, or similar), the Bluetooth ID for the mobile device, and encryption information, such as, for example, a public key for the mobile device 104. Optionally, in one embodiment, before conveying the pairing request message to the client device 101/106, the cloud system 103 checks 1207 whether to allow a new mobile device 104 to pair with the client device 101/106 identified in the pairing request. For example, in one embodiment, the cloud system maintains a pairing record for each client device 101/106 (e.g., based on the devices’ GUIDs) storing pairing information for each device, such as, for example, the GUID for each mobile device 104 paired with each client device 101/106. If the system determines that this client device GUID has an existing paired mobile device 104, an authorization request is sent to the existing or “primary” mobile device 104. Multiple mobile devices 104 may be allowed to pair with the same client device 101/106 if approved by the first or user-designated “primary” mobile device 104 that is already paired with the client device 101/106. The pairing record for the client device is used by the cloud system to keep track of the paired devices. Upon approval by the primary mobile device 1209, the pairing request is conveyed 1210 to the client device 101/106. The client device receiving the pairing request verifies 1211 the pairing request. For example, in one embodiment, the client device 101/106 checks that it is in “pairing mode” and/or that it can otherwise pair with a mobile device 104. In addition, the client device 101/106 checks the pairing information, including whether the Bluetooth ID for the mobile device 104 is found in the list of Bluetooth IDs scanned at step 1201, indicating that the mobile device 104 is located near the client device, within range of short-range wireless
communications. The client device 101/106 stores the pairing information and, once the pairing request is verified 1211, it notifies the cloud system 103 that pairing has succeeded 1212.
According to one embodiment, the cloud system 103 updates its pairing record for the client device 101/106 to include the newly paired mobile device 104 and notifies the mobile device 104 of the pairing success.
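The verification step 1211 might be sketched as follows; the request structure and record layout are assumptions made for illustration only.

```python
# Hedged sketch of pairing-request verification: accept a request only when
# the camera is in pairing mode and the requester's Bluetooth ID was seen in
# the proximity scan of step 1201.
def verify_pairing_request(request, pairing_mode, scanned_bt_ids, paired):
    """Return True and record the pairing info if the request checks out."""
    if not pairing_mode:
        return False
    if request["bluetooth_id"] not in scanned_bt_ids:
        return False                      # requester is not physically nearby
    paired[request["guid"]] = {
        "bluetooth_id": request["bluetooth_id"],
        "public_key": request.get("public_key"),
    }
    return True
```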
At this point, communications between the client device 101/106 and the mobile device 104 are enabled 1213 and the pairing process ends. The mobile device 104 and client device 101/106 can now, for example, post messages to each other on their respective “topics” via cloud system 103 using a messaging protocol, such as MQTT, for example. All messaging between the two devices enabling the functionality described above can be exchanged via the cloud-based messaging approach established according to this embodiment. For example, the mobile device 104 can request a secured real-time feed of the video in the buffer memory of client device 101/106 using this approach and send and receive commands, for example, using RTC technologies as is known in the art. While all communications with the cloud system 103 may be encrypted, e.g., with the encryption provided by the cellular channels 107/108, using SSL, or similar communication channel encryption, the messaging between the two end-point devices can further be end-to-end encrypted. For example, the two devices can use the cloud messaging to perform a Diffie-Hellman key exchange and use each other’s keys to encrypt the payloads of the messages sent via the cloud messaging system. Alternatively, as described above, the QR code may include a public key for the client device 101/106 that the mobile device can use to encrypt its own public key and include it in the pairing information with the pairing request. Having each other’s public keys, the two devices can then encrypt the payload of all the messages they exchange from that point on. In an alternative embodiment, the QR code can include a “shared secret” as described above with reference to FIG. 8, which can be used as a symmetric encryption key or to generate a symmetric encryption key that is then used by both devices to communicate securely.
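For illustration, a hedged sketch of such end-to-end encryption using the Python cryptography package: an X25519 Diffie-Hellman exchange whose raw public keys could be carried over the cloud messaging channel, followed by AES-GCM encryption of message payloads. These particular primitives are one possible choice, not mandated by the description above.

```python
# Sketch only: X25519 key agreement + HKDF key derivation + AES-GCM payload
# encryption, as one concrete instantiation of the end-to-end scheme above.
import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey)
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def make_keypair():
    priv = X25519PrivateKey.generate()
    pub = priv.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    return priv, pub            # `pub` is what gets posted to the peer's topic

def derive_session_key(priv, peer_pub_bytes):
    shared = priv.exchange(X25519PublicKey.from_public_bytes(peer_pub_bytes))
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"pairing-session").derive(shared)

def encrypt_payload(key, plaintext: bytes) -> bytes:
    nonce = os.urandom(12)                    # must be unique per message
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_payload(key, blob: bytes) -> bytes:
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)
```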
According to one embodiment, when multiple mobile devices 104 are paired with a client device 101/106, the client device 101/106 may detect the proximity of the mobile devices using an alternative approach. According to this embodiment, the client device 101/106 implements a Bluetooth beacon protocol (e.g., iBeacon, AltBeacon, URIBeacon, Eddystone, or the like) using Bluetooth Low Energy (“BLE”) to determine proximity of mobile devices 104. Any mobile device 104 with a cloud-system-enabled app, as for example described above with reference to FIG. 4a-4e, upon detecting the client device’s beacon, wirelessly sends a message to the client device announcing its presence, for example providing its mobile device GUID. Mobile devices 104 that have undergone a pairing process as described above will recognize the beacon for the paired client device 101/106. The message may be sent via Bluetooth, WiFi, cellular, or using other wireless communications. For example, when multiple mobile devices 104 need to be detected in proximity to the client device 101/106, each paired mobile device 104 detecting the beacon sends a presence message to client device 101/106 via cloud system 103, using either a cellular or WiFi transmission. The cloud system 103 delivers the message to client device 101/106 such that the client device can determine which mobile devices 104 are in close proximity. According to one embodiment, client device 101/106 continues to periodically broadcast its beacon (the period may vary depending on conditions, such as, for example, battery status, presence of recognized mobile devices, operational mode, or the like). When a mobile device 104 stops receiving the beacon at the expected reception time, it determines that it no longer is in proximity of the client device 101/106 and sends another proximity message to client device 101/106 via cloud system 103 to inform the client device 101/106 that it no longer is in its proximity. Proximity messages may be time-stamped so that the client device can determine periods when mobile devices are in proximity or not, for example, should the proximity messages be delayed (e.g., no cellular coverage).
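As an illustrative sketch of the proximity bookkeeping described above, the following tracker ingests time-stamped presence messages relayed via the cloud and reports which paired mobile devices are currently nearby. The timeout value and data structures are assumptions.

```python
# Hedged sketch: client-device side bookkeeping of beacon-driven proximity
# messages. A device is "nearby" until it reports leaving or goes silent.
import time

class ProximityTracker:
    def __init__(self, timeout_s=60):
        self.timeout_s = timeout_s   # silence beyond this => out of proximity
        self.last_seen = {}          # mobile GUID -> last presence timestamp

    def on_message(self, guid, present, timestamp=None):
        """Handle a time-stamped proximity message from a paired mobile device."""
        ts = timestamp or time.time()
        if present:
            self.last_seen[guid] = ts
        else:
            self.last_seen.pop(guid, None)   # explicit "left proximity" message

    def nearby(self):
        """Return the GUIDs of mobile devices currently in close proximity."""
        now = time.time()
        return [g for g, ts in self.last_seen.items()
                if now - ts < self.timeout_s]
```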
As those in the art will understand, a number of variations may be made in the disclosed embodiments, all without departing from the scope of the invention, which is defined solely by the appended claims. It should be noted that although the features and elements are described in particular combinations, each feature or element can be used alone without the other features and elements or in various combinations with or without other features and elements. The methods or flow charts provided may be implemented in a computer program, software, or firmware tangibly embodied in a computer-readable storage medium for execution by a general-purpose computer or a processor.
Examples of computer-readable storage mediums include a read-only memory (ROM), a random-access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks. Suitable processors include, by way of example, a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of
microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) circuits, any other type of integrated circuit (IC), and/or a state machine.
One or more processors in association with software in a computer-based system may be used to implement methods of video data collection, cloud-based data collection and analysis of event-based data, generating event-based video clips, sharing event-based video, verifying authenticity of event-based video data files, and setting up client devices according to various embodiments, as well as data models for capturing metadata associated with a given video data object or file or for capturing metadata associated with a given event-based video clip according to various embodiments, all of which improve the operation of the processor and its interactions with other components of a computer-based system. The camera devices according to various embodiments may be used in conjunction with modules, implemented in hardware and/or software, such as a camera, a video camera module, a videophone, a speakerphone, a vibration device, a speaker, a microphone, a television transceiver, a hands-free headset, a keyboard, a Bluetooth module, a frequency modulated (FM) radio unit, a liquid crystal display (LCD) display unit, an organic light-emitting diode (OLED) display unit, a digital music player, a media player, a video game player module, an Internet browser, and/or any wireless local area network (WLAN) module, or the like.

Claims

What is claimed is:
1. A method for wirelessly accessing video objects stored in a buffer of a video capturing
device, the method comprising:
detecting an authorized mobile device in physical proximity to the video capturing
device;
activating a wireless hotspot mode in the video capturing device, the wireless hotspot mode comprising network parameters to identify an access point of a wireless network and to provide secure access to the wireless network;
providing the access parameters to the authorized mobile device;
receiving a request to join the wireless network at the access point from the authorized mobile device, the request based, at least in part, on the network parameters;
generating a plurality of playlists, each playlist listing a plurality of video objects from the video objects stored in the buffer of the video capturing device for enabling access to the plurality of video objects, each playlist configured to enable playback of each video object listed in sequence and without significant interruption; and
providing the plurality of playlists to the authorized mobile device via the wireless
network.
2. The method of claim 1, further comprising generating a master playlist, the master playlist listing the plurality of playlists, each of the plurality of playlists listing each of the plurality of video objects identifying them in a list as an advertisement according to a streaming protocol.
3. The method of claim 2, wherein each of the plurality of playlists is organized according to an HTTP Live Streaming protocol and further wherein each video object is identified using a #tag.
4. The method of claim 2, wherein each video object identified in the list consists of a number of seconds of recorded video of between 1 and 10 seconds.
5. The method of claim 4, wherein the list in each of the plurality of playlists identifies a
plurality of video objects corresponding to between 10 and 20 minutes of video.
6. The method of claim 2, further comprising, sending the master playlist to the authorized mobile device via the wireless network, and further wherein providing the plurality of playlists comprises receiving a plurality of requests from the authorized mobile device for two or more playlists of the plurality of playlists listed in the master playlist, the requests based on user scrubbing input on a video playback control on the mobile device.
7. The method of claim 6 wherein the scrubbing input is based on a timeline-based user
interface related to the time when the video objects were captured by the video capturing device.
8. The method of claim 6 wherein the scrubbing input is based on a map-based user interface related to the location where the video objects were captured by the video capturing device.
9. The method of claim 8, further comprising sending a video object in response to a request, the video object comprising video data, time data, and location data.
10. The method of claim 1, further comprising providing a first video object in response to a request from the authorized mobile device and automatically providing a second video object listed in sequence after the first video object in a playlist of the plurality of playlists in response to the request.
11. A method for wirelessly accessing video objects stored in a buffer of a video capturing
device, the method comprising:
receiving access parameters for accessing a video capturing device in a wireless hotspot mode, the access parameters comprising network parameters to identify an access point of a wireless network and to securely access the wireless network;
sending a request to join the wireless network to the access point in the video capturing device, the request based, at least in part, on the network parameters;
receiving a plurality of playlists from the video capturing device via the wireless network, wherein each playlist lists a plurality of video objects from the video objects stored in the buffer of the video capturing device for enabling access to the plurality of video objects, each playlist configured to enable playback of each video object listed in sequence and without significant interruption; and
requesting access to a video object stored in the buffer of the video capturing device based on user scrubbing input and a playlist of the plurality of playlists.
12. The method of claim 11, further comprising receiving a master playlist, the master playlist listing the plurality of playlists, each of the plurality of playlists listing each of the plurality of video objects identifying them in a list as an advertisement according to a streaming protocol.
13. The method of claim 12, wherein each of the plurality of playlists is organized according to an HTTP Live Streaming protocol and further wherein each video object is identified using a #tag.
14. The method of claim 12, wherein each video object identified in the list consists of a number of seconds of recorded video of between 1 and 10 seconds.
15. The method of claim 12, wherein the list in each of the plurality of playlists identifies a
plurality of video objects corresponding to between 10 and 20 minutes of video.
16. The method of claim 12, wherein receiving the plurality of playlists comprises sending a plurality of requests to the video capturing device for two or more playlists of the plurality of playlists listed in the master playlist, the requests based on user scrubbing input on a video playback control on the mobile device.
17. The method of claim 16 wherein the scrubbing input is based on a timeline-based user
interface related to the time when the video objects were captured by the video capturing device.
18. The method of claim 16 wherein the scrubbing input is based on a map-based user interface related to the location where the video objects were captured by the video capturing device.
19. The method of claim 18 further comprising, displaying a map-based user interface on a
display of the mobile device, the map-based user interface providing a map with indications representative of locations where the video objects stored in the buffer of the video capturing device were captured.
20. The method of claim 19, further comprising receiving the video object, the video object comprising video data, time data, and location data, and displaying the video data on the screen of the mobile device along with a map comprising an indicator corresponding to the location data.
21. The method of claim 11, further comprising receiving the video object, the video object comprising video data, time data, and location data, and displaying the video data on the screen of the mobile device along with a map comprising an indicator corresponding to the location data.
PCT/US2019/012109 2018-01-05 2019-01-02 Scrub and playback of video buffer over wireless WO2019136107A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CA3087335A CA3087335A1 (en) 2018-01-05 2019-01-02 Scrub and playback of video buffer over wireless
EP19736083.7A EP3714605B1 (en) 2018-01-05 2019-01-02 Scrub and playback of video buffer over a wireless network
US16/957,062 US11303967B2 (en) 2018-01-05 2019-01-02 Scrub and playback of video buffer over wireless

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862614170P 2018-01-05 2018-01-05
US62/614,170 2018-01-05

Publications (1)

Publication Number Publication Date
WO2019136107A1 (en) 2019-07-11

Family

ID=67144511

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/012109 WO2019136107A1 (en) 2018-01-05 2019-01-02 Scrub and playback of video buffer over wireless

Country Status (4)

Country Link
US (1) US11303967B2 (en)
EP (1) EP3714605B1 (en)
CA (1) CA3087335A1 (en)
WO (1) WO2019136107A1 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4080794A1 (en) 2016-01-06 2022-10-26 TVision Insights, Inc. Systems and methods for assessing viewer engagement
US11540009B2 (en) 2016-01-06 2022-12-27 Tvision Insights, Inc. Systems and methods for assessing viewer engagement
WO2018195391A1 (en) * 2017-04-20 2018-10-25 Tvision Insights, Inc. Methods and apparatus for multi-television measurements
US11304056B2 (en) * 2018-06-06 2022-04-12 Shakuntala D. R. System and method for detecting unauthorized devices in a system of telecom networks
TWI715175B (en) * 2019-05-28 2021-01-01 瑞昱半導體股份有限公司 Bluetooth device, method of operating bluetooth device, and non-transitory computer readable medium
USD937295S1 (en) * 2020-02-03 2021-11-30 Apple Inc. Display screen or portion thereof with graphical user interface
US20220046290A1 (en) * 2020-08-05 2022-02-10 Charles Isgar Experience sharing system
US20230276513A1 (en) * 2022-01-31 2023-08-31 Zebra Technologies Corporation System and Method for Silently Pairing Devices
TWI831184B (en) * 2022-04-18 2024-02-01 勝捷光電股份有限公司 Intelligent automobile networking system
US20230418618A1 (en) * 2022-06-24 2023-12-28 Evga Corporation Computer intelligent switching system and method thereof
US11985378B1 (en) 2022-11-14 2024-05-14 Realwear, Inc. Wearable media streaming system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100228740A1 (en) * 2009-03-09 2010-09-09 Apple Inc. Community playlist management
US20130120596A1 (en) * 2011-11-15 2013-05-16 Herman Yau Method of configuring a wireless network camera wirelessly
US9532088B2 (en) * 2014-06-18 2016-12-27 Arris Enterprises, Inc. Trick-play streams for adaptive bitrate streaming

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060008256A1 (en) 2003-10-01 2006-01-12 Khedouri Robert K Audio visual player apparatus and system and method of content distribution using the same
US20060095401A1 (en) * 2004-06-07 2006-05-04 Jason Krikorian Personal media broadcasting system with output buffer
US20080192736A1 (en) * 2007-02-09 2008-08-14 Dilithium Holdings, Inc. Method and apparatus for a multimedia value added service delivery system
US20130282903A1 (en) 2012-04-20 2013-10-24 Research In Motion Limited Method, system and apparatus for accessing a communications network
US20140152832A1 (en) * 2012-11-30 2014-06-05 Cbs Interactive Inc. Techniques to provide a micro-broadcast of an event
US9516354B1 (en) * 2012-12-20 2016-12-06 Teradek LLC Bonded wireless hotspot
WO2017182815A1 (en) * 2016-04-22 2017-10-26 Orbital Multi Media Holdings Corporation Media data streaming method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3714605A4

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022265945A1 (en) * 2021-06-15 2022-12-22 Whatsapp Llc Methods, mediums, and systems for verifying devices in an encrypted messaging system
US11658955B1 (en) 2021-06-15 2023-05-23 Whatsapp Llc Methods, mediums, and systems for verifying devices in an encrypted messaging system
US11743035B2 (en) 2021-06-15 2023-08-29 Whatsapp Llc Methods, mediums, and systems for verifying devices in an encrypted messaging system
US11843636B1 (en) 2021-06-15 2023-12-12 Whatsapp Llc Methods, mediums, and systems for verifying devices in an encrypted messaging system
US11848930B1 (en) 2021-06-15 2023-12-19 Whatsapp Llc Methods, mediums, and systems for verifying devices in an encrypted messaging system

Also Published As

Publication number Publication date
EP3714605A4 (en) 2021-01-06
CA3087335A1 (en) 2019-07-11
US11303967B2 (en) 2022-04-12
EP3714605B1 (en) 2023-03-08
US20200396517A1 (en) 2020-12-17
EP3714605A1 (en) 2020-09-30

Similar Documents

Publication Publication Date Title
EP3714605B1 (en) Scrub and playback of video buffer over a wireless network
US11343677B2 (en) Secured pairing of video capture device and mobile device
US11218670B2 (en) Video-based data collection, image capture and analysis configuration
US11210501B2 (en) Camera enhanced ride sharing
US11823539B2 (en) High-priority event generation and reporting for camera-based security system
US10020997B2 (en) Device communication
US11895346B2 (en) Techniques for secure video frame management
CN117156190A (en) Screen projection management method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19736083

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019736083

Country of ref document: EP

Effective date: 20200623

ENP Entry into the national phase

Ref document number: 3087335

Country of ref document: CA

NENP Non-entry into the national phase

Ref country code: DE